The Miller Principle (2007)

(puredanger.github.io)

75 points | by FelipeCortez 5 days ago

51 comments

  • donatj 8 hours ago

    For fun, I recently rebuilt a little text adventure some friends and I had built in the early 2000s. Originally written in QBasic, I translated it line by line into Go and set it up as a little SSH server.

    For posterity, I didn't want to change anything about the actual game itself, but I knew beforehand that the commands were difficult to figure out organically. To try to help modern players, I added an introductory heading when you start playing, informing the player that this was a text adventure game and that "help" would give them a basic list of commands.

    Watching people attempt to play it in the logs, it became painfully obvious no one read the heading, at all. Almost no one ever typed "help". They'd just type tens of invalid commands, get frustrated and quit.
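A minimal sketch of that kind of command loop, in Python rather than the original Go, with illustrative command names (the real game's verbs are not shown here):

```python
# Toy dispatch table for a text-adventure command loop.
# Unknown input points the player back at "help", since banners go unread.

COMMANDS = {
    "help": lambda: "Commands: " + ", ".join(sorted(COMMANDS)),
    "look": lambda: "You are in a dimly lit room.",
    "quit": lambda: "Goodbye.",
}

def handle(line: str) -> str:
    """Dispatch one line of player input to its command handler."""
    cmd = line.strip().lower()
    action = COMMANDS.get(cmd)
    if action is None:
        return "I don't understand '%s'. Type 'help' for commands." % cmd
    return action()
```

The fallback message repeats the hint on every miss, which is about the only place a player who skips the banner will ever see it.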

    • m3047 3 minutes ago

      > They'd just type tens of invalid commands, get frustrated and quit.

      Is that like the "players" who send HTTP requests to my mail server?

    • datameta 2 hours ago

      I wonder how different the outcome would be if the idiom used were not "help" but "instructions". That is, what portion of users did not want to admit they needed assistance?

      I'm not refuting the fact that people seldom read, but this seems like an interesting additional vector to explore.

  • torben-friis 10 hours ago

    I wish this was the case. Then we wouldn't have a minority of us deeply frustrated :)

    'Thanks for the doc, let's set up a meeting' (implied: so you can read the doc aloud to us) is the bane of my existence.

  • hermitcrab 3 hours ago

    A customer contacts me and says 'I have an error'. After several emails I manage to get them to send me a screenshot of the error. The error message describes the exact problem and what to do about it in one short sentence. I type pretty much exactly the error message text into my reply. This solves their problem. I think they see 'error' or 'warning' and don't even read the rest of the sentence. Extraordinary. But it has happened more than once.

    • staticshock 10 minutes ago

      They were taught not to read errors because they encountered thousands of errors (in other software) that were less helpful than that one.

      Most people have an adversarial relationship with software: it is just the pile of broken glass they have to crawl through on the way to getting their task done. This understanding is reinforced and becomes more entrenched with each next paper cut.

  • sdevonoes 9 hours ago

    I think this is more true now than ever. Before LLMs, when someone came up with an ADR/RFC/etc., you had to read it because you had to approve or reject it. People were putting in effort and, yeah, you could use those docs in your next perf review to gain extra points. You could easily distinguish well-written docs from the crap (which also made the job of reviewing them easier).

    Nowadays everyone can generate a 20-page RFC/ADR, and even though you can tell when they are LLM-generated, you cannot easily reject them on that factor alone. So here we are, spending hours reading something the author spent 5 min. generating (and barely knows what it's about).

    Same goes for documentation, PRs, PR comments…

    • jodrellblank 8 hours ago

      Watching the Artemis II splashdown and the media event that followed, I'm suspicious that a woman from TechTalk Media read out some LLM blurb instead of asking a question; I can't prove it, but I can almost hear the em-dash in:

      "What you have done this week is remind the people of Earth that wonder is worth chasing. That curiosity is the most human thing we have. You didn't just test a spacecraft -- you tested mankind's potential...”

      • nathan_compton 7 hours ago

        I think the good news here is that very soon, parroting some shit an LLM wrote will be a sure sign to everyone that that person is a moron or lazy or otherwise useless. If all you do is repeat what an AI gives you, then you can be replaced by the AI. I can't imagine why anyone would want to signal that to potential employers or, really, any other human being.

    • ghgr 9 hours ago

      As a counterexample, thanks to LLMs many long-form articles that get posted with clickbaity (but devoid of content) headlines that I would have ignored otherwise now get "read" (albeit indirectly, with the prompt "Summarize the insights of the article $ARTICLE_URL in an academic, dry, technical and information-dense way")

      • eru 9 hours ago

        I notice that with YouTube videos.

    • manmal 8 hours ago

      Those generated ADRs are pure crap, full of unnecessary hedges and superficial solutions that don’t survive scrutiny longer than 10 seconds. I do generate ADR skeleton drafts because I hate empty pages, but I need to add the substance or they are not helpful at all.

      What we are doing is probably not in training data, maybe that’s why.

  • bachmeier 3 hours ago

    Probably true. Also probably true: people have read enough of the things he listed and concluded that they wasted their time. I remember trying Linux in the RTFM days, and let me tell you, those were some terrible documents even when they did talk about the problem.

  • sebastianconcpt 8 hours ago

    This signals something that is happening somewhat predictably due to the increasing abundance of code: it exponentially grows the surface offered for understanding (text, as in comments, docs, etc.), while our attention bandwidth is not growing exponentially, so...

    • caminante 6 hours ago

      Not a new trend.

      Mini-article is from 2007.

      At the time, more reports were generated than humans could read. People weren't reading them for good reason.

      I suspect the author is more annoyed about people being (grossly) negligent in reading important things.

  • nathell 6 hours ago

    Joel Spolsky in 2000 [0]: "Users can't read anything, and if they could, they wouldn't want to."

    [0]: https://www.joelonsoftware.com/2000/04/26/designing-for-peop...

  • comrade1234 10 hours ago

    Despite using an AI while programming, I still keep Javadoc and other API documents open and find them very useful, as the AI often gives code based on old APIs instead of what I'm actually using. So I do read those documents.

    But also, I have a somewhat mentally ill (as in he takes medication for it) coworker who sends rambling, extra-long emails, often all one paragraph. If I can't figure out what he's asking by reading the first couple and last couple of sentences, I ask him to summarize it with bullet points, and it actually works. Lol.

  • coopykins 9 hours ago

    It's one of the main things I learned when working in tech support, talking with users all day. Nobody reads anything.

    • layer8 6 hours ago

      Or maybe those who do read the docs require less tech support.

    • zero-sharp 7 hours ago

      A totally understandable situation. Most people just want to use technology to accomplish their immediate goal. I'm tech savvy and I lose my mind every time I get distracted by broken/misconfigured technology.

    • funnybeam 9 hours ago

      I used to refer to the helpdesk as the reading desk - “Hello, you’re through to the IT Helpdesk, what can i read for you today?”

  • hamdouni 10 hours ago

    Yeah, i'm also surprised people just read post title and jump to conclusions ...

  • Animats 10 hours ago

    The LLMs read everything.

    • krona 9 hours ago

      It doesn't mean they're paying attention.

    • OutOfHere 6 hours ago

      They do, but it doesn't mean that entire texts will remain in their context. Increasingly they can use agentic reading, whereby they will spawn an agent to read long texts, then present a condensed version back to the parent LLM, leading to a theoretical opportunity for information loss.
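A toy sketch of that condensation step, using a naive stand-in summarizer (first sentence per paragraph) instead of a real sub-agent, just to show where detail drops out:

```python
# Toy illustration of "agentic reading": a sub-agent condenses a long text
# before the parent sees it, so detail can be lost. The summarizer here is
# a naive stand-in, not a real LLM call.

def condense(text: str) -> str:
    """Keep only the first sentence of each paragraph."""
    summary = []
    for para in text.split("\n\n"):
        first = para.strip().split(". ")[0].rstrip(".")
        if first:
            summary.append(first + ".")
    return " ".join(summary)

long_doc = (
    "The server handles auth. It also logs every request.\n\n"
    "Retries are capped at three. The cap is configurable."
)
parent_view = condense(long_doc)
# The parent never sees the second sentence of either paragraph.
```

Whatever the sub-agent drops is simply gone from the parent's context, which is the theoretical information loss described above.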

    • formerly_proven 10 hours ago

      Only because they are architecturally unable to not read something.

      • Wowfunhappy 6 hours ago

        Well, the LLMs architecturally have to read everything they see. The agents attached to LLMs can choose what to look at.

      • simultsop 9 hours ago

        until one day

  • taffydavid 10 hours ago

    I read this entire post and all the comments, thus disproving the Miller principle

    • armchairhacker 9 hours ago

          This principle applies to the following:
      
          - User documentation
          - Specifications
          - Code comments
          - Any text on a user interface
          - Any email longer than one line
      
      Not blog posts or comments. Ironic.

      • layer8 6 hours ago

        If you read closely, you’ll see that there is no claim that this would be an exhaustive list rather than an exemplifying one, and the principle itself unambiguously states “anything”.

      • taffydavid 9 hours ago

        Damn, I guess I didn't read it closely enough

        • sebastianconcpt 9 hours ago

          Proof that reading is not causation of understanding but mere correlation.

          So if the "read" in the Miller principle is interpreted as read+understanding (as it should be), an interesting deeper discussion can happen.

          It can be invoked with the way more dramatic "Nobody understands anything"

  • smitty1e 10 hours ago

    I have found much value in reading the python and sqlite documentation. The Arch wiki is another reliable source.

    Good documentation is hard.

    • simultsop 9 hours ago

      I don't know. Under pressure and stress all docs are ugly.

    • Akcium 10 hours ago

      I would love to answer your comment but I didn't read it :P

  • sikk01 7 hours ago

    Unironically, I was pasting the URL of this article into ChatGPT to summarise it.

  • spiderfarmer 10 hours ago

    The Laravel documentation is GREAT when you're getting started. Every chapter starts by answering the very question you might ask yourself if you're going through it top to bottom.

    I'm a one-man-band so if I write code comments, I write them for future me because up to this point he has been very grateful. Creating API documentation is also easy if you can generate it based on the comments in your code.
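As a sketch of that comments-to-docs flow (in Python terms here, with made-up names, rather than anything Laravel-specific):

```python
def transfer(amount: float, to_account: str) -> None:
    """Move `amount` into `to_account`.

    Raises:
        ValueError: if `amount` is not positive.
    """
    if amount <= 0:
        raise ValueError("amount must be positive")
    # (persistence elided in this sketch)

# Tools like pydoc or Sphinx autodoc build API pages from these docstrings,
# so the generated docs can't drift far from the code they sit beside.
first_line = transfer.__doc__.strip().splitlines()[0]
```

Keeping the doc text next to the code is what makes the generated API reference cheap to maintain.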

    Maybe rename it the Filler principle. Nobody reads mindless comments that are 'filler'.

  • Borg3 8 hours ago

    We are reaching the society shown in the "Johnny Mnemonic" movie: so much (useless) information around that people get overloaded. I barely read anything these days on HN; too much (crap) information. I skim and only read stuff that is very close to my interests.

    I used to read a lot more in the past; not the case anymore.

    • andai 7 hours ago

      Well, half the articles I see posted now, the author didn't even bother to write themselves, but outsourced to a machine.

      I've heard this sentiment: "If you didn't even bother to write it, why should I bother to read it?"

      But often there is real value there, and I sometimes force myself to cringe my way through the GPT-isms, to find the gems buried within.

  • realaleris149 10 hours ago

    The agents will read them

  • pfdietz 2 hours ago

    Except now LLMs read everything.

  • fmajid 9 hours ago

    Write-only memory

  • stevage 9 hours ago

    Should probably be "The Miller Principle (2007)"

  • ekjhgkejhgk 9 hours ago

    Damn, this is thin content even for HN.

    Anyway, this is just projection. The Miller principle really should be "Miller doesn't read anything". I read plenty.

  • makach 10 hours ago

    ..and emails

    • stevage 9 hours ago

      > Any email longer than one line

      it's in there

    • sarreph 9 hours ago

      The irony.

  • timrobinson33 9 hours ago

    tl;dr

  • ineedasername 7 hours ago

    tl;dr: ' '