Claude Code Unpacked : A visual guide

(ccunpacked.dev)

362 points | by autocracy101 4 hours ago ago

86 comments

  • brauhaus 44 minutes ago

    Even today, I'm still astounded that there are people capable of building a gorgeous and interesting site like this in less than 2 days...

    • spondyl 14 minutes ago

      Well, I assume this is all just generated with Claude Code, right? Whether there is much back and forth with the LLM is a valid question, and there's nothing wrong with generating websites (I do it too for some side projects). Claude loves generating websites with a particular style of serif font. We also saw this with https://tboteproject.com/timeline/ and I've just generally seen it in various designs that coworkers have spit out over months using Claude defaults.

      I guess I just find it weird because all the signals are messed up: whenever I see these sorts of layouts, I feel like I'm looking at the average, and I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation, because it presents as high quality but may be hiding someone who isn't actually aware of what they're presenting in any depth", since the author may have just shoved in a prompt and let it spin.

      There's actually a similarly designed website (font weights, font styles, etc.) here in New Zealand (https://nzoilwatch.com/) where, at a glance, it might seem like some professionally backed thing, but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.

      I would have way less of an issue if people just put their names on things and disclosed their LLM usage (which, again, is fine), rather than giving unequipped people the potentially false impression that the information presented is as accurate and trustworthy as the polish would suggest.

    • oasisbob 24 minutes ago

      Is this gorgeous?

      Content resizing, needing to juggle a speed knob to read, and the overall presentation make it feel like Edward Tufte-flavored nightmare fuel.

    • ricardobeat 20 minutes ago

      Claude itself can generate this in minutes if you know how to ask.

    • raincole 19 minutes ago

      But somehow, according to HN, LLMs make you less productive, not more :)

      • supersparrow 11 minutes ago

        The people who don't know how to use an LLM to make them more productive, or who are scared it's going to take their job, are louder than the people who are making good use of them.

        That just seems to be human nature unfortunately - the complainers are always louder.

    • piker 33 minutes ago

      .

      • comboy 25 minutes ago

        I mean, tools change, but I'd be happy to hear if any tool can create that from just "create Claude Code Unpacked with nice graphics" or some other single prompt. It was likely an iterative process, and it would be lovely if more people started sharing that process, because the process itself is also interesting.

        I've created a Chinese-character learning website, and it took me typing about 1/3 of LotR to get there[1]. I would have typed maybe 1% of that writing the code directly. It is a different process, but it still needs some direction.

        1. https://hanzirama.com/making-of

      • ipnon 25 minutes ago

        I think it is accurate. Where are the autonomous AIs that beat the creator to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that, thus far, only humans are capable of using with unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.

      • oriettaxx 28 minutes ago

        "Che cos'è il genio? È fantasia, intuizione, colpo d'occhio e velocità di esecuzione" ("What is genius? It is imagination, intuition, a keen eye, and speed of execution")

  • Andebugulin 2 hours ago

    If it were 2020, it would be hard to imagine that within hours or days of a leak you'd get a visual representation of it with such detailed stats lol

    • makapuf 2 hours ago

      How was this generated? I'm quite sure "with AI / Claude Code", but what were the actual steps?

      • rzmmm an hour ago

        For the animations specifically, it's using the Motion (fka Framer Motion) JavaScript library. If you describe some animations from the site to an LLM and ask it to use Framer Motion, you get very similar results. The creator likely just prompted for a while until they were happy with the outcome.

        • FartyMcFarter 16 minutes ago

          Is there a reason to think it was done by an LLM?

          • rzmmm 9 minutes ago

            It states "curation assisted by AI" at the bottom.

  • dheerajmp 3 hours ago

    Feel free to add this to Awesome Claude code. https://github.com/rosaboyle/awesome-cc-oss

  • stingraycharles 3 hours ago

    I guess they really do eat their own dogfood and vibe code their way through it without care for technical debt? In a way, it’s a good challenge, but it’s fairly painful to watch the current state of the project (which is about a year old now, so it should be in prime shape).

    • brabel 2 hours ago

      > is about a year old now, so it should be in prime shape

      A 1yo project may be in good shape if written by just one dev, maybe a few. But if you have many devs, I can guarantee it will be messy and buggy. If anything, at 1yo it is probably still full of bugs because not enough time has elapsed for people to run into them.

      • mattmanser an hour ago

        It's only 510k LoC. At ~100 lines of code a day, this code base would take 23 engineers a year to write, assuming 220 working days a year somewhere civilized.

        And I'm sure we all know that when working on a greenfield project you can produce a lot more LoC per day than maintaining a legacy one.

        Given that vibe code is significantly more verbose, you're probably talking about ~15 engineers' worth of code?

        I know that's all silly numbers, but this is just attempting to give people some context here, this isn't a massive code base. I've not read a lot of it, so maybe it's better than the verbose code I see Claude put out sometimes.
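        The arithmetic above is easy to sanity-check (the ~100 LoC/day and 220 working days are the comment's stated assumptions, not measured figures):

```python
# Back-of-envelope check of the engineer-year estimate.
# Assumptions (from the comment, not measured): ~100 lines of code
# per engineer per working day, ~220 working days per year.
loc = 510_000
loc_per_day = 100
working_days_per_year = 220

engineer_years = loc / (loc_per_day * working_days_per_year)
print(round(engineer_years, 1))  # ~23.2 engineer-years
```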

        • cududa 13 minutes ago

          When you say it’s not a massive codebase, I’m curious, what are you comparing it to?

    • coldtrait 3 hours ago

      Boris Cherny, the creator of Claude Code said he uses CC to build CC.

      • Cthulhu_ 2 hours ago

        Which makes for an interesting thought / discussion; code is written to be read by humans first, executed by computers second. What would code look like if it was written to be read by LLMs? The way they work now (or, how they're trained) is on human language and code, but there might be a style that's better for LLMs. Whatever metric of "better" you may use.

        Just a thought experiment, I very much doubt I'm the first one to think of it. It's probably in the same line of "why doesn't an LLM just write assembly directly"

        • syphia 2 hours ago

          LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.

          I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.

          If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.

          But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?

          • Mentlo 5 minutes ago

            But StarCraft training was not through mimicking human strategies - it was pure RL with a reward function shaped around winning, which allowed non-human and eventually super-human strategies (such as worker oversaturation) to emerge.

            The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).

        • tempay 2 hours ago

          > It's probably in the same line of "why doesn't an LLM just write assembly directly"

          My suspicion is that the "language" part of LLMs means they tend to prefer languages which are closer to human languages than assembly and benefit from much of the same abstractions and tooling (hence the recent acquisition of bun and astral).

      • stingraycharles an hour ago

        Yes but my point was that they seem to explicitly not care about code quality and/or the insane amount of bloat, and seem to just want the LLM to be able to deal with it.

        • lukaslalinsky an hour ago

          I've heard somewhere that they have roughly 100% code churn every few months, so yes, they unfortunately don't care about code quality. It's a shame, because it's still the best coding agent, in my experience.

          • menaerus 34 minutes ago

            > they unfortunately don't care about code quality.

            > It's a shame, because it's still the best coding agent, in my experience.

            If it is the best, and if it delivers the value users are asking for, then what incentive do they have to make further $$$ investments in "higher" quality, if the difference wouldn't deliver substantial value or would hurt the ROI?

            On many projects I've found this "higher quality" not only failed to deliver more substantial value, it actually hurt the project's ability to deliver the value that matters.

            Maybe we are, after all, entering an era of SWE where all this bike-shedding is gone and the only engineers who survive will be the ones capable of delivering actual value (IME, very few per project).

          • stingraycharles an hour ago

            Yes, but as I said, it’s in a way the ultimate form of dogfooding: ideally they’ll be able to get the LLM smart enough to keep the codebase working well long-term.

            Now whether that’s actually possible is a second topic.

    • troupo 34 minutes ago

      They explicitly boast about using claude code to write code: https://x.com/bcherny/status/2007179836704600237

      That's how you get "oh this TUI API wrapper needs 68GB of RAM" https://x.com/jarredsumner/status/2026497606575398987 or "we need 16ms to lay out a few hundred characters on screen that's why it's a small game engine": https://x.com/trq212/status/2014051501786931427

      • 000ooo000 16 minutes ago

        Just finished looking at Ink here... the frontend world has no shame. Love the gloating about 40x less RAM, as if that amount of memory for a text REPL even approaches defensible. "CC built CC" is not the flex people seem to suggest it is.

  • lanbin an hour ago

    However, excellent development practices involve modularizing code based on functional domains or responsibilities.

    The utils directory should only contain truly generic, business-agnostic utilities (such as date retrieval, simple string manipulation, etc.).

    We can see that the vibe-coded output is not what a professional engineer would write. This may be due to the engineers relying on vibe coding tools.

    • afferi300rina 38 minutes ago

      That's the hallmark of "vibe coding": optimizing for immediate output while treating the utils folder as a generic junk drawer.

      • TeMPOraL 19 minutes ago

        Another "hallmark" that happens to describe pretty much every codebase people wrote even before LLMs were a thing.

        • lll-o-lll 6 minutes ago

          Sadly, the AI’s have been trained on human developed repos.

  • AJRF 26 minutes ago

    This is AI slop.

    First command I looked at:

      /stickers:
      
      Displays earned achievement stickers for milestones like first commit, 100 tool calls, or marathon sessions. Stickers are stored in the user profile and rendered as ASCII art in the terminal.
    
    
    That is not what it does at all - it takes you to a stickermule website.

    What is the motivation for someone to put out junk like this?

    • ricardobeat 19 minutes ago

      Clout and reaching the top of HN apparently.

      The animated explanation at the top is also way too fast at 1x, almost impossible to follow; that immediately hinted at the author not fully reading/experiencing the result before publishing this.

    • thepasch 18 minutes ago

      > What is the motivation for someone to put out junk like this?

      Getting something with a link to their GitHub onto the frontpage of HN. Because form matters much more in this world than substance.

  • swyx 2 hours ago

    > also related: https://www.ccleaks.com

    This deployment is temporarily paused

  • restlessforge 4 hours ago

    Okay those "hidden features" are amazing, especially the cross-session referencing. I hope we can look forward to that in the future

    Also I definitely want a Claude Code spirit animal

    • jwilliams 4 hours ago

      It's live! If you're on the latest cc you can use /buddy now.

      • jen729w 3 hours ago

        It's a ridiculous folly. I've already lost a well-constructed question because I accidentally tabbed into my pointless 'buddy'.

        (Yes, I know I can turn it off. I have.)

        • binocarlos 2 hours ago

          I find Claude Code features fall into 2 categories, "hmmmm that could be actually useful" vs "there is more kool aid where that came from"

      • Nevermark 3 hours ago

        Ok! First prompt, obviously:

        “Complete thyself.”

        And I want an octopus. Who orchestrates octopuses.

  • cjlm an hour ago

    I prefer this mapping from Nikita @ CosmoGraph: https://run.cosmograph.app/public/dfb673fc-bdb9-4713-a6d6-20...

  • jatins 4 hours ago

    There's this weird thing about AI generated content where it has the perfect presentation but conveys very little.

    For example the whole animation on this website, what does it say beyond that you make a request to backend and get a response that may have some tool call?

    • roughly 3 hours ago

      Also it's just randomly incorrect in places. For instance, it lists "fox" as one of the "Buddy" species, but that's not in the code.

      • autocracy101 3 hours ago

        That's been corrected, I did another fact checking pass!

        • dare944 an hour ago

          Another? Why weren't all the facts checked on the first pass?

          • afferi300rina an hour ago

            We've moved from "move fast and break things" to "hallucinate fast and patch later." It's the inevitable side effect of using AI to curate AI-written codebases.

    • IsTom 2 hours ago

      When you're picking the most likely tokens, you get the least surprising tokens - the ones with the least entropy and the least information per token.
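      A toy illustration of that information-theory point (this is just the standard surprisal formula, nothing specific to LLM internals): a token's information content is -log2(p), so the most probable token always carries the fewest bits.

```python
import math

# Surprisal (self-information) of a token with probability p, in bits.
# The more expected a token is, the less information it carries.
def surprisal_bits(p: float) -> float:
    return -math.log2(p)

print(surprisal_bits(0.9))   # a very likely token: ~0.15 bits
print(surprisal_bits(0.01))  # a surprising token: ~6.64 bits
```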

    • autocracy101 3 hours ago

      That's fair. The site isn't meant to be a deep technical dive; it's more of a high-level visual guide to what I've curated while exploring the codebase, assisted by AI. A 500k LoC codebase is just too much to sift through in a short amount of time.

    • siva7 4 hours ago

      Really weird, but then it's so easy to spot AI text by this pattern

    • bonoboTP 40 minutes ago

      I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.

      If you prompt with little raw material and little actual specification of what you want to see in the end - e.g. you just say "make a detailed breakdown dashboard-like site that analyzes this codebase" - the result will have this uncanny character.

      I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but about my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea of how the world works, and then things turned out to be less flashy, less movie-like, less clear-cut, less impressive-to-a-teenage-boy than I had thought.

      If you know the concept of a "stupid man's idea of a smart man", I'd say AI-made stuff (with little iteration) gives off this outward appearance of a smart man from the Reddit-midwit cinematic universe. It's like how guns in movies sound more like guns than real guns do. It's hyperreality.

      Again, this is less about the capabilities of AI and more about its people-pleasing nature. It's like you prompt it for some epic dinner and it heaps you up some, hmmm, epic bacon with bacon, yeah (referring to the hivemind meme). Or the Big Mac on the poster vs. on the tray, where the poster one is a model made with different, more photogenic components. It's a simulacrum.

      It looks more like your naive, currently-imagined idea of what you think you need vs. what you'd actually need. It's like prompting your ideal girlfriend into AI-avatar existence. I'm sure she would fit your ideal thought and imagination much better, but your actual life would need the actual thing.

      This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model towards adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying that it doesn't accomplish the task. But it seems that there are more sloppy personas, and that experts can more easily avoid summoning them by giving context that reflects mundane reality better than a novice (or an expert who gives little context) can. Otherwise the AI persona will be summoned from the Reddit midwit movie.

      I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.

  • sibtain1997 3 hours ago

    Kairos and auto-dream are more interesting than anything in the agent loop section. Memory consolidation between sessions is the actual unsolved problem. The rest is just plumbing tbh

    • giancarlostoro 2 hours ago

      Projects like Beads help with memory consolidation by making it somewhat moot, since it stays "offline" and can be recollected at any moment.

  • vivzkestrel 3 hours ago

    Would be nice if the transformer code for one of these frontier LLM models got leaked; HN would have a field day with a reveal like that

    • loveparade 3 hours ago

      I doubt there is anything special about the transformer code the frontier labs use. The only thing proprietary in it are probably the infrastructure-specific optimizations for very large scale distributed training and some GPU kernel tricks. The real moat is the training data, especially the RLHF/finetuning data and verifiable reward environments, and the GPU clusters of course.

      The open source models are quite close, and they'd probably be just as good with the equivalent amount of compute/data the frontier labs have access to.

      • dgb23 2 hours ago

        That's what I'm thinking as well.

        However, I assume that usage data could be increasingly valuable as well. That will likely help the big commercial cloud models to maintain a head start for general use.

  • fersarr 27 minutes ago

    why do people care so much? it's just an agentic loop

  • jen729w 3 hours ago

    Is it just me, or is the Claude Code application not actually that fascinating?

    I use it all day and love it. Don't get me wrong. But it's a terminal-based app that talks to an LLM and calls local functions. Ooookay…

    • 59nadir 2 hours ago

      I think it's good that it's out there, and I wonder why Anthropic have been keeping it closed source; clearly they can't possibly think that the CC source code is a competitive advantage...?

      Agents in general are easy to make, and trivial to make for yourself especially, and the result will be much better than what any of the big providers can make for you.

      `pi` with whatever commands/extensions you want to make for yourself is better than CC if you really don't want to go through the trouble of making your own thing.

      • ariwilson 2 hours ago

        why do you think agents you make yourself will be better for you? integration with tooling that you prefer? your local dev setup built in?

        curious as i haven't gotten around to writing my own agent yet

    • parasti 3 hours ago

      I feel the same way. Given it's AI-written, looking at the code isn't even worth it to me. I would rather read a blog post about how they develop it day to day.

    • dgb23 2 hours ago

      That’s what every agent does. They are fundamentally simple.

      But you can do a lot of interesting things on top of this. I highly recommend writing an agent and hooking it up to a local model.
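      To make the "fundamentally simple" claim concrete, here is a minimal sketch of the loop; `call_model`, the message format, and the `add` tool are hypothetical stand-ins for whatever local model API you actually hook up, not any real agent's code:

```python
# Minimal agent loop: ask the model, run any tool it requests,
# feed the result back, and repeat until it answers in plain text.
# `call_model` is a hypothetical stand-in for a real chat API.
def run_agent(call_model, tools, user_prompt, max_steps=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.get("tool") is None:
            return reply["content"]  # final plain-text answer
        # Execute the requested tool and append its result.
        result = tools[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "(step limit reached)"

# Demo with a fake model that requests one tool call, then finishes.
def fake_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"tool": None, "content": "2 + 3 = " + messages[-1]["content"]}
    return {"tool": "add", "args": (2, 3)}

print(run_agent(fake_model, {"add": lambda a: a[0] + a[1]}, "add 2 and 3"))
# prints "2 + 3 = 5"
```

      Everything interesting (context management, permissions, sandboxing, memory) is layered on top of this skeleton.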

    • j45 3 hours ago

      Clever architecture often can still beat clever programming.

  • rhofield 3 hours ago

    Really nice visualisation of this; it makes understanding the flow at a high level pretty clear. The tool system and command catalog, particularly the gated commands, are also super interesting.

  • p2detar 2 hours ago

    So it does use ripgrep and not unix grep. [0] I knew it from some other commenters here on HN, but it's nice to see it in the source as well.

    0 - https://github.com/zackautocracy/claude-code/blob/main/src/u...
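    A sketch of why preferring ripgrep makes sense for a tool like this (this is not Claude Code's actual code, just an illustration of the common pattern: use `rg`, which is fast and respects .gitignore, and fall back to plain grep when it's missing):

```python
import shutil

# Build a search command, preferring ripgrep when it's on PATH.
# Both flag sets are standard: `rg -n` and `grep -rn` print line numbers.
def search_cmd(pattern: str, path: str = ".") -> list[str]:
    if shutil.which("rg"):
        return ["rg", "-n", pattern, path]
    return ["grep", "-rn", pattern, path]

print(search_cmd("TODO"))
```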

  • simonreiff 3 hours ago

    Nice site. I might suggest moving SendMessage to the Hidden Features, as they don't appear to have implemented ReadMessage or ListMessages tools.

  • lastdong 3 hours ago

    I hope /Buddy is ported across to OpenCode.

  • nitnelave an hour ago

    Ah, good well-architected code, finally... With most of the code in utils/other :D

  • spirelab 44 minutes ago

    I got a goose

    War flashbacks to genshin

  • m132 3 hours ago

    I mean, I get it: vibe-coded software deserves vibe-coded coverage. But I would at least appreciate it if the main part of it, the animation, went at a speed that at least makes it possible to follow along and didn't glitch out with elements randomly disappearing in Firefox...

    How is this on the front page?

    • brabel 2 hours ago

      It's on the front page because it looks really cool. You can complain about it being vibe coded, but it still looks good. If you ask Claude to allow the user to slow down the animation, it can do that quite easily, that's just not a problem caused by vibe coding. And I'm on FF and didn't notice anything glitching out.

  • fsniper an hour ago

    Source leak or free code review? Either way, as they say, there is no bad publicity.

  • ramon156 3 hours ago

    I expect dozens more "research articles" that

    - find nothing
    - still manage to fill entire pages
    - somehow have a similar structure
    - are boring as fuck

    At least this one is 3/4, the previous one had BINGO.

  • fartfeatures 3 hours ago

    Ccleaks is down?

  • mdavid626 4 hours ago

    How the hell is it 500k lines?

    • twsted 3 hours ago

      It is vibe coded.

  • inside_story 4 hours ago

    cool Archaeologization Collection Output

  • jruohonen 4 hours ago

    Thanks, I'll use this for teaching next week (on what not to do). BashTool.ts :D But, in general, I guess it just shows yet again that the emperor has no clothes.

    • dgb23 an hour ago

      Are you not feeling the vibes?

      In all seriousness. I think you're supposed to run these in some kind of sandbox.

    • petesergeant 2 hours ago

      > it just shows yet again that the emperor has no clothes

      Which emperor, specifically?