I was looking at a production service we run that was using a few GBs of memory. When I add up all the actual data needed in a naive compact representation I end up with a few MBs. So much waste. That's before thinking of clever ways to compress, or de-duplicate or rearrange that data.
Back in the day, getting the 16KB expansion pack for my 1KB RAM ZX81 was a big deal. I also wrote code for PIC microcontrollers that have 768 bytes of program memory [and 25 bytes of RAM]. It's just so easy not to think about efficiency today: write one line of code in a high-level language and you blow through more bytes than these platforms had, without doing anything useful.
Long ago, working for a retail store chain, I made an Excel DSL to encode business rules for updating inventory spreadsheets. While coding, I realized that their Excel template had a bunch of cells containing only whitespace down on row 100000. This forced Excel to store the sparse matrix for the whole 0:100000 region, adding hundreds of KB for no reason. Multiply that by thousands of these files across their internal network. Out of curiosity I added empty-cell cleaning to my DSL, and I think I managed to fit the company's entire Excel file set on a small SD card (circa 2010).
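The cleanup itself is simple: drop trailing rows whose cells are only whitespace before writing the sheet back out. A minimal sketch in Go, using a plain [][]string grid as a stand-in for the real spreadsheet model (the function names are hypothetical, not from any Excel library):

```go
package main

import (
	"fmt"
	"strings"
)

// trimTrailingBlankRows drops rows at the bottom of a sheet whose cells
// contain only whitespace, so a sparse writer never has to cover them.
func trimTrailingBlankRows(rows [][]string) [][]string {
	last := len(rows)
	for last > 0 && isBlankRow(rows[last-1]) {
		last--
	}
	return rows[:last]
}

// isBlankRow reports whether every cell in the row is empty or whitespace.
func isBlankRow(row []string) bool {
	for _, cell := range row {
		if strings.TrimSpace(cell) != "" {
			return false
		}
	}
	return true
}

func main() {
	sheet := [][]string{{"sku", "qty"}, {"A1", "3"}, {"", "  "}, {" "}}
	fmt.Println(len(trimTrailingBlankRows(sheet))) // 2
}
```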
At some point, you just stop measuring the thing until the thing becomes a problem again. That lets you work a lot faster and make far more software for far less money.
It's the "fast fashion" of software. In the middle ages, a shirt used to cost about what a car does now, and was just as precious. Now, most people can just throw away clothes they no longer like.
It usually is. I try to think of these things not as "waste" but as "cost." As in, what does it cost vs. the alternative? You're using 40GB of some kind of storage. Let's say it's reasonably possible to reduce that to 20GB. What's the cost of doing so compared to the status quo? That memory-reduction effort, both the initial work and the ongoing maintenance, isn't free. Unless it costs a lot less to do that than to continue using more memory, we should probably continue to use the memory.
Yeah, there may be other benefits, but to a first approximation, that works. And you'll usually find that it's cheaper to just use more memory.
Sure, if you don't count safety features like memory management, crash handling, automatic bounds checks and encryption ciphers as anything useful.
I do completely agree that there is a lot of waste in modern software. But equally, there is also a lot more that has to be included in modern software that was never a concern in the 80s.
Networking stacks, safety checks, encryption stacks, etc. all contribute massively to software "bloat".
You can see how this quickly adds up if you write a "hello world" CLI in assembly and compare that to the equivalent in any modern language that imports all these features into its runtime.
And this is all before you take into account that modern graphics and audio are bitmap/PCM and run at resolutions literally orders of magnitude greater than anything supported by 80s microcomputers.
Yes, but this doesn't prevent you from being mindful and selecting the right tools: ones with a smaller memory footprint that still provide the features you need.
Go's "GC disadvantage" is turned on its head by "zero allocation" libraries, which run blazingly fast with fixed memory footprints. Similarly, rolling your own high-performance/efficient code where it matters can save tremendous amounts of memory.
Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
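The "zero allocation" pattern mostly comes down to appending into a caller-supplied buffer instead of allocating on every call. A small illustrative sketch in Go (the function names are made up, not from any particular library); testing.AllocsPerRun confirms the steady state allocates nothing:

```go
package main

import (
	"fmt"
	"testing"
)

// formatID writes into a caller-supplied buffer instead of allocating,
// the core trick behind "zero allocation" Go libraries.
func formatID(dst []byte, id uint64) []byte {
	dst = append(dst, "id-"...)
	return appendUint(dst, id)
}

// appendUint appends the decimal digits of n without any allocation.
func appendUint(dst []byte, n uint64) []byte {
	if n >= 10 {
		dst = appendUint(dst, n/10)
	}
	return append(dst, byte('0'+n%10))
}

func main() {
	buf := make([]byte, 0, 64) // one allocation up front
	allocs := testing.AllocsPerRun(1000, func() {
		buf = formatID(buf[:0], 123456) // reuses the same backing array
	})
	fmt.Println(string(buf), allocs) // id-123456 0
}
```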
> And this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater than anything supported by 80s micro computers.
This demo [0] is a 4kB executable. 4096 bytes. A single file. All assets (graphics, music and whatnot) are inside, and it can run at high resolutions with real-time rendering.
This is [1] 64kB and this [2] is 177kB. This game from the same group is 96kB with full 3D graphics [3].
Programming these days, in some realms, is a lot like shopping for food - some people just take the box off the shelf, don't bother with reading the ingredients, throw it in with some heat and fluid and serve it up as a 3-star meal.
Others carefully select the ingredients, construct the parts they don't already have, spend the time to get the temperatures and oxygenation aligned, and then sit down to a humble meal for one.
Not many programmers, these days, do code-reading like baddies, as they should.
However, kids, the more you do it the better you get at it, so there is simply no excuse for shipping someone else's bloat.
Do you know how many blunt pointers are lined up underneath your BigFatFancyFeature, holding it up?
> Go's "GC disadvantage" is turned on its head by developing "Zero Allocation" libraries which run blazingly fast with fixed memory footprints. Similarly, rolling your own high performance/efficient code where it matters can save tremendous amounts of memory where it matters.
The savings there would be negligible (in modern terms) but the development cost would be significantly increased.
> Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
Safety nets are not a waste. They're a necessary cost of working with modern requirements. For example, if your personal details were stolen via a MITM attack, I'm sure you'd be asking why that piece of software wasn't encrypting the data.
The real waste in modern software is:
1. Electron: but we are back to the cost of hiring developers
2. Application theming. But few actual users would want to go back to plain Windows 95 style widgets (many, like myself, on HN wouldn't mind, but we are a niche and not the norm).
> This demo [0] is a 4kB executable. 4096 bytes. A single file. All assets, graphics, music and whatnot, and can run at high resolutions with real time rendering.
You quoted where I said that modern resolutions are literally orders of magnitude greater and that assets are stored as bitmaps/PCM, then totally ignored that point.
When you wrote audio data in the 80s, you effectively wrote midi files in machine code. Obviously it wasn't literally midi, but you'd describe notes, envelopes etc. You'd very very rarely store that audio as a waveform, because audio chips then simply didn't support a high enough bitrate to make it sound good (nor did you have the storage space to save it). Whereas these days, PCM (eg WAV, MP3, FLAC, etc) sounds waaaay better than midi and is much easier for programmers to work with. But even a 2-second 16-bit mono PCM waveform is going to be more than 4KB.
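The arithmetic behind that last claim, assuming CD-quality 44.1 kHz sampling (an illustrative rate, not one stated in the thread):

```go
package main

import "fmt"

// pcmBytes returns the storage needed for raw PCM audio.
func pcmBytes(seconds, sampleRate, bytesPerSample, channels int) int {
	return seconds * sampleRate * bytesPerSample * channels
}

func main() {
	// 2 seconds of 16-bit (2-byte) mono audio at 44.1 kHz:
	fmt.Println(pcmBytes(2, 44100, 2, 1)) // 176400 bytes, ~43x a whole 4kB demo
}
```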
And modern graphics aren't limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you're increasing the colour depth by literally 32 times. And that's before you scale again from 64 pixels to thousands of pixels.
You're then talking about memory growth multiplying across every dimension.
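Putting rough numbers on that: a raw bitmap costs width × height × bits-per-pixel / 8 bytes, so every dimension multiplies the total. A quick check in Go (the 256x256 sprite is an illustrative modern size, not one from the thread):

```go
package main

import "fmt"

// bitmapBytes returns the raw size of a w×h image at the given bits per pixel.
func bitmapBytes(w, h, bitsPerPixel int) int {
	return w * h * bitsPerPixel / 8
}

func main() {
	fmt.Println(bitmapBytes(8, 8, 1))      // 8 bytes: a 1-bpp 8x8 sprite
	fmt.Println(bitmapBytes(256, 256, 32)) // 262144 bytes: a 32-bpp 256x256 sprite
}
```

Going from 8 bytes to 262144 bytes is a factor of 32768, and that is for a single sprite, before any screen-sized buffers.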
I've written software for those 80s systems and for modern systems too. And it's simply ridiculous to compare the graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth and audio bitrates.
Software 30 years ago was more amenable to theming. The more system widgets you use, the more effectively theming works, since it can swap them out.
Now, we have grudging dark-mode toggles that aren't consistent or universal, not even rising to the level of configurability you got with Windows 3.1 themes, let alone things like libXaw3d or libneXtaw, where the fundamental widget-drawing code could be swapped out silently.
I get the impression that since about 2005, theming has been on the downturn. Windows XP and OSX both were very close to having first class, user-facing theming systems, but both sort of chickened out at the last minute, and ever since, we've seen less and less control every release.
I think what you're describing as "theming" is more "custom UI". It used to be reserved for games, where stock Windows widgets broke immersion in a medieval fantasy strategy simulator and you were legally obliged to make the cursor a gauntlet or sword. But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!" when your application is a smart-bulb controller or a text editor that could perfectly well fit with native widgets.
We are talking about software development, not user configuration. So "theming" here clearly refers specifically to applications shipping non-standard UIs.
This also isn't a trend that Electron started. Software has been shipping with bespoke UIs for nearly as long as UI toolkits have been a thing.
> The savings there would be negligible (in modern terms)
A word of praise for Go: it is pretty performant while using very little memory. I inherited a few Django apps, and each thread just grows to 1GB. Running something like Celery quickly eats up all memory and starts thrashing. My Go replacements idle at around 20MB, and are a lot faster. It really works.
> The savings there would be negligible (in modern terms) but the development cost would be significantly increased.
...and this effort and small savings here and there is what brings the massive savings at the end of the day. Electron is what "4KB here and there won't hurt", "JS is a very dynamic language so we can move fast", and "time to market is king, software is cheap, network is reliable, YOLO!" banged together. It's a big "Leeroy Jenkins!" move in the worst possible sense, making users pay everyday with resources and lost productivity to save a developer a couple of hours at most.
Users are not cattle to milk, they and their time/resources also deserve respect. Electron is doing none of that.
> You quoted where i said that modern resolutions are literally orders of magnitude greater and assets stored in bitmaps / PCM then totally ignored that point.
Did you watch or run any of these demos? Some (if not all) of them scale to 4K, and all of them have more than two colors. All are hardware accelerated, too.
> And modern graphics aren't limited to 2 colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32bits (not colours, bits) and you're increasing the colour depth by literally 32 times. And that's before you scale again from 64 pixels to thousands of pixels.
Sorry to say, but I know what graphics and high-performance programming entail. I had two friends develop their own engines, and I manage HPC systems. I know how much memory matrices need, because everything is matrices after a point.
> Safety nets are not a waste.
I didn't say they are waste. That quote is out of context. Quoting my comment's first paragraph, which directly supports the part you quoted: "Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need."
So, what I argue is: you don't have to bring in everything and the kitchen sink if all you need is a knife and a cutting board. By all means bring a countertop and some steel gloves to keep from cutting yourself.
> I've written software for those 80s systems and modern systems too. And it's simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.
Me too. I also record music and work on high performance code. While they are not moving much, I take photos and work on them too, so I know what happens under the hood.
I agree. I even said in my comment that Electron was one piece of bloat I didn't agree with. So it wasn't factored into the calculations I was presenting to you.
> Did you watch or ran any of these demos? Some (if not all) of them scale to 4K and all of them have more than two colors.
You mean the ones you added after I replied?
> I didn't say they are waste. That quote is out of context.
Every part of your comment was quoted in my comment. Bar the stuff you added after I commented.
> Had two friends develop their own engines
I have friends who are doctors, but that doesn't mean I should be giving out medical advice ;)
> Just watch the demos. It's worth your time.
I'm familiar with the demo scene. I know what's possible with a lot of effort. But writing cool effects for the demo scene is very different to writing software for a business, which has to offset developer costs against software sales and delivery deadlines.
I'm also not advocating that software should be written in Electron. My point was that modern software, even without Electron, is still going to be orders of magnitude larger in size, for the reasons I outlined.
I made no edits after your comment appeared. Yes, I made edits, but your reply was not visible to me while I made them. Sometimes HN delays replies, and you're accusing me of something I didn't do. That's not nice.
> writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.
The point is not "cool effects" and "infinite time", though. If we continue talking about Farbrausch, they are not a bunch of nerds pumping out raw assembly for effects. They have their own framework, libraries and whatnot. Not dissimilar to business software development. So their code is not that different from a business software package.
As for size: while you can't fit a whole business software package into 64kB, you don't need to choose the biggest and most inefficient library "just because". By spending a couple of hours more, you might find a better library/tool that lets you create a much better software package.
Again, for the third time: while safety nets and other doodads make software packages bigger, cargo culting and worshipping deadlines and ROI more than the product itself contribute more to software bloat. That's my point.
Oh I overlooked this gem:
> I have friends who are doctors, but that doesn't mean I should be giving out medical advice ;)
Yet we designed some parts of those engines together, and I had the pleasure of fighting with GPU drivers alongside them, trying to understand what the hardware was doing while it ignored our requests.
IOW, yep, I didn't write one, but I was neck deep in both of them, for years.
> I made no edits after your comment appeared. Yes, I made edits, but your reply was not visible to me while I made them.
Which isn't the same thing as what I said.
I'm not suggesting you did it maliciously, but the fact remains they were added afterwards, so it's understandable I missed them.
> Yet we designed some parts of those engines together, and I had the pleasure of fighting with GPU drivers alongside them, trying to understand what the hardware was doing while it ignored our requests.
That is quite a bit different from your original comment, though. This would imply you also worked on the game engines, and it wasn't just your friends.
I haven't been following the scene for the last couple of years, but I doubt that. On the other hand, there are other very capable people doing very interesting things.
That C64 demo doing sprite wizardry and 8088MPH come to mind. The latter, as you most probably know, can't be emulated since it (ab)uses the hardware directly. :D
As a bit of trivia: after watching .the .product, I declared "if a computer can do this with a 64kB binary, and people can make a computer do this, I can do this", and high-performance/efficient programming became my passion.
From any mundane utility to something performance sensitive, that demo is my north star. The code I write shall be as small, performant and efficient as possible while cutting no corners. This doesn't mean everything is written in assembly, but utmost care is given to how something I write works and feels while it's running.
I'm on my phone so cannot run it, but you cannot generate data and not store it somewhere. It's going to consume either system resources (RAM/storage) or video resources (VRAM).
If your point is that it uses gigabytes of VRAM instead of system memory, then I think that is an extremely weak argument for how modern software doesn't need much memory, because all you're doing is shifting that cost from one stack of silicon to a different stack of silicon. But the cost is still the same.
The only way around that is to dynamically generate those assets on the fly and stream them to the video card. But then you're sacrificing CPU efficiency for memory efficiency. So the cost is still there.
And I've already discussed how data compresses better as vectors than as bitmaps and PCM, but is significantly harder to work with than bitmaps and waveforms. Using vectors/trackers is another big trick for demos that isn't really practical for a lot of day-to-day development, because they take a little more effort and the savings in file sizes are negligible for people with multi-GB (not even TB!!!) disks.
As the saying goes: there's no such thing as a free lunch.
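That trade is easy to make concrete with audio: compute each sample when it's needed instead of storing the waveform. A sketch in Go (the 440 Hz tone and 44.1 kHz rate are illustrative choices, not from the thread):

```go
package main

import (
	"fmt"
	"math"
)

// sineSample computes one 16-bit PCM sample of a tone on demand,
// so no waveform ever has to be stored.
func sineSample(i, sampleRate int, freq float64) int16 {
	t := float64(i) / float64(sampleRate)
	return int16(32767 * math.Sin(2*math.Pi*freq*t))
}

func main() {
	const rate = 44100
	// Stored, 2 seconds of this tone would need 2*44100*2 = 176400 bytes.
	// Generated, it needs only the code above; each sample costs one Sin call.
	fmt.Println(sineSample(0, rate, 440.0)) // 0: the wave starts at zero
	fmt.Println(2 * rate * 2)               // 176400: the stored-PCM cost avoided
}
```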
All demos I have shared with you are designed to run on resource constrained systems. Using all the resources available on the system is a big no no from the start.
Instead, as you guessed, these demos generate assets on the fly and stream them to the respective devices. You cite inefficiencies; I say they run at more than 60 FPS on these constrained systems. Remember, these are early-2000s systems. They are not that powerful by today's standards, yet these small binaries use them efficiently and generate real-time rendered CG on the fly.
Nothing about them is inefficient or poor. Instead they are marvels.
That's not what I said. I said you're trading memory footprint for CPU footprint.
This is the correct way to design a demo but absolutely the wrong way to design a desktop application.
They are marvels, I agree. But, as I said before, there's no such thing as a free lunch. At the risk of stating the obvious: if there weren't a trade-off to be made, then all software would be written that way already.
I would also add internationalization. There were multi-language games back in the day, but the overhead of producing different versions for different markets was extremely high. Unicode has... not quite trivialized this, but certainly made a lot of things possible that weren't.
Much respect to the people who've managed to retrofit it: there are guerrilla-translated versions of some Japanese-only games.
> this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater
Yes, people underestimate how much this contributes, especially to runtime memory usage.
The 48k Spectrum had a 1-bit "framebuffer" with colours allocated to 8x8 character tiles. Most consoles of the time were entirely tile/sprite based, so you never had a framebuffer in RAM at all.
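The sizes are easy to verify: the Spectrum's entire screen, bitmap plus colour attributes, fits in under 7KB, while one uncompressed 32-bit 4K frame needs roughly 32MB. A quick Go calculation:

```go
package main

import "fmt"

// spectrumScreenBytes: a 1-bpp 256x192 bitmap plus one colour attribute
// byte per 8x8 character cell.
func spectrumScreenBytes() int {
	return 256*192/8 + (256/8)*(192/8)
}

// framebuffer4KBytes: a single 32-bit 3840x2160 framebuffer.
func framebuffer4KBytes() int {
	return 3840 * 2160 * 4
}

func main() {
	fmt.Println(spectrumScreenBytes()) // 6912 bytes: the whole Spectrum screen
	fmt.Println(framebuffer4KBytes())  // 33177600 bytes: ~32 MB per frame
}
```

That is a factor of nearly 5000 for a single frame, before double buffering or any textures.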
I think it's a valid view that (a) we have way more resources and (b) sometimes they are badly used, in ways that result in systems being perceptibly slower than a C64, when measured in raw latency between user input and interaction response. Usually because of some crippling system bottleneck that everything is forced through.
Not "most", but definitely a depressingly increasing number.
And as I said elsewhere, I do consider Electron to be bloat.
But it's also worth discussing Electron as an entirely separate topic, because it's a huge jump in memory requirements from even "bloated" native apps.
This I think is a core part of the problem when discussing sizes from C64 era to modern applications:
1. You have modern native apps vs Electron
2. Encryption vs plain text
3. High resolution media vs low resolution graphics and audio
4. Assembly vs high level runtimes
5. static vs dynamically linked libraries
6. Safety harnesses vs unsafe code
7. Expected features like network connectivity vs an era when that wouldnât be a requirement
8. Code that needs to be supported for years of updates by a team of developers vs a one man code base that never gets looked at again after the cassettes get shipped to retail stores.
...and so on.
Each of these individually can contribute massively to differences in file sizes and memory footprints. And yet we are not defining those parameters in this discussion so we are each imagining a different context in our argument.
And then you have other variables like:
1. What counts as large? 5GB is big by today's standards, but even 5MB would have been unimaginable by C64 standards, and that is 4 orders of magnitude smaller. One commenter even discussed 250GB as "big", which is unimaginable to today's typical user.
2. Are we talking about disk space or RAM? One commenter discussed using GBs of GPU memory as a way to save system memory, but that feels like a cop-out to me, because it's still GBs of system resources, which the C64 never had.
3. Software complexity: it takes a lot more effort to release software these days, because you work as a team and need to adhere to security best practices. And we still see plenty of occasions where people get that wrong. So it makes sense that people will use general-purpose libraries instead of building everything from scratch to reduce the footprint, particularly when developers are expensive and projects have (and always have had) deadlines that need to be met. So do we factor developer efficiency into our equation or not?
In short, this is such a fuzzy topic that I bet everyone is arguing a similar point but from a different context.
I implemented a system recently that is a drop-in replacement for a component of ours. The old one used 250GB of memory; the new one uses 6GB. Exactly the same from the outside.
Bad code is bad code and poor choices are poor choices, but I think it's often pretty fair to judge things harshly on resource usage.
Sure, but if you're talking about 250GB of memory then you're clearly discussing edge cases vs normal software running on an average person's computer. ;)
Back in the day people had BASIC, and some machines had Forth, and it was like
print "Hello world"
or
." Hello world " / .( Hello world )
for Forth.
By comparison, given how they optimized the games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need damn swap for something that required far less RAM 15 years ago for the same features.
If the game were reimplemented in Go it wouldn't feel many times slower. But no, we are suffering the worst from both sides of the coin: C++, something that should have been replaced by Inferno (from the Plan 9 people, the creators of C and Unix, and now of Go, its cousin), with horrible compile times, horrible and incompatible ABIs, featuritis, crazy template syntax and, if you are lucky, memory safety.
Meanwhile, I wish the forked Inferno/Purgatorio had a seamless (no virtual desktops) mode, so you could fire up an application in a VM integrated with the host window manager (a la Java) and that's it. Limbo+Tk+SQLite would have been incredible for CRUD/RAD software once the GUI was polished up a little, with sticky menus like TCL/Tk and the like. In the end, if you know Go you can learn Limbo's syntax (same channels, too) with ease.
BASIC was slow in the 80s. Games for the C64 (and similar machines) were written in machine code.
> By comparison, given how they optimized the games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need damn swap for something that required far less RAM 15 years ago for the same features.
That's not crazy. You're comparing an interpreted, line-delimited, ASCII language with a compiler that converts structured ASCII into machine code.
The two processes are as different from one another as driving a bus is to being a passenger on it.
I don't understand what your point is in the next two paragraphs, or what Go, TCL, UNIX or Inferno have to do with the C64 or modern software. So you'll have to help me out there.
Compare Limbo+Tk under Inferno with current C#/Java. Or C++ against Plan9C.
We have impressive CPUs running really crappy software.
Remember Claude Code asking for 66GB for a damn CLI AI agent, for something NetBSD on a VAX (real or virtual) from 1978 could do with ncurses in milliseconds, every time you spawn Nethack or any other ncurses tool/game.
On speed, Forth for the ACE was faster than BASIC running on the ZX80. So it wasn't about using a text-parsed language. Forth was fast, but people weren't ready for RPN or for managing the stack; people thought in an algebraic way.
But that was an 'obsolete' mindset, because once you hit high school you were supposed to split big problems into smaller tasks (equations). To implement a second-degree equation solver in Forth you wouldn't juggle the stack; you created discrete functions (words) for the discriminant part and so on.
In the end you just managed two stack items per step.
If Forth had won instead of BASIC, then instead of allowing spaghetti code as normal procedure we would have been asked to decompose code into small functions as the right thing to do from the start.
Most dialects of BASIC actually had functions too. They just weren't popularised, because line numbers were still essential for line editing on home micros.
> On speed, Forth for the ACE was faster than BASIC running on the ZX80. So it wasn't about using a text-parsed language.
Forth and BASIC are completely different languages, and you're arguing a different point to the one I made, too.
Also, I don't see much value in hypothetical arguments like "if Forth had won instead of BASIC", because it didn't, and thus we are talking about actual systems people owned.
I mean, I could list a plethora of technologies I'd have preferred to dominate, Pascal and LISP being two big examples. But the C64 wasn't a Lisp machine and people aren't writing modern software in Pascal. So they're completely moot to the conversation.
They were different but both came in-ROM and with similar storage options (cassette/floppy).
On Pascal: Delphi was used for tons of RAD software in the 90s, both for the enterprise and for home users, with zillions of shareware (and shovelware) programs. And Lazarus/FPC + SQLite3 today is not bad at all.
On Lisp: it was used in niche places such as game engines, Emacs (Org Mode today is a beast), a whole GNU-supported distro (in Scheme), and Maxima, among others.
Still, so-called low-level C++ is an example of things taking the wrong route. C++ and Qt5/6 can be performant enough. But for a roguelike, the compile-time performance is atrocious, and by design Go with its GC would fix 90% of the problems and even gain more portability.
I'm very aware of Lazarus, Delphi and Emacs. But they're exceptions rather than industry norms.
And thus pointing them out misses the point I was making when, ironically, I was pointing out how you're missing the original point of this discussion.
My point was about performance. Yes, BASIC vs Forth was the worst choice back in the day, and you could say low-level stuff was done in assembler.
Fine. But the correct choice for 'low level' stuff is C++, and I maintain that most C++ compilers have huge compile times (GCC), or are much better but still eat RAM like crazy (Clang), and except for a few pieces of software, the performance boost compared to Go doesn't look huge for most tasks, except for Chromium/Electron and Qt.
For what software is doing 90% of the time, Go plus a nice UI toolkit would be enough to cover most tasks while having a safe language to use. Even for bloated proprietary IM clones such as Discord and Slack.
Because, ironically, most of the optimized C++ code exists to run bloated runtimes like Electron, throwing away everything C++ gives you, because most Electron software implements half an OS in every application.
With KDE and QT at least you are sharing code, even by using Flatpak, which somehow deduplicates stuff a little bit. With Electron you are running separate, isolated silos with no awareness of each other. You are basically running several 'desktop environments' at once.
You can say: hey, Go statically builds everything, so there's no gain from shared libraries... until you find the Go compiler can still do a better job, using less RAM on average than tons of other stuff.
With Electron you are often shipping the whole debugging environment along with your app, loaded, and running graphical software with far less performance than the 'bloated' KDE3 software of the day doing bells and whistles in a Kopete chat window on an AMD Athlon. Qt3 tools felt snappy. Seeing Electron-based software everywhere has the appeal of running every GUI under TCL/Tk on a Pentium, modulo video decoders and the like. It will crawl next to pure Win32/Xlib on a Pentium 90 if everything is a Tk window with debugging options enabled.
So, these are our current times. You've got an i7 with 16GB of RAM and barely get any improvement with modern 'apps' over an i3 with 2GB of RAM and native software.
You're talking about compiler footprint and runtime footprint in the same conversation, but they're entirely different processes (obviously), and I don't think it makes any sense to compare the two.
C++ is vastly more performant than Go. I love Go as a language, but let's not get carried away about Go's performance.
It also makes no sense to talk about Electron as C++. The problem with Electron isn't that it was written in C++; it's that it's ostensibly an entire operating system running inside a virtual machine executing JIT code.
You talked about using Go for UI stuff, but have you actually tried? I've written a terminal emulator in Go, and UI performance was a big problem. Almost everything requires either cgo (thus causing portability problems) or tricks like WASM or dynamic calls that introduce huge performance overheads. This was something I benchmarked in SDL, so I have first-hand experience.
Then you have the issue that GUI operations need to be owned by the main OS thread, which causes problems writing idiomatic Go that calls GUI widgets.
And then you have a crap-load of edge cases for memory leaks, where Go's GC will clear the pointers but any allocations that happened outside of Go need to be manually deallocated.
In the end I threw out all the SDL code. It was slow to develop, hard to make pretty, and hard to maintain. It worked well but it was just far too limiting. So I switched to Wails, which basically displays a WebKit window (on macOS), so it has a lower footprint than Electron, lets you write native Go code, and is super easy to build UIs with. I hate myself for doing this, but it was by far the best option available, depressingly.
I know C++ it's far more performant than Go but for some games and software C++ wouldn't be needed at all, such as nchat with tdlib (the library should be a Go native one by itself, is not rocket science). These could be working close in low end machines with barely performance losses.
In these cases there's nothing to gain with C++, because even compared to C, most C++ software -save for Dillo and niche cases- won't run as snappy as C ones. Running them under Golang won't make them unusable, for sure.
On the GUI, there's Fyne; but what Go truly needs it's a default UI promoted from the Golang developers written in the spirit of Tk.Tk itself would be good enough. Even Limbo for Inferno (Go's inspiration) borrowed it from TCL. Nothing fancy, but fast and usable enough for most entry tasks.
Python ships it by default because it weights near NIL and most platforms have a similar syntax to pack the widgets. Is not fancy and under mobile you need to write dedicated code and set theming but again if people got to set Androwish as a proof of concept, Golang could do it better...
Another good use case for Go would be Mosh. C++ and Protobuf? Go would have been good for this. C++ Mosh would be snappier (you can feel it with software like Bombadillo and Amfora vs Telescope), but on 'basic' modern machines (the first 64-bit machines with Core Duo or AMD64 processors) there would be almost no perceptible delay for the user.
Yes, 32-bit machines, sorry, but by 2030 and up I expect those to be like using 16-bit DOS machines in 1999. Everyone moved on, and 32-bit machines were cheap enough. Nowadays it's the same: I own an Atom N270 and I love it, but I don't expect to reuse it as a client or for Go programming (except maybe Eforth) in 4 years; I'd expect to compute everything on the low-end 64-bit machines I own.
But it will be a good Go test case, for sure. If it runs fast on the Atom, it will shine under amd64.
With the current crisis, everyone should expect to refurbish and keep 'older' machines just in case. And be sure that long compile times will need to be cut in half, even if you use ccache. RAM and storage will be expensive, and current practices will be largely discarded. Yes, C++ will still be used in those times, but Go too. Forget Electron/Chromium being used as a standalone toolkit outside of being the engine of a browser.
And if oil/gas usage is throttled for the common folk, EVs and electric heating will reach crazy numbers. Again, telecoms and data centers will see their prices skyrocket so the rising power draw doesn't black out a whole country or state. Expect computing power caps, throttled resolutions for internet media/video/RDP content, even bandwidth caps (unless you pay a premium price, that is) and tons of changes. React developers using 66GB of RAM for Claude Code... forget it.
Either they rebase their software onto Go... or they've already lost.
>Sure, if you don't count safety features like memory management, crash handling, automatic bounds checks and encryption ciphers as anything useful.
>Networking stacks, safety checks, encryption stacks, etc all contribute massively to software "bloat".
They had most of this stuff in the 1980s, and even earlier really. Not on your little 8-bit microcomputer that cost $299 that you might have had as a kid, but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.
> They had most of this stuff in the 1980s, and even earlier really. Not on your little 8-bit microcomputer that cost $299 that you might have had as a kid
Those are the systems we are talking about though.
> but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.
Actually, these systems didn't. In the early 80s most protocols were still plain ASCII. Even remote shell connections weren't encrypted. Remember that SSH wasn't released until 1995. Likewise for SSL.
Time-sharing systems were notoriously bad at sandboxing users too. Smart pointers, while available since the 60s, weren't popularised in C++ until the 90s. Memory-overflow bugs were rife (and still are) in C-based languages.
If you were using Fortran or ALGOL, then it was a different story. But by the time the 80s came around, mainframe OSes weren't being written in FORTRAN / ALGOL any longer. Software running on top of them might be, but you're still at the mercy of all that insecure C code running beneath it.
DES wasn't commonplace though (or at least not on the mainframes I worked on). But maybe that says more about the places I worked early on in my career?
Also DES is trivial to crack because it has a short key length.
Longer keys require more compute power and thus the system requirements to handle encryption increase as the hardware to decrypt becomes more powerful.
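The point about key length and compute is easy to put numbers on. A back-of-the-envelope check, assuming a hypothetical rig doing 10^12 trial decryptions per second (the rate is an illustrative assumption, not a benchmark):

```python
des_keys = 2 ** 56        # DES effective key space
aes128_keys = 2 ** 128    # a modern symmetric key space

# At 10^12 trials/second, exhausting the DES key space takes
# on the order of a single day:
seconds = des_keys / 1e12
print(seconds / 86400)            # ~0.83 days

# The same rig against 128-bit keys needs 2**72 times more work,
# which is why longer keys stay safe even as hardware improves.
print(aes128_keys // des_keys)    # 2**72
```

This is the whole trade-off in the comment above: every extra key bit doubles the attacker's work, and the defender's hardware requirements grow far more slowly than the attacker's.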
The key size in IBM's design was larger before standardisation. DES is trivial to break because of NSA involvement in weakening it at every corner. [0]
> In the development of the DES, NSA convinced IBM that a reduced key size was sufficient;
Minitel used DES, and other security layers, and was in use for credit cards, hospitals, and a bunch of other places. The "French web" very nearly succeeded, and did have these things in '85. It wasn't just mainframes - France gave away Minitel terminals to the average household.
Yeah, I'd written about Minitel in a tech journal several years back. It's a fascinating piece of technology, but sadly I never got to see one in real life.
I worked on one payroll mainframe in the 80s that didn't have DES. So it wasn't quite as ubiquitous as you might think. But it does still sound like it was vastly more widespread than I realised too.
This. An old netbook can emulate a PDP-10 with ITS, Maclisp and some DECNET/TCP-IP clients and barely suffer any lag...
Also the Amigas have AmiSSL, and it will run on a 68040 or some FPGA with the same constraints. IRC over TLS, Gemini, JS-less web, Usenet, email... none of it requiring tons of GB.
Nowadays even the Artemis crew can't properly launch Outlook. If I were the IT manager I'd just set up Claws Mail/Thunderbird with file attachments, msmtp+isync as backends (caching and batch sending/receiving of email - you know, high-end technology inspired by the 80s) and NNCP to relay packets where connectivity cuts in space are a given, so NNCP can just push packets on demand.
The cost? My Atom N270 junk can run NNCP, and it's written in damn Golang. Any user can understand Thunderbird/Claws Mail. They don't need to set up anything; the IT manager would set it all up and the mail client would run seamlessly - you know, with a fancy GUI for everything.
Yet we are suffering the 'wonders' of vibe coding and Electron programmers pushing fancy technology where the old stuff would just work, as it's been tested like crazy.
> Also the Amigas have AmiSSL, and it will run on a 68040 or some FPGA with the same constraints. IRC over TLS, Gemini, JS-less web, Usenet, email... none of it requiring tons of GB.
The AmiSSL came out long after the C64 was a relic and required hardware that was an order of magnitude more powerful than the C64 ;)
The BASIC 10Liner competition wants you to know that there is a growing movement of hackers who recognize the bloat and see, with crystal clarity, where things kind of went wrong ...
".. and time and again it leads to amazingly elegant, clever, and sometimes delightfully crazy solutions. Over the past 14 editions, more than 1,000 BASIC 10Liners have been created - each one a small experiment, a puzzle, or a piece of digital creativity .."
Pretty sure I made it clear I looked at it, and looks like a domain squatter with no relation to the original comment. Why would I click around further?
Edit: Also y'know what? Those years aren't there on page load. They zoom in a few seconds later. I may not have even seen them, just Wix and then scrolled down to the German text that apparently refers to a school computer lab.
There was one time I was troubleshooting why an app used at a company would crash after some amount of time passed. Investigating the crash dumps showed it using 4GB of RAM before it died - suspiciously, the 32-bit address limit for the application.
Turned out they never closed the files it worked on, so over time it just consumed RAM until there wasn't any more for it to access.
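The classic fix for this class of leak is to tie each handle's lifetime to a scope so it cannot be forgotten. The original app was presumably not Python, but the pattern is universal; a minimal before/after sketch:

```python
def leaky_read(paths):
    # Anti-pattern: every handle (and its buffers) stays alive until
    # the process exits or the GC happens to notice - exactly the
    # slow creep toward the 4GB ceiling described above.
    return [open(p).read() for p in paths]

def scoped_read(paths):
    # The with-block guarantees each file is closed as soon as it has
    # been read, so resource use stays flat no matter how long it runs.
    contents = []
    for p in paths:
        with open(p) as f:
            contents.append(f.read())
    return contents
```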
I grew up with and absolutely adore The Last Ninja series. I'm not going to comment on the size thing because it's so trite.
Instead - here's [0] Ben Daglish (on flute) performing "Wastelands" together with the Norwegian C64/Amiga tribute band FastLoaders. He unfortunately passed away in 2018, just 52 years old.
If that tickled your fancy, here's [1] a full concert with them where they perform all songs from The Last Ninja.
The first time I ever heard The Glitch Mob, I had such a clear memory of this game's soundtrack come to mind that I mentioned it to my brother soon after (as it was his Commodore and his copy of the game I was playing when I was young). I'm not even sure the song I heard sounds like the game's soundtrack particularly closely, but the connection in my mind was very strong.
> isometric on the C64 with such an amazing level of detail - simply gorgeous
Or a convincing representation of that. A lot of old tricks mean that the games are doing less than you think they are, and are better understood when you stop asking "how do they do that" and start asking "how are they convincing my brain that this is what they are doing".
Look at how little RAM the original Elite ran in on a BBC Model B, with some swapping of code on disk [0]. 32KB, less the 7.75KB taken by the game's custom screen mode [2] and a little more reserved for other things [1]. I saw breathless reviews at the time, and have seen similar nostalgic reviews more recently, talking about "8 whole galaxies!" when the game could easily have had far more than that, and was at one point going to. They cut it down not for technical reasons but because having more didn't feel usefully more fun and might actually have put people off. The galaxies were created by a clever little procedural generator, so adding more would have only added a couple of bytes each (to hold the seed and maybe other params for the generator).
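The trade is easy to see in a toy version: a whole "galaxy" reduces to one integer, and everything else is regenerated on demand. This is only an illustrative sketch of the idea, not Elite's actual generator (which twisted a 48-bit seed through Fibonacci-style additions); the PRNG and syllable table here are made up:

```python
def lcg(seed):
    """A tiny deterministic PRNG: same seed in, same sequence out."""
    while True:
        seed = (seed * 1103515245 + 12345) & 0x7FFFFFFF
        yield seed

SYLLABLES = ("la", "ve", "ti", "so", "ri", "qu", "an", "ce")

def system_name(seed):
    """Derive a pronounceable-ish star system name purely from a seed."""
    rng = lcg(seed)
    return "".join(SYLLABLES[next(rng) % len(SYLLABLES)] for _ in range(3)).capitalize()

# A "galaxy" of 256 systems costs one integer of storage:
galaxy_seed = 0x5A4A
systems = [system_name(galaxy_seed + i) for i in range(256)]
assert systems == [system_name(galaxy_seed + i) for i in range(256)]  # reproducible
```

Since the data is a pure function of the seed, shipping eight galaxies versus eight thousand changes the ROM footprint by almost nothing - which is exactly why the cut was a design decision, not a technical one.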
Another great example of not quite doing what it looks like the game is doing is the apparently live-drawn 3D view in the game Sentinel on a number of 8-bit platforms.
--------
[0] There were two blocks of code that were swapped in as you entered or left a space station: one for while docked and one for while in-flight. Also, the ship blueprints were not all in memory at the same time; a different set was loaded as you jumped from one system to another.
[1] The CPU call stack (technically up to a quarter KB, though the game code only needed less than half of that), scratch space in page zero mostly used for game variables, some of which was also used by things like the disk controller ROM and sound generator, etc.
[2] Normal screen modes close to that consumed 10KB. Screen memory consumption on the BBC Master Enhanced version was doubled, as it was tweaked to use double the bit depths (4bpp for the control panel and 2bpp for the exterior, instead of 2bpp and 1bpp respectively).
I'd say up to a couple of hundred is much more than 40. Not a full decimal order of magnitude, but even without compression the 170KB on one side is up to 4½×.
You can access nearly 64kb of RAM on the C64, if you don't need the BASIC or Kernal (sic) ROMs. They can be software toggled in or out. Agreed that even the tape had more game data than that, but not much more.
However, very few tapeloader games ever tried to load more assets from tape. Generally it would just load a memory image and that would be that for the entire game.
But that's also kind of what makes it impressive in a different way. Even if the game was larger on disk/tape, they still had to stream it in tiny chunks and make it run within those constraints
If we're talking about fitting a quart into a pint pot, it would be remiss not to mention Elite fitting into a BBC Model B (32KB), and the excellent code archaeology of it and its variants by Mark Moxon here: https://www.bbcelite.com/
We lost something in the bloat, folks. It's time to turn around and take another look at the past - or at least re-adjust the rearview mirror to actually look at the road and not one's makeup ..
Gluecode-First Engineering: the free-love utopia of sharing code resulted in engineers abandoning whole-design and defaulting to just creating mash-ups of pre-existing code.
Nobody designs whole apps anymore; it's all about minimizing the glue code written for the 1200 dependencies that make your app buzzword-compliant.
Funny, because I rewrote a bad port of Dragon's Lair for a custom console: a tiny engine and a relatively huge dataset, each frame having one "if press X goto frame Y" instruction.
Most games back then were small. A C64 only had 64K and most games didn't use all of it. An Atari 800 had at most 48K; it wasn't until the 1200XL that it went up. Both systems ran cartridge-based games, many of which were 8K.
Honestly though, I don't read much into the sizes. Sure, they were small games and had lots of game play for some definition of game play. I enjoyed them immensely. But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc... for me at least, the modern affordances mean something. Of course I don't need every game to look like Horizon Zero Dawn. A Short Hike was great. It's also 400MB (according to Steam).
> Sure, they were small games and had lots of game play for some definition of game play. I enjoyed them immensely. But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc... for me at least, the modern affordances mean something.
On one hand, you're of course right. It is hard to go back, except for the nostalgia.
On the other, do you know there is a scene of people still making brand new games for the Commodore 64 (and other home computers)? And selling them, too, these are not just free games. Of course the target audience is themselves, they make, sell and buy games within the community, but the point is it still exists.
Also there are artists making art in C64 graphics resolutions and color modes, and even PETSCII art enthusiasts (PETSCII is C64's text mode, which had some interesting symbols which facilitate creativity).
>But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc... for me at least, the modern affordances mean something.
All those old games have a way to save now, if you run them in an emulator as is commonly done these days. That's how I played through Metroid and finally beat the mother brain in just a day or two during the pandemic.
Wasn't pretty much every 8-bit computer game of 1987 or earlier (before the 128KB machines became popular) under 40KB? The Spectrum and Commodore combined probably had a library in excess of 50,000 games.
The publisher for this game was Activision. They absolutely had deadlines, lots of (1987) money invested in this, outsourced to a third party company in Hungary, had the outsource team fail, moved development platforms a few times, wrote a programming language and a game engine, and then became the best selling C64 game.
x264 supports a lossless mode without chroma subsampling, which produces very good compression for raw emulator captures of retro game footage. It is much better than other codecs like HuffYUV, etc.
But for some reason, Firefox refuses to play back those kinds of files.
> But for some reason, Firefox refuses to play back those kinds of files.
And that reason is that x264 is a free and open-source implementation of the H.264 codec, and you still need to pay for a license to use the patented technology regardless of how you do that. Using a free implementation of the codec doesn't get you a free license for the codec.
I'm not sure this is particularly telling. You can write a tiny program that generates a 4K image, and the image could be 1000x larger.
Or, if I write a short description "A couple walks hand-in-hand through a park at sunset. The wind rustles the orange leaves.", I don't think it would be surprising to anyone that an image or video of this would be relatively huge.
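The asymmetry is easy to demonstrate: a few hundred bytes of code can emit tens of megabytes of image data. A sketch that generates an uncompressed 4K-sized gradient (raw RGB bytes, no file format):

```python
W, H = 3840, 2160                    # 4K UHD
row = bytearray()
for x in range(W):
    shade = (x * 255) // (W - 1)     # horizontal gray gradient
    row += bytes((shade, shade, shade))
frame = bytes(row) * H               # every row is identical

print(len(frame))                    # 24,883,200 bytes (~24 MB)
# The source above is a few hundred bytes; its output is roughly
# five orders of magnitude larger.
```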
I shipped a browser game that was 8KB. Okay, plus 30 million lines of Chromium ;)
Most of my games are roughly in that range though. I think my MMO was 32KB, and it had a sound effects generator and speech synth in it. (Jsfxr and SAM)
I built it in a few days for a game jam.
I'm not trying to brag, I'm trying to say this stuff is easy if you actually care. Just look at JS13K. Every game there is 13KB or below, and there's some real masterpieces there. (My game was just squares, but I've seen games with whole custom animation systems in them.)
Once you learn how, it's pretty easy. But you'll never learn if you don't care.
You have to care because there's nothing forcing you. Arguably The Last Ninja would have been a lot more than 40KB if there weren't the hardware limitations of the time.
They weren't trying to make it 40KB, they were just trying to make a game.
In my case, I enjoy the challenge! (Also I like it when things load instantly :)
I think I'll make a PS1 game next. I was inspired by this guy who made a Minecraft clone for Playstation:
Yeah, the games industry is in a pretty big crisis right now, and I think change needs to happen both ways:
Consumers need to understand that keeping games at the same price for decades despite rising costs and inflation is not realistic. If they want the industry to thrive, they need to be ok with games being more expensive.
Meanwhile, developers need to stop making games so expensive. This is an entertainment industry / corpo problem, really. Companies have seen the big profits and decided that only the big profits will do, which means you need to make a big open world cinematic experience, which is expensive, and because it's expensive, they won't take risks on making anything actually interesting.
The only way gaming moves forward is if we make riskier games that cost less to produce, which is why indies are the ones making the good games these days.
A few years ago, I decompiled a good part of the PC version of Might & Magic 1 for fun. According to Wikipedia, it had been released in 1986, although I don't know whether that refers to the PC version or to the original Apple II version.
It is quite a big game: the main executable is 117KB, plus around 50 overlay files of 1.5KB each for the different dungeons and cities, plus the graphics files. I guess it was even too big for the average PC hardware at that time, or it was a limitation inherited from the original Apple II version: when you want to cast a spell you have to enter the number of the spell from the manual, maybe because there was not enough memory to fit the names of the 94 spells into RAM. Apart from that, the limited graphics, and the lack of sound, the internal ruleset is very complete. You have all kinds of spells and objects, capabilities, an aging mechanism, shops, etc. - the usual stuff that you also see in today's RPGs.
The modern uninstall.exe that came with it (I bought the game on GOG) was 1.3MB big.
>When you want to cast a spell you have to enter the number of the spell from the manual, maybe because there was not enough memory to fit the names of the 94 spells into RAM
Probably not ;) "Enter things from the manual" was a tried and true copy-protection technique. If you used the warez version you presumably did not have a manual, so you got stuck. This didn't run on the 8008 or whatever; I'm sure the game could have held the names of spells fairly easily.
Ah, that makes more sense than my theory. It's a weak copy protection method, though, as you can just try and see what happens, and I think they dropped it in M&M3.
We made the most of limited resources back then. Back in 1980, I was living large with my 64KB Apple II with dual 140KB floppy drives and a 10 inch (9 inch? I can't quite remember) amber monochrome monitor. Most had less.
A lot of trial and error. I've built graphical tools with GD in PHP; the difficult part for me was that the coordinates were inverted.
I only knew how to draw lines and pixels, but I got the job done.
Around the time DirectX came around and the first games requiring it appeared - which in my memory coincided with hard drives getting way bigger and the first games being delivered on a CD instead of floppies - I was appalled to see literal BMPs being written to disk during installation. This was the same time when cracked games were being distributed via BBS at a fraction of the original size, with custom installers that decompressed MP3s back to the original WAV files. I asked the same questions then: why WAV, why BMP, why the bloat? With time I learned the answer: disk space is cheap, memory and CPU cycles are not; if you can afford to skip the decoding step, you just do it, and your players will love it. You work with the constraints you have, and when they loosen up, your possibilities expand too.
I remember playing a version of this game on ZX Spectrum but I cannot find it on the internet. I remember it had bees that you had to avoid and a boat which you were able to untie so that it floats down a stream.
I remember this game, the way it drew itself on each screen, the nice graphics. Growing up with games on Atari, Commodore, Amstrad, and Spectrum, was a lot of fun.
By comparison, COD Modern Warfare 3 is 6,000,000 times larger at 240GB. Imagine telling that to someone in 1987.
The Last Ninja ran at resolution 160x200, with effectively 2-bit color for graphic assets. It had amazing animations for that level of detail, but all the variety of the graphics could not take too much RAM even if it wanted to.
The quest for photorealistic "movie-like" rendering which requires colossal amounts of RAM and compute feels like a dead end to me. I much appreciate the expressly unrealistic graphics of titles like Monument Valley.
Hardware sprite accelerators, the first GPUs. I swear there's something visceral you learn by programming that sort of system where you can literally see what it's doing, in the order it's doing it, which you just can't get any other way.
That's just incredible. People used to be so much better at programming, or at least great programmers had it easier to get funded. Most of what I see today is exceptionally low quality and just getting worse with time.
I never figured out how they did the turtle graphics in this game. The C64 didn't have whole screen bitmaps, you could either use sprites or user defined character sets, neither of which made this straightforward.
And the loading screens were also amazing, particularly for tape loading.
As others have said, the C64 does have bitmap modes, though it's understandable not being aware of it as they weren't that commonly used for games since it was often easier to use user defined character sets as tilesets if you had repetition.
The C64 does have a couple of bitmap modes. The Last Ninja uses mode 3, which is multicolor bitmap mode. It occupies 9000 bytes including pixels (8000 bytes) and color RAM (1000 bytes).
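The arithmetic in that figure checks out, assuming the standard multicolor bitmap layout (the bitmap is stored as 320x200 bit positions even though the effective resolution is 160x200, plus one color byte per 8x8 cell):

```python
# Multicolor bitmap mode on the C64:
bitmap = (320 * 200) // 8   # 8000 bytes of pixel data
color_matrix = 40 * 25      # 1000 bytes, one color byte per 8x8 cell

print(bitmap + color_matrix)  # 9000 bytes, matching the figure above
```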
The TI99/4a version of the Logo language which has turtle graphics used user defined characters to implement them. There were only (I think) 128 user definable characters, and when the turtle graphics had redefined all of them to create its output, it gave the user a message, "out of ink".
Even on NES a lot of games use CHR-RAM so arbitrary bitmaps are at least possible, though only a small part of the screen is unique without some rarely used mapper hardware. Zelda and Metroid mostly just use this to compress the graphics in ROM, Qix is a simple example with line drawing, Elite is an extreme one.
I made a demo of the Mystify screensaver using the typical 8KB CHR-RAM. Even with a lot of compromises it has pretty large borders to avoid running out of unique tiles. https://youtube.com/watch?v=1_MymcLeew8
Speaking of the size: my first PC, built by a family friend, had a 80MB disk, split into two partitions. The second 40MB partition had Windows 3.1 and about two Norton Commander columns full of games on it, largest of which were Wolfenstein 3D and Lost Vikings with about 1.4MB each. Truly a different era.
Some comments here sound like the ones I hear from car "enthusiasts" praising old engines for being simple to run and easy to fix, then complaining about modern engines being too complicated and how we should return to the "good old days", all that without taking into account the decades of progress since then.
Want to prove a point? Give me Skyrim in 64k of ram. Go ahead! I dare you!
Not as small as The Last Ninja, but when I was a teenager first getting into emulation, I genuinely thought there was a mistake or my download got interrupted when I downloaded Super Mario Bros. 3, because it was only like 500kb [1], and I didn't think it was possible for a game that huge to be less than a megabyte.
It is still impressive to me how much game they could squeeze out of the NES ROM chips.
[1] Or something like that, I don't remember the exact number.
40kb and it felt like a full world... I'm burning through tokens to get AI to decide whether to go to the tavern or the market. Something went wrong somewhere
It really was. I was just wondering if Last Ninja 2 (Amiga) was the first game I actually liked playing. I mostly hated old games and I still don't like most games. Particularly ones with twitchy controls or platforming. LN wasn't that easy and it was very linear, but it was still somehow incredibly fun. And the music and even the graphics were great.
Some Pokémon Crystal ROMs pack a huge amount of gaming into very few MB. Z80-ish ASM, KBs of RAM.
The Z-machine games, ditto. A few KB, and an impressive simulated environment will run even on 8-bit machines via a virtual machine. Of course z3 games have fewer parsing/object-interaction features than z8 games, but anything from a 16-bit machine up (nothing exotic today - a DOS PC would count) will run z8 games and deliver pretty complex text adventures. Compare Tristam Island or the first Zork I-III to Spiritwrak, where a subway is simulated, or Anchorhead.
And you can code the games with Inform 6 and its library on maybe a 286 or 386 with DOS and any text editor. Check the Inform Beginner's Guide and DM4.pdf.
And not just DOS: Windows, Linux, BSD, Macs... even Android under Termux. The games will run under Frotz for Termux, Lectrote, or Fabularium. Under iOS, too.
NetHack/Slash'EM weigh a few MBs and have tons of replayability. Written in C. They will even run on a 68020 System 7 Mac... emulated under 9front with a 720 CPU as the host. They fly on a 486 and up.
Meanwhile, Cataclysm: DDA uses C++, and it needs a huge chunk of RAM and a fast CPU to compile today. Some high-end Pentium 4 with 512MB of RAM will run it well enough, but you'd need to cross-compile it.
If I had the skills I would rewrite (no AI/LLMs, please) CDDA:BN in Go. Compile times would plummet and CPU usage would be nearly the same. Of course the GC would shine here, pruning tons of unused data from generated worlds.
Despite being a mid-late millennial, I can see how this played out. Even compared to the second family computer my parents got in the late 90s, which was an absolute monster at the time, I realize how many corners developers had to cut to get a game going in a few hundred megabytes - while mobile games today easily exceed 10 times that, and not just now but even 10 years ago when I was working at a company that made mobile games. These days, developers automatically assume everyone has what are effectively unlimited resources by 90s standards (granted they haven't transitioned to slop-coding, which makes it substantially worse).

Personally, I have a very strange but useful habit: when I find myself with some spare time at work, I spin up a very under-powered VM, run what is in production on it, and try to find optimizations. One of my data pipelines is pretty much insane in terms of scale, and running it took over 48 hours. Last time (a few weeks ago, actually), I did the VM thing and started looking for optimizations. I found a few, which seemed completely counter-intuitive at first - everyone was like "nah, that makes no sense" - but now the pipeline runs in just over 10 hours. It's insane how many shortcuts you force yourself to find when you put a tight fence around yourself.
Yes, this is a great methodology. I found that developing BrowserBox (which is real-time interactive streaming for remote browsers) using slow links and a variety of different OSes really stresses parts of the system and forces improvements that strengthen the whole.
Masterpieces like these are a perfect demonstration that performance relies not only on fast processors, but on understanding how your data and code compete for resources. Truly admirable. Thanks for the trip down memory lane.
How times have changed. My best-selling program "Apple Writer", for the Apple II, ran in eight kilobytes. It was written entirely in 6502 assembly language.
Wow, that search/interact mechanic is obnoxious - you can see the player fumbling it every time, despite knowing exactly where the item they're trying to collect is.
This is sort of the defining mechanic of these games in my memory. The first thing that pops into my head when I think of Last Ninja is aligning and realigning myself, and squatting, awkwardly and repeatedly (just like a real ninja, lol), until that satisfying new item icon appears. Perhaps surprisingly, these are very fond memories.
This mechanic is augmented by not even always knowing which graphics in the environment can be picked up, or by invisible items that are inside boxes or otherwise out of sight (I think LN2 had something in a bathroom? You have to position yourself in the doorway and do a squat of faith).
The other core memory is the spots that require a similarly awkward precision while jumping. These are worse, because each failure loses you one of your limited lives. The combat is also finicky. I remember if you come into a fight misaligned, your opponent might quickly drain your energy while you fail to get a hit in.
At the time, it seemed appropriate to me that it required such a difficult precision to be a ninja. I was also a kid, who approached every game non-critically, assuming each game was exactly as it was meant to be. Thus I absolutely loved it, lol.
27 unique levels. 40KB minus a handful of spare bytes and some unused code. The max the NES can support without mappers. Modern NES homebrew and demoscene can do fancier stuff with this budget given the extra decades of learned tricks, but for the state of console gaming in 1985, SMB1 is damn impressive.
Also remember all of that was ROM, the NES had a mere 2 kilobytes of RAM for all your variables and buffers.
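The 40KB ceiling mentioned above comes straight from the NES's mapper-less NROM board: 32KB of CPU-visible program ROM plus 8KB of PPU-visible character (graphics) ROM, with all mutable state squeezed into 2KB of work RAM:

```python
PRG_ROM = 32 * 1024   # program code + data, visible to the CPU
CHR_ROM = 8 * 1024    # tile/sprite graphics, visible to the PPU
RAM = 2 * 1024        # every variable and buffer in the game

print((PRG_ROM + CHR_ROM) // 1024)  # 40 (KB) - the NROM budget
print(RAM)                          # 2048 bytes of work RAM
```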
> I still struggle to comprehend, even in the slightest, how programmers back then did what they did - and the worlds they created with the limitations they had to work with.
Highly related: two videos covering exactly how they fit...
I highly advise watching the actual videos to best understand, since all the techniques used were very likely devised from a game-dev perspective, rather than by invoking any abstract CS textbook learning.
But if I did want to summarize the main "tricks" used, in terms of such abstract CS concepts:
1. These old games can be understood as essentially having much of their data (level data, music data, etc) "compressed" using various highly-domain-specific streaming compressors. (I say "understood as" because, while the decompression logic literally exists in the game, there was likely no separate "compression" logic; rather, the data "file formats" were likely just designed to represent everything in this highly-space-efficient encoding. There were no "source files" using a more raw representation; both tooling and hand-edits were likely operating directly against data stored in this encoding.)
2. These streaming compressors act similar to modern multimedia codecs, in the sense that they don't compress sequences-of-structures (which would give low sequence correlation), but instead first decompose the data into distinct, de-correlated sub-streams / channels / planes (i.e. structures-of-sequences), which then "compress" much better.
3. Rather than attempting to decompose a single lossless description of the data into several sub-streams that are themselves lossless descriptions of some hyperplane through the data, a different approach is used: that of each sub-channel storing an imperative "painting" logic against a conceptual mutable canvas or buffer shared with other sub-channels. The data stream for any given sub-channel may actually be lossy (i.e. might "paint" something into the buffer that shouldn't appear in the final output), but such "slop"/"bleed" gets overwritten - either by another sub-channel's output, or by something the same sub-channel emits later on in the same "pass". The decompressor essentially "paints over" any mistakes it makes, to arrive at a final flattened canvas state that is a lossless reproduction of the intended state.
4. Decompression isn't something done in its entirety into a big in-memory buffer on asset load. (There isn't the RAM to do that!) But nor is decompression a pure streaming operation, cleanly producing sequential outputs. Instead, decompression is incremental: it operates on / writes to one narrow + moving slice of an in-memory data "window buffer" at a time. Which can somewhat be thought of as a ring buffer, because the decompressor coroutine owns whichever slice it's writing to, which is expected to not be read from while it owns it, so it can freely give that slice to its sub-channel "painters" to fill up. (Note that this is a distinct concept from how any long, larger-than-memory data [tilemaps, music] will get spooled out into VRAM/ARAM as it's being scrolled/played. That process is actually done just using boring old blits; but it consumes the same ring-buffer slices the decompressor is producing.)
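A sketch (my own simplification) of the windowed, incremental decompression in point 4: the decoder owns one page-sized slice of a fixed in-memory buffer at a time, and wrapping around reuses the oldest page's slice.

```python
class RingWindow:
    """Fixed window buffer treated as a ring of page-sized slices."""

    def __init__(self, pages=2, page_size=4):
        self.pages = pages
        self.page_size = page_size
        self.buf = [None] * (pages * page_size)  # the whole in-RAM window

    def decode_page(self, page_index, payload):
        assert len(payload) == self.page_size
        # The decoder "owns" only this slice while filling it.
        start = (page_index % self.pages) * self.page_size
        self.buf[start:start + self.page_size] = payload

w = RingWindow()
w.decode_page(0, ["A"] * 4)
w.decode_page(1, ["B"] * 4)
w.decode_page(2, ["C"] * 4)  # wraps around and reuses page 0's slice
print(w.buf)  # ['C', 'C', 'C', 'C', 'B', 'B', 'B', 'B']
```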
5. Different sub-channels may be driven at different granularities and feed into more or fewer windowing/buffering pipeline stages before landing as active state. For example, tilemap data is decompressed into its "window buffer" one page at a time, each time the scroll position crosses a page boundary; but object data is decompressed / "scheduled" into Object Attribute Memory one column at a time (or row at a time, in SMB2, sometimes) every time the scroll position advances by a (meta)tile width.
6. Speaking of metatiles: sub-channels, rather than allowing full flexibility of "write primitive T to offset Y in the buffer", may instead only permit encodings of references to static data tables of design-time pre-composed patterns of primitives. For tilemaps, these patterns are often called "meta-tiles" or "macro-blocks". (This is one reason sub-channels are "lossy" reconstructors: if you can only encode macro-blocks, then you'll often find yourself wanting only some part of a macro-block, which means drawing it and then overdrawing the non-desired parts of it.)
7. Sub-channels may also operate as fixed-function retained-mode procedural synthesis engines, where rather than specifying specific data to write, you only specify for each timestep how the synthesis parameters should change. This is essentially how modular audio synthesis encoding works; but more interestingly, it's also true of the level data "base terrain" sub-channel, which essentially takes "ceiling" and "ground" brush parameters, and paints these in per column according to some pattern-ID parameter referencing a table of [ceiling width][floor height] combinations. (And the retained-mode part means that for as long as everything stays the same, this sub-channel compresses to nothing!)
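Point 7's "base terrain" channel can be sketched like this (the pattern table and tile names are invented; the real table indexes [ceiling width][floor height] combinations as described): a single pattern ID is enough to synthesize every column until the parameters change.

```python
# Hypothetical pattern table: each entry is (ceiling rows, floor rows).
TERRAIN_PATTERNS = {
    0: (0, 2),  # open sky over two rows of ground
    1: (1, 2),  # one-row ceiling plus two rows of ground
    2: (0, 0),  # nothing at all: a pit
}

def synthesize_column(pattern_id, height=8):
    ceiling, floor = TERRAIN_PATTERNS[pattern_id]
    column = ["sky"] * height
    for y in range(ceiling):                 # paint the ceiling downward
        column[y] = "brick"
    for y in range(height - floor, height):  # paint the floor upward
        column[y] = "brick"
    return column

# Retained-mode: while the pattern ID is unchanged, every new column is
# this same output, so the sub-channel encodes nothing new.
print(synthesize_column(1))
```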
8. Sub-channels may also contain certain encoded values that branch off into their own special logic, essentially triggering the use of paint-program-like "brushes" to paint arbitrarily within the "canvas." For example, in SMB1, a "pipe tile" is really a pipe brush invocation, that paints a pipe into the window, starting from the tile's encoded position as its top-left corner, painting right two meta-tiles, and downward however-many meta-tiles are required to extend the pipe to the current "base terrain" floor height.
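The pipe "brush" in point 8, sketched with invented dimensions: one encoded position expands into a two-metatile-wide column of pipe tiles painted downward until it reaches the current floor height.

```python
def paint_pipe(canvas, top_x, top_y, floor_y):
    for x in (top_x, top_x + 1):         # a pipe is two metatiles wide
        for y in range(top_y, floor_y):  # extend down to the floor
            canvas[y][x] = "pipe"

grid = [["sky"] * 6 for _ in range(6)]
paint_pipe(grid, top_x=2, top_y=3, floor_y=6)
print(grid[3])  # ['sky', 'sky', 'pipe', 'pipe', 'sky', 'sky']
```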
9. Sub-channels may encode values ("event objects") that do not decode to any drawing operation to the target slice buffer, but which instead either immediately upon being encountered ("decompression-time event objects") or when they would be "placed" or "scheduled" if they were regular objects ("placement-time event objects"), just execute some code, usually updating some variable being used during the decompression process or at game runtime. (The thing that prevents you from scrolling the screen past the end of map data is a screen-scroll-lock event object dropped at just the right position that it comes into effect right before the map would run out of tiles to draw. The thing that determines where a "warp-enabled pipe" will take you is a warp-pipe-targeting event object that applies to all warp-enabled pipes after it runs, until the next warp-pipe-targeting event object is encountered.)
If at least some of these sub-channels are starting to sound like essentially a bytecode ISA for some kind of abstract machine: yes, exactly. Things like "event objects" and "brush invocations" can be more easily understood as opcodes (sometimes with immediates!); and the "modal variables" as the registers of these instruction streams' abstract machines.
10. The interesting thing about these instruction streams, though, is that they're all being driven in lockstep externally by the decompressor. None of the level-data ISAs contain anything like a backward JMP-like opcode, because each level-data sub-channel's bytecode interpreter has a finite timeslice to execute per decompression timestep, so allowing back-edges [and so loops] would make the level designers into the engine developers' worst enemy. But most of the ISAs do contain forward JMPs, to essentially encode things like "no objects until [N] [columns/pages] from now." (And a backward JMP instruction does exist in the music-data parameterized-synthesis sub-channel ISA [which as it happens isn't interpreted by the CPU, but is rather the native ISA of the NES's Audio Processing Unit.] If you ever wondered how music keeps not only playing but looping even if the game crashes, it's because the music program is loaded and running on the APU and just happily executing its own loop instructions forever, waiting for the CPU to come interrupt it!)
11. These sub-channel ISAs are themselves designed to be as space-efficient as possible while still being able to be directly executed without any kind of pre-transformation. They're often variable-length, with most instructions being single-byte. Opcodes are hand-placed into the same kind of bit-level Huffman trie you'd expect a DEFLATE-like algorithm to design if it were tasked with compressing a large corpus of fixed-length bytecode. Very common instructions (e.g. a brush to draw a horizontal line of a particular metatile across a page up to the page boundary) might be assigned a very short prefix code (e.g. `11`), allowing the other six bits in that instruction byte to select a metatile to paint with from a per-tilemap metatile palette table. Rarer instructions, meanwhile, might take 2 bytes to express, because they need to "get out of the way of" all the common prefixes. (You could think of these opcodes as being filed under a chain of "Misc -> Misc -> Etc -> ..." prefixes.)
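A toy decoder for a hand-built prefix code like the one in point 11. The specific codes are invented: here, an opcode byte whose top two bits are `0b11` is the common one-byte "draw line" instruction, with the remaining six bits selecting a metatile from a palette; anything else falls through to a rarer two-byte encoding.

```python
def decode_opcode(stream, pos):
    byte = stream[pos]
    if byte >> 6 == 0b11:                   # short prefix: common opcode
        metatile = byte & 0x3F              # six bits of "immediate"
        return ("draw_line", metatile), pos + 1
    # Rarer opcodes take a second byte, staying clear of the common prefix.
    return ("misc", byte, stream[pos + 1]), pos + 2

data = bytes([0b11000101, 0x02, 0x10])
op1, pos = decode_opcode(data, 0)    # ('draw_line', 5), one byte consumed
op2, pos = decode_opcode(data, pos)  # ('misc', 2, 16), two bytes consumed
print(op1, op2)
```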
IMHO, these are all (so far) things that could be studied as generalizable data-compression techniques.
But here are two more techniques that are much more specific to game-dev, where you can change and constrain the data (i.e. redesign the level!) to fit the compressor:
12. Certain ISAs have opcodes that decode to entirely-distinct instructions, depending on the current states of some modal variables! (My guess is that this came about either due to more level features being added late in development after the ISAs had mostly been finalized; or due to wanting to further optimize data size and so seeing an opportunity to "collapse" certain instructions together.) This mostly applies to "brush" opcodes. The actual brush logic they invoke can depend on what the decoder currently sees as the value of the "level type" variable. In one level type, opcode X is an Nx[floor distance] hill; while in another level type, opcode X is a whale, complete with water spout! (In theory, they could have had an opcode to switch level type mid-level. Nothing in this part of the design would have prevented that; it is instead only impractical for other reasons that are out-of-scope here, to do with graphics memory / tileset loading.)
13. And, even weirder: certain opcodes decode to entirely-distinct instructions depending on the current value of the 'page' or 'column' register, or even the precise "instruction pointer" register (i.e. the current 'row' within the 'column'). In other words, if you picture yourself using a level editor tool, and dragging some particular object/brush type across the screen, then it might either "snap" to / only allow placement upon metatiles where the top-left metatile of the object lands on a metatile at a position that is e.g. X%4==1 within its page; or it might "rotate" the thing being dragged between being one of four different objects as you slide it across the different X positions of the metatile grid. (This one's my favorite, because you can see the fingerprint of it in much of the level design of the game. For example: the end of every stage returns the floor height to 2, so that "ground level" is at Y=13. Why? Because flagpole and castle objects are only flagpole and castle objects when placed at Y=13!)
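Point 13 can be sketched in a few lines (the object ID and mapping are invented, but the behaviour mirrors the "flagpole only exists at Y=13" rule described above): the same encoded object decodes differently depending on the position registers at decode time.

```python
def decode_object(object_id, y):
    if object_id == 0x40:
        # Only a flagpole when it lands on the "ground level" row.
        return "flagpole" if y == 13 else "generic_block"
    return "unknown"

print(decode_object(0x40, 13), decode_object(0x40, 7))
```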
"I still struggle to comprehend, even in the slightest, how programmers back then did what they did - and "
First of all, if you're going to LLMize your tweets, do it correctly and run a 2nd pass after you're done editing. Second, read a book. That's how we learned things in 1987.
I was looking at a production service we run that was using a few GBs of memory. When I add up all the actual data needed in a naive compact representation I end up with a few MBs. So much waste. That's before thinking of clever ways to compress, or de-duplicate or rearrange that data.
Back in the day getting the 16KB expansion pack for my 1KB RAM ZX81 was a big deal. And I also wrote code for PIC microcontrollers that have 768 bytes of program memory [and 25 bytes of RAM]. It's just so easy to not think about efficiency today, you write one line of code in a high level language and you blow away more bytes than these platforms had without doing anything useful.
Long ago working for a retail store chain, I made some excel DSL to encode business rules to update inventory spreadsheets. While coding I realized that their excel template had a bunch of cells with whitespace in them on row 100000. This forced excel to store the sparse matrix for 0:100000 region, adding 100s of Kb for no reason. Multiplied by 1000s of these files over their internal network. Out of curiosity I added empty cell cleaning in my DSL and I think I managed to fit the entire company excel file set on a small sd card (circa 2010).
I think you're right about the waste, but I'm not sure it's entirely "accidental"... a lot of it is traded for different kinds of efficiency
At some point, you just stop measuring the thing until the thing becomes a problem again. That lets you work a lot faster and make far more software for far less money.
It's the "fast fashion" of software. In the middle ages, a shirt used to cost about what a car does now, and was just as precious. Now, most people can just throw away clothes they no longer like.
It usually is. I try to think of these things not as "waste" but as "cost." As in, what does it cost vs. the alternative? You're using 40Gb of some kind of storage. Let's say it's reasonably possible to reduce that to 20Gb. What's the cost of doing so compared to the status quo? That memory reduction effort, both the initial effort, and the ongoing maintenance, isn't free. Unless it costs a lot less to do that than to continue using more memory, we should probably continue to use the memory.
Yeah, there may be other benefits, but as a first order of approximation, that works. And you'll usually find that it's cheaper to just use more memory.
Sure, if you don't count safety features like memory management, crash handling, automatic bounds checks, and encryption ciphers as anything useful.
I do completely agree that there is a lot of waste in modern software. But equally there is also a lot more that has to be included in modern software that wasn't ever a concern in the 80s.
Networking stacks, safety checks, encryption stacks, etc. all contribute massively to software "bloat".
You can see how this quickly adds up if you write a "hello world" CLI in assembly and compare that to the equivalent in any modern language that imports all these features into its runtime.
And this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater than anything supported by 80s micro computers.
Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need.
Go's "GC disadvantage" is turned on its head by developing "Zero Allocation" libraries which run blazingly fast with fixed memory footprints. Similarly, rolling your own high performance/efficient code where it matters can save tremendous amounts of memory where it matters.
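The "zero allocation" idea is language-agnostic; here is the same pattern sketched in Python (the function and buffer size are my own): one scratch buffer is allocated up front and reused across reads with `readinto()`, instead of allocating a fresh bytes object per chunk.

```python
import io

SCRATCH = bytearray(4096)  # fixed memory footprint, allocated once

def checksum_stream(stream):
    total = 0
    view = memoryview(SCRATCH)
    while True:
        n = stream.readinto(view)  # fills the existing buffer in place
        if not n:
            return total
        total = (total + sum(view[:n])) & 0xFFFFFFFF

print(checksum_stream(io.BytesIO(b"\x01\x02\x03" * 1000)))  # 6000
```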
Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
> And this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater than anything supported by 80s micro computers.
This demo [0] is a 4kB executable. 4096 bytes. A single file containing all assets, graphics, music and whatnot, and it can run at high resolutions with real-time rendering.
This is [1] 64kB and this [2] is 177kB. This game from the same group is 96kB with full 3D graphics [3].
[0]: https://www.pouet.net/prod.php?which=52938
[1]: https://www.pouet.net/prod.php?which=1221
[2]: https://www.pouet.net/prod.php?which=30244
[3]: https://en.wikipedia.org/wiki/.kkrieger
Programming these days, in some realms, is a lot like shopping for food - some people just take the box off the shelf, don't bother with reading the ingredients, throw it in with some heat and fluid and serve it up as a 3-star meal.
Others carefully select the ingredients, construct the parts they don't already have, spend the time to get the temperatures and oxygenation aligned, and then sit down to a humble meal for one.
Not many programmers, these days, do code-reading like baddies, as they should.
However, kids, the more you do it the better you get at it, so there is simply no excuse for shipping someone else's bloat.
Do you know how many blunt pointers are lined up underneath your BigFatFancyFeature, holding it up?
> Go's "GC disadvantage" is turned on its head by developing "Zero Allocation" libraries which run blazingly fast with fixed memory footprints. Similarly, rolling your own high performance/efficient code where it matters can save tremendous amounts of memory where it matters.
The savings there would be negligible (in modern terms) but the development cost would be significantly increased.
> Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?
Safety nets are not a waste. They're a necessary cost of working with modern requirements. For example, if your personal details were stolen via a MITM attack, then I'm sure you'd be asking why that piece of software wasn't encrypting that data.
The real waste in modern software is:
1. Electron: but we are back to the cost of hiring developers
2. Application theming. But few actual users would want to go back to plain Windows 95 style widgets (many, like myself, on HN wouldn't mind, but we are a niche and not the norm).
> This demo [0] is a 4kB executable. 4096 bytes. A single file. All assets, graphics, music and whatnot, and can run at high resolutions with real time rendering.
You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.
When you wrote audio data in the 80s, you effectively wrote midi files in machine code. Obviously it wasn't literally midi, but you'd describe notes, envelopes, etc. You'd very, very rarely store that audio as a waveform, because audio chips then simply didn't support a high enough bitrate to make that audio sound good (nor was there the storage space to save it). Whereas these days, PCM (e.g. WAV, MP3, FLAC, etc.) sounds waaaay better than midi and is much easier for programmers to work with. But even a 2-second-long 16-bit mono PCM waveform is going to be more than 4KB.
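A back-of-envelope check of that claim (sample rates chosen by me): even a short, low-quality PCM clip dwarfs a 4 KB demo binary.

```python
def pcm_bytes(seconds, sample_rate, bits, channels=1):
    # Uncompressed PCM size: duration x rate x bytes-per-sample x channels.
    return int(seconds * sample_rate * (bits // 8) * channels)

print(pcm_bytes(2, 8_000, 16))   # 32000 bytes at a modest 8 kHz
print(pcm_bytes(2, 44_100, 16))  # 176400 bytes at CD-quality rates
```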
And modern graphics aren't limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you're increasing the colour depth by literally 32 times. And that's before you scale again from 64 pixels to thousands of pixels.
You're then talking exponential memory growth in all dimensions.
I've written software for those 80s systems and modern systems too. And it's simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.
> Application theming
Software 30 years ago was more amenable to theming. The more system widgets you use, the more effective theming works by swapping them.
Now, we have grudging dark-mode toggles that aren't consistent or universal, not even rising to the level of configurability you got with Windows 3.1 themes, let alone things like libXaw3d or libneXtaw where the fundamental widget-drawing code could be swapped out silently.
I get the impression that since about 2005, theming has been on the downturn. Windows XP and OSX both were very close to having first class, user-facing theming systems, but both sort of chickened out at the last minute, and ever since, we've seen less and less control every release.
I think what you're describing as "theming" is more "custom UI". It used to be reserved for games, where stock Windows widgets broke immersion in a medieval fantasy strategy simulator and you were legally obliged to make the cursor a gauntlet or sword. But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!" when your application is a smart-bulb controller or a text editor that could perfectly well fit with native widgets.
We are talking about software development, not user configuration. So "theming" here clearly refers specifically to applications shipping non-standard UIs.
This also isn't a trend that Electron started. Software has been shipping with bespoke UIs for nearly as long as UI toolkits have been a thing.
>But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!"
TBH this sounds pretty medieval too.
> The savings there would be negligible (in modern terms)
A word of praise for Go: it is pretty performant, while using very little memory. I inherited a few Django apps, and each thread just grows to 1GB. Running something like celery quickly eats up all memory and start thrashing. My Go replacements idle at around 20MB, and are a lot faster. It really works.
I've written a $SHELL and a terminal emulator in Go. It has its haters on HN but I personally rather like the language.
> The savings there would be negligible (in modern terms) but the development cost would be significantly increased.
...and this effort and small savings here and there is what brings the massive savings at the end of the day. Electron is what "4KB here and there won't hurt", "JS is a very dynamic language so we can move fast", and "time to market is king, software is cheap, network is reliable, YOLO!" banged together. It's a big "Leeroy Jenkins!" move in the worst possible sense, making users pay everyday with resources and lost productivity to save a developer a couple of hours at most.
Users are not cattle to milk, they and their time/resources also deserve respect. Electron is doing none of that.
> You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.
Did you watch or run any of these demos? Some (if not all) of them scale to 4K and all of them have more than two colors. All are hardware accelerated, too.
> And modern graphics aren't limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you're increasing the colour depth by literally 32 times. And that's before you scale again from 64 pixels to thousands of pixels.
Sorry to say that, but I know what graphics and high performance programming entails. Had two friends develop their own engines, and I manage HPC systems. I know how much memory matrices need, because everything is matrices after some point.
> Safety nets are not a waste.
I didn't say they are waste. That quote is out of context. Quoting my comment's first paragraph, which directly supports the part you quoted: "Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need."
So, what I argue is, you don't have to bring in everything and the kitchen sink if all you need is a knife and a cutting board. Bring in the countertop and some steel gloves to prevent cutting yourself.
> I've written software for those 80s systems and modern systems too. And it's simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.
Me too. I also record music and work on high performance code. While they are not moving much, I take photos and work on them too, so I know what happens under the hood.
Just watch the demos. It's worth your time.
> Electron is doing none of that.
I agree. I even said Electron was one piece of bloat I didn't agree with in my comment. So it wasn't factored into the calculations I was presenting to you.
> Did you watch or ran any of these demos? Some (if not all) of them scale to 4K and all of them have more than two colors.
You mean the ones you added after I replied?
> I didn't say they are waste. That quote is out of context.
Every part of your comment was quoted in my comment. Bar the stuff you added after I commented.
> Had two friends develop their own engines
I have friends who are doctors but that doesn't mean I should be giving out medical advice ;)
> Just watch the demos. It's worth your time.
I'm familiar with the demo scene. I know what's possible with a lot of effort. But writing cool effects for the demo scene is very different to writing software for a business, which has to offset developer costs against software sales and delivery deadlines.
I'm also not advocating that software should be written in Electron. My point was that modern software, even without Electron, is still going to be orders of magnitude larger in size, for the reasons I outlined.
I made no edits after your comment appeared. Yes, I did make edits, but your reply was not visible to me while I made them. Sometimes HN delays replies, and you're accusing me of something I didn't do. That's not nice.
> writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.
The point is not "cool effects" and "infinite time", though. If we continue talking about Farbrausch: they are not a bunch of nerds who pump out raw assembly for effects. They have their own framework, libraries and whatnot. Not dissimilar to business software development. So their code is not that different from a business software package.
For the size, while you can't fit a whole business software package to 64kB, you don't need to choose the biggest and most inefficient library "just because". Spending a couple of hours more, you might find a better library/tool which might allow you to create a much better software package, after all.
Again, for the third time, while safety nets and other doodads make software packages bigger, cargo culting and worshipping deadlines and ROI more than the product itself contributes more to software bloat. That's my point.
Oh I overlooked this gem:
> I have friends who are doctors but that doesn't mean I should be giving out medical advice ;)
Yet we designed some parts of those engines together, and I had the pleasure of fighting GPU drivers alongside them, trying to understand what the driver was doing while it ignored what we asked of it.
IOW, yep, I didn't write one, but I was neck deep in both of them for years.
> I did no edits after your comment has appeared. Yep, I did edits, but your reply was not visible to me while I did these.
Which isn't the same thing as what I said.
I'm not suggesting you did it maliciously, but the fact remains they were added afterwards, so it's understandable I missed them.
> Yet, we designed some part of that thing together, and I had the pleasure of fighting with GPU drivers with them trying to understand what it's trying to do while neglecting our requests from it.
That is quite a bit different from your original comment though. This would imply you also worked on game engines and it wasn't just your friends.
That first one was discussed on HN before, as its source code was also released: https://news.ycombinator.com/item?id=11848097
I was sure once I saw the descriptions that what you're posting is Farbrausch prods! Do you know if anyone came close to this level since?
I'm not following the scene for the last couple of years, but I doubt that. On the other hand, there are other very capable people doing very interesting things.
That C64 demo doing sprite wizardry and 8088MPH come to mind. The latter one, as you most probably know, can't be emulated since it (ab)uses hardware directly. :D
As a trivia: After watching .the .product, I declared "if a computer can do this with a 64kB binary, and people can make a computer do this, I can do this", and high performance/efficient programming became my passion.
From any mundane utility to something performance sensitive, that demo is my north star. The code I write shall be as small, performant and efficient as possible while cutting no corners. This doesn't mean everything is written in assembly, but utmost care is given to how something I wrote works and feels while it's running.
Your third example seems to generate 2GB of data at runtime, so it's misleadingly minimalistic
All of them generate tons (up to tens of gigabytes or more) of data during runtime, but they all stream it out, and don't store it on disk or in RAM.
They are highly dynamic programs, and not very different from game engines in that regard.
> misleadingly minimalistic.
That's the magic of these programs or demoscene in general. No misleading. That's the goal.
I'm on my phone so I cannot run it, but you cannot generate data and not store it somewhere. It's going to consume either system resources (RAM/storage) or video resources (VRAM).
If your point is that it uses gigabytes of VRAM instead of system memory, then I think that is an extremely weak argument for how modern software doesn't need much memory, because all you're doing is shifting that cost from one stack of silicon to a different stack of silicon. But the cost is still the same.
The only way around that is to dynamically generate those assets on the fly and stream them to the video card. But then you're sacrificing CPU efficiency for memory efficiency. So the cost is still there.
And I've already discussed how data compresses better as vectors than as bitmaps and PCM, but is significantly harder to work with than bitmaps and waveforms. Using vectors / trackers is another big trick for demos that isn't really practical for a lot of day-to-day development, because they take a little more effort and the savings in file sizes are negligible for people with multi-GB (not even TB!!!) disks.
As the saying goes: there's no such thing as a free lunch.
All demos I have shared with you are designed to run on resource constrained systems. Using all the resources available on the system is a big no no from the start.
Instead, as you guessed, these demos generate assets on the fly and stream them to the respective devices. You cite inefficiencies; I say they run at more than 60 FPS on these constrained systems. Remember, these are early 2000s systems. They are not that powerful by today's standards, yet these small binaries use them efficiently and generate real-time rendered CG on the fly.
Nothing about them is inefficient or poor. Instead they are marvels.
> You cite inefficiencies.
That's not what I said. I said you're trading memory footprint for CPU footprint.
This is the correct way to design a demo but absolutely the wrong way to design a desktop application.
They are marvels, I agree. But, as I said before, there's no such thing as a free lunch. At the risk of stating the obvious: if there wasn't a trade-off to be made, then all software would be written that way already.
I would also add internationalization. There were multi-language games back in the day, but the overhead of producing different versions for different markets was extremely high. Unicode has... not quite trivialized this, but certainly made a lot of things possible that weren't.
Much respect to the people who've managed to retrofit it: there are guerrilla-translated versions of some Japanese-only games.
> this is all before you take into account that modern graphics and audio is bitmap / PCM and running at resolutions literally orders of magnitude greater
Yes, people underestimate how much this contributes, especially to runtime memory usage.
The framebuffer size for a single 320x200 image with 16 colours is 32k, so nearly the same amount of memory as this entire game.
320x200 being an area of screen not much larger than a postage stamp on my 4k monitor.
The technical leap from 40 years ago never fails to astound me.
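The framebuffer arithmetic above, spelled out: 16 colours is 4 bits per pixel, so a single 320x200 frame is 32,000 bytes, roughly the whole memory budget of the machines being discussed (the 4K comparison line is my own addition).

```python
def framebuffer_bytes(width, height, colours):
    bits_per_pixel = (colours - 1).bit_length()  # 16 colours -> 4 bpp
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(320, 200, 16))       # 32000 bytes, ~31 KB
print(framebuffer_bytes(3840, 2160, 2**32))  # one 32-bit 4K frame: ~33 MB
```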
The 48k Spectrum had a 1-bit "framebuffer" with colours allocated to 8x8 character tiles. Most consoles of the time were entirely tile/sprite based, so you never had a framebuffer in RAM at all.
I think it's a valid view that (a) we have way more resources and (b) sometimes they are badly used in ways that results in systems being perceptibly slower than the C64 sometimes, when measured in raw latency between user input and interaction response. Usually because of some crippling system bottleneck that everything is forced through.
> all contribute massively to software "bloat".
Could you point to an example where those gigs were really "massively" due to crash handling, bounds checks, etc.?
Most software doesn't consume multiple gigabytes of memory outside of games and web browsers.
And it should be obvious why games and web browsers do.
Unfortunately "most software" might be a web browser these days.
Not "most", but definitely a depressingly increasing number.
And as I said elsewhere, I do consider Electron to be bloat.
But it's also worth discussing Electron as an entirely separate topic, because it's a huge jump in memory requirements from even "bloated" native apps.
This I think is a core part of the problem when discussing sizes from C64 era to modern applications:
1. You have modern native apps vs Electron
2. Encryption vs plain text
3. High resolution media vs low resolution graphics and audio
4. Assembly vs high level runtimes
5. static vs dynamically linked libraries
6. Safety harnesses vs unsafe code
7. Expected features like network connectivity vs an era when that wouldn't be a requirement
8. Code that needs to be supported for years of updates by a team of developers vs a one man code base that never gets looked at again after the cassettes get shipped to retail stores.
...and so on.
Each of these individually can contribute massively to differences in file sizes and memory footprints. And yet we are not defining those parameters in this discussion so we are each imagining a different context in our argument.
And then you have other variables like:
1. Which is large? 5 GB is big by today's standards, but even 5 MB would have been unimaginable by C64 standards, and that is three orders of magnitude smaller. One commenter even discussed 250 GB as "big", which is unimaginable to today's standard users.
2. Are we talking about disk space or RAM? One commenter discussed using GBs of GPU memory as a way to save system memory, but that feels like a cop-out to me, because it's still GBs of system resources, which the C64 never had.
3. Software Complexity: it takes a lot more effort to release software these days because you work as a team, and need to adhere to security best practices. And we still see plenty of occasions where people get that wrong. So it makes sense that people will use general purpose libraries instead of building everything from scratch to reduce the footprint. Particularly when developers are expensive and projects have (and always have had) deadlines that need to be met. So do we factor in developer efficiency into our equation or not?
In short, this is such a fuzzy topic that I bet everyone is arguing a similar point but from a different context.
I implemented a system recently that is a drop-in replacement for a component of ours: the old one used 250GB of memory, the new one uses 6GB, and it's exactly the same from the outside.
Bad code is bad code, poor choices are poor choices, but I think it's often pretty fair to judge things harshly on resource usage.
Sure, but if you're talking about 250GB of memory then you're clearly discussing edge cases vs normal software running on an average person's computer. ;)
Back in the day people had BASIC and some machines had Forth. By comparison, given how they optimized the games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need swap for something that required far less RAM 15 years ago for the same features.
If the game were reimplemented in Go it wouldn't feel many times slower. But no, we are suffering the worst from both sides of the coin: something that should have been replaced by Inferno (from the Plan 9 people, the C and Unix creators, and now Go, their cousin), with horrible compile times, horrible and incompatible ABIs, featuritis, crazy template syntax and, if you are lucky, memory safety.
Meanwhile, I wish the forked Inferno/Purgatorio got a seamless (no virtual desktops) mode so you could fire up an application in a VM integrated with the host window manager, a la Java, and that's it. Limbo+Tk+SQLite would have been incredible for CRUD/RAD software once the GUI was polished up a little, with sticky menus as in Tcl/Tk and the like. In the end, if you know Go you could learn Limbo's syntax (same channels too) with ease.
BASIC was slow in the 80s. Games for the C64 (and similar machines) were written in machine code.
> By comparison, given how they optimized the games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need swap for something that required far less RAM 15 years ago for the same features.
That's not crazy. You're comparing an interpreted, line-delimited language with a compiler that converts structured source into machine code.
The two processes are as different from one another as driving a bus is to being a passenger on it.
I don't understand what your point is in the next two paragraphs. What do Go, Tcl, UNIX or Inferno have to do with the C64 or modern software? You'll have to help me out there.
Compare Limbo+Tk under Inferno with current C#/Java. Or C++ against Plan9C.
We have impressive CPU's running really crappy software.
Remember Claude Code asking for 66GB for a damn CLI AI agent, when NetBSD on a VAX from 1978 (real or emulated) could do the equivalent with ncurses in milliseconds every time you spawn NetHack or any other ncurses tool/game.
On speed, Forth for the ACE was faster than BASIC running on the ZX80. So it wasn't about using a text-parsed language. Forth was fast, but people were not ready for RPN or for managing the stack; people thought in an algebraic way.
But that was an 'obsolete' mindset, because once you hit high school you were supposed to split big problems into smaller tasks (equations). To implement a second-degree equation solver in Forth you wouldn't juggle the stack; you'd create discrete functions (words) for the discriminant part and so on.
In the end you just managed two stack items per step.
If Forth had won instead of BASIC, then instead of allowing spaghetti code as standard procedure we would have been pushed to decompose code into small functions as the right thing to do from the start.
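That decomposition style carries over directly to any language with cheap function definitions. A sketch in Go rather than Forth, splitting a quadratic solver into tiny "words" the way the comment describes (assuming real roots, i.e. a non-negative discriminant):

```go
package main

import (
	"fmt"
	"math"
)

// discriminant is the first small "word": b^2 - 4ac.
func discriminant(a, b, c float64) float64 {
	return b*b - 4*a*c
}

// roots composes the smaller word into the full solver.
func roots(a, b, c float64) (float64, float64) {
	d := math.Sqrt(discriminant(a, b, c))
	return (-b + d) / (2 * a), (-b - d) / (2 * a)
}

func main() {
	// x^2 - 5x + 6 = 0  ->  x = 3, x = 2
	r1, r2 := roots(1, -5, 6)
	fmt.Println(r1, r2) // prints: 3 2
}
```

Each step only ever juggles a couple of values, which is exactly the two-stack-items-per-step discipline described above.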
Most dialects of BASIC actually had functions too. They just weren't popularised because line numbers were still essential for line editing on home micros.
> On speed, Forth for the ACE was faster than BASIC running on the ZX80. So it wasn't about using a text-parsed language.
Forth and BASIC are completely different languages, and you're arguing a different point to the one I made too.
Also I don't see much value in hypothetical arguments like "if Forth had won instead of BASIC" because it didn't, and thus we are talking about the actual systems people owned.
I mean, I could list a plethora of technologies I'd have preferred to dominate: Pascal and LISP being two big examples. But the C64 wasn't a Lisp machine and people aren't writing modern software in Pascal. So they're completely moot to the conversation.
They were different but both came in-ROM and with similar storage options (cassette/floppy).
On Pascal: Delphi was used for tons of RAD software in the 90s, both for the enterprise and for home users, with zillions of shareware (and shovelware) titles. And Lazarus/FPC plus SQLite3 today is not bad at all.
On Lisp: it was used in niche places such as game engines, Emacs (Org Mode today is a beast), a whole GNU-backed distro (via Scheme) and Maxima, among others.
Still, so-called low-level C++ is an example of things taking the wrong route. C++ and Qt 5/6 can be performant enough. But for a roguelike, the compile-time performance is atrocious, and by design Go with its GC would fix 90% of the problems and even gain more portability.
I'm very aware of Lazarus, Delphi and Emacs. But they're exceptions rather than industry norms.
And thus pointing them out misses the point I was making when, ironically, I was pointing out how you're missing the original point of this discussion.
My point was about performance. Yes, BASIC vs Forth was the worst choice back in the day, and you could say low-level stuff was done in assembler.
Fine. But the 'correct' choice for low-level stuff is C++, and I maintain that most C++ compilers either have huge compile times (GCC) or are much faster but still eat RAM like crazy (Clang), and except for a few pieces of software the performance boost over Go doesn't look that huge for most tasks, outside of Chromium/Electron and Qt.
For what software is doing 90% of the time, Go plus a nice UI toolkit would be enough to cover most tasks while having a safe language to use. Even for bloated proprietary IM clones such as Discord and Slack.
Because, ironically, most of the optimized C++ code is there to run bloated runtimes like Electron, tossing out anything C++ gives you, because most Electron software implements half an OS with every application.
With KDE and Qt at least you are sharing code, even when using Flatpak, which deduplicates things a little. With Electron you are running separate, isolated silos with no awareness of each other. You are basically running several 'desktop environments' at once.
You can say: hey, Go statically builds everything, so there's no gain from shared libraries... until you find that the Go compiler can still do a better job, using less RAM on average than tons of other stuff.
With Electron you are often shipping the whole debugging environment along with your app, loaded and running, with far less graphical performance than the 'bloated' KDE 3 software of the day doing bells and whistles in a Kopete chat window on an AMD Athlon. Qt 3 tools felt snappy. Seeing Electron-based software everywhere has the appeal of running every GUI under Tcl/Tk on a Pentium, modulo video decoders and the like. It would crawl against pure Win32/Xlib on a Pentium 90 if everything were a Tk window with debugging options enabled.
So, these are our current times. You've got an i7 with 16GB of RAM and you barely get any improvement with modern 'apps' over an i3 with 2GB of RAM running native software.
You're talking about compiler footprint and runtime footprint in the same conversation, but they're entirely different processes (obviously) and I don't think it makes any sense to compare the two.
C++ is vastly more performant than Go. I love Go as a language, but let's not get carried away here about Go's performance.
It also makes no sense to talk about Electron as C++. The problem with Electron isn't that it was written in C++; it's that it's ostensibly an entire operating system running inside a virtual machine executing JIT code.
You talked about using Go for UI stuff, but have you actually tried it? I've written a terminal emulator in Go, and UI performance was a big problem. Almost everything requires either CGO (thus causing portability problems) or tricks like WASM or dynamic calls that introduce huge performance overheads. This was something I benchmarked with SDL, so I have first-hand experience.
Then you have the issue that GUI operations need to be owned by the main OS thread, which causes problems writing idiomatic Go that calls GUI widgets.
And then you have a crap load of edge cases for memory leaks, where Go's GC will clear pointers but any allocations happening outside of Go need to be manually deallocated.
In the end I threw out all the SDL code. It was slow to develop, hard to make pretty, and hard to maintain. It worked well but it was just far too limiting. So I switched to Wails, which basically displays a WebKit window (on macOS), so it has a lower footprint than Electron, lets you write native Go code, and makes it super easy to build UIs. I hate myself for doing this but it was by far the best option available, depressingly.
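The thread-ownership problem is concrete enough to sketch. Go GUI bindings typically lock the main goroutine to its OS thread and funnel all widget calls to it over a channel. This is a minimal illustration of the pattern, not any particular library's API; all names here are made up:

```go
package main

import (
	"fmt"
	"runtime"
)

func init() {
	// Most OS GUI toolkits demand that all drawing happens on the
	// process's main thread, so Go GUI bindings pin it up front.
	runtime.LockOSThread()
}

// uiQueue carries closures that must execute on the locked main thread.
var uiQueue = make(chan func())

// RunOnUI schedules fn on the UI thread and blocks until it has run.
func RunOnUI(fn func()) {
	done := make(chan struct{})
	uiQueue <- func() { fn(); close(done) }
	<-done
}

func main() {
	// Worker goroutines cannot call the toolkit directly; they hand
	// work to the main thread instead.
	go func() {
		RunOnUI(func() { fmt.Println("drawing on the main thread") })
		close(uiQueue) // shutdown signal, just for this sketch
	}()

	// The "event loop": drain UI work until the queue is closed.
	for fn := range uiQueue {
		fn()
	}
}
```

This is also roughly why calling a widget method from an arbitrary goroutine crashes or deadlocks in many bindings: the call never crosses onto the locked thread.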
I know C++ is far more performant than Go, but for some games and software C++ isn't needed at all, such as nchat with tdlib (the library should be a native Go one anyway; it's not rocket science). These could run close to native on low-end machines with barely any performance loss. In those cases there's nothing to gain from C++, because even compared to C, most C++ software (save Dillo and other niche cases) won't run as snappily as the C equivalents. Running them in Go won't make them unusable, for sure.
On the GUI, there's Fyne; but what Go truly needs is a default UI promoted by the Go developers, written in the spirit of Tk. Tk itself would be good enough. Even Limbo for Inferno (Go's inspiration) borrowed it from Tcl. Nothing fancy, but fast and usable enough for most everyday tasks.
Python ships it by default because it weighs next to nothing, and most platforms have a similar syntax for packing the widgets. It's not fancy, and on mobile you need to write dedicated code and set up theming, but again, if people managed to build AndroWish as a proof of concept, Go could do it better...
Another good use case for Go would be Mosh. C++ and Protobuf? Go would have been good for this. The C++ Mosh would be snappier (you can feel it with some software, like Bombadillo and Anfora vs Telescope), but on 'basic' modern machines (the first 64-bit machines, with Core Duo or AMD64 processors) there would be almost no delay for the user.
Yes, 32-bit machines, sorry, but by 2030 I expect these to be like using 16-bit DOS machines in 1999. Everyone moved on, and 32-bit machines were cheap enough. Nowadays it's the same: I own an Atom N270 and I love it, but I don't expect to reuse it as a client or for Go programming (modulo eForth) in 4 years; I'd expect to compute everything on the low-end 64-bit machines I own.
But it will be a good Go test case, for sure. If it runs fast on the Atom, it will shine on amd64. With the current crisis, everyone should expect to refurbish and keep 'older' machines just in case. And be sure that long compile times will need to be cut in half, even if you use ccache. RAM and storage will be expensive and current practices will be pretty much discarded. Yes, C++ will still be used in those times, but Go too. Forget Electron/Chromium being used as a standalone toolkit outside of being the engine of a browser.
And if oil/gas usage is throttled for the common folk, EVs and electric heating will reach crazy numbers. Telecoms and data centers will have their prices skyrocket so the rise in power demand doesn't black out a whole country/state. Expect computing power caps, throttled resolutions for internet media/video/RDP content, even bandwidth caps (unless you pay a premium, that is) and tons of changes. React developers using 66GB of RAM for Claude Code... forget it. Either they rebase their software on Go, or they have already lost.
>Sure, if you don't count safety features like memory management, crash handling, automatic bounds checks and encryption cyphers; as anything useful.
>Networking stacks, safety checks, encryption stacks, etc all contribute massively to software "bloat".
They had most of this stuff in the 1980s, and even earlier really. Not on the little 8-bit microcomputer that cost $299 you might have had as a kid, but it certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.
> They had most of this stuff in the 1980s, and even earlier really. Not on the little 8-bit microcomputer that cost $299 you might have had as a kid
Those are the systems we are talking about though.
> but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.
Actually, those systems didn't. In the early 80s most protocols were still plain ASCII. Even remote shell connections weren't encrypted. Remember that SSH wasn't released until 1995. Likewise for SSL.
Time-sharing systems were notoriously bad for sandboxing users too. Smart pointers, while available since the 60s, weren't popularised in C++ until the 90s. Memory overflow bugs were rife (and still are) in C-based languages.
If you were using Fortran or ALGOL, then it was a different story. But by the time the 80s came around, mainframe OSs weren't being written in FORTRAN or ALGOL any longer. Software running on top of them might be, but you're still at the mercy of all that insecure C code running beneath it.
> Actually, those systems didn't. In the early 80s most protocols were still plain ASCII.
DES was standardised in '77, and in use before that. SSL was not the first time the world adopted encrypted protocols.
The NSA wouldn't have bothered weakening the standard if it were something nobody used.
DES wasn't commonplace though (or at least not on the mainframes I worked on). But maybe that says more about the places I worked early in my career?
Also DES is trivial to crack because it has a short key length.
Longer keys require more compute power and thus the system requirements to handle encryption increase as the hardware to decrypt becomes more powerful.
The key size at IBM was larger before standardisation. DES is trivial to break because of NSA involvement in weakening all the corners. [0]
> In the development of the DES, NSA convinced IBM that a reduced key size was sufficient;
Minitel used DES, and other security layers, and was in use for credit cards, hospitals, and a bunch of other places. The "French web" very nearly succeeded, and did have these things in '85. It wasn't just mainframes - France gave away Minitel terminals to the average household.
[0] https://www.intelligence.senate.gov/wp-content/uploads/2024/...
Yeah, I'd written about Minitel in a tech journal several years back. It's a fascinating piece of technology, but sadly I never got to see one in real life.
I worked on one payroll mainframe in the 80s that didn't have DES. So it wasn't quite as ubiquitous as you might think. But it does still sound like it was vastly more widespread than I realised.
This. An old netbook can emulate a PDP-10 with ITS, Maclisp and some DECnet/TCP-IP clients and barely suffer any lag...
Also, the Amigas have AmiSSL and it will run on a 68040 or some FPGA with the same constraints. IRC over TLS, Gemini, JS-less web, Usenet, email... not requiring tons of GB.
Nowadays even the Artemis crew can't properly launch Outlook. If I were the IT manager I'd just set up Claws Mail/Thunderbird with file attachments, msmtp + isync as backends (caching and batch sending/receiving of email, you know, high-end technology inspired by the 80s) and NNCP to relay packets where connection cuts in space are a given, so NNCP can just push packets on demand.
The cost? My Atom N270 junk can run NNCP, and it's written in damn Go. Any user can understand Thunderbird/Claws Mail. They don't need to set up anything; the IT manager would set it all up and the mail client would run seamlessly, you know, with a fancy GUI for everything.
Yet we are suffering the 'wonders' of vibe coding and Electron programmers pushing fancy technology where the old stuff would just work, as it's been tested like crazy.
> Also the Amiga's have AmiSSL and it will run on a 68040 or some FPGA with same constraints. IRC over TLS, Gemini, JS-less web, Usenet, EMail... not requiring tons of GB.
AmiSSL came out long after the C64 was a relic, and it required hardware an order of magnitude more powerful than the C64 ;)
The BASIC 10Liner competition wants you to know that there is a growing movement of hackers who recognize the bloat and see, with crystal clarity, where things kind of went wrong ...
https://basic10liner.com/
".. and time and again it leads to amazingly elegant, clever, and sometimes delightfully crazy solutions. Over the past 14 editions, more than 1,000 BASIC 10Liners have been created, each one a small experiment, a puzzle, or a piece of digital creativity .."
That website seems to be gone now, unless it's supposed to redirect to a sketchy German Wix ad…
The website is there as of this comment. Yes there's a wix ad, but it seems normal (it just points to a wix sign up page) and not sketchy to me.
It's redirecting to homeputerium.de and seems to have nothing to do with what they're referring to.
Y'all can't spend more than 5 seconds looking at the UI before giving up?
One of the only UI components on the homepage is a list of years you can click to see the entries.
Pretty sure I made it clear I looked at it, and looks like a domain squatter with no relation to the original comment. Why would I click around further?
Edit: Also y'know what? Those years aren't there on page load. They zoom in a few seconds later. I may not have even seen them, just Wix and then scrolled down to the German text that apparently refers to a school computer lab.
.. a pity you missed it, in case you did, because the Basic 10 Liner competition is really, really cool.
The tragic result of attention span atrophy and deskilling.
There was one time I was troubleshooting why an app used at a company would crash after some amount of time had passed. Investigating the crash dumps showed it using 4GB of RAM before it died: suspiciously, the limit for a 32-bit application.
It turned out they never closed the files it worked on, so over time it just consumed RAM until there wasn't any more for it to use.
I grew up with and absolutely adore The Last Ninja series. I'm not going to comment on the size thing because it's so trite.
Instead - here's [0] Ben Daglish (on flute) performing "Wastelands" together with the Norwegian C64/Amiga tribute band FastLoaders. He unfortunately passed away in 2018, just 52 years old.
If that tickled your fancy, here's [1] a full concert with them where they perform all songs from The Last Ninja.
[0] https://www.youtube.com/watch?v=ovFgdcapUYI [1] https://www.youtube.com/watch?v=PTZ1O1LJg-k
Reyn Ouwehand, who composed The Last Ninja 3, with FastLoaders:
https://www.youtube.com/watch?v=0bobBcV4HcY
He also has a few nostalgia triggering covers of some Galway tracks.
https://www.youtube.com/watch?v=n7niD6i4020
https://www.youtube.com/watch?v=PTSUR3RHh9M
R.I.P. Ben. He was such a positive human being, encouraging you to do great things even if you doubted yourself.
Here is a little clip of him from Bedroom to Billions: https://www.youtube.com/watch?v=aRsLOUYL3mk
The first time I ever heard The Glitch Mob I had such a clear memory of this games soundtrack come to mind that I mentioned it to my brother soon after (as it was his commodore and his copy of the game I was playing when I was young). I'm not even sure if the song I heard even sounds like the game soundtrack particularly closely, but the connection in my mind was very strong.
I know exactly how you feel - The Way Out Is In (https://youtu.be/kqFqG-h3Vgk) heavily evokes video games for me
Here's more from FastLoaders:
https://c64audio.com/pages/fastloaders
> isometric on the C64 with such an amazing level of detail - simply gorgeous
Or a convincing representation of that. A lot of old tricks mean that the games are doing less than you think they are, and they are better understood when you stop asking "how do they do that?" and start asking "how are they convincing my brain that this is what they are doing?".
Look at how little RAM the original Elite ran in on a BBC Model B, with some swapping of code from disk [0]: 32KB, less the 7.75KB taken by the game's custom screen mode [2] and a little more reserved for other things [1]. I saw breathless reviews at the time, and have seen similar nostalgic reviews more recently, talking about "8 whole galaxies!" when the game could easily have had far more than that and at one point was going to. They cut it down not for technical reasons but because having more didn't feel usefully more fun and might actually have put people off. The galaxies were created by a clever little procedural generator, so adding more would have added only a couple of bytes each (to hold the seed and maybe other params for the generator).
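The "couple of bytes per galaxy" trick generalizes: regenerate content from a seed instead of storing it. This toy Go sketch is NOT Elite's actual generator (which used its own seed-twisting scheme); it only illustrates the idea that a whole set of system names costs one integer of storage:

```go
package main

import "fmt"

// next is a small xorshift-style PRNG step (parameters chosen for
// illustration, not cryptographic or statistical quality).
func next(s uint32) uint32 {
	s ^= s << 13
	s ^= s >> 17
	s ^= s << 5
	return s
}

// systemName derives a pronounceable name for system n of the galaxy
// identified by seed, with no per-system data stored anywhere.
func systemName(seed uint32, n int) string {
	syllables := []string{"la", "ve", "ti", "so", "qu", "en", "di", "or"}
	s := seed + uint32(n)*2654435761 // Knuth-style multiplicative mix
	name := ""
	for i := 0; i < 3; i++ {
		s = next(s)
		name += syllables[s%uint32(len(syllables))]
	}
	return name
}

func main() {
	// Two whole "galaxies" differ only by one 32-bit seed each.
	for galaxy := uint32(1); galaxy <= 2; galaxy++ {
		fmt.Printf("galaxy %d: %s %s %s\n", galaxy,
			systemName(galaxy, 0), systemName(galaxy, 1), systemName(galaxy, 2))
	}
}
```

Because the generator is deterministic, the game never needs to store the names at all; it recomputes them on demand, which is why adding galaxies was nearly free.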
Another great example of not quite doing what it looks like the game is doing is the apparently live-drawn 3D view in the game Sentinel on a number of 8-bit platforms.
--------
[0] There were two blocks of code that were swapped in as you entered or left a space station: one for while docked and one for while in-flight. Also, the ship blueprints were not all in memory at the same time; a different set was loaded as you jumped from one system to another.
[1] The CPU call stack (technically up to a quarter KB, though the game code needed less than half of that), scratch space in page zero mostly used for game variables but partly used by things like the disk controller ROM and sound generator, etc.
[2] Normal screen modes close to that consumed 10KB. Screen memory consumption on the BBC Master Enhanced version was doubled, as it was tweaked to use double the bit depth (4bpp for the control panel and 2bpp for the exterior, instead of 2bpp and 1bpp respectively).
Apparently this person is referring to the available RAM on a Commodore 64. The media (data) on disk or tape was much more than that.
Not much more. It all fits on a single side of a 1541 floppy. Even considering compression it couldn't be more than a couple hundred kilobytes.
https://csdb.dk/release/?id=99145
It's not much, but relatively speaking it's much more.
I'd say up to a couple of hundred is much more than 40. Not a full decimal order of magnitude, but even without compression the 170KB on one side is up to 4½×.
> Not much more. It all fits on a single side of a 1541 floppy.
It could still be much more depending on how much data fits on a single side of a 1541 floppy.
You can access nearly 64KB of RAM on the C64 if you don't need the BASIC or Kernal (sic) ROMs; they can be toggled in or out in software. Agreed that even the tape had more game data than that, but not much more.
However, very few tapeloader games ever tried to load more assets from tape. Generally it would just load a memory image and that would be that for the entire game.
But that's also kind of what makes it impressive in a different way. Even if the game was larger on disk/tape, they still had to stream it in tiny chunks and make it run within those constraints
If we're talking about fitting a quart into a pint pot, it would be remiss not to mention Elite fitting into a BBC Model B, 32KB, and the excellent code archaeology of it, and its variants, by Mark Moxon here: https://www.bbcelite.com/
A multi-level generative dungeon-crawler in 10 lines of code:
https://bunsen.itch.io/the-snake-temple-by-rax
We lost something in the bloat, folks. It's time to turn around and take another look at the past, or at least re-adjust the rearview mirror to actually look at the road and not one's makeup ..
Gluecode-First Engineering: the free-love utopia of sharing code resulted in engineers abandoning whole-design and defaulting to just creating mash-ups of pre-existing code.
Nobody designs whole-apps anymore; it's all about minimizing the gluecode written for the 1200 dependencies that make your app buzzword-compliant.
It's kind of amazing how much of those old games was actual logic instead of data.
Feels like they were closer to programs, while modern games are closer to datasets.
Chris Crawford called this "process intensity", he noted it at least back to 1983 with Dragon's Lair, discussed in this 1987 article https://www.erasmatazz.com/library/the-journal-of-computer/j...
Funny, because I rewrote a bad port of Dragon's Lair for a custom console: a tiny engine and a relatively huge dataset, with each frame having one "if press X goto frame Y" instruction.
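That engine shape is worth sketching: almost all data, almost no logic. A hypothetical Go version, with made-up frame numbers, clips and inputs:

```go
package main

import "fmt"

// Frame is pure data: which clip to play, and "if the player presses X,
// go to frame Y" transitions. The whole game lives in this table.
type Frame struct {
	clip        string       // which video segment to play
	transitions map[rune]int // input -> next frame
	deathFrame  int          // where a wrong or missing input sends you
}

var frames = map[int]Frame{
	0: {"intro", map[rune]int{'R': 1}, 99},
	1: {"bridge", map[rune]int{'U': 2, 'R': 1}, 99},
	2: {"castle", map[rune]int{}, 99}, // terminal frame
}

// step is the entire "engine": consume one input, return the next frame.
func step(current int, input rune) int {
	f := frames[current]
	if next, ok := f.transitions[input]; ok {
		return next
	}
	return f.deathFrame
}

func main() {
	pos := 0
	for _, in := range "RU" {
		pos = step(pos, in)
		fmt.Printf("pressed %c -> frame %d (%s)\n", in, pos, frames[pos].clip)
	}
}
```

The contrast with process-intensive games is stark: here every byte of gameplay is in the table, and the "logic" is a single map lookup.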
Most games back then were small. A C64 only had 64K and most games didn't use all of it. An Atari 800 had at most 48K; it wasn't until the 1200 that it went up. Both systems ran cartridge-based games, many of which were 8K.
Honestly though, I don't read much into the sizes. Sure, they were small games with lots of game play, for some definition of game play. I enjoyed them immensely. But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc. For me at least, the modern affordances mean something. Of course I don't need every game to look like Horizon Zero Dawn. A Short Hike was great. It's also 400MB (according to Steam).
The modern classic, Animal Well, is only 35mb in size!
https://store.steampowered.com/app/813230/ANIMAL_WELL/
Wow, you could fit tens of those in one bit!
(sorry)
> Sure they were small games and had lots of game play for some defintion of game play. I enjoyed them immensely. But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc... for me at least, the modern affordances mean something.
On one hand, you're of course right. It is hard to go back, except for the nostalgia.
On the other, do you know there is a scene of people still making brand new games for the Commodore 64 (and other home computers)? And selling them, too, these are not just free games. Of course the target audience is themselves, they make, sell and buy games within the community, but the point is it still exists.
Also there are artists making art in C64 graphics resolutions and color modes, and even PETSCII art enthusiasts (PETSCII is C64's text mode, which had some interesting symbols which facilitate creativity).
>But it's hard to go back to just a few colors, low-res graphics, often no way to save, etc... for me at least, the modern affordances mean something.
All those old games have a way to save now, if you run them in an emulator as is commonly done these days. That's how I played through Metroid and finally beat the mother brain in just a day or two during the pandemic.
Pretty much every 8-bit computer game of 1987 or earlier (before the 128KB machines became popular) was under 40KB. The Spectrum and Commodore combined probably had a library in excess of 50,000 games.
I love how you can put all the games ever made for a given 8 bit platform on a single flash drive.
Amazing what you can accomplish when you have more than "a sprint" to deliver something and no project manager asking "are you done yet?"
Recovering game dev here
The publisher for this game was Activision. They absolutely had deadlines, lots of (1987) money invested in this, outsourced to a third party company in Hungary, had the outsource team fail, moved development platforms a few times, wrote a programming language and a game engine, and then became the best selling C64 game.
Very much development hell.
> "The Last Ninja" was 40 kilobytes
I have got 1.1 GB of MP3s with just remixes of the music from the three games, some of which are from a Kickstarter from the composer for the second.
That short video of the game on twitter is 11.5MB, or about 300x larger than the game itself.
x264 supports a lossless mode without chroma subsampling, which produces very good compression for raw emulator captures of retro game footage. It is much better than other codecs like HuffYUV, etc.
But for some reason, Firefox refuses to play back those kinds of files.
> But for some reason, Firefox refuses to play back those kinds of files.
And that reason is that x264 is a free and open source implementation of the H.264 codec, and you still need to pay for a license to use the patented technology regardless of how you do that. Using a free implementation of the codec doesn't get you a free license for the codec.
Haven't those patents expired by now?
Some have, but it depends on the profile used, and also on the country: https://meta.wikimedia.org/wiki/Have_the_patents_for_H.264_M...
Just in the US. Not in Europe. At least for decoding.
I'm not sure this is particularly telling. You can write a tiny program that generates a 4K image, and the image could be 1000x larger.
Or, if I write a short description "A couple walks hand-in-hand through a park at sunset. The wind rustles the orange leaves.", I don't think it would be surprising to anyone that an image or video of this would be relatively huge.
I shipped a browser game that was 8KB. Okay, plus 30 million lines of Chromium ;)
Most of my games are roughly in that range though. I think my MMO was 32KB, and it had a sound effects generator and speech synth in it. (Jsfxr and SAM)
I built it in a few days for a game jam.
I'm not trying to brag, I'm trying to say this stuff is easy if you actually care. Just look at JS13K. Every game there is 13KB or below, and there's some real masterpieces there. (My game was just squares, but I've seen games with whole custom animation systems in them.)
Once you learn how, it's pretty easy. But you'll never learn if you don't care.
You have to care because there's nothing forcing you. Arguably The Last Ninja would have been a lot more than 40KB if there weren't the hardware limitations of the time.
They weren't trying to make it 40KB, they were just trying to make a game.
In my case, I enjoy the challenge! (Also I like it when things load instantly :)
I think I'll make a PS1 game next. I was inspired by this guy who made a Minecraft clone for Playstation:
https://youtu.be/aXoI3CdlNQc?is=sDNnrGbQGJt_qnV6
P.S. most Flash games were only a few kilobytes, if you remove the music!
I was comparing game prices last week and I found that prices from the 80s aren't too different from modern game prices.
Elite was £20 in 1984, which would be £66 today; not very different from what a good game for the PS5 costs.
Except that games then were made by one or two people and nowadays games are made by teams with coders, musicians, artists, etc.
Yeah, the games industry is in a pretty big crisis right now, and I think change needs to happen both ways:
Consumers need to understand that keeping games at the same price for decades despite rising costs and inflation is not realistic. If they want the industry to thrive, they need to be ok with games being more expensive.
Meanwhile, developers need to stop making games so expensive. This is an entertainment industry / corpo problem, really. Companies have seen the big profits and decided that only the big profits will do, which means you need to make a big open world cinematic experience, which is expensive, and because it's expensive, they won't take risks on making anything actually interesting.
The only way gaming moves forward is if we make riskier games that cost less to produce, which is why indies are the ones making the good games these days.
A few years ago, I decompiled a good part of the PC version of Might & Magic 1 for fun. According to Wikipedia, it had been released in 1986, although I don't know whether that refers to the PC version or to the original Apple II version.
It is quite a big game: the main executable is 117KB, plus around 50 overlay files of 1.5KB each for the different dungeons and cities, plus the graphics files. I guess it was even too big for the average PC hardware at that time, or it was a limitation inherited from the original Apple II version: when you want to cast a spell, you have to enter the number of the spell from the manual, maybe because there was not enough memory to fit the names of the 94 spells into RAM. Apart from that, the limited graphics, and the lack of sound, the internal ruleset is very complete. You have all kinds of spells and objects, capabilities, an aging mechanism, shops, etc. The usual stuff that you also see in today's RPGs.
The modern uninstall.exe that came with it (I bought the game on GOG) was 1.3MB big.
>When you want to cast a spell you have to enter the number of the spell from the manual, maybe because there was not enough memory to fit the names of the 94 spells into RAM
Probably not ;) "Enter things from the manual" was a tried-and-true copy-protection technique. If you used the warez version you presumably did not have a manual, so you got stuck. This didn't run on the 8008 or whatever; I'm sure the game could have known the names of the spells fairly easily.
Ah, that makes more sense than my theory. It's a weak copy protection method, though, as you can just try and see what happens, and I think they dropped it in M&M3.
We made the most of limited resources back then. Back in 1980, I was living large with my 64KB Apple II with dual 140KB floppy drives and a 10 inch (9 inch? I can't quite remember) amber monochrome monitor. Most had less.
A lot of trial and error. I've built graphical tools with GD in PHP; the difficult part for me was that the coordinates were inverted. I only knew how to draw lines and pixels, but I got the job done.
Around the time DirectX came around and the first games requiring it appeared, which in my memory coincided with hard drives getting way bigger and the first games being delivered on a CD instead of floppies, I was appalled to see literal BMPs being written to disk during installation. This was the same time when cracked games were being distributed via BBS at a fraction of the original size, with custom installers which decompressed MP3s back to the original WAV files. I asked the same questions then: why WAV, why BMP, why the bloat? With time I learned the answer: disk space is cheap, memory and CPU cycles are not. If you can afford to save yourself the decoding step, you just do it, and your players will love it. You work with the constraints you have, and when they loosen up, your possibilities expand too.
I remember playing a version of this game on ZX Spectrum but I cannot find it on the internet. I remember it had bees that you had to avoid and a boat which you were able to untie so that it floats down a stream.
Anybody remember this one?
I remember this game, the way it drew itself on each screen, the nice graphics. Growing up with games on Atari, Commodore, Amstrad, and Spectrum, was a lot of fun.
By comparison, COD Modern Warfare 3 is 6,000,000 times larger at 240GB. Imagine telling that to someone in 1987.
The Last Ninja ran at resolution 160x200, with effectively 2-bit color for graphic assets. It had amazing animations for that level of detail, but all the variety of the graphics could not take too much RAM even if it wanted to.
The quest for photorealistic "movie-like" rendering which requires colossal amounts of RAM and compute feels like a dead end to me. I much appreciate the expressly unrealistic graphics of titles like Monument Valley.
Hardware sprite accelerators, the first GPUs. I swear there's something visceral you learn by programming that sort of system where you can literally see what it's doing, in the order it's doing it, which you just can't get any other way.
That's just incredible. People used to be so much better at programming, or at least great programmers had it easier to get funded. Most of what I see today is exceptionally low quality and just getting worse with time.
My website https://midzer.de/ is themed like "The Last Ninja II" which is the first game I've encountered when I was young.
See also Elite in 22KB
https://youtu.be/lC4YLMLar5I
Previously: https://news.ycombinator.com/item?id=38707095
I never figured out how they did the turtle graphics in this game. The C64 didn't have whole screen bitmaps, you could either use sprites or user defined character sets, neither of which made this straightforward.
And the loading screens were also amazing, particularly for tape loading.
As others have said, the C64 does have bitmap modes, though it's understandable not being aware of it as they weren't that commonly used for games since it was often easier to use user defined character sets as tilesets if you had repetition.
The C64 does have a couple of bitmap modes. The Last Ninja uses mode 3, which is multicolor bitmap mode. It occupies 9000 bytes including pixels (8000 bytes) and color RAM (1000 bytes).
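For anyone checking the arithmetic, a quick back-of-the-envelope sketch in Python (the figures come straight from the numbers above):

```python
# C64 multicolor bitmap mode (VIC-II mode 3): 160x200 effective pixels,
# 2 bits per pixel, plus one color byte per 8x8 character cell.
pixels = 160 * 200
bitmap_bytes = pixels * 2 // 8          # 2 bits per pixel -> 8000 bytes
color_cells = (320 // 8) * (200 // 8)   # 40x25 cells -> 1000 bytes of color RAM
total = bitmap_bytes + color_cells      # 9000 bytes, matching the figure above
print(bitmap_bytes, color_cells, total)
```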
The TI99/4a version of the Logo language which has turtle graphics used user defined characters to implement them. There were only (I think) 128 user definable characters, and when the turtle graphics had redefined all of them to create its output, it gave the user a message, "out of ink".
You might be thinking of another system (like the NES, perhaps), because the C64 has 160x100 and 320x200 bitmap modes.
Even on NES a lot of games use CHR-RAM so arbitrary bitmaps are at least possible, though only a small part of the screen is unique without some rarely used mapper hardware. Zelda and Metroid mostly just use this to compress the graphics in ROM, Qix is a simple example with line drawing, Elite is an extreme one.
I made a demo of the Mystify screensaver using the typical 8KB CHR-RAM. Even with a lot of compromises it has pretty large borders to avoid running out of unique tiles. https://youtube.com/watch?v=1_MymcLeew8
Elite is my go-to example for madness in a tile-based graphics system. Watching the CHR-RAM in an emulator while Elite is running is mesmerizing.
Speaking of the size: my first PC, built by a family friend, had a 80MB disk, split into two partitions. The second 40MB partition had Windows 3.1 and about two Norton Commander columns full of games on it, largest of which were Wolfenstein 3D and Lost Vikings with about 1.4MB each. Truly a different era.
The consequence of "space is cheap" / "If I didn't use that RAM, it would just sit there unused anyway" etc.
Some comments here sound like the ones I hear from car "enthusiasts" praising old engines for being simple to run and easy to fix, then complaining about modern engines being too complicated and how we should return to the "good old days", all that without taking into account the decades of progress since then.
Want to prove a point? Give me Skyrim in 64k of ram. Go ahead! I dare you!
So you can read replies etc. without having to be logged in to X: https://xcancel.com/exQUIZitely/status/2040777977521398151
Not as small as The Last Ninja, but when I was a teenager first getting into emulation, I genuinely thought there was a mistake or my download got interrupted when I downloaded Super Mario Bros. 3, because it was only like 500kb [1], and I didn't think it was possible for a game that huge to be less than a megabyte.
It is still impressive to me how much game they could squeeze out of the NES ROM chips.
[1] Or something like that, I don't remember the exact number.
40kb and it felt like a full world... I'm burning through tokens to get AI to decide whether to go to the tavern or the market. Something went wrong somewhere
God I loved that game. Don't think I ever managed to finish and now I'm tempted to try again!
I played the game. Music was exceptional.
It really was. I was just wondering if Last Ninja 2 (Amiga) was the first game I actually liked playing. I mostly hated old games and I still don't like most games. Particularly ones with twitchy controls or platforming. LN wasn't that easy and it was very linear, but it was still somehow incredibly fun. And the music and even the graphics were great.
The music and atmosphere was gorgeous. Fond memories of wasted youth
I never finished the game, sadly.
We live in an age of abundant memory... until you check RAM prices.
It really puts into perspective how different the constraints were
That game felt like a graphics demo though. Almost unplayable.
Some Pokémon Crystal ROMs pack a huge amount of gaming in very few MB. Z80-ish ASM, KBs of RAM.
The Z-machine games, ditto. A few KBs and an impressive simulated environment that will run even on 8-bit machines running a virtual machine. Of course z3 games will have fewer features for parsing/object interaction than z8 games, but anything from a 16-bit machine up (nothing by today's standards; a DOS PC would count) will run z8 games and get pretty complex text adventures. Compare Tristam Island or the first Zork I-III to Spiritwrak, where a subway is simulated, or Anchorhead.
And you can code the games with Inform6 and Inform6lib on maybe a 286 or 386 with DOS and any text editor. Check the Inform Beginner's Guide and DM4.pdf. And not just DOS: Windows, Linux, BSD, Macs... even Android under Termux. And the games will run under Frotz for Termux, Lectrote, or Fabularium. Under iOS, too.
Nethack/Slash'EM weighs MBs and has tons of replayability. Written in C. It will even run on a 68020 System 7-based Mac... emulated under 9front with a 720 CPU as the host. It will fly on anything from a 486 up.
Meanwhile, Cataclysm: DDA uses C++ and needs a huge chunk of RAM and a fast CPU just to compile it today. Some high-end Pentium 4 with 512MB of RAM will run it well enough, but you need to cross-compile it.
If I had the skills I would rewrite (no AI/LLMs, please) CDDA:BN in Golang. Compile times would plummet and CPU usage would be nearly the same. Of course the GC would shine here, pruning tons of unused code and data from generated worlds.
Oh man the tape loading time. I dreamed about being able to afford a disk drive.
The loading music is exceptional and I enjoyed listening to it while waiting.
I still occasionally listen to it.
Last Ninja has my favorite music from the C64 era.
Have you listened to the live versions by the Fastloaders? They had Ben Daglish before he passed away.
I will now :D
Well I got to listen to it a lot lol.
Since you enjoy SID music checkout this crazy hack someone did with 8 SID chips.
https://www.youtube.com/watch?v=nhz3vHYX0E0
Despite being a mid-late millennial, I can see how this played out. Even compared to the second family computer my parents got in the late 90s, which was an absolute monster at the time, I realize how many corners developers had to cut to get a game going in a few hundred megabytes, seeing mobile games today easily exceed ten times that, and not just now but even 10 years ago when I was working at a company that made mobile games. These days, developers automatically assume everyone has what are effectively unlimited resources by 90s standards (granted they haven't transitioned to slop-coding, which makes it substantially worse).

Personally, I have a very strange but useful habit: when I find myself with some spare time at work, I spin up a very under-powered VM, run what is in production, and try to find optimizations. One of my data pipelines is pretty much insanity in terms of scale, and running it took over 48 hours. Last time (a few weeks ago, actually), I did the VM thing and started looking for optimizations. I found a few, which were completely counter-intuitive at first, and everyone was like "na, that makes no sense". But now the pipeline runs in just over 10 hours. It's insane how many shortcuts you force yourself to find when you put a tight fence around you.
Yes this is a great methodology. I found developing BrowserBox (which is real time interactive streaming for remote browsers), using slow links, and a variety of different OS, really stresses parts of the system and causes improvements to be necessary that strengthen the whole.
I wonder, could you make a game that small by using SVGs?
Is this even correct? It was a two-sided disk, and each side was 174 KB.
Masterpieces like these are a perfect demonstration that performance relies not only on fast processors, but on understanding how your data and code compete for resources. Truly admirable. Thanks for the trip down memory lane.
> ... 40 kilobytes.
How times have changed. My best-selling program "Apple Writer", for the Apple II, ran in eight kilobytes. It was written entirely in 6502 assembly language.
Wow that search/interact mechanic is obnoxious, you can see the player fumbling it every time, despite knowing exactly where the item is they're trying to collect.
This is sort of the defining mechanic of these games in my memory. The first thing that pops into my head when I think of Last Ninja is aligning and realigning myself, and squatting, awkwardly and repeatedly (just like a real ninja, lol), until that satisfying new item icon appears. Perhaps surprisingly, these are very fond memories.
This mechanic is augmented by not even always knowing which graphics in the environment can be picked up, or by invisible items that are inside boxes or otherwise out of sight (I think LN2 had something in a bathroom? You have to position yourself in the doorway and do a squat of faith).
The other core memory is the spots that require a similarly awkward precision while jumping. These are worse, because each failure loses you one of your limited lives. The combat is also finicky. I remember if you come into a fight misaligned, your opponent might quickly drain your energy while you fail to get a hit in.
At the time, it seemed appropriate to me that it required such a difficult precision to be a ninja. I was also a kid, who approached every game non-critically, assuming each game was exactly as it was meant to be. Thus I absolutely loved it, lol.
> LN2 had something in a bathroom?
Toilet flush chains. You entered two different park restrooms (both marked F) and combined them into nunchuks.
> LN2 had something in a bathroom? You have to position yourself in the doorway and do a squat of faith)
Sounds like every time I go to the bathroom ... :D
joysticks only had one fire button.
And it was one of the best games ever made. Back in the day equivalent to a AAA tier game of today.
Constraints breed creativity.
The same size as Super Mario Bros. (NES, 1985)
A game which was actually 40 kilobytes: Super Mario Bros. It had 32 side-scrolling levels.
27 unique levels. 40KB minus a handful of spare bytes and some unused code. The max the NES can support without mappers. Modern NES homebrew and demoscene can do fancier stuff with this budget given the extra decades of learned tricks, but for the state of console gaming in 1985, SMB1 is damn impressive.
Also remember all of that was ROM, the NES had a mere 2 kilobytes of RAM for all your variables and buffers.
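As a quick sketch of that budget (these are the standard figures for the mapperless NROM board):

```python
# NROM, the NES board with no mapper hardware, caps a cart at
# 32 KiB of PRG-ROM (code + data) plus 8 KiB of CHR-ROM (tile graphics).
prg_rom = 32 * 1024
chr_rom = 8 * 1024
total_rom = prg_rom + chr_rom   # 40960 bytes: the "40 KB" everyone quotes
work_ram = 2 * 1024             # all runtime state squeezed into 2 KiB
print(total_rom, work_ram)
```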
> I still struggle to comprehend, even in the slightest, how programmers back then did what they did - and the worlds they created with the limitations they had to work with.
Highly related: two videos covering exactly how they fit...
- Super Mario Bros 1 into 40KiB (https://www.youtube.com/watch?v=1ysdUajrhL8)
- and Super Mario Bros 2 into 256KiB (https://www.youtube.com/watch?v=UdD26eFVzHQ)
I highly advise watching the actual videos to best understand, since all the techniques used were very likely devised from a game-dev perspective, rather than by invoking any abstract CS textbook learning.
But if I did want to summarize the main "tricks" used, in terms of such abstract CS concepts:
1. These old games can be understood as essentially having much of their data (level data, music data, etc) "compressed" using various highly-domain-specific streaming compressors. (I say "understood as" because, while the decompression logic literally exists in the game, there was likely no separate "compression" logic; rather, the data "file formats" were likely just designed to represent everything in this highly-space-efficient encoding. There were no "source files" using a more raw representation; both tooling and hand-edits were likely operating directly against data stored in this encoding.)
2. These streaming compressors act similar to modern multimedia codecs, in the sense that they don't compress sequences-of-structures (which would give low sequence correlation), but instead first decompose the data into distinct, de-correlated sub-streams / channels / planes (i.e. structures-of-sequences), which then "compress" much better.
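A toy way to see point 2 in action. This is only an illustration, with zlib standing in for those domain-specific compressors and made-up data for the channels; the idea is just that interleaving a regular channel with a noisy one destroys the runs a compressor could otherwise exploit:

```python
import random
import zlib

random.seed(0)
n = 2000
# One highly regular "channel" (think: a constant terrain plane)...
regular = bytes(n)
# ...and one noisy channel (think: per-column object data).
noisy = bytes(random.randrange(256) for _ in range(n))

# Sequence-of-structures: the channels interleaved record by record.
interleaved = bytes(b for pair in zip(regular, noisy) for b in pair)
# Structure-of-sequences: each channel stored as its own plane.
planar = regular + noisy

# The planar layout compresses better because the regular plane
# collapses to almost nothing.
assert len(zlib.compress(planar)) < len(zlib.compress(interleaved))
```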
3. Rather than attempting to decompose a single lossless description of the data into several sub-streams that are themselves lossless descriptions of some hyperplane through the data, a different approach is used: that of each sub-channel storing an imperative "painting" logic against a conceptual mutable canvas or buffer shared with other sub-channels. The data stream for any given sub-channel may actually be lossy (i.e. might "paint" something into the buffer that shouldn't appear in the final output), but such "slop"/"bleed" gets overwritten, either by another sub-channel's output, or by something the same sub-channel emits later on in the same "pass". The decompressor essentially "paints over" any mistakes it makes, to arrive at a final flattened canvas state that is a lossless reproduction of the intended state.
4. Decompression isn't something done in its entirety into a big in-memory buffer on asset load. (There isn't the RAM to do that!) But nor is decompression a pure streaming operation, cleanly producing sequential outputs. Instead, decompression is incremental: it operates on / writes to one narrow + moving slice of an in-memory data "window buffer" at a time. Which can somewhat be thought of as a ring buffer, because the decompressor coroutine owns whichever slice it's writing to, which is expected to not be read from while it owns it, so it can freely give that slice to its sub-channel "painters" to fill up. (Note that this is a distinct concept from how any long, larger-than-memory data [tilemaps, music] will get spooled out into VRAM/ARAM as it's being scrolled/played. That process is actually done just using boring old blits; but it consumes the same ring-buffer slices the decompressor is producing.)
5. Different sub-channels may be driven at different granularities and feed into more or fewer windowing/buffering pipeline stages before landing as active state. For example, tilemap data is decompressed into its "window buffer" one page at a time, each time the scroll position crosses a page boundary; but object data is decompressed / "scheduled" into Object Attribute Memory one column at a time (or row at a time, in SMB2, sometimes) every time the scroll position advances by a (meta)tile width.
6. Speaking of metatiles: sub-channels, rather than allowing full flexibility of "write primitive T to offset Y in the buffer", may instead only permit encodings of references to static data tables of design-time pre-composed patterns of primitives. For tilemaps, these patterns are often called "meta-tiles" or "macro-blocks". (This is one reason sub-channels are "lossy" reconstructors: if you can only encode macro-blocks, then you'll often find yourself wanting only some part of a macro-block, which means drawing it and then overdrawing the non-desired parts of it.)
7. Sub-channels may also operate as fixed-function retained-mode procedural synthesis engines, where rather than specifying specific data to write, you only specify for each timestep how the synthesis parameters should change. This is essentially how modular audio synthesis encoding works; but more interestingly, it's also true of the level data "base terrain" sub-channel, which essentially takes "ceiling" and "ground" brush parameters, and paints these in per column according to some pattern-ID parameter referencing a table of [ceiling width][floor height] combinations. (And the retained-mode part means that for as long as everything stays the same, this sub-channel compresses to nothing!)
8. Sub-channels may also contain certain encoded values that branch off into their own special logic, essentially triggering the use of paint-program-like "brushes" to paint arbitrarily within the "canvas." For example, in SMB1, a "pipe tile" is really a pipe brush invocation, that paints a pipe into the window, starting from the tile's encoded position as its top-left corner, painting right two meta-tiles, and downward however-many meta-tiles are required to extend the pipe to the current "base terrain" floor height.
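A hypothetical sketch of such a brush. The names, tile codes, and layout here are all invented for illustration, not SMB1's actual data format; the point is just that one encoded position fans out into a whole column-spanning paint operation:

```python
# Canvas is one page of a tile map, indexed canvas[x][y], with y=0 at the top.
def paint_pipe(canvas, x, top_y, floor_y):
    """Paint a two-tile-wide pipe from its lip down to the current floor."""
    canvas[x][top_y] = "pipe_lip_L"
    canvas[x + 1][top_y] = "pipe_lip_R"
    # Extend the shaft downward until it meets the floor row.
    for y in range(top_y + 1, floor_y):
        canvas[x][y] = "pipe_body_L"
        canvas[x + 1][y] = "pipe_body_R"

page = [["sky"] * 13 for _ in range(16)]   # 16 columns x 13 rows of "sky"
# floor_y would come from the terrain sub-channel's current state.
paint_pipe(page, 4, top_y=9, floor_y=12)
```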
9. Sub-channels may encode values ("event objects") that do not decode to any drawing operation on the target slice buffer, but which instead, either immediately upon being encountered ("decompression-time event objects") or when they would be "placed" or "scheduled" if they were regular objects ("placement-time event objects"), just execute some code, usually updating some variable used during the decompression process or at game runtime. (The thing that prevents you from scrolling the screen past the end of map data is a screen-scroll-lock event object dropped at just the right position that it comes into effect right before the map would run out of tiles to draw. The thing that determines where a "warp-enabled pipe" will take you is a warp-pipe-targeting event object: it applies to all warp-enabled pipes encountered after it runs, until the next warp-pipe-targeting event object is encountered.)
If at least some of these sub-channels are starting to sound like essentially a bytecode ISA for some kind of abstract machine: yes, exactly. Things like "event objects" and "brush invocations" can be more easily understood as opcodes (sometimes with immediates!); and the "modal variables" as the registers of these instruction streams' abstract machines.
[continued...]
10. The interesting thing about these instruction streams, though, is that they're all being driven in lockstep externally by the decompressor. None of the level-data ISAs contain anything like a backward JMP-like opcode, because each level-data sub-channel's bytecode interpreter has a finite timeslice to execute per decompression timestep, so allowing back-edges [and so loops] would make the level designers into the engine developers' worst enemy. But most of the ISAs do contain forward JMPs, to essentially encode things like "no objects until [N] [columns/pages] from now." (And a backward JMP instruction does exist in the music-data parameterized-synthesis sub-channel ISA [which as it happens isn't interpreted by the CPU, but is rather the native ISA of the NES's Audio Processing Unit.] If you ever wondered how music keeps not only playing but looping even if the game crashes, it's because the music program is loaded and running on the APU and just happily executing its own loop instructions forever, waiting for the CPU to come interrupt it!)
11. These sub-channel ISAs are themselves designed to be as space-efficient as possible while still being able to be directly executed without any kind of pre-transformation. They're often variable-length, with most instructions being single-byte. Opcodes are hand-placed into the same kind of bit-level Huffman trie you'd expect a DEFLATE-like algorithm to design if it were tasked with compressing a large corpus of fixed-length bytecode. Very common instructions (e.g. a brush to draw a horizontal line of a particular metatile across a page up to the page boundary) might be assigned a very short prefix code (e.g. `11`), allowing the other six bits in that instruction byte to select a metatile to paint with from a per-tilemap metatile palette table. Rarer instructions, meanwhile, might take 2 bytes to express, because they need to "get out of the way of" all the common prefixes. (You could think of these opcodes as being filed under a chain of "Misc -> Misc -> Etc -> ..." prefixes.)
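For illustration, here's a decoder for a made-up encoding in this style. The `11` prefix, the field layout, and the "rare op = 2 bytes" rule are all hypothetical, loosely modeled on the description in point 11, not the real SMB1 format:

```python
def decode(stream):
    """Decode a toy variable-length opcode stream into (name, *operands) tuples."""
    ops, i = [], 0
    while i < len(stream):
        b = stream[i]
        if b >> 6 == 0b11:
            # Common op, 1 byte: `11` prefix + 6-bit metatile palette index.
            ops.append(("draw_line", b & 0x3F))
            i += 1
        else:
            # Rare op, 2 bytes: opcode byte followed by one operand byte.
            ops.append(("rare", b, stream[i + 1]))
            i += 2
    return ops

print(decode(bytes([0b11000101, 0x02, 0x30])))
# -> [('draw_line', 5), ('rare', 2, 48)]
```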
IMHO, these are all (so far) things that could be studied as generalizable data-compression techniques.
But here are two more techniques that are much more specific to game-dev, where you can change and constrain the data (i.e. redesign the level!) to fit the compressor:
12. Certain ISAs have opcodes that decode to entirely-distinct instructions, depending on the current states of some modal variables! (My guess is that this came about either due to more level features being added late in development after the ISAs had mostly been finalized, or due to wanting to further optimize data size and so seeing an opportunity to "collapse" certain instructions together.) This mostly applies to "brush" opcodes. The actual brush logic they invoke can depend on what the decoder currently sees as the value of the "level type" variable. In one level type, opcode X is an Nx[floor distance] hill; while in another level type, opcode X is a whale, complete with water spout! (In theory, they could have had an opcode to switch level type mid-level. Nothing in this part of the design would have prevented that; it is instead only impractical for other reasons that are out of scope here, to do with graphics memory / tileset loading.)
13. And, even weirder: certain opcodes decode to entirely-distinct instructions depending on the current value of the 'page' or 'column' register, or even the precise "instruction pointer" register (i.e. the current 'row' within the 'column'). In other words, if you picture yourself using a level editor tool, and dragging some particular object/brush type across the screen, then it might either "snap" to / only allow placement upon metatiles where the top-left metatile of the object lands on a metatile at a position that is e.g. X%4==1 within its page; or it might "rotate" the thing being dragged between being one of four different objects as you slide it across the different X positions of the metatile grid. (This one's my favorite, because you can see the fingerprint of it in much of the level design of the game. For example: the end of every stage returns the floor height to 2, so that "ground level" is at Y=13. Why? Because flagpole and castle objects are only flagpole and castle objects when placed at Y=13!)
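A minimal sketch of the modal-decode idea from point 12, using the hill/whale example (the table contents and opcode value are invented):

```python
# The same opcode byte means different brushes depending on the
# decoder's "level type" register.
BRUSH_TABLES = {
    "overworld": {0x10: "hill"},
    "water":     {0x10: "whale"},
}

def decode_brush(opcode, level_type):
    """Look up what this opcode means under the current level-type mode."""
    return BRUSH_TABLES[level_type][opcode]

assert decode_brush(0x10, "overworld") == "hill"
assert decode_brush(0x10, "water") == "whale"
```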
... and Claude Code for Linux, the CLI binary, is 200+ MB :(
"I still struggle to comprehend, even in the slightest, how programmers back then did what they did - and "
First of all, if you're going to LLMize your tweets, do it correctly and run a 2nd pass after you're done editing. Second, read a book. That's how we learned things in 1987.
https://archive.org/details/commodore-64-programmers-referen...
The LLM-use witch hunt accusations are rampant on every single article. That snippet doesn't sound like an LLM to me.
Have your coffee and consider that this person was just complimenting coders from back in the day.
Doesn't matter what you feel it sounds like: the data is held within the syntax, and this is the biggest one.
https://tropes.fyi/tropes-md#:~:text=The%20single%20most%20c...