I find the dependency creep for both rust and node unfortunate. Almost anything I add explodes the deps and makes me sweat for maintenance, vulnerabilities, etc. I also feel perpetually behind, which I think is basically frontend default mode. Go does the one thing I wish Rust had more of which is a pretty darn great standard library with total backwards compatibility promises. There are awkward things with Go, but man, not needing to feel paranoid and how much can be built with so little feels good. But I totally understand just getting crap done and taking off the tin foil. Depends on what you prioritize. Solo devs don't have the luxury.
Those deps have to come from somewhere, right? Unless you're actually rolling your own everything, and with languages that don't have package managers what you end up doing is just adding submodules of various libraries and running their cmake configs, which is at least as insecure as NPM or Crates.io.
Go is a bit unique as it has a really substantial stdlib, so you eliminate some of the necessary deps, but it's also trivial to rely on established packages like Tokio etc., vendor them into your codebase, and not have to worry about it in the future.
Python used to have a great standard library, too. But now it's stuck with a bunch of obsolete packages and the packaging story for Python is awful.
In a decade or so the awkward things about Go will have multiplied significantly and it'll have many of the same problems Python currently has.
Lots of removals have already happened and uv took over packaging in Python-land.
Which, ironically, is written in Rust.
Well, Python is largely written in C, so there's that.
I just ported (this week) a 20-year-old Python app to uv/polars. (With AI it took two days). App is now 20x faster.
Both uv and polars are technically Rust, too.
These are two sides of the same coin. Go has its quirks because they put things in the standard library so they can't iterate (in breaking manners), while Rust can iterate and perfect ideas much faster as it's driven by the ecosystem.
There is a moral hazard here. By accepting that APIs are forever, you tend to be more cautious and move toward getting it right the first time. Slower is better... And also faster in the long run, as things compose. Personally, I do believe that there is one best way to do things quite often, but time constraints make people settle.
At least it is my experience building some systems.
Not sure it is always a good calculus to defer the hard thinking to later.
The cost of "perfecting" an idea here is ruining the broader ecosystem. It is much, much better for an API to be kinda crappy (but stable) for historical reasons than dealing with the constant churn and fragmentation caused by, for example, the fifth revision of that URL routing library that everyone uses because everyone uses it. It's only made worse by the orthogonal but comorbid attitude of radically minimizing the scope of dependencies.
I think "the fifth revision of that URL routing library that everyone uses" is a much less common case than "crate tried to explore a problem space, five years later a new crate thinks it can improve upon the solution", which is what Rust's conservatism really helps prevent. When you bake a particular crate into std, competitor crates now have a lot of inertia to overcome; when they're all third-party, the decision is not "add a crate?" but "replace a crate?", which is more palatable.
Letting an API evolve in a third-party crate also provides more accurate data on its utility; you get a lot of eyes on the problem space and can try different (potentially breaking) solutions before landing on consensus. Feedback during a Rust RFC is solicited from a much smaller group of people with less real-world usage.
Which has been working great for go, right. They shipped "log" and "flag" stdlib packages, so everyone uses... well, not those. I think "logrus" and "zap" are probably the most popular, but there's a ton of fragmentation in Go because of the crappy log package, including Go itself now shipping two logging packages in the stdlib ('log/slog').
Rust on the other hand has "log" as a clear winner, and significantly less overall fragmentation there.
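For concreteness, here's roughly what "one logging interface" means in practice: libraries emit through the `log` facade and the binary picks a backend. A minimal sketch, assuming the `log` and `env_logger` crates (the backend choice is just an example):
```rust
// Assumed dependencies: log = "0.4", env_logger = "0.11"
use log::{info, warn};

fn main() {
    // Any backend implementing log's `Log` trait could be installed here;
    // env_logger is only one common choice.
    env_logger::init();

    info!("service starting");
    warn!("libraries log through the same facade regardless of backend");
}
```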
> It is much much better for an API to be kinda crappy (but stable) for historical reasons
But this does more than just add a maintenance burden. If the API can't be removed, architectural constraints it imposes also can't be removed.
e.g. A hypothetical API that guarantees a callback during a specific phase of an operation means that you couldn't change to a new or better algorithm that doesn't have that phase.
Yes you can, and Go has done exactly that.
Realize the "log" api is bad? Make "log/slog". Realize the "rand" api is bad? Make "rand/v2". Realize the "image/draw" api is bad? Make "golang.org/x/image/draw". Realize the "ioutil" package is bad? Move all the functions into "io".
The stdlib already has at least 3 different patterns for duplicating API functionality with minor backwards-incompatible changes, and you can just do that and mark the old things as deprecated, but support it forever. Easy enough.
Same. That's why Go is such a great tool.
I've found Go's standard library to be really unfortunate compared to rust.
When I update the rust compiler, I do so with very little fear. My code will still work. The rust stdlib backwards compatible story has been very solid.
Updating the Go compiler, I also get a new stdlib, and suddenly I get a bunch of TLS version deprecation, implicit http2 upgrades, and all sorts of new runtime errors which break my application (and always at runtime, not compile time). Bundling a large standard library with the compiler means I can't just update the tls package or just update the image package, I have to take it or leave it with the whole thing. It's annoying.
They've decided the go1 promise means "your code will still compile, but it will silently behave differently, like suddenly 'time1 == time2' will return a different result, or 'http.Server' will use a different protocol", and that's somehow backwards compatible.
I also find the go stdlib to have so many warts now that it's just painful. Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it. Don't use the non-context methods (like 'NewRequest' is wrong, use 'NewRequestWithContext', don't use net.Dial, etc), except for all the places context couldn't be bolted on.
Don't use 'image/draw', use 'golang.org/x/image/draw' because they couldn't fix some part of it in a backwards compatible way, so you should use the 'x/' package. Same for syscall vs x/unix. But also, don't use 'golang.org/x/net/http2' because that was folded into 'net/http', so there's not even a general rule of "use the x package if it's there", it's actually "keep up with the status of all the x packages and sometimes use them instead of the stdlib, sometimes use the stdlib instead of them".
Go's stdlib is a way more confusing mess than rust. In rust, the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus). In rust, updates to the stdlib are actually backwards compatible, not "oh, yeah, sha1 certs are rejected now if you update the compiler for better compile speeds, hope you read the release notes".
Man, I've been using Go as my daily driver since 2012 and I think I can count the number of breaking changes I've run into on one finger, and that was a critical security vulnerability. I have no doubt there have been others, but I've not had the misfortune of running into them.
> Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it.
What in the standard library takes a logger at all? I don't think I've ever passed a logger into the standard library.
> the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus)
I've only seen slog since slog was added to the standard library. Pretty sure I've seen logrus or similar in the Kubernetes code, but that predated slog by a wide margin and anyway I don't recall seeing _any_ loggers in library code.
> In rust, the ecosystem has settled on one logging library interface
I mean, in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.
> I don't think I've ever passed a logger into the standard library.
`net/http.Server.ErrorLog` is the main (only?) one, though there's a lot of third-party libraries that take one.
> I've only seen slog since slog was added to the standard library
Most Go libraries aren't updated yet; in fact, I can't say I've seen any library using slog yet. We're clearly interfacing with different slices of the go ecosystem.
> in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.
They all are still using the same error type, so it interoperates fine. That's like saying "In go, every library has its own 'type MyError struct { .. }' that implements error, so go has more standards because each package has its own concrete error types", which yeah, that's common... The rust libraries like 'thiserror' and such are just tooling to do that more ergonomically than typing out a bunch of structs by hand.
Even if one dependency in rust uses hand-typed error enums and another uses thiserror, you still can just 'match' on the error in your code or such.
On the other hand, in Go you end up having to carefully read through each dependency's code to figure out if you need to be using 'errors.Is' or 'errors.As', and with what types, but with no help from the type-system since all errors are idiomatically type-erased.
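To make the interop point concrete, a minimal sketch with a hypothetical error type, assuming the `thiserror` crate; the concrete enum appears in the signature, so callers can `match` on it exhaustively:
```rust
// Assumed dependency: thiserror = "1"
use thiserror::Error;

// Hypothetical error type; thiserror only generates the Display/From
// impls you would otherwise write out by hand for a plain enum.
#[derive(Debug, Error)]
enum FetchError {
    #[error("network failure: {0}")]
    Network(#[from] std::io::Error),
    #[error("item {0} not found")]
    NotFound(u64),
}

fn report(err: FetchError) {
    // The compiler checks that every variant is handled here.
    match err {
        FetchError::Network(io) => eprintln!("retrying after I/O error: {io}"),
        FetchError::NotFound(id) => eprintln!("giving up on missing item {id}"),
    }
}

fn main() {
    report(FetchError::NotFound(42));
}
```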
I'm a heavy Rust user and fan, but I'd never pick Rust for web. There are way more mature ecosystems out there to choose from. Why would you waste "innovation tokens" in a Rust-based web application?
I enjoyed using Rust/WASM for a web application I made. Once I got the build step figured out, which took a week, the application worked like I wanted right away.
I was trying to build an HTML generator in Rust and got pretty far, but I don't think I'll ever be happy with the API unless I learn some pretty crazy macro stuff, which I don't want. For the latter project, the "innovation tokens" point really rings true for me; I spent months on the HTML gen for not much benefit.
the idea of one language to rule them all is very compelling. it's been promised a lot, and now everyone hates Java.
but the truth is that Rust is not meant for everything. UI is an abstraction layer that is very human and dynamic. and i can come and say, "well, we can hide that dynamism with clever graph composition tricks" à la Elm, React, Compose, etc, but the machinery that you have to build for even the simplest button widget in almost every Rust UI toolkit is a mess of punctuation, with things like lifetimes and weird state management systems. you end up building a runtime when what you want is just the UI. that's what higher level languages were made for. of course data science could be done in Rust as well, but is the lifetime of the file handle you're trying to open really what you're worried about when doing data analysis?
i think Rust has a future in the UI/graphics engine space, but you have to be pretty stubborn to use it for your front end.
> And the occasional struggles with typescript where the runtime seems to be changing too often; is it ts-node? tsx? tsm? The built-in typescript runtime in node? deno? bun?
This whole paragraph is so true. The last couple of years have been pretty rough in Node land.
Better title: "Farewell, Rust for Web"
Yes. This is one of the things that drives me nuts about a lot of titles on here: context like "for the web" changes how it's interpreted a great deal. I see the same thing when I see posts about other languages and AI and such. Context matters versus making it sound like a broad, general statement. Alas, the broad, general statements likely get more engagement...
Ok, we'll use that above. Thanks!
Yeah Astro is a great choice for a static or mostly static website. Moving to Astro is not a slight on any other language or framework.
Aiui they are also migrating their backend api(s) from rust to node. They were already using astro with rust on the backend (after dropping ssr with tera).
due to the nature of safety in Rust, I'd find myself writing boilerplate code just to avoid calling .unwrap(). I'd get long chain calls of .ok_or followed by .map_err. I defined a dozen of custom error enums, some taking other enums, because you want to be able to handle errors properly, and your functions can't just return any error.
This can be a double edged sword. Yes, languages like python and typescript/JavaScript will let you not catch an exception, which can be convenient. But that also often leads to unexpected errors popping up in production.
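For anyone who hasn't hit this, a self-contained sketch of the Option-to-Result plumbing the quote describes; every name here is hypothetical:
```rust
// Hypothetical types, purely to illustrate the .map_err / .ok_or chains.
#[derive(Debug)]
enum AppError {
    BadId(std::num::ParseIntError),
    NotFound(u64),
}

#[derive(Debug)]
struct User {
    id: u64,
}

fn find_user(id: u64) -> Option<User> {
    if id == 1 { Some(User { id }) } else { None }
}

fn lookup(raw_id: &str) -> Result<User, AppError> {
    // Each fallible step is converted into the function's error enum by hand.
    let id: u64 = raw_id.parse().map_err(AppError::BadId)?;
    find_user(id).ok_or(AppError::NotFound(id))
}

fn main() {
    println!("{:?}", lookup("1").map(|u| u.id)); // Ok(1)
    println!("{:?}", lookup("2").err());         // Some(NotFound(2))
}
```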
It's a throwaway comment in the article, but I feel it's important to push back on: HTML is very definitely a programming language, by any reasonable definition of "programming language".
Edit to add: It might not be an imperative language, but having written some HTML and asked the computer to interpret it, the computer now has a programmed capability, determined by what was written, that's repeatable and that was not available apart from the HTML given. QED.
agreed, it's a hill i am very willing to die on too.
The TS/React ecosystem is so mature, it's hard for Rust to compete with it. My optimal stack is currently: Rust on the backend, Typescript/React for web with OpenAPI for shared types.
React and its ecosystem is a pile of garbage perpetuated by industry inertia. useState, useMemo, useThisAndThat, where you have to guess whether that dependency will cause a re-render? Or 20 different routers, state managers, query builders? I'm not even talking about HTML-in-TS with `!!a && (<div>...</div>)`. A stodgy, bloated, overhyped and misused monstrosity, that's what React is.
Running rust in wasm works really well. I feel like I'm the world's biggest cheerleader for it, but I was just amazed at how well it works. The one annoying thing is using web APIs through rust - you can do it with web-sys and js-sys, but it's rarely as ergonomic as it is in javascript. I usually end up writing wrapper libraries that make it easy, sometimes even easier than javascript (e.g. in rust I can use Web Locks with RAII)
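For reference, this is roughly what the web-sys route looks like; a sketch assuming `wasm-bindgen` and a `web-sys` dependency with the Window, Document, Element, HtmlElement and Node features enabled:
```rust
use wasm_bindgen::prelude::*;

// Runs automatically once the wasm module is loaded in the page.
#[wasm_bindgen(start)]
pub fn run() -> Result<(), JsValue> {
    let window = web_sys::window().expect("no global `window`");
    let document = window.document().expect("window has no document");
    let body = document.body().expect("document has no body");

    // Every DOM call goes through generated bindings and returns
    // Result/Option, which is the ergonomics tax mentioned above.
    let p = document.create_element("p")?;
    p.set_text_content(Some("Hello from Rust/WASM"));
    body.append_child(&p)?;

    Ok(())
}
```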
It does work well logically but performance is pretty bad. I had a nontrivial Rust project running on Cloudflare Workers, and CPU time very often clocked 10-60ms per request. This is >50x what the equivalent JS worker probably would've clocked. And in that environment you pay for CPU time...
The rust-js layer can be slow. But the actual rust code is much faster than the equivalent JS in my experience. My project would not be technically possible with javascript levels of performance
I'm doing this now and it's mostly great but the openapi generators are not good. At least the Typescript ones produce confusing function signatures and invalid type syntax in some cases.
I want to address this one point:
> Similar thing can be said about writing SQL. I was really happy with using sqlx, which is a crate for compile-time checked SQL queries. By relying on macros in Rust, sqlx would execute the query against a real database instance in order to make sure that your query is valid, and the mappings are correct. However, writing dynamic queries with sqlx is a PITA, as you can't build a dynamic string and make sure it's checked during compilation, so you have to resort to using non-checked SQL queries. And honestly, with kysely in Node.js, I can get a similar result, without the need to have a connection to the DB, while having ergonomic query builder to build dynamic queries, without the overhead of compilation time.
I've used sqlx, and it's alright, but I've found things much easier after switching to sea-orm. Sea-orm has a wonderful query builder that makes it feel like you are writing SQL, whereas with sqlx you end up writing Rust that generates SQL strings, i.e. re-inventing query builders.
You also get type checking; define your table schema as a struct, and sea-orm knows what types your columns are. No active connection required. This approach lets you use Rust types for fields, e.g. Email from the email crate or Url from the url crate, which lets you constrain fields even further than what is easy to do at the DB layer.
ORMs tend to get a bad reputation for how some ORMs implement the active record pattern. For example, you might forget something is an active record and write something like "len(posts)" in sqlalchemy and suddenly you are counting records by pulling them from the DB one by one. I haven't had this issue with sea-orm, because it is very clear about what is an active record and what is not, and it is very clear when you are making a request out to the DB. For me, it turns out 90% of the value of an ORM is the query builder.
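For contrast with the quote above, a minimal sketch of the compile-time-checked sqlx path, assuming a Postgres `users` table with a non-null `name` column and a DATABASE_URL available at build time; dynamic queries fall back to the unchecked `sqlx::query`, which is where those guarantees stop:
```rust
use sqlx::PgPool;

// query! validates the SQL and column types against a live database at
// compile time and generates an anonymous record type for the row.
async fn user_name(pool: &PgPool, id: i64) -> Result<String, sqlx::Error> {
    let row = sqlx::query!("SELECT name FROM users WHERE id = $1", id)
        .fetch_one(pool)
        .await?;
    Ok(row.name)
}
```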
sqlx doesn't build queries, or at least it minimally builds them, which I think is the thing the OP is complaining about.
And, IMO, making dynamic queries harder is preferable. Dynamic queries are inherently unsafe. They're sometimes necessary, but you have to start considering things like SQL injection attacks with dynamic queries.
This isn't to poo-poo sea-orm. I'm just saying that sqlx's design choice to make dynamic queries hard is a logical choice from a safety standpoint.
They didn't make them hard by design, I think; it's just the limitations of the current API and prioritisation. Dynamic queries are possible, just not trivial.
Nope, it really was part of the design [1]
[1] https://github.com/launchbadge/sqlx/issues/333#issuecomment-...
I looove Rust for the backend.
I've supported backends in typescript, python, Java, and Rust.
Rust pages me the least at night. Sleep is beautiful.
Rust shines in user-space systems-level applications (databases, cloud infrastructure, etc.) but definitely feels a bit out of place in more business-logic heavy applications.
ReScript [https://rescript-lang.org/] would make a nice middle ground between Rust and TypeScript.
Rust for Web is awesome for adding control interfaces etc. to other programs that have a different primary purpose.
And even then I do it by serving JSON APIs and not by serving HTML.
Well, yep. People underappreciate the Typescript/JS ecosystem.
Typescript is pretty type-safe, and it's perfectly integrated with hot code reload, debuggers, and all the usual tools. Adding transpilation in that flow only creates friction.
That's also why things like Blazor are going nowhere. C# is nicer than Typescript, but the additional friction of WASM roundtrips just eats all the advantage.
IDK, I still miss Rust's strictness and exhaustive enum matching.
I don't know what other strictness you're referring to, but exhaustive enum matching is a common check in most TS stacks via eslint. Yeah, it's not built in; just saying there's a solution and it's super common.
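For comparison, the Rust behaviour being missed, as a tiny sketch with a made-up enum: the compiler rejects any non-wildcard `match` that doesn't cover every variant, so adding a variant breaks every call site at compile time instead of at runtime:
```rust
// Hypothetical enum for illustration.
enum PaymentStatus {
    Pending,
    Settled,
    Refunded,
}

fn label(status: PaymentStatus) -> &'static str {
    // Removing any arm below (without adding a `_` catch-all) is a compile error.
    match status {
        PaymentStatus::Pending => "pending",
        PaymentStatus::Settled => "settled",
        PaymentStatus::Refunded => "refunded",
    }
}

fn main() {
    for status in [PaymentStatus::Pending, PaymentStatus::Settled, PaymentStatus::Refunded] {
        println!("{}", label(status));
    }
}
```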
This is oddly timed, inasmuch as one of the big success stories I've heard from a friend is their new practice of having Claude Code develop in Rust, then translate that to WebAssembly.
That seems much more like the future than embracing Node...
If you're making a web app, your fancy rust wasm module still has to interface with the DOM, so you can't escape that. Claude might offer you some fake simplicity on that front for a while, but I'm skeptical that it's fully scalable.