dav2d is the fastest AV2 decoder on all platforms :)
Targeted to be small, portable and very fast.
If you're out of the loop like me:
AV2 is the next-generation video coding specification from the Alliance for Open Media (AOMedia). Building on the foundation of AV1, AV2 is engineered to provide superior compression efficiency, enabling high-quality video delivery at significantly lower bitrates. It is optimized for the evolving demands of streaming, broadcasting, and real-time video conferencing. - from https://av2.aomedia.org/
Looks as if AV2 is dead in the water:
https://www.sisvel.com/insights/av2-is-coming-sisvel-is-prep...
yep
They've done the same thing with AV1, and I can't see that having prevented adoption, nor can I imagine Sisvel wanting to poke the bear that is AOMedia unless they're certain their case is absolutely watertight.
I see zero public evidence that they've filed any lawsuits against the members of AOM in any jurisdiction. I'm sure there's been a lot of threatening letters sent...
Yup. The Dolby/Disney vs Snapchat lawsuit is going to be the first one. So far it's only been filed.
The big question is whether AOMedia is going to make good on their Mutually Assured Destruction promise of using their patent and financial war chest to countersue into oblivion anyone trying to go after AV1 adopters.
Context: https://www.techspot.com/news/111865-dolby-sues-snap-over-vi...
I would love to live in a world where this happens. I will place big bets that it will, regrettably, not happen.
Same, which is what makes it seem to me that that case is absolutely not watertight. Those patents probably cover esoteric minutiae (to be fair, that's what it takes to make a better video codec these days) plus anything and everything that can seemingly be connected to AV2 (or AV1, for that matter), and many of them were only granted because the person approving them barely understood what they were saying.
The illusion breaks once tested in court.
Which is why they'd never sue, only threaten and try to settle.
This is a thinly veiled extortion racket and any competent system would fine them into bankruptcy.
We need a more efficient way to eliminate bullshit patents or bullshit patent infringement claims than "violate them then spend millions on lawyers to fight them in court".
Sure, and at the same time we need a more efficient way to ensure big companies can't just take what they want and bury anyone who complains.
It's not an easy problem.
Stop big companies from ever forming. They are not a natural force that cannot be reckoned with. We allow them to exist. Revoke the charters of any business over 500 employees.
I can see a number of ways to work around that limitation, without even lobbying and bribing. And I'm not even a lawyer or an accountant.
Eventually all the money and power will converge in a few sub 500, or sub 50, companies and nothing will change.
Sisvel is a patent troll. Take a look at the combined list of all the companies that are in the AOM and tell me with a straight face that all of their corporate in-house counsel specializing in intellectual property law are wrong.
I don't know this stuff super well but I imagine it's not necessarily about the lawyers being right or wrong so much as what they can convince people of. The ideal scenario for the patent troll is they can intimidate you into licensing with them. Another good outcome for them (though more costly) is they can convince some non-expert in court. In either case the big players behind the codec can defend themselves but a small one just picking it up downstream as OSS can't.
I don't doubt for a minute that they are going to attempt to intimidate companies using av1 which are much smaller than the AOM founders.
Trolls will always be trolls. The need to fight them just shows the need to reform the garbage patent system to make sure no one can ever patent software.
You can tell Sisvel are a bunch of grifters by the fact they use slight grey text on a slightly less grey background.
Aesthetics over function; style over substance. If that's their web design policy it's likely their policy in all other aspects.
I'm also not sure that they're aware that intellectual property rights no longer exist in the US. If AV2 was vibe coded, there would be no case.
> If AV2 was vibe coded, there would be no case.
...for copyright. Not for anything else. Patents would still apply.
> AV2 is the next-generation video coding specification from the Alliance for Open Media
Oh no. Not another one. I presume this one makes lossy better, or faster or both.
Not on topic, but wow the internet has very quickly devolved into: click -> "making sure you're not a bot", click -> "making sure you're a human", click -> "COOKIES COOKIES COOKIES", click -> "cloudflare something something"
We had to set it up on the parts of VideoLAN infra so the service would remain usable.
Otherwise it was under a constant DDoS by the AI bots.
While I do sympathize with the AI DDoS situation, it'd be nice if there were a solution that allows them to work so they can pull official docs.
For instance, MCP, static sites that are easy to scale, a cache in front of a dynamic site engine
Of course, static websites are the best solution to that problem.
Our documentation and a main website are not fronted by this protection, so they're still accessible for the scrapers.
Maybe I'm naive about this, but I didn't expect AI scrapers to be that big of a load? I mean, it's not that they need to scrape the same site at 1000+ QPS, and even then I wouldn't expect them to download all media and images either?
What am I missing that explains the gap between this and a "constant DDoS" of the site?
You can't really cache the dynamic content produced by forges like GitLab and, say, web forums like phpBB. So every request goes through the slow path. Media/JS is of course cached on the edge, so that's not an issue.
Even when the amount of AI requests isn't that high - generally it's in the hundreds per second, tops, for our services combined - that's still a load that causes issues for legitimate users/developers. We've seen it grow from somewhat reasonable to pretty much being 99% of the responses we serve.
Can it be solved by throwing more hardware at the problem? Sure. But it's not sustainable, and the reasonable approach in our case is to filter off the parasitic traffic.
Thanks, appreciate the details. 99% is far above the amount I expected, and if it specifically hits hard-to-cache data then I can see how that brings a system to its knees.
You kind of can though. You serve cached assets and then use JavaScript to modify it for the individual user. The specific user actions can't be cached, but the rest of it can.
Totally. Remember, Slashdot in the 1990s served a dynamic page from a handful of servers, with horsepower dwarfed by a Nintendo Switch, to a user base capable of bringing major properties down.
The "can't" comes from the fact that VLC is not going to rewrite their forum software or software forge.
Software written in PHP is in most cases frankly still abysmally slow and inefficient. WordPress runs like 70% of the web and you can really feel it from the 1500ms+ TTFB most sites have. phpBB is not much better. Pathetic throughput at best, and it has not gotten better in decades now.
I don't know how GitLab became so disgustingly slow. But yeah, I'm not surprised bots can easily bring it to its knees.
The funniest part about WordPress is that you can usually achieve at least a 50% speed boost or more by adding a plugin that just minifies and caches the ridiculous number of dynamic CSS and JS files that most themes and plugins add to every page. Set those up with HTTP 103 Early Hints preload headers (so the browser can start sending subresource requests in the background before the HTML is even sent out, exactly the kind of thing HTTP/2 and /3 were designed to make possible) and then throw Cloudflare or another decent CDN on top, and you're suddenly getting TTFBs much closer to a more "modern" stack.
The bizarre thing is that pretty much no CMS, even the "new" ones, seems to automate all of that by default. None of those steps are that difficult to implement, and provide a serious speed boost to everything from WordPress to MediaWiki in my experience, and yet the only service that seems to get close to offering it is Cloudflare.
Even then, Cloudflare's tooling only works best if you're already emitting minified and compressed files and custom-written preload headers on the origin side, since the hit of decompressing all the origin traffic to make those adjustments and analyses is way worse for performance than just forwarding your compressed responses directly. That's why they removed Auto Minify[1] and encourage sending pre-compressed Brotli level 11 responses from the origin[2], so people on recent browsers get pass-through compression without extra cycles being spent on Cloudflare's servers.
The solution seems pretty clear: aim to get as much stuff served statically, preferably pre-compressed, as you can. But it's still weird that actually implementing that is a manual process on most CMSes, when it shouldn't be that hard to make it a standard feature.
And as for Git web interfaces, the correct solution is to require logins to view complete history. Nobody likes saying it, nobody likes hearing it. But Git is not efficient enough on its own to handle the constant bombardment of random history paginations and diffs that AI crawlers seem to love. It wasn't an issue before, because old crawlers for things like search engines were smart enough to ignore those types of pages, or at least to accept when the sysadmin says it should ignore those types of pages. AI crawlers have no limits, ignore signals from site operators, make no attempts to skip redundant content, and in general are very dumb about how they send requests. This is a large part of why Anubis works so well: it's not a particularly complex or hard-to-bypass proof-of-work system[3], but AI bots genuinely don't care about anything but consuming as many HTTP 200s as a server can return, and give up at the slightest hint of pushback (though they do at least try randomizing IPs and User-Agents, since those are effectively zero-cost to attempt).
[1]: https://community.cloudflare.com/t/deprecating-auto-minify/6...
[2]: https://blog.cloudflare.com/this-is-brotli-from-origin/
[3]: https://lock.cmpxchg8b.com/anubis.html but see also https://news.ycombinator.com/item?id=45787775 and then https://news.ycombinator.com/item?id=43668433 and https://news.ycombinator.com/item?id=43864108 for how it's working in the real world. Clearly Anubis actually does work, given testimonials from admins and wide deployment numbers, but that can only mean that AI scrapers aren't actually implementing effective bypass measures. Which does seem pretty in line with what I've heard about AI scrapers, summarized well in https://news.ycombinator.com/item?id=43397361, in that they are basically making no attempt to actually optimize how they're crawling. The general consensus seems to be that if they were going to crawl optimally, they'd just pull down a copy of Common Crawl like every other major data analysis project has done for the last two decades, but all the AI companies are so desperate to get just slightly more training data than their competitors that they're repeatedly crawling near-identical Git diffs just on the off-chance they reveal some slightly different permutation of text to use. This is also why open source models have been able to almost keep pace with the state of the art models coming out of the big firms: they're just designing way more efficient training processes, while the big guys are desperately throwing hardware and crawlers at the problem in the desperate hope that they can will it into an Amazon model instead of a Ben and Jerry's model[4].
[4]: https://www.joelonsoftware.com/2000/05/12/strategy-letter-i-... - still probably the single greatest blog post ever written, 26 years later.
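To make the proof-of-work idea mentioned above concrete, here is a toy C sketch of the client-side grind. Anubis itself uses SHA-256 in the browser; this stand-in uses FNV-1a purely to stay self-contained, and all names here are made up. The server hands out a challenge and a difficulty, the client burns CPU finding a nonce whose hash has that many leading zero bits, and verifying the answer on the server costs a single hash.

    #include <stdint.h>
    #include <stdio.h>

    /* Tiny FNV-1a hash, standing in for the real cryptographic hash. */
    static uint64_t fnv1a(const char *s) {
        uint64_t h = 1469598103934665603ULL;
        for (; *s; s++) { h ^= (uint8_t)*s; h *= 1099511628211ULL; }
        return h;
    }

    /* Grind nonces until the hash of "challenge:nonce" starts with
       `bits` zero bits; roughly 2^bits attempts on average (0 < bits < 64). */
    static uint64_t solve(const char *challenge, int bits) {
        char buf[128];
        for (uint64_t nonce = 0;; nonce++) {
            snprintf(buf, sizeof buf, "%s:%llu", challenge,
                     (unsigned long long)nonce);
            if ((fnv1a(buf) >> (64 - bits)) == 0)
                return nonce;
        }
    }

    int main(void) {
        /* Server side only needs one hash to check the returned nonce. */
        uint64_t nonce = solve("session-abc123", 18);
        printf("nonce=%llu\n", (unsigned long long)nonce);
        return 0;
    }

The asymmetry is the point: verification is one hash, while producing answers costs the crawler real CPU on every page fetch, which is apparently already more pushback than most scrapers bother to handle.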
> And as for Git web interfaces, the correct solution is to require logins to view complete history.
Why logins, exactly? Who would have such logins; developers only, or anyone who signs up? I'm not sure if this is an effective long-term mitigation, or simply a "wall of minimal height" like you point out that Anubis is.
I think there are a few things at play here:
- AI scrapers will pull a bunch of docs from many sites in parallel (so instead of a human request where someone picks a single Google result, it hits a bunch of sites)
- AI will crawl the site looking for the correct answer which may hit a handful of pages
- AI sends requests in quick succession (big bursts instead of small trickle over longer time)
- Personal assistants may crawl the site repeatedly scraping everything (we saw a fair bit of this at work, they announced themselves with user agents)
- At work (b2b SaaS webapp) we also found that the personal assistant variety tended to hammer really computationally expensive data export and reporting endpoints generally without filters. While our app technically supported it, it was very inorganic traffic
That said, I don't think the solution is blanket blocks. Really it's exposing that sites are poorly optimized for emerging technology.
Also, relevant for forges: AI doesn't understand what it's clicking on. Git forges tend to e.g. have a lot of links like "download a tarball at this revision" which are super-expensive as far as resources go, and AI crawlers will click on those because they click on every link that looks shiny. (And there are a lot of revisions in a project like VLC!) Much, much more often than humans do.
They are a scourge, they never rate-limit themselves, there are a hundred of them, and a significant number don't respect robots.txt. Many of them also end up on our meta:no-index,no-follow search pages, leading to cost overruns on our Algolia usage. We spend way more time adjusting WAF and other bot controls than we should.
Yes, it's that BIG of a load: https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/
Thanks. I imagine there is (a) a lot of interest in scraping source code, and (b) many requests to forges hitting expensive paths. 99% of volume though, wow, much more than expected.
I highly doubt there is no other technically feasible option to block the AI bots. You end up blocking not just bots, but many humans too. When I clicked on the link and the bot block came up, I just clicked back. I think HN posts should have warnings when the site blocks you from seeing it until you somehow, maybe, prove you are human.
I'm sure there are many solutions for many problems, but expecting a small FOSS development team to know or implement them all is rather unreasonable.
I think the world gains more if the VideoLAN team focuses on their amazing, free contribution to the world than if they spend the same time trying to figure out how to save you two clicks.
We all hate that this is happening, but you don't need to attack everyone that is unfortunately caught up in it.
> I highly doubt there is no other technically feasible option to block the AI bots.
If you have discovered such an option, you could get very wealthy: minimizing friction for humans in e-commerce is valuable. If you're a drive-by critic not vested in the project, then yours is an instance of talk being cheap.
I'm all ears on how we can fix it otherwise.
Keep in mind that those kinds of services:
- should not be MITMed by CDNs
- are generally run by volunteers with zero budget, money- and time-wise
First off, don't block the first connection of the day from a given IP. Rate limit/block from there, for example how sshguard does it.
I've seen several posts on HN and elsewhere showing many bots can be fingerprinted and blocked based on HTTP headers and TLS.
For the bots that perfectly match the fingerprint of an interactive browser and don't trigger rate limits, use hidden links to tarpits and zip bombs. Many of these have been discussed on HN. Here's the first one that came to memory: https://news.ycombinator.com/item?id=42725147
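For the rate-limiting part, here is a minimal per-client token-bucket sketch in C. It is illustrative only, not how sshguard or any particular WAF implements it: allow a small burst, refill slowly, and only start blocking or challenging once the bucket runs dry, so a human's first visit of the day always gets through.

    /* Hypothetical token-bucket limiter: CAPACITY is the allowed burst,
       RATE the sustained requests per second per client. */
    #include <stdbool.h>
    #include <time.h>

    #define CAPACITY 20.0
    #define RATE      2.0

    struct bucket {
        double tokens;   /* start at CAPACITY for a new client */
        double last;     /* last refill timestamp, in seconds */
    };

    static double now_seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Returns true if this request should be served, false if it should be
       rate limited (or handed a challenge page instead). */
    bool allow_request(struct bucket *b) {
        double t = now_seconds();
        b->tokens += (t - b->last) * RATE;   /* refill for elapsed time */
        if (b->tokens > CAPACITY)
            b->tokens = CAPACITY;
        b->last = t;
        if (b->tokens < 1.0)
            return false;
        b->tokens -= 1.0;
        return true;
    }

In practice you would key the buckets by client IP or TLS fingerprint (per the point above) and expire idle entries.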
Nearly every single website I'm not logged into these days wants me to "confirm I'm not a bot".
it is incredibly annoying but what can you do? AI scrapers ruined the web.
The internet is such a Tragedy of the Commons... its citizens that act selfishly and in bad faith will slowly make it unusable.
It's pretty explicitly not a tragedy of the commons. It's a tragedy of the ruling class abusing the resources of the 'commons' to extract value. There is nothing 'commons' about trillion dollar companies extracting all available value from the labor of the working class. That's just the tragedy that'll bring around the death of society, the same tragedy that brings all other tragedies.
The commons in question is the internet itself.
Thank you for describing the tragedy of the commons
The commons were never unregulated. This is a tragedy of enclosure.
https://en.wikipedia.org/wiki/Enclosure
There are definitely lots of problems with the ruling class and wealth disparity. Perhaps the defining problems of our current age.
That being said, so many of the plebs suck. Like 2% will ruin everything for everyone.
While a lot of the plebs do suck, a pleb who sucks causes way fewer problems than a big corp that sucks, simply by virtue of not having as many resources.
I agree.
But whether you agree with me or not, most paradigm-shifting changes come from billionaires/corps because they are the only ones with the money to pull off massive shifts. Most innovation is not grassroots; it is heavily funded by the "elites". This is how most successful countries have operated for at least the last 100 years. So billionaires add a lot of value even as they cause a lot of pain.
The solution in my mind is we absolutely need uncapped billionaires but they need to be effectively taxed (not like 90% but closer to 50%) and they have to have absolutely no influence on the government.
> its citizens that act selfishly and in bad faith will slowly make it unusable
It's rarely been the citizens that have been the problem, but the governments and companies that seek to use the network for their overwhelming benefit.
Re (above):
> Not on topic, but wow the internet has very quickly devolved into: click -> "making sure you're not a bot", click -> "making sure you're a human", click -> "COOKIES COOKIES COOKIES", click -> "cloudflare something something"
wat. The protections in place that the OP is talking about are almost entirely due to (not government and company) bad actors.
No, it is because citizens allow being treated like this.
Their bot-detection page took more than 40 seconds to complete on my low-end smartphone. This sucks.
No one's even clicking anymore, everything implores me to tap or swipe these days, and everything is optimised for humans with one eye above the other.
Then I press the X to close the all-caps banner commanding me to install the app, upon which I get sent to the app store. Users of the website refer to it as an app.
Wow, I'm glad it's not just me. I thought my IP block had gotten caught up in some known spamming or something.
At least this one was significantly faster than Cloudflare and required no action on my part.
I get exactly none of that. Is your adblocker still working?
renders your gigabit connection pointless
AI is a gift that keeps on giving.
High hardware prices, locked information sources, plenty of AI slop etc.
AV2 video codec delivers 30% lower bitrate than AV1, final spec due in late 2025 (videocardz.com) | Oct 2025 | 277 points | 223 comments | https://news.ycombinator.com/item?id=45547537
I care much less about bitrates than about hopefully finally settling on one series of "standards". It looks like H.266 is dead in the water (I hadn't even heard of it existing), so we might finally settle on AV2 as "the" new standard, rather than having the infighting with half of hardware/software only supporting either the state-of-the-art codec from the H.26x or the AVx series...
Glorious. Really looking forward to seeing how much better than AV1 it actually turns out to be. It's a shame it'll take a while before we'll have a decent encoder (it took an annoyingly long time until SVT-AV1 was usable).
Mostly ASM for performance-critical paths is a pattern that never gets old. The VideoLAN team did the same with dav1d and it paid off. Curious how much of dav2d ends up staying C as AV2 matures.
Some dude at VideoLAN be like, I have a cool name idea
off topic, but related to the recent github alternative discussion:
Wow, this gitlab instance looked so much cleaner/simpler and less clunky than my past experiences! Also loaded really fast on first page load as well as subsequent actions
is there any understanding of how big of an improvement av2 will be over av1?
About 30% better compression than AV1 at equivalent quality. But it'll be a while before it's a good idea to use AV2 in your home media server. (AV1 is still not that broadly supported)
With the first RVA23 boards shipping this month, I find it a mistake to still focus on legacy ISAs like x86 or ARM rather than what will be dominant by the time AV2 is deployed.
Just recently noticed this got posted to deb-multimedia, although I think there is a typo in the package description....
https://www.deb-multimedia.org/dists/unstable/main/binary-am...
... it says "fast and small AV1 video stream decoder"
... should probably be "AV2" ?
Nice.
What's the current state of Dolby trying to attack the AV1 ecosystem (Snapchat more specifically)? I hope there is an organized fightback by AOM against these trolls.
I would even remove the C code and lower the usage of the assembler pre-processor to a basic C pre-processor.
Happy that AV2 decoding is already here.
:)
maybe not great naming. Sounds very similar to the rapper D4vd, who was just arrested for murdering a 14 year old girl
Actually it's closer to https://youtube.com/@dave2d who is a popular tech YouTuber
That is what I thought of too. Almost like David is a popular name or something... /s
I wonder if the author is a Dave2D fan?
https://www.youtube.com/@Dave2D
I think it's an increment on this: https://www.videolan.org/projects/dav1d.html
The AV1 decoder is dav1d. The AV2 decoder is dav2d.
One day in the mysterious future the AV3 decoder will be dav3d.
Or a fan of Dangerous Dave 2
https://en.wikipedia.org/wiki/Dangerous_Dave_in_the_Haunted_...
They're more of a D4vd fan
D4ve's not here!
We must not continue to develop media codecs in memory unsafe languages. Small, auditable sections can opt-out perhaps, but choosing default-unsafe for this type of software is close to professional negligence.
Cryptography and video codecs are notable exceptions; they put a lot of effort into making the code provably memory safe: no recursion, limited use of stack variables, no dynamic allocations, etc. As a result, memory safe languages bring nothing but trouble by making things non-deterministic. That's especially true for crypto, where compiler "optimisations" guarantee you side-channel attacks.
Thank you for mentioning this.
I wonder, if Rust had an effects system, whether a Jasmin MIR transform (i.e. like SPIR-V is for shaders) would be useful?
https://github.com/jasmin-lang/jasmin
Video codecs just don't need to do dynamic allocations because it's not relevant to the problem. There's still certainly plenty of opportunities for memory bugs because there's a lot of pointer math.
What in the world do you mean by "non-deterministic"?
C compilers, Rust compilers, and assemblers are all deterministic.
In cryptography, you want operations to run in constant time, even if it's wasteful, otherwise an attacker could guess information about the key or plaintext by measuring execution times.
Modern compilers are extremely clever and will produce machine code that takes full advantage of modern CPU branch predictors, and reorder instructions to better take advantage of pipelining. This in itself will make the same code run at different speeds depending on the input data.
Then there is the whole issue of compiler version roulette. As a developer you have no idea which version of compilers your users and distros will use, and what new and wonderful optimisation they will bring.
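As a minimal C sketch of the constant-time point (illustrative only, not lifted from any real crypto library): an early-exit comparison leaks how many leading bytes of a guess are correct, while the constant-time version always does the same amount of work regardless of the input.

    #include <stddef.h>
    #include <stdint.h>

    /* Leaky: returns as soon as a byte differs, so run time depends on
       how much of the secret the attacker has already guessed right. */
    int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant time: touch every byte, accumulate the differences, and
       only look at the result once at the end. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }

The catch the parent describes is that an optimizer is free to turn the second loop back into something branchy, which is part of why crypto code tends to pin compiler versions or drop to assembly.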
> C compilers, Rust compilers, and assemblers are all deterministic.
Within a version, yes, but not cross version. Different versions of GCC/Clang etc can give you completely different code.
Of the 3 software AV1 encoders, the only one that is fully dead is the Rust encoder (rav1e). If people truly wanted memory safe encoders/decoders, they would fund and develop them.
I can totally understand why people would want a memory-safe decoder, but a memory-safe encoder is niche. Finding a memory-safety bug in a decoder is a matter of finding a single unchecked integer field somewhere; finding a memory-safety bug in an encoder requires first finding some sort of logic bug in the encoder and then crafting an adversarial input that survives a number of highly lossy transformations.
Compare the number of CVEs against x264 (included decoders don't count!) and FFmpeg's H.264 decoder.
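A hypothetical C sketch of the "single unchecked integer field" kind of decoder bug (made-up function and constant names, not taken from dav1d or FFmpeg): a length read from the bitstream is trusted and used for a copy into a fixed-size buffer, and the fix is one bounds check.

    #include <stdint.h>
    #include <string.h>

    #define MAX_TILE_BYTES 4096

    /* Unsafe: tile_bytes comes straight from attacker-controlled input. */
    void copy_tile_unchecked(uint8_t dst[MAX_TILE_BYTES],
                             const uint8_t *src, uint32_t tile_bytes) {
        memcpy(dst, src, tile_bytes);   /* overflow if tile_bytes > MAX_TILE_BYTES */
    }

    /* Fixed: reject malformed bitstreams before copying. */
    int copy_tile_checked(uint8_t dst[MAX_TILE_BYTES],
                          const uint8_t *src, uint32_t tile_bytes) {
        if (tile_bytes > MAX_TILE_BYTES)
            return -1;                  /* corrupt or adversarial input */
        memcpy(dst, src, tile_bytes);
        return 0;
    }

On the encoder side there is no equivalent of tile_bytes arriving from a hostile source, which is why the parent calls memory-safe encoders a niche concern.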
https://github.com/memorysafety/rav1d got funded and developed. it is unfortunately a bit slower (typically by a single-digit percentage) than dav1d.
Encoding is a way, way less risky thing to be doing compared to decoding.
Fully dead in what sense? Seems like it still has active development to me.
It hasn't had any proper quality/speed improvements in years. Only thing that has changed is updating deps and some bug fixes.
There are many paths to memory safety, even if the one Rust project seems to be going nowhere.
There's other memory-safe languages, and there's formal verification.
e.g. seL4 favors Pancake.
> If people truly wanted memory safe encoders/decoders
Really? How many codecs have your neighbors contributed money for the development of, just curious.
I think these conversations are directed by the parties funding the efforts. Example: "we (large company) want a fast AV2 decoder" -> they pay a specialized team to do it -> this team works in C for the most part, so it is done in C. If there were financial incentives to do it in Rust, they'd pay more for a Rust decoder.
Given Netflix's involvement with SVT-AV1, (not even that) indirectly, at least one.
For the codec itself, the majority of it is performance sensitive and often has a significant amount of assembly even, so a memory safe language doesn't change much.
However, for the container/extractor... those should absolutely be in a memory safe language, and that is where a lot of the exploits/crashes are, too, as metadata is more fuzzy.
As a practical example of this, see something like CrabbyAVIF. All the parser code is Rust, but it delegates to dav1d for the actual codec portion.
Decoders written in Rust will be a lot slower than the equivalents in assembly.