> "We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability," the report said.
I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?
I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?
Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.
From Google's GTIG report: https://cloud.google.com/blog/topics/threat-intelligence/ai-...
"Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class) "
This only indicates that an AI coding agent was used to write an exploit.
No such circumstantial evidence can prove that an AI model has been used to find the bug.
Of course, it is quite likely that an AI model was used to speed up the search for bugs, but this can never be proven as long as you see only the code used to exploit the bug.
That's evidence the script was written by an AI, but not necessarily that the exploit was found by it.
I think it would actually be more newsworthy these days if hackers had handcrafted all their code without any AI at all.
The post reads like AI wrote it - from that I can deduce that all strategy at Google has been generated by AI.
> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?
Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what's going on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors. If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably went to a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports... And if it's bad, they can work together behind the scenes to get to the bottom of it.
So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.
None of what you've said is untrue. And if this were an internal report to an executive, I'd agree with it. But this is a public statement, and I'm more inclined to believe it's part of a coordinated run-up to a move to ban the import of 'dangerous' Chinese AI models -- or something else equally self-serving -- than a simple statement of truth.
I don't doubt that they found some evidence of AI use. I'm just skeptical that the amount and strength of evidence has anything to do with their making this statement.
I've been thinking about why the AI companies are making so much use of fear-based marketing. And I wonder if it isn't just naked Machiavellianism at work.
For a long time tech companies were forced to compete for power by being the most loved (or at least not the most hated). But now they've found an avenue to cultivate fear.
Well, it's great marketing for LLM products at the enterprise level. Even if they weren't sure, they have every incentive to run with it now and then issue a "whoopsie daisy" apology later, after the tech media has stopped paying attention.
This is why I can't wait for a new AI winter, or at least a fall (the bubble deflating slowly). Just like you can now really see how useful web3 and NFTs really are...
Are you roughly comparing the long term viability of LLMs to NFTs as if they are anywhere in the same realm?
How long can LLMs exist at this current price level? Once they raise prices, the market gets split. One side is the companies who will pay the increases; the other side is the public portals, which become unaffordable. The public side might compare to NFTs, while the other looks more like the cloud, where companies will overpay for better features they don't really need.
The article strongly implies they have the (Python) source code, and that it looks LLM generated. I don't know about you, but I can usually tell LLM code from a mile away.
That can prove only half of that sentence: that an AI coding assistant was used for writing the exploit (a.k.a. "weaponization").
For the other half of the sentence ("discovery"), one could claim it is true only if the identity of the attackers had been discovered and evidence about their prior activities had been gathered.
Even if it is likely that today anyone who searches for bugs also uses AI agents to accelerate the work, I find it unacceptable that announcements like Google's contain careless sentences that are either obviously false or could be true only if Google knew something else that it does not disclose.
We are going to be seeing a lot of these moving forward. It's the easy way out. If you've worked with Google, you will know that it's an environment where accountability doesn't thrive. You will find people who know nothing about Google's product portfolio holding advisory roles around the products. They don't care; there's no one to even question them. They just know to make colourful graphs with the most useless metrics to justify that they "add value" to the company. Expecting them to take accountability is like trying to mix oil and water.
Humans can sometimes find a needle in a haystack, but it's impossible for us to find multiple needles in multiple haystacks and chain them together into an attack. AIs can work through a complex search space much more efficiently; that's the tell.
They did it before AI.
Sorry, but that's just wrong. It's not impossible in the slightest. I built an attack against Mozilla DeepSpeech during my PhD from multiple needles (two of which I personally discovered).
Did it take a lot of effort? Sure. Lots of dead ends. But that does not mean it is impossible.
> it's impossible for us to find multiple needles in multiple haystacks and chain them together
Except "we" have been successfully chaining attacks long before AI started automating it. AI doesn't make any of this possible, it just takes the drudgery out of it and lowers the cost of an attack.
The article says it included excessive explainer text. And I'm almost positive an earlier version of the article mentioned hallucinated library references, though I don't see that in the present version.
Maybe after they realized how they were vulnerable, they asked an LLM to find the exploit by similar means to try to replicate it. That still doesn't prove anything, but maybe it gives them confidence that this weird thing can only really be found that way.
> I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?
Excessive use of em-dashes, and emoji bullet points in the readme
Maybe they saw traffic that looked like AI prodding an API and quickly adapting to find the bug?
But at this point I feel like odds are everyone looking for vulnerabilities is using AI to some extent. Why wouldn't they? It'd be stranger if they didn't.
Presumably the attacker used Google's own LLM and they searched the history of all user chats to find the transcript.
I say this only slightly in jest, as that's about the only thing I can think of which would legitimately give them 'high confidence'.
In the article (AP one, at least) Google explicitly said it does not believe it was Gemini or Mythos.
Clearly that's because they searched the history of all chats and didn't find the perpetrator
I know we're talking about Google here, but the privacy violations and concerns from this sort of search are massive.
We need local AI ASAP.
Don't get me wrong, I'm with you here, but we are back to the days when we had to rent mainframe time for compiling programs. Not because of software limitations, but because you just didn't have consumer-grade hardware capable of running them.
This time, however, it's even worse, because it'll be a really long time until we get either consumer GPUs with enough VRAM for full models or LLMs that fit in 16-32GB yet are capable enough to compete with cloud providers.
I run qwen3.6 27b locally on my 3090 and it's really impressive for what it is, but it is still generations away from delivering a level of quality we can confidently let drive solo on a daily basis.
> We need local AI ASAP.
That is an excellent idea, once we, the GPU-poor mice, figure out who is going to bell the SoTA training cat. Chinese models being banned is well within the realms of lobbied possibilities.
They probably used AI for the search.
The real game would be to put a "nothing of interest here" prompt injection attack in the original series of prompts so an LLM parsing them later would ignore the attackers' session.
So it's a provider, but not these two, which implies OpenAI.
Next headline: Google will not be releasing their next AI model to the public but only "trusted" partners, because it's too dangerous.
I wonder what the goal is here. If Google Search had been used to find a major software flaw, would it be reported this way? Between Mythos and OpenAI's Mythos equivalent, it's not clear whether there is some interest in keeping the "AI is powerful" trend going, or whether they are trying to indirectly draw attention to the technical capabilities of LLMs in cybersecurity (as a potentially untapped source of revenue).
Haven't read the article, but let me guess:
"That's why for your safety we need a scan of your ID and your biometrics to let you use our best models"
My Android phone takes a photo of my face every time I unlock the device. I don't have access to those images, but someone already has photos of my eyeballs!
I'm not sure why or how to turn it off, does anyone know?
(Also, insert weary photo of Kaczynski here.)
Go to security settings and disable face unlock. If you want to be extra safe against Google, go to the advanced security settings, find the "trust agents", and disable the ability for Google Play Services to unlock your phone. That'll also kill any other unlock mechanisms you may have forgotten about tied to Google's services.
If unlock features remain after that, it's a manufacturer feature that's been set up. In that case you'll have to look for a guide for your specific brand and model.
Your phone can't turn this on by itself; if it's doing face recognition, that means you set it up at some point.
For the extra paranoid, tape over the camera is the way to go.
I mean it's nice that someone helped you but if you're incapable of turning such a setting off yourself, or doing some basic research to find out how to turn it off - surely you'd feel threatened by the numerous other features of the phone that you're likely unaware of?
It's like willingly walking through a minefield.
It's the narrative "For your own security in the internet (and children's safety), show us your ID now, please".
Tired of this trend.
Okay, when fuzzing techniques came out there was a big surge in discovered and exploited bugs. AI is more general, and I expect there to be a similar surge. However, fuzzing is cheap, and both the compute and the techniques can be "owned" in-house (see the minimal sketch below). The economics of AI are different: unless you pay for it, it is difficult to self-host (expensive hardware, though open-source models are catching up).
State actors + hackers will have more resources to mount a better offense. What's worse, in my experience AI-produced code is blind to overall system behavior. So I fear the exploits will be either low-hanging, trivial-to-exploit errors or bigger system-level bugs.
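To illustrate why fuzzing compute and techniques are so easy to "own" in-house, here is a minimal, hypothetical stdlib-only mutation loop against a toy parser (the parser, its deliberate bug, and the seed are invented for this example); real fuzzers such as AFL or libFuzzer add coverage feedback and smarter mutation scheduling, but the core loop really is this cheap.

    # Hypothetical, minimal mutation fuzzer (stdlib only). The parser and its
    # bug are toys invented for the example; the point is the economics: the
    # whole 'infrastructure' is a loop burning ordinary CPU time.
    import random


    def parse_record(data: bytes) -> int:
        """Toy parser standing in for the code under test."""
        if len(data) < 4:
            raise ValueError("record too short")
        length = int.from_bytes(data[:2], "big")
        # Deliberate toy bug: reads the 'checksum' byte at the declared offset
        # without checking that the offset is inside the buffer.
        return data[2 + length]


    def mutate(seed: bytes) -> bytes:
        """Flip, insert, or delete a few random bytes of the seed input."""
        buf = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            op = random.choice(("flip", "insert", "delete"))
            if op == "flip" and buf:
                buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
            elif op == "insert":
                buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
            elif op == "delete" and buf:
                del buf[random.randrange(len(buf))]
        return bytes(buf)


    if __name__ == "__main__":
        seed = b"\x00\x03abcd"  # valid record: declared length 3, payload 'abc', checksum 'd'
        crashes = 0
        for _ in range(100_000):
            try:
                parse_record(mutate(seed))
            except ValueError:
                pass  # expected, well-handled rejection
            except Exception:
                crashes += 1  # anything else is a potential bug worth triaging
        print(f"unexpected exceptions: {crashes}")

Coverage-guided tools just replace the blind mutate() with feedback-driven scheduling; the hardware bill stays tiny compared to hosting a frontier-scale model.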
>But new A.I. models like Anthropic's Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.
Immediate distrust of the article. GPT 5.5 is out with nearly the same capability. The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems. For all we know this group could have had a model examine some obscure line of code thousands of times until it found something.
GPT 5.5 does not have the same capabilities as Mythos. There is a separate 5.5-Cyber model which is the Mythos "equivalent", but it is similarly restricted-access, like Mythos. Per OpenAI, the major difference is the built-in safeguards that 5.5 (and other models) have; 5.5-Cyber does not have these safeguards and is more "permissive" for security work.
See https://openai.com/index/gpt-5-5-with-trusted-access-for-cyb...
I have access to the Cyber version. It's great at cybersecurity work but only marginally better than its predecessor with the right jailbreaking.
I imagine Mythos is going to be the same story from what I've seen so far.
https://www.theregister.com/security/2026/05/11/anthropics-b...
Well hey, there you have it
That reminds me:
The other day, the Codex desktop app cajoled me into uploading my ID and asking for 5.5-Cyber access while I was having it develop a fuzzing suite for an open-source library I'm (we're?) developing. I was able to berate it into getting back to work.
This struck me as a point of emergent enshittification; an anus if you will.
The company doing the actual ID verification (KYC) is probably the last company I'd trust with this kind of data.
To keep conversations from being flagged as "cybersecurity bad!!!", I often have to use previous models (5.3, for example; sometimes using them through subagents is enough). And by the time this method no longer works, local models will be good enough for it not to be a problem (for my use case, at least).
That is very clearly the claim about Mythos, though. The experience of projects that do have access to Mythos suggests that if you use the other models, you're not going to find much of anything. Which is to say: generally we believe it is marketing, as you say, but the claim the reporter relayed is very clearly stated, even if it's not right.
Immediate distrust of the article... The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems.
https://www.nytimes.com/by/dustin-volz
> I am based in The Times's Washington bureau, and much of my focus is on the dealings of U.S. cybersecurity and intelligence agencies, including the National Security Agency, Central Intelligence Agency, Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation, as well as their counterparts abroad, chiefly in China, Russia, Iran and North Korea.
> My remit spans nation-state hacking conflict, digital espionage, online influence operations, election meddling, government surveillance, malicious use of A.I. tools and other related topics.
> Before joining The Times, I worked at The Wall Street Journal, where I spent eight years covering cyber conflict and intelligence. My recent work at The Journal included a series of articles revealing a major Chinese intrusion of America's telecommunications networks that breached the F.B.I.'s wiretap systems and has been described as one of the worst U.S. counterintelligence failures in history. I have also worked at Reuters and National Journal, where I began my career in Washington chronicling congressional efforts to reform surveillance practices at the N.S.A. in the wake of the 2013 Edward Snowden disclosures.
> My work has been internationally recognized, including by the White House Correspondents' Association, the Gerald Loeb Awards, the Society of Publishers in Asia and the Society for Advancing Business Editing and Writing.
What have you done lately?
Your comment was surely well meant, but you could have plainly stated that the article author is a seasoned reporter instead of the snarky reply.
GP might be incorrect in stating that the author is parroting Anthropic's marketing, but the author certainly does not go out of his way to specify that these are only Anthropic's claims. It is actually a bit ironic as the article linked[0] from the quoted part (by another author) uses the correct phrasing when dealing with such claims:
> Anthropic, the artificial intelligence company that recently fought the Pentagon over the use of its technology, has built a new A.I. model that it claims is too powerful to be released to the public.
[0] https://archive.ph/GC6WP#selection-4713.0-4713.200
> What have you done lately?
I feel like this website is a particularly dangerous place to ask that and hope it to be a "mic drop" moment. There are a lot of highly accomplished engineers, scientists, founders, CEOs, etc. here who could easily respond to that with any manner of impressive qualifications.
https://news.ycombinator.com/item?id=35079
Lately I've been trying to think critically. I am not perfect, but I can recognize appeal to authority from a mile away.
> An argument from authority (Latin: argumentum ab auctoritate, also called an appeal to authority, or argumentum ad verecundiam) is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument. The argument from authority is often considered a logical fallacy and obtaining knowledge in this way is fallible.
> there is disagreement on the general extent to which it is fallible - historically, opinion on the appeal to authority has been divided: it is listed as a non-fallacious argument as often as a fallacious argument
> Some consider it a practical and sound way of obtaining knowledge that is generally likely to be correct when the authority is real, pertinent, and universally accepted
Anyway, other than trying to think critically, anything?
Reporting on such stuff requires networking skills, not technical knowledge.
> Reporting on such stuff requires networking skills, not technical knowledge.
Guess how I know you've never been a reporter.
Your comment would be fine without the snarky final sentence.
Okay, well I've done more than that and I say he's right. Now what?
nytimes reporters have recently been very disappointing and are starting to feel like people who managed to become relevant a long time ago, but haven't kept up with recent changes and are just parroting things others have said instead of offering unique thoughts.
I found their recent investigative article on "How do stars pee at the Met Gala?" to be hard-hitting, yet fair to all sides. [1]
[1] https://archive.is/x9MSO
(You thought I was exaggerating about it being "investigative," dincha.)
Any media company which deliberately rids itself of everyone willing to speak vaguely positively of transsexual people may not be attracting the most free thinking writers.
https://www.logicallyfallacious.com/logicalfallacies/Appeal-...
Not at all.
OP posited that the author didn't know what he's talking about. I pointed out that the author has far more knowledge and experience in the field than rando internet griefers on HN who immediately reach for "shoot the messenger" when they read something that doesn't neatly fit into their pre-conceived worldview, instead of perhaps learning things from other people.
But at least your trope acknowledges that he's an authority on the subject.
> I pointed out that the author has far more knowledge and experience in the field than rando internet griefers on HN
You mean, you guessed that a random person online lacked experience. The experts are genuinely here too.
> OP posited that the author didn't know what he's talking about.
That position does not appear to be present.
Eh, "unable to discern" seems like a polite way of saying someone is talking out of their ass.
How many zero-day vulns has the article author discovered using AI-assisted methods?
Black hat hacking seems to be a well-suited use case for these LLMs. Attackers only need to be right once, so the sometimes-wrongness of the attacks might be trivial. This probably devalues stashes of zero-day exploits for those that have been withholding them.
This stance doesn't make sense. They have the same access that the rest of the public does; and, any Red Team member is going to be doing the exact same thing.
I wonder if that means we're going to see an increase in the attempted 'leveraging' of hoarded zero days lest they get publicised and patched prior to being profitable.
To make an omelette, some eggs need to break, right? These companies released AI to the public and thought it would be all sunshine and roses... There are legit bad actors in the world who hate society and people, and they will use AI to expand on that; is that not clear? We need controls on AI similar to any other restricted materials (like nuclear stuff).
Local models are getting good scary fast. Hardware is improving too. How long until I can ask a local model to help me do Nontrivial Bad Things?
I don't see how you can regulate that though. Just making it illegal to release small models? Or to use unauthorized ones? (I'm kind of not sure the kind of people who want to do bad things are going to be discouraged by such a law though.)
Meanwhile, I cannot ask ChatGPT how to pick my own lock. Even though this information is available in a book in the library.
Then go ask some ChineseGPT about this, I guess, as these models seem to be much less restricted on such topics (you could even get some explosives recipes, though not all of them are real and safe) /j
Also available to Fed Gov entities, surely.
For me, not thee
...or on YouTube.
I expect that only to escalate with time, especially when there'll be more agent-written code deployed.
Phrasing like this immediately makes me wonder what Google is lobbying for...
@dang would be great if the hn link was the 'unlocked' version i.e. instead of
https://www.nytimes.com/2026/05/11/us/politics/google-hacker...
this instead
https://www.nytimes.com/2026/05/11/us/politics/google-hacker...
(can read the article immediately; slightly less fuss)
Just fyi @username does not send any notifications on hackernews, not even to the mods.
To contact the HN mods, you need to send them an email.
At least, that's what we're told ;)
and I imagine out of anyone on HN, dang probably frequently searches for instances of dang. Sorry dang.
I can confirm the moderators (dang and tomhow) are very responsive by email.
Can we link to the actual google article, instead of these editorialized articles about the article?
https://cloud.google.com/blog/topics/threat-intelligence/ai-...
> Google said in research published Monday
What research? Where is it published?
If this is true, I hope AI exploit-finding will force the industry to harden itself against supply-chain vulnerabilities.
There was a discussion a few days ago on White House considers vetting AI models prior to release (https://news.ycombinator.com/item?id=48013608).
In past decades, the "firewall" of software was that advanced security and coding knowledge was not easy for just anyone to access; only a few of the smartest people in the big-name companies and top orgs had it. But nowadays that knowledge is accessible to everyone who uses a top LLM, which wipes out the difference. I would say that future public software is no longer safe. Maybe the concept of public software (like SaaS and the rest) will be dead, and software will only be private instead of public.
Wild that they think restricting access to models will help much. Access to Chinese models will definitely not be restricted and have enough capability to find exploits as well.
Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies.
> Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies
Unlikely in America or China. This is not a game either can singularly control, and locking down the R&D means conceding momentum to the party that doesn't. Which means use restrictions will be contained to countries satisfied with playing second fiddle.
Instead, I suspect we'll see momentum towards running software on publisher-controlled servers so the source code can be secured through obscurity. It isn't perfect. But it might be good enough to get us through this transition.
If America just banned all Chinese models, that would wipe out most of the open-weights landscape in AI, especially anything close to the frontier. I could easily see that happening if a Mythos tier model comes out of a Chinese lab in early 2027. It doesn't meaningfully change the research competition between OAI/Anthropic/Google/SpaceX, but it does pad all of their pockets by removing cheap competition, and it gives the government far greater control over AI usage de facto.
> I could easily see that happening if a Mythos tier model comes out of a Chinese lab in early 2027
I don't. I'm not saying American politics isn't capable of doing it. But I don't see us being stupid enough to try locking ourselves out of a technology that everyone else has access to.
Did you not see the foreign drone parts bans?
But we wouldn't be. I'm assuming that the US labs retain several months' lead for at least the next couple of years.
How would it be possible to ban Chinese LLMs?
Place the Chinese labs on the entity list. That stops any legitimate company from using them and probably makes HF take them down. Sure, there will be torrents, but the laws for doing business with a sanctioned entity bite much harder than the laws around copyright infringement.
> Place the Chinese labs on the entity list
Ironically, this -- a nascent industry and budding industrial cluster -- is the textbook case for deploying tariffs. America tariffs American use of Chinese models and pays that back as a tax credit to American developers.
As long as it is within the country, restriction works. How do you restrict the capability from a foreign entity, especially a hostile one?
Netsplit, I guess. Decide that the risk of an open network is too great and simply block all routing out of the country through the ISPs. And consider the political power that goes along with a global satellite constellation under the rule of a single, government-aligned corporation.
"simply block all routing out of the country" is doing a lot of heavy lifting. For government networks, sure. For civilian networks? It's a bit like stopping pirates from ripping video; how do you deal with an attacker that ultimately can gain some form of access? Even in North Korea external media can be smuggled in.
That works for very oppressive countries. However, more freedom-minded countries are not going to pass laws for that.
Didn't work out so well with cypherpunk technology, so there is hope.
If they tried to lock down local models, more people would use them. They would also have to take down a few US companies in the process, who would go down fighting for certain.
Dupe: https://news.ycombinator.com/item?id=48096712
This is 3 hours earlier than what you're sharing.
Not sure how article merging goes, but this one shows up as 4 hours later to me.
People used LLMs to find flaws in Google software.
If you're talking about the incident described in the article, it says it was a flaw in "a popular open-source, web-based system administration tool".
Google's blog (https://cloud.google.com/blog/topics/threat-intelligence/ai-...) says Google "worked with the impacted vendor to responsibly disclose this vulnerability", so in this incident, it's not Google software.
But did they use Gemini?
> the company added that it did not believe it was its own Gemini chatbot.
-TFA
I don't know, but given how often Gemini refuses benign requests IME, I would suspect it's a complete non-starter for finding security holes.
But in exchange we get to also waste vast energy and carbon while depleting job prospects for just about any college grad.
It's not all bad though. We also managed to turn the Information Superhighway of the 1990s into the Slop Wasteland of the 2020s.
But which AI exactly? There's this new Claude Mythos everyone is talking about; is it legit, or fluff?
Given how much software everywhere is now being written by LLMs, how is it top headline news that some (albeit malicious) software is being written with an LLM?
The robbers used a CAR in the robbery.
The blackmailer used a TYPEWRITER to write the blackmail letter.
Source: https://cloud.google.com/blog/topics/threat-intelligence/ai-... (https://news.ycombinator.com/item?id=48096712)
Why collect all the news dupes but not the source up top OP? Because the source was already submitted?
What a surprise, hackers used AI. I mean, why wouldn't they? Every programmer uses it...
Drives me nuts that the NYT just uncritically cites Anthropic's unverified claims of "thousands of zero-days" without a hint of skepticism.
If "bad guy AI" can find flaws, can "good guy AI" patch them faster when backed by trillion dollar companies?
"Google used AI to find a major software flaw" â there, fixed it for you, happy?
Do your AI patches introduce fewer flaws than they repair?
That's a trillion dollar question.
The bottleneck is probably validating and deploying the fix, which requires coordination.
If I sell weapons to both sides of a conflict, can I become rich?
No. To become really rich you have to draw a 3rd player into the conflict, and then sell weapons to them as well.
Or just lend money to both parties to fund their war efforts and pay off war debts afterwards.
Yes.
Please refer to any seller of weapons ever.
Ask anyone selling AI hardware recently!
I stopped reading after "Google says". They have destroyed whatever trust I might have had in them years ago.
Wait until the bio version of this shows up.
8 days ago:
https://www.nytimes.com/2026/04/29/us/ai-chatbots-biological...
...says yet another company hell bent on integrating it into every facet of our lives. This reads like a celebration, if you ask me.
The Google Threat Intelligence Group wants to increase its relevance and casually point out that it was not Mythos which found the exploit!
Security "researchers" are overpaid buffoons who hype things for their own salaries and their companies. And the stenographers from the press dutifully copy everything.
This is a despicable game to fool politicians into giving money and favorable AI legislation.
Strangely enough these buffoons never offer their models to open source developers. It is always a select group of highly paid other buffoons that throws some very occasional results over the wall.
Can Google please use AI to find bugs then?
Software is in such a state now. Gmail is full of bugs around sharing attachments, to the point that I have to tell my dad to turn his phone off and on again in order to attach a document.
https://secgemini.google/
https://projectzero.google/2024/10/from-naptime-to-big-sleep...
https://deepmind.google/blog/introducing-codemender-an-ai-ag...
Those are all for security vulnerabilities, OP is talking about bugs with functionality.
It's probably the AI overuse introducing many of those bugs in the first place...
I can't help but think that Apple is big on AI and their software seems to be going to hell too.
Apple software was crap long before AI. Most people just were blinded by semitransparent interface (pun intended) and marketing fluff.