I think the AI industry needs intelligent skeptics that keep the hype in check and ground us in reality.
But Ed Zitron is not it. Here's an example [1] of him fumbling simple arithmetic. He's also perpetually bearish, with no consistent principles behind his message.
This is what he wrote in 2024 [2]:
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
I think the industry really needs someone better, someone with principles.
[1] https://x.com/binarybits/status/2034376359909130249
[2] https://www.wheresyoured.at/never-forget-what-theyve-done/
Edit: here's another example https://x.com/blader/status/2031216372169191678
I get that people make mistakes, but it really does seem like there are no principles behind the guy; it seems like he'll write whatever.
I think you should focus on the claims in this article. There are plenty of principles espoused within.
Smearing his character without directly addressing those just stinks the place up.
I personally think the fact that it's an indie reporter like Ed Zitron diving into this says a lot about the state of tech media broadly. Reminds me a bit of how sports journalism works nowadays: nobody wants to call out industry leaders for fear of losing access, because losing access is career suicide.
False. The current mainstream media outlets are by far more anti-technology than pro. It is unclear why you think journalists fear losing access when the status quo is to oppose tech.
Respectfully disagree. Frontier lab CEOs have had incredible media access the last 4 years, making huge claims to the press without a lot of pushback or difficult questions. There's obviously no way to give some quantifiable metric on it, and reasonable people can disagree.
But Zitron frequently points out the inconsistencies in these data center deals, noting that companies like OpenAI and Anthropic make these announcements without a formal contract in place, companies like Oracle get a stock bump off of the news, and then we all find out from the mainstream press months later that the deal was never done and in fact may not even be happening anymore.
That's not really behavior you'd expect to see from a vehemently anti-tech press. They're happily making news to boost stock prices short-term, essentially acting as mouthpieces for large shareholders.
I disagree. I find popular media to be grossly negligent in their lack of skepticism. They love regurgitating pie-in-the-sky claims for clicks.
I'm at a loss as to how some of these projects got funded in the first place. Anyone funding these should have had the perspective to see that there isn't enough power for them. Anyone funding them should have had the perspective to see that by the time power could come online for even a significant fraction of them, the depreciation and interest costs should have murdered the company trying to do it, especially if their solution to that problem is the oh-so-21st century solution of "solving" the problem of losing money by levering up. It does no good to go out of business entirely in 2027 to make the phat buxx in 2030, which seems to be the best case scenario for this space as a whole.
The other question I have is... who exactly is doing all of: 1) using AI right now, 2) making substantial money on it or getting real value, and 3) capacity constrained? Who is actually going to productively soak up all this capacity? It seems to me that bringing all this stuff online can't really make things much cheaper than they are now, because the fixed costs aren't going anywhere, and if anything, trying to jam so many projects through all at once just raises those fixed costs even higher. It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick it full of AI systems, and then sell that 10x+ greater AI capacity at the prices they charge now. Higher capacity would crash the selling price, but the costs would be as high or higher than now.
I am at a complete loss as to how the numbers are supposed to work here. You can't build a company in 2026 on the economy and tech infrastructure of 2036, any more than it worked to build a company in 1999 on the economy and tech infrastructure of 2019, no matter how rosy the projections look when they conveniently ignore the fact that the company passes through "death" in a year and a half. Everything promised in 1999 happened, but trying to artificially accelerate it onto Wall Street's timeline burned money by the billions. I'm sure 2036 will have lots of AI in it, but you can't just spend money to bring it forward 10 years by sheer force of will. It has to happen at its own pace.
> The other question I have is... who exactly is doing all of: 1) using AI right now, 2) making substantial money on it or getting real value, and 3) capacity constrained?
Almost all enterprise users, for one. At least from what I have seen, it is a massive productivity boost for coding and general research. If the costs were ~4x lower, we would be able to do much, much more with them. Building datacenters will reduce the cost, because increasing supply drives prices down.
> It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick it full of AI systems, and then sell that 10x+ greater AI capacity at the prices they charge now. Higher capacity would crash the selling price, but the costs would be as high or higher than now.
This is false. Part of the price covers unit costs, which carry really high margins. I think the margins are around 50% to 60%. By increasing capacity, they are bound to make even more profit.
But the other part of the price reflects the current lack of capacity.
"Building datacenters will reduce the cost because increasing supply would reduce the cost."
That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
"This is false. Part of the costs are unit costs which are really high margin."
Can you explain how everybody throwing their money at nVidia lowers the costs, when they are already apparently at max capacity?
Everybody trying to build a data center at once raises the cost of every data center. Everyone competing for power has already raised power prices, and we've barely begun bringing this stuff online. Everyone demanding multiples of what nVidia is producing means nVidia isn't going to reduce prices any time soon.
Your use of "even more profit" also implies that you think the AI world is making lots of money. nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
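To make the disagreement concrete, here is a minimal Python sketch. The 50-60% gross margin is the claim upthread; the capacity, price, and fixed-cost figures are invented assumptions:

    # Toy model of the exchange above: gross margin on inference vs. the
    # fixed costs (depreciation, interest, power contracts) of a build-out.
    def annual_profit(capacity_units, price_per_unit, unit_cost, fixed_costs):
        revenue = capacity_units * price_per_unit
        variable = capacity_units * unit_cost
        return revenue - variable - fixed_costs

    # Today: $1.00/unit at a 60% gross margin (unit cost $0.40).
    print(annual_profit(1_000, 1.00, 0.40, fixed_costs=500))    # 100

    # Triple the capacity at today's prices: profit scales, as claimed.
    print(annual_profit(3_000, 1.00, 0.40, fixed_costs=1_500))  # 300

    # But if the extra supply crashes the price to $0.50 while the
    # tripled fixed costs remain, the same build-out loses money.
    print(annual_profit(3_000, 0.50, 0.40, fixed_costs=1_500))  # -1200

Under these toy numbers, added capacity grows profit only while prices hold; once supply pushes prices toward unit cost, the larger fixed-cost base dominates, which is the objection being raised here.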
Someone please correct my math.
The article says 240 gigawatts of capacity is allocated for AI datacenters.
New York City draws about 10 gigawatts in the hottest months of the year due to extra load from AC use.
So am I understanding correctly that these people want to foist upon the power grid 24 NYCs?
Also, my math might be wrong here, but:
That 240 GW is about 2.1 PWh (2,102 TWh) a year. At $0.01 per kWh that would be $21 billion. Multiply that by whatever the realistic all-in cost of power is, including transmission and storage... And that is just the power and power-infrastructure cost, depending on which numbers you use. That is a lot of money to burn on AI, and it really doesn't flow to the rest of the economy...
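For what it's worth, a quick Python back-of-the-envelope of that arithmetic, assuming (as an upper bound) that all 240 GW runs at full utilization year-round:

    # Back-of-the-envelope check of the 240 GW figure above.
    gw = 240
    hours_per_year = 8_760
    energy_twh = gw * hours_per_year / 1_000        # GWh -> TWh
    print(energy_twh)                               # 2102.4 TWh, i.e. ~2.1 PWh

    # At a very optimistic $0.01 per kWh, all-in:
    kwh = energy_twh * 1e9                          # 1 TWh = 1e9 kWh
    print(f"${kwh * 0.01 / 1e9:.1f}B per year")     # ~$21.0B per year

    # NYC comparison, using the ~10 GW peak figure above.
    print(gw / 10)                                  # 24.0 "NYCs"

Real all-in power costs run well above that $0.01/kWh floor, so the actual annual bill would likely be a multiple of $21 billion.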
It's probably more correct to say that there are some people who project that 240GW of additional power will be required by data centers in the near future.
Yes, that number is absurd, and data centers will certainly need to make do with less, regardless of actual requirements.
Earlier today on the radio I heard that Houston, TX was 20 GW at peak load.
Texas is doing its best to build as many datacenters & power plants as possible. They were describing it as "Texas will have more datacenters than anyplace else in the world." This was public radio, but everybody's taking a hit on the ol' AI pipe nowadays.
I've never thought of it in terms of "how many new metropolises are being added", but it seems like a decent unit of measure. If we use an average of 6 GW, we're adding essentially 40 NYCs.
Something I heard a person say recently:
> Isn't it weird how there is no huge industry pushback on all this new AI datacenter power need, as there was about electrifying vehicles?
Almost as if that "industry pushback" argument was not made in good faith? I wonder who would be against electric vehicles?
> I wonder who would be against electric vehicles?
The fossil fuel industry ?
What!? As if to suggest! An assertion most improper!
Was there a big industry pushback against looms, tractors, computers, or the internet?
Tractors didn't get it because, around the time they became useful for most farmers, WWII was pushing the need for fewer men on the farm so they could go to war. There were tractors before then, but the earlier ones had big negatives if you were not a much larger farm than most were then.
Famously against looms, yes. That's how we got the term Luddite, which rapacious capitalists redefined to be a negative.
I was going to cite that too, but it's not exactly industry pushback; it's labor pushback.
EVs, on the other hand, do have some obvious industrial adversaries.
At that time, the laborers WERE the industry?
Trying to prevent goods and services from being produced more efficiently is bad actually.
The comment section isn't nuanced enough to have this conversation and I am on a phone, but that is the way the industry slandered the Luddites, as the parent claims.
The truth was that the machines produced worse-quality goods and were less safe, not that people couldn't skill up to use them, and not that there wasn't enough demand to keep everyone employed. It was quality and safety.
You should look into the issue further, because I had your opinion too until I soberly looked at what the Luddites were really arguing for. It wasn't the end of looms; it was quality standards and fair advertising to consumers.
Trying to keep all of labor's sweat as the capitalist's own cash is bad actually.
Making clothing more efficiently by employing children in dangerous factories is bad actually (which is what happened in the original factories and happens now in fast fashion).
Given the absolute slop that passes as clothing nowadays, the Luddites had very good points actually.
Personally, I enjoy not spending 15% of my salary on clothing and textiles.
Of course you would enjoy that when every single externality involved has conveniently been exported elsewhere and you have been handily trained over generations to accept piss-poor quality clothing as normal.
Perhaps in a couple of centuries when a tube of nutrient slurry is the standard meal, people will be equally proud of not spending 15% of their salary on food...if salaries even exist by then.
There used to be a social contract, but now there are so many people that the lack of work for the displaced is a real problem. The leverage imbalance between a very small number of people with vast amounts of capital and a large number of people with very little capital or leverage is a societal dynamic that has existed before. There is historical precedent for this, and it's probably worth paying very close attention to what comes next if you are a very wealthy person pushing against all forms of wealth redistribution.
An analysis of datacenter commitments and GPU purchasing through the lens of how much power they will demand versus how much is available.
As someone who only has a passing interest, there isn't anything distilled enough in this article for me to comment on as the central point. Everyone seems to be reporting impossible numbers, and buying dramatically more hardware than they can install in a reasonable timeframe given the pace of the industry.
Very good points.
My current model for how AI will scale out is that we'll move through the following choke points:
AI chip makers -> Data center infra and construction -> regional power companies
Right now we're firmly in the "AI chip makers" part of the expansion, with everything else in the beginning stages. AI is useful, but whether it's hyped or not, it's hard to deny that not being able to build and power data centers will impact how this plays out.
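A toy way to picture that progression; the stage capacities below are invented purely for illustration:

    # Effective AI buildout is capped by the scarcest stage in the chain:
    # chips -> datacenter construction -> grid power. Numbers are made up.
    def effective_capacity_gw(chips_gw, datacenter_gw, grid_gw):
        return min(chips_gw, datacenter_gw, grid_gw)

    # Today (illustrative): chip supply is the binding constraint.
    print(effective_capacity_gw(chips_gw=15, datacenter_gw=25, grid_gw=40))  # 15

    # As chip supply catches up, the constraint migrates downstream,
    # ending at the regional power companies.
    print(effective_capacity_gw(chips_gw=60, datacenter_gw=45, grid_gw=30))  # 30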
I see him brag on Bluesky about how long these astroturf joints are. It had been a while since I'd tried to actually scroll through one, and it makes sense now: so much more space to spawn subscription promos. Better Offline indeed.
Railroads, e-commerce, and AI: all useful, all were (or may be) credit/stock bubbles. Railroads, however, have a much better depreciation schedule than GPUs.
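A minimal sketch of why the depreciation schedule matters, assuming ~30-year useful life for rail assets versus ~4 years for GPUs (both figures are assumptions):

    # Annual straight-line depreciation per dollar of capex.
    def annual_depreciation(capex, useful_life_years):
        return capex / useful_life_years

    print(annual_depreciation(1.00, 30))  # ~$0.033/year per capex dollar (rail)
    print(annual_depreciation(1.00, 4))   # $0.25/year per capex dollar (GPUs)

A GPU has to earn back its cost roughly 7.5x faster just to cover depreciation, before any profit.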
He isn't arguing that AI is useless. Only that Nvidia is propping up a massive financial deck of cards and that all the giant numbers being tossed around are fantasies.
> He isn't arguing that AI is useless.
This is what he said in 2024:
-----
The "iPhone moment" wasn't a result of one thing, but a collection of different bits that formed an obvious whole: one device that did a bunch of things really, really well.
LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy, but their inconsistency, with even ardent AI cultists warning people not to trust their output.
https://www.wheresyoured.at/never-forget-what-theyve-done/
It's supply and demand; as long as the demand is there, the numbers can be maintained.
Pointed and excellent.
But I wouldn't hold my breath waiting for introspection from that camp. It seems that AI maximalists, like so many other players these days, see it as end-game time. There are no bounds or rules: pick a side, and go. And then eat the rest.
Sure, not everyone sees it this way. There are highly competent, human actors working in their joy toward a better way forward with all of it. But I don't think you'll find that spirit unbridled inside any profit-seeking corporation of any significant standing (though I would be happy to be proven wrong). If it existed there, it is being choked out by selfishness and survivalism.
And then there's Thiel and ilk waxing eschatological, adding a whole other layer to the scheme.
The article takes an odd turn in the second half, veering away from a very interesting deep-dive into how a lot of backlogged US data center production may correlate with GPU "slippage" to China via questionable resellers and GPU rental outfits.
The lying is not even subtle anymore. The gap between the demo and the product has never been wider and people are starting to notice.
Ah yes, now that the rails are all built what could we possibly do next?
I don't think Ed has made a single correct LLM prediction, despite posting in a fury probably monthly since ChatGPT was released. Grifters gonna grift
Not sure it qualifies as an "LLM prediction," but he was adamant that Nvidia would not come through with the $100 billion funding round, and sure enough they did not.
To Ed's credit, he's coming with real numbers. Much of his reporting is based on quarterly earnings reports, press releases, correlating reports from outlets like The Information, etc.
Contrast that with hyperscalers no longer reporting AI revenue separately, making bold claims about long term growth with no evidence to back it up, and a tech media apparatus that has largely avoided asking founders hard questions.
I know just as well as you how this is all going to turn out (which is to say, nobody really knows). But I'll take the person doing the math over the person trying to hide numbers all day long.
See this [1] for how he comes up with his numbers. I think he says a lot of things without understanding them, and not many serious participants in the area take him seriously.
[1] https://x.com/binarybits/status/2034376359909130249
"The market can stay irrational longer than you can stay solvent"
"If you keep predicting market crash every single day of your life you will be the greatest predictor in the history of mankind because markets do eventually crash a little"
I don't think Sam Altman has made a single correct AGI prediction, despite saying AGI is a few months off. Grifters gonna grift
False; he made one of the most important predictions (https://blog.samaltman.com/ai), and he made it happen.
Whatever you think of this person, he did the thing he predicted. That's more than most people.
Calling him a grifter tells me more about you than about Sam.
You may be a bit emotionally invested in this topic if you feel you're getting a lot of information from that exchange.
Why do you think so?
Lol, he "did" it. Sam did nothing himself. CEOs, glorified spokespeople, don't do anything. Bet you also think Musk "made" electric cars and rockets.
That blogpost also didn't really "predict" anything.
lol, u think it's Marxist to say Sam Altman didn't make ChatGPT when he didn't write a single line of code
This is just Marxism.
Anyone with a single drop of common sense knows that Sam Altman is a grifter. If you don't see that, you are quite simply not bothering to apply critical thinking.
Altman is a grifter who is floating on the unexpectedly rapid advances in AI.
He will likely end up like Musk, another grifter who floated on low-hanging fruit in EVs and rocketry for a decade before being revealed.
The guy predicted a world-changing technological revolution 12 years ago, and he pioneered it himself. That is the opposite of a grifter.
His entire role in it is the grifting part (raising money based on BS). His job is, and has been at this and other companies, grifting. You loop him in if you need a hype-man who'll say any crazy thing to bring in a buck.