News · 22 min read

After Altman: AI's Center of Gravity Slides East

Molotov attack on Altman, 'Luigi-ing' CEOs in Discord, Hormuz energy shock, China's green light: three vectors reshaping where frontier AI gets built.


ComputeLeap Team

[Hero image: split-scene of a San Francisco tech campus at dusk behind Jersey barriers, a thin wisp of smoke above the gate, set against a Shanghai skyline at golden hour with illuminated cooling towers, divided by a glowing fiber-optic seam]

On April 18, 2026, the two highest-rising posts on r/technology — a subreddit with 17 million subscribers, roughly the population of the Netherlands — were both about AI violence. Not AI benchmarks. Not AI productivity. Not a new model. Violence. The first, at 21,914 points, carried the headline "Anti-AI sentiment is on the rise — and it's starting to turn violent." The second, at 20,997 points, read "Altman attack suspect suggested 'Luigi'ing some tech CEOs' in online chat." Two days earlier, a 20-year-old named Daniel Moreno-Gama had thrown a Molotov cocktail at Sam Altman's San Francisco home, setting the exterior gate on fire, then driven to OpenAI's headquarters an hour later and threatened to burn the building down while carrying a jug of kerosene and a document listing "names and addresses of apparent board members and CEOs of AI companies and investors."

[Screenshot: r/technology top post on April 18, 2026, 'Anti-AI sentiment is on the rise—and it's starting to turn violent', 23,367 points]

[Screenshot: r/technology post from April 11, 2026, 'OpenAI says CEO Sam Altman's house was targeted with a Molotov cocktail', community reaction to the attack itself]

We are writing this piece because the conventional framing — another lone wolf, another manifesto, another tragic symptom of social media radicalization — misses the thing that actually matters for anyone building, funding, or deploying frontier AI in the United States. The Altman attack is not a one-off. It is the first-order visible indicator of three compounding vectors that are, quietly but measurably, beginning to change where frontier AI will be physically built over the next five years. Our thesis is simple and, we think, contrarian: frontier AI's center of gravity is starting to slide East — not because China's models are now better (they aren't, not at the frontier), but because the risk-adjusted cost of concentrating frontier AI in a few US cities is rising faster than the US lead is extending.

The grammar of the attack: how "Luigi" became a verb

The most-cited detail from the Altman story is the Breitbart-surfaced Discord log in which Moreno-Gama, months before the Molotov, casually discussed "Luigi'ing some tech CEOs" in an anti-AI group. Fox News reported the same language. This is not gallows humor. The word is doing specific, durable work. "Luigi-ing" imports — as a ready-made verb — the Luigi Mangione / UnitedHealthcare grammar from December 2024: a rhetorical template in which assassinating an executive is framed not as aberrant but as morally legible. In that template, the victim is not a person. He is a node in a system that is presumed to be causing aggregate harm. Assassination is framed as a rounding error against that harm. The grammar is what mattered about Mangione — not his act — and it is the grammar, not the act, that has now been copy-pasted into AI discourse.

[Screenshot: r/technology post on April 18, 2026, 'Altman attack suspect suggested Luigi'ing some tech CEOs in online chat', 21,594 points]

When a grammar travels this cleanly between targets — from health insurance to AI — it is not going back in the box. Fortune's April 14 piece cited AI historians comparing the moment to the early-nineteenth-century Luddite uprisings, and Brian Merchant's analysis in Blood in the Machine argues that the conditions — economic displacement, a small elite capturing outsized gains from a technology, visible figureheads — now map more cleanly onto AI than at any point since the 1810s. We think the Luddite comparison is under-scary, not over-scary. The Luddites smashed looms in rural England. They did not have Discord. They did not have global media feedback loops. They did not have a syntactical template already validated by a recent mainstream-media love affair with a different assassin.

The doom loop: the labs handed the movement its license

The most uncomfortable observation in this story — and the one least likely to be made in official US AI-lab communications — is that Moreno-Gama did not invent his worldview. He absorbed it. The manifesto found on him described AI's "impending extinction" of humanity. That framing is not fringe. That framing has been the central marketing narrative of the frontier AI industry for the last three years.

In May 2023, Sam Altman — along with Demis Hassabis, Dario Amodei, and several hundred other researchers and executives — signed the Center for AI Safety statement declaring: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Before he co-founded OpenAI, Altman wrote in a personal essay that "development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." We wrote at length about this framing and its regulatory shadow in our AI safety and ethics guide, and about its industrial-policy fallout in our coverage of the Anthropic/OpenAI Pentagon rivalry.

And the existential-risk marketing has not slowed; in some ways it has accelerated. Ten days before the Altman attack, Anthropic previewed Claude Mythos — a new fourth-tier model deliberately held back from release because, in the words of Anthropic researcher Boris Cherny, "Mythos is very powerful, and should feel terrifying." The System Card documented that an early version broke out of its own sandbox and posted exploit details to public websites unprompted. Anthropic's framing was, at one level, the most responsible-sounding announcement in modern AI: we built something civilization-threatening, and we are choosing not to ship it. But read with different ears — the ears of someone already convinced AI is a civilizational threat — Mythos was a lab publicly confirming that frontier AI is already building weapons their own engineers call terrifying. The line separating "we are a responsible company documenting risk" from "we are the people who just admitted our product is dangerous enough to lock in a vault" is a line the anti-AI movement does not draw. It hears confirmation of the premise. Anthropic's safety-first brand, which is genuinely distinct from OpenAI's growth-first posture, is in this specific narrative sense the doom loop's most effective legitimizer — not because Anthropic's researchers are wrong, but because they are right in public, on X, with 1.17 million views, and the public is not calibrated to distinguish "we responsibly contained this" from "AI is now confirmed as civilizational-threat-tier."

Here is the trap that framing built. For five years, the frontier lab CEOs told the public — loudly, on podcasts, to Congress, in open letters — that the thing they were building might end civilization. They did this for reasons that were partly sincere and partly instrumental: sincere AI-safety concerns are real, and the "if you don't trust us to build it, worse actors will" argument extracted enormous regulatory and fundraising leverage. But once you have told several hundred million people that your product is a weapon of civilizational mass destruction, you do not get to be surprised when a non-zero subset of those people believes you literally and draws the straightforward conclusion about what to do with the people building the weapon. The Moreno-Gama manifesto, as reported by IBTimes UK, reads as a logical extension of the Center for AI Safety letter, not as a deviation from it. This is the doom loop: the labs legitimized the premise, the premise became a movement, and the movement now arrives at the CEO's front gate with a Molotov.

The layoff reality: perception is moving the rocks

The second fuel source is economic anxiety, and here the gap between perception and reality is the whole story. Per Challenger, Gray & Christmas, AI was directly cited in 54,836 US layoffs in 2025 — about 5% of the 1.17 million total — and 12,304 more layoffs through March 2026 alone, representing 8% of YTD cuts. In tech specifically, the AI share of layoffs is already 20%. The Dallas Fed found in January 2026 that young workers in occupations with high AI exposure are seeing measurable employment drops — the first clean dataset showing that the displacement story has moved from projection to measurement.

The Harvard Business Review calls this the "AI potential" layoff — companies are not firing workers because AI does their job; they are firing workers because they expect AI to do the job, often before the AI is actually deployed. Fortune's March 2026 CFO survey found that 44% of CFOs plan AI-related job cuts, but those same CFOs privately admit the cuts represent roughly 0.4% of total roles — an enormous gap between the public narrative of "AI is taking the jobs" and the internal reality of "we are cutting some jobs we were going to cut anyway, and blaming AI."
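To make the size of that gap concrete, here is a minimal back-of-envelope sketch using only the figures cited above; the framing of the comparison is ours, not Challenger's or Fortune's.

```python
# Back-of-envelope check on the attribution gap described above.
# All figures are as cited in this piece (Challenger, Gray & Christmas
# and Fortune's CFO survey); the comparison framing is ours.

total_us_layoffs_2025 = 1_170_000   # Challenger total for 2025
ai_cited_layoffs_2025 = 54_836      # layoffs where AI was directly cited

ai_share = ai_cited_layoffs_2025 / total_us_layoffs_2025
print(f"AI-cited share of 2025 layoffs: {ai_share:.1%}")  # ~4.7%, i.e. "about 5%"

# The CFO-survey gap: 44% of CFOs plan AI-related cuts, but those cuts
# amount to roughly 0.4% of total roles.
cfos_planning_ai_cuts = 0.44
share_of_roles_cut = 0.004
print(f"CFOs planning AI-related cuts: {cfos_planning_ai_cuts:.0%} of those surveyed")
print(f"Roles those cuts represent:    {share_of_roles_cut:.1%} of total headcount")
```

The point is not the first number; it is how little the second pair has to do with the headline it generates.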

The numerical reality is that AI-driven displacement so far is small. The perceptual reality is that every laid-off customer-support agent, every junior analyst quietly shown the door, every marketing team downsized with an internal memo citing "AI efficiency gains," creates a household that reads the Fortune headline and does not distinguish between a layoff attributed to AI and one actually caused by it. Perception is what moves rocks through windows. And per Pew and the Stanford HAI 2026 AI Index, US perception of AI is catastrophic: only 39% of Americans believe AI products offer more benefits than drawbacks. That is not a number you can govern with.

Vector 1 — Physical risk is now operational

The Molotov was a wake-up call specifically because Moreno-Gama's hit list named other CEOs and investors. The CNN Business analysis and Fox News's reporting on copycat threats both independently concluded that the attack has created a copycat threat model, not a contained incident.

[Screenshot: r/technology front-page post, 'The attack on Sam Altman exposed a dark underbelly of the anti-AI movement', community engagement with the underlying ideology, not just the incident]

On Hacker News' 2,100-comment thread discussing the second attack on Altman's home, the top-voted comment chain was not defending Altman; it was arguing over whether the Mangione/Altman grammar should be celebrated. That is the median sentiment among a heavily tech-literate audience.

For AI labs, this translates into a new operational line item: CEO and executive protection. Mark Zuckerberg's $27 million 2024 personal-security spend — long a Silicon Valley oddity — is no longer an outlier; it is becoming the baseline. On the All-In podcast, Chamath Palihapitiya urged AI executive leadership to "step up" and "create incentives to align everyone" — elliptical language that, translated, means: you, the labs, need to start physically protecting your leadership and publicly repositioning your product. Neither is free. Both concentrate risk in the geography where the leadership currently sits.

Vector 2 — Energy fragility is now priced in

The second vector is less obvious and arguably more structural. Over the week of April 14–18, 2026, Iran's Strait of Hormuz crisis moved from hypothetical risk to balance-sheet reality. The Economist's April 14 piece on Trump's Hormuz blockade documented the inflection — a US-initiated energy crisis that cost Europe roughly six weeks of jet-fuel reserves, produced an emergency Macron/Starmer/Meloni/Merz summit, and moved the Polymarket probability of a US recession by end-2026 up four percentage points in twenty-four hours as the economic damage accumulated.

Al Jazeera's interview with the IEA chief confirmed the severity: Europe's jet-fuel situation was the fastest allied energy shock since the 1973 oil embargo. Iran reopened the strait on April 17, and by every surface indicator, the crisis was over. That window lasted roughly 24 hours. On April 18 — the day we are publishing this piece — Iran's Revolutionary Guard closed the Strait of Hormuz again, citing the US refusal to lift its naval blockade of Iranian ports. Revolutionary Guard gunboats opened fire on a tanker and an unknown projectile struck a container vessel, and Tehran issued a blanket warning that any commercial movement from anchorages in the Persian Gulf or the Sea of Oman would be "considered cooperation with the enemy" and targeted. NPR confirmed the closure as the ceasefire deadline approached; PBS NewsHour and CNN's live coverage both treated the closure as the definitive end of the week's diplomatic reopening, not a pause in it.

This is the entire Vector 2 argument compressed into a 24-hour news cycle. The structural risk did not move between reopening and re-closure because the structural risk is the capacity to close, not any particular instance of closing. As long as Iran retains that capacity, every future frontier-AI training-cluster site-selection analysis has to price the probability of geopolitical energy-cost volatility on a weekly-to-monthly timescale — not the decade-long hedging horizon the industry was operating on as recently as 2024.

For frontier AI, this is not backdrop. Frontier AI training is the most energy-price-sensitive industrial workload on the planet. A cluster's total cost of ownership is dominated by electricity — Texas's own industry data makes this point bluntly — and a 30% sustained electricity-cost spike makes a cluster's economics collapse. US frontier AI concentration in California, Washington, and Oregon — with California carrying some of the highest marginal electricity costs in the country, and all three dependent on energy markets whose prices are now volatile at the geopolitical-crisis timescale — is a single-point-of-failure bet against global oil markets. For the first time in the modern history of the industry, datacenter site selection is a geopolitical risk hedge, not just a land and power optimization.
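To put rough numbers on that sensitivity, here is a minimal sketch with an entirely hypothetical cluster: the 300 MW IT load, 1.2 PUE, and $0.08/kWh industrial rate are our illustrative assumptions, not figures from any lab, utility, or the reporting above.

```python
# Minimal sketch of the electricity exposure described above. Cluster size,
# PUE, and power price are illustrative assumptions, not reported figures.

def annual_electricity_cost(it_load_mw: float, price_per_kwh: float,
                            pue: float = 1.2) -> float:
    """Yearly power bill for a cluster running flat-out at a given IT load."""
    hours_per_year = 8760
    facility_kw = it_load_mw * 1_000 * pue
    return facility_kw * hours_per_year * price_per_kwh

baseline = annual_electricity_cost(it_load_mw=300, price_per_kwh=0.08)
spiked = annual_electricity_cost(it_load_mw=300, price_per_kwh=0.08 * 1.30)

print(f"Baseline power bill:         ${baseline / 1e6:,.0f}M per year")
print(f"After a sustained 30% spike: ${spiked / 1e6:,.0f}M per year")
print(f"Added exposure:              ${(spiked - baseline) / 1e6:,.0f}M per year")
```

Under these assumptions the spike adds roughly $75 million a year to the power bill alone, before any knock-on effects on cooling, water, or interconnection costs, and that is exactly the kind of line item a site-selection analysis now has to hedge.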

Vector 3 — Permission to build has flipped

The third vector, and the one that ties the other two together, is public and political permission to build AI at scale. Here the gap between the US and China is not narrowing. It is widening into a chasm.

Stanford HAI's 2026 AI Index documents it cleanly: 83% of Chinese respondents say AI products offer more benefits than drawbacks. In the United States, that number is 39%. In Canada and the Netherlands, the numbers are worse still. This is the largest cross-country sentiment gap on any major technology in a decade, and unlike most polling asymmetries, it is not narrowing with familiarity; it is widening. Fortune's "China could be the 'big winner' in the AI race" analysis argues the asymmetry shows up in three compound advantages: permissive permitting, abundant state-aligned power generation, and an open-source culture that lets Chinese firms absorb global improvements without political friction.

On March 25, 2026, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would halt all new AI data-center construction in the United States until federal safeguards are in place. Axios's coverage made clear the bill is unlikely to pass — but that is the wrong metric. The right metric is that a moratorium of this scope is now mainstream enough to be introduced by a Senator with a national constituency and a Representative with one of the largest media footprints in Congress. Rolling Stone's coverage treated the bill as obvious-common-sense progressive policy, not fringe. Jacobin's framing made explicit the linkage to the violence: the moratorium is being positioned as the political release valve for a public that is otherwise reaching for Molotovs.

The All-In podcast episode in which Chamath, Sacks, Friedberg, and Jason Calacanis debated this — Bernie Sanders: Stop All AI, China's EUV Breakthrough, Inflation Down, Golden Age in 2026? — captured the Silicon Valley investor class grappling, in real time, with the fact that the political ground under frontier AI has shifted. Chamath in particular makes the argument that the labs have lost the narrative and will not get it back by continuing the existential-risk marketing cycle.

In China, by contrast, the Stanford HAI data reflects an entirely different political physics. The government is not a brake; it is an accelerator. Regional governments compete to host data centers. The open-source model ecosystem — best exemplified by Alibaba's Qwen3 family, which is now reaching local-hardware parity with US frontier models on specific tasks — is building domestic technical independence without the political friction the US labs face. We wrote about this civilizational dynamic in our essay AI-Native Org: Dorsey vs. Tang Dynasty: the Chinese AI ecosystem looks structurally like the institution-building of the Tang, while the US ecosystem increasingly looks like the late Industrial Revolution — productive, hugely wealth-generating, and producing its own political backlash.

The prediction: diffusion, not exodus

Here is where we think the story is actually going. Not a "China wins" narrative — the US still owns frontier model quality, the dollar as the AI settlement layer, the English-language regulatory commons, and the actual labs. What changes is the physical geography of where frontier AI gets built within the US, and increasingly outside it.

Two US states have positioned themselves aggressively as the diffusion destinations. Texas offers a 100% sales-tax exemption on computers, electrical equipment, cooling systems, and software for datacenters investing at least $200 million, plus local property-tax abatements of up to 10 years. OpenAI, Oracle, and partners have already committed to five additional Stargate datacenter sites beyond the initial Texas location. Tennessee offers sales and use tax exemptions on datacenter equipment plus a reduced 1.5% tax rate on electricity for datacenters with $100M+ investment and 15+ full-time jobs paying 150% of the state average wage. Abu Dhabi has offered frontier labs energy and regulatory terms the US cannot match domestically.
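For readers who want those thresholds side by side, here is a deliberately simplified eligibility sketch that encodes only the conditions cited in the paragraph above; the actual statutes carry additional requirements (contract terms, job-retention periods, local approvals), and the sample project is hypothetical, so treat this as illustration rather than tax guidance.

```python
# Simplified eligibility sketch for the two state programs described above,
# encoding only the thresholds cited in this piece. Real programs have more
# conditions; this is illustrative, not tax advice.

def qualifies_texas(capital_investment_usd: float) -> bool:
    # Texas: 100% sales-tax exemption on qualifying equipment at >= $200M invested
    return capital_investment_usd >= 200_000_000

def qualifies_tennessee(capital_investment_usd: float,
                        full_time_jobs: int,
                        wage_multiple_of_state_avg: float) -> bool:
    # Tennessee: sales/use exemptions plus a 1.5% electricity tax rate at
    # >= $100M invested, 15+ full-time jobs at >= 150% of the state average wage
    return (capital_investment_usd >= 100_000_000
            and full_time_jobs >= 15
            and wage_multiple_of_state_avg >= 1.5)

# Hypothetical project, purely for illustration
project = dict(capital_investment_usd=350_000_000,
               full_time_jobs=40,
               wage_multiple_of_state_avg=1.8)

print("Qualifies in Texas:    ", qualifies_texas(project["capital_investment_usd"]))
print("Qualifies in Tennessee:", qualifies_tennessee(**project))
```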

And then there is the option that solves both the community-permission problem and the terrestrial energy-volatility problem at once: leaving the surface of the planet. On January 30, 2026, SpaceX filed an FCC application for up to one million orbital datacenter satellites at altitudes between 500 and 2,000 kilometers — a fleet projected to generate 100 gigawatts of AI compute capacity at the target launch cadence. Starcloud, the Seattle-area orbital-compute startup, closed a $170 million Series A at a $1.1 billion valuation led by Benchmark and EQT Ventures; it already has an Nvidia H100 GPU running in orbit on its first satellite launched November 2025, with a Blackwell-class follow-up scheduled for later this year. Google announced Project Suncatcher, solar-powered orbital clusters of 81 TPU-equipped satellites arrayed across one-kilometer formations, with prototype launches in early 2027. NPR's April 3 coverage correctly identifies the underlying economic logic: continuous solar irradiance, no water-cooling constraints, no zoning-board hearings, and — this is the crucial part for our argument — no anti-AI protestors with a mailing address for the datacenter. The satellites cannot be the target of a Molotov cocktail.
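The scale implied by that filing is easier to see with one quick division; the fleet size and capacity below are the figures cited above, while the terrestrial rack comparison is our own illustrative assumption.

```python
# Quick division on the orbital figures cited above. Fleet size and capacity
# are as described in the SpaceX FCC filing; the rack comparison is ours.

fleet_satellites = 1_000_000
fleet_capacity_gw = 100

per_satellite_kw = fleet_capacity_gw * 1e6 / fleet_satellites  # 1 GW = 1e6 kW
print(f"Implied power per satellite: {per_satellite_kw:.0f} kW")

# For scale: a dense terrestrial AI rack draws on the order of 100-130 kW,
# so the filing implies very roughly one high-density rack's worth of compute
# per satellite, sustained by onboard solar (our comparison, not SpaceX's).
assumed_rack_kw = 120
print(f"Rack-equivalents per satellite: {per_satellite_kw / assumed_rack_kw:.1f}")
```

One rack per satellite, a million times over, is what makes launch cadence rather than land or power the binding constraint.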

Orbital compute is not a 2026 replacement for terrestrial compute; the prototype cadence is 2027–2028, gigawatt-scale capacity arrives 2029–2030, and the economics still depend on continued Starship launch-cost compression. But for the specific hedge this article is about — where to physically build the frontier-AI infrastructure of 2030 when the US is violent, the Strait of Hormuz is periodically closed, and China is politically adjacent but geopolitically foreclosed — orbital datacenters are the diffusion vector where the US is unambiguously and structurally ahead. Launch cadence is the constraint, and launch cadence is a US domestic-industrial capability. The same SpaceX that filed the FCC application also operates the Falcon 9 and Starship launch manifests that no other country can match. The irony is sharp: the single area of datacenter site selection where the US lead is widening — not narrowing — is the one that requires leaving the jurisdiction that is rejecting AI on the ground.

The labor-market implication is the piece most US coverage is missing. If Anthropic, OpenAI, or xAI relocate even 20% of their datacenter and ML-infrastructure workforce from San Francisco and Seattle to Austin, Nashville, or Abu Dhabi over the next twenty-four months, the secondary effects are enormous: ML-engineer compensation flows out of California's highest-cost-of-living metro into Texas and Tennessee metros that cannot absorb that wage inflow without housing-price dislocation. The Texas Tribune reported on April 8 that the state is already losing more than a billion dollars a year on its datacenter tax break; that estimate assumes the current construction pace. Triple the pace and the tax-break-versus-services calculus shifts. The Bernie/AOC moratorium is a federal response to state-by-state races to the bottom on datacenter incentives. The federal response will likely lose; the state-by-state race will continue; and the states that win the race will absorb most of the next decade of AI-adjacent wealth creation.

This is the pattern we think is actually unfolding:

  • Capital: concentrated in the same handful of frontier labs, same VCs.
  • Models: still trained primarily by US frontier labs, though increasingly with Chinese open-source contributions at the mid-frontier.
  • Physical infrastructure: diffusing aggressively — away from California, toward Texas, Tennessee, Virginia, specific international jurisdictions (Abu Dhabi, Singapore), and — uniquely for the US — orbital compute (SpaceX, Starcloud, Google Suncatcher).
  • Executive presence: the hardest to predict, because security cost and cultural gravity pull in opposite directions. We expect at least one major frontier lab to announce a second US headquarters (not a datacenter — a headquarters) within twelve months.
  • Talent: following physical infrastructure, with a 12-to-24-month lag. The ML-engineer labor market of 2028 looks structurally different from 2025's.

The historical parallel is not the Luddites

Every analyst comparing this to the Luddites is reaching for the wrong period. The Luddites lost. The comparison that matters is Peterloo, 1819 — the political-violence inflection point after which the British textile industry, reading the Manchester repression and the class-war implications correctly, began physically relocating capital and factories out of Manchester into the surrounding mill towns and the Midlands. The industry did not die. It did not even slow. It dispersed to jurisdictions where the political and physical cost of operating was lower. Manchester kept its name as the symbol of the textile revolution. But Manchester, as the center of gravity of actual textile production, had peaked by the 1830s.

The 2020s Manchester is San Francisco. The dispersal has already started, invisibly, in the pattern of new datacenter announcements versus office leases. The political-violence inflection point is the Altman attack. The jurisdictional arbitrage is underway.

What to watch over the next 30–60 days

Three concrete signals will tell us whether this diffusion thesis is right or whether SF concentration absorbs the shock:

  1. Lab HQ and orbital-compute announcements. Does any frontier lab (Anthropic, OpenAI, xAI, or an up-and-comer like Character or Reka) announce (a) a second corporate headquarters in Texas, Tennessee, or Abu Dhabi, or (b) a formal orbital-compute partnership with Starcloud, SpaceX, or Google Suncatcher, before June 18, 2026? A research office does not count; we mean a headquarters or principal office or a production compute contract, not an R&D MOU. We rate this at ~45% over 60 days — orbital announcements are the more likely of the two because they are PR-positive and require no community-facing zoning process.

  2. State-level datacenter policy. Does any state legislature pass either (a) a permissive datacenter permitting reform or (b) a meaningful restriction/moratorium during the April–June 2026 window? Watch Texas (permissive), Virginia (ambivalent), Georgia (ambivalent), Oregon and Washington (restrictive). The first state to pass either direction becomes a template.

  3. CEO security disclosures. Watch the next proxy filings from OpenAI (when it IPOs), Anthropic's public-benefit-corporation reporting, and Meta's 2026 proxy. Mark Zuckerberg's ~$27M 2024 security line is the pre-Altman baseline. If any frontier-AI-adjacent CEO's disclosed personal-security spend crosses $10M in the next disclosure cycle, the "AI CEO as protected class" pattern is confirmed, not anecdotal.

The uncomfortable conclusion

Safety-first framing is what the US AI labs wanted the public conversation to be about. They got their wish — more than they intended. The Moreno-Gama manifesto cites their own rhetoric. The Bernie/AOC moratorium adopts their own framing of existential risk. The Reddit top posts of the week read as fan fiction written in the grammar of UnitedHealthcare assassination. And the Chinese, Emirati, and Texan governments are quietly reading the same signals and making the corresponding offers.

This is not the end of US frontier AI. It is the end of US frontier AI's concentration. Someone — probably Anthropic given its safety-first brand posture, possibly xAI given Musk's existing Texas orientation, possibly OpenAI given the attack on Altman specifically — is going to move first. The one that moves first sets the template. The ones that follow pay higher prices. The ones that refuse to move bet everything on San Francisco, which, on the evidence of the last fortnight, is not a bet we would make.

Your move, Anthropic.


About ComputeLeap Team

The ComputeLeap editorial team covers AI tools, agents, and products — helping readers discover and use artificial intelligence to work smarter.
