When AI Safety Collapsed, Agents Took Over, and $135 Billion Reshaped the Infrastructure
It was Monday morning, March 3rd, and I was scanning through the week’s AI news with my usual coffee. Within 90 minutes, I watched three foundations of the AI landscape crack simultaneously…
Anthropic—the company most associated with AI safety—abandoned its safety-first pledge because “it doesn’t work if competitors are racing ahead.”
Amazon bet $35 billion on OpenAI reaching AGI, making superintelligence a contractual milestone for the first time in history.
And Meta committed over $100 billion to break free from NVIDIA, fundamentally restructuring the AI hardware supply chain.
For three years, we’ve been watching a controlled experiment: can frontier AI labs race toward superintelligence while maintaining voluntary safety commitments? The answer arrived this week, and it’s definitive: No.
Competitive pressure destroys voluntary restraint. The company that waits is the company that loses.
And while the safety debate collapses, AI is quietly becoming enterprise infrastructure. Claude isn’t a chatbot anymore; it’s scheduling your workflows at 6 AM without you at the keyboard. Uber employees built an AI clone of their CEO for pitch practice. Burger King deployed AI in employee headsets to monitor whether workers say “please” and “thank you.” And a startup called Pulsia AI is now autonomously running over 1,000 companies simultaneously.
The race didn’t just accelerate this week. It went terminal: we’ve passed the point where anyone can slow it down, even if they wanted to. Let me walk you through what happened, why it matters, and what comes next.
NOTE: For the first time ever, I’ll be live-streaming a few select talks from my annual Abundance Summit (March 9 – 12, 2026) for free. If you want access to the livestream, click here.
1/ The Safety Collapse: When “Responsible” Became “Competitive”
Anthropic was supposed to be different. Founded by former OpenAI researchers specifically to build AI safely, the company made a public commitment called the Responsible Scaling Policy (RSP): they would guarantee safety before training more powerful models. No exceptions. No compromises.
This week, that commitment evaporated.
Jared Kaplan, Anthropic’s Chief Science Officer, stated plainly: “The pledge doesn’t work if competitors are racing ahead.” The new policy? Match rivals’ safety efforts while being transparent – a fundamental shift from “we won’t build it unless it’s safe” to “we’ll build it as safely as the competition does.”
Read that again. The most safety-focused AI lab just admitted that unilateral restraint is strategically untenable. They’re not abandoning safety entirely; they’re just no longer willing to lose the race because of it.
Here’s the elephant in the room: if Anthropic can’t maintain safety commitments under competitive pressure, no one can. We just watched the last credible mechanism for voluntary AI governance collapse in real time. OpenAI never pretended to prioritize safety over capability. DeepMind is part of Google, which is in an existential fight with Microsoft. xAI is Elon, who’s never met a restraint he didn’t ignore.
Anthropic was the exception. Now there are no exceptions.
The contrarian take? This might actually be more honest. A safety policy that only works if nobody else builds advanced AI was never realistic. Matching competitors’ standards while being transparent could produce better outcomes than promises no one can keep. But that’s cold comfort when you realize we just lost our safety brake.
2/ From Chatbot to Digital Employee: The Enterprise Transformation
While everyone was watching the safety debate implode, Claude quietly became something else entirely: an autonomous agent that runs tasks on a schedule across devices without you sitting at the keyboard.
This is not a chatbot upgrade. This is the transition from tool to employee.
Anthropic announced three capabilities this week that fundamentally change what Claude is:
CoWork Scheduling: Claude can now run recurring tasks automatically: analyzing spreadsheets at 6 AM, generating reports before meetings, processing workflows while you sleep. You’re not asking Claude questions anymore. You’re delegating ongoing responsibilities. (A minimal sketch of this pattern follows the list below.)
Remote Control Across Devices: Claude Code can now operate on remote machines. You don’t need to be at your desk. The AI works independently, executing tasks wherever they need to happen.
Enterprise Plugin Marketplace: Anthropic is building department-level AI infrastructure. Companies can create private marketplaces with customized agents for HR, finance, engineering, design, and investment banking. These aren’t generic assistants; they’re specialized co-workers trained on your company’s workflows.
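To make the delegation pattern concrete: Anthropic hasn’t published the CoWork scheduling interface here, so what follows is a minimal DIY sketch of the same idea using the standard Anthropic Python SDK plus the open-source `schedule` library. The model name, data file, and prompt are placeholders, not Anthropic’s actual product API.

```python
# Minimal sketch of the "recurring delegated task" pattern (NOT Anthropic's
# CoWork API): a cron-style loop that asks Claude to summarize a spreadsheet
# every morning. Requires `pip install anthropic schedule` and an
# ANTHROPIC_API_KEY environment variable.
import csv
import time

import anthropic
import schedule

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-5"     # placeholder; use whatever model you have access to


def morning_report() -> None:
    # Load yesterday's numbers (sales.csv is a stand-in for your real data source).
    with open("sales.csv", newline="") as f:
        rows = list(csv.reader(f))

    message = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Summarize anomalies and trends in this data:\n"
                       + "\n".join(",".join(r) for r in rows),
        }],
    )
    # A real deployment would email or post this; here we just print it.
    print(message.content[0].text)


schedule.every().day.at("06:00").do(morning_report)

while True:  # once the job is registered, no human appears in this loop
    schedule.run_pending()
    time.sleep(60)
```

The point isn’t the thirty lines of glue; it’s that once the job is registered, the human appears nowhere in the loop.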
Here’s why this matters more than it sounds: finance and HR are among the most regulated, liability-heavy functions in any company. One hallucinated compliance recommendation could create massive legal exposure. One payroll error affects real people’s lives. Anthropic isn’t testing these agents on low-stakes tasks—they’re positioning Claude for full enterprise orchestration in domains where mistakes have regulatory consequences.
And enterprises are buying it. As my Moonshot Mate Dave Blundin pointed out on our podcast, “The AI co-pilot is gathering a huge amount of data, and a lot of that data will go into the decision on what can be automated. Over time, everything can be automated.”
Which brings us to Uber.
This week, Uber employees built an AI clone of CEO Dara Khosrowshahi for pitch practice. It simulates his personality, feedback style, and decision-making patterns. Officially, it’s a coaching tool that lets employees test their pitches before presenting to the real Dara.
But here’s the next step nobody’s saying out loud: if you can clone a CEO’s feedback style convincingly enough for pitch practice, you can use that same AI to make actual decisions. How long before “Dara AI” starts approving projects without human Dara in the loop? How long before every company has AI leadership clones handling routine decisions?
We’re not asking “Can AI do knowledge work?” anymore. We’re asking “How fast will humans become optional?”
And if you think that’s dystopian, wait until you hear what’s happening at Burger King.
3/ Meat Puppets: When AI Becomes Your Manager
Burger King just launched “Patty” – an AI voice assistant powered by OpenAI that lives in employee headsets and monitors every interaction.
Officially, it’s a “coaching tool.” It helps workers remember procedures, synchronizes inventory, and offers real-time encouragement. But let’s call this what it is: workplace surveillance that tracks tone, language, and efficiency at the lowest-wage level of the workforce.
Patty monitors whether employees say “please” and “thank you.” It knows when you drop the fries for the third time that morning. It rates your performance continuously and sends that data somewhere. The workers wearing these headsets have essentially zero power to push back.
This was predicted 20 years ago in a sci-fi novel called *Manna* by Marshall Brain. Human workers on headsets, all taking directions from a centralized AI. Alex Wissner-Gross called it perfectly on our Moonshots podcast: “We’ve arrived in Manna, and it starts with fast food.”
The elephant here is obvious: calling this a “coaching tool” is corporate euphemism. This is training data collection. Amazon delivery workers wearing AR glasses aren’t being helped; they’re training the models that will replace them with robots. Burger King employees using Patty are doing the same thing.
The AI is learning what’s automatable. And within two to three years, humanoid robots will be flipping those burgers.
Unions will rebel. Workers will protest. But as Dave Blundin pointed out, “If you get one in a thousand employees to volunteer for the AI coach, that’s all the training data you need.” The transition period won’t be long enough for political counter-movements to gain traction.
And if that feels dystopian, consider this: we now have AI running entire companies with no humans in the loop at all.
4/ The Company That Runs 1,000 Companies
Pulsia AI, created by Ben Serra, is autonomously running over 1,000 companies right now. Not managing them. Not assisting them. *Running* them.
These aren’t billion-dollar enterprises; they’re micro-businesses. But they’re real. You can visit their websites. You can purchase products through Stripe. They conduct outreach, negotiate deals, and operate continuously without human intervention. Every decision is visible live on Pulsia’s website.
This is Coase’s theory of the firm collapsing in real time. The transaction costs that justified large hierarchical corporations? They’re evaporating. The marginal cost of launching a company is now $50 a month. We’re about to see an explosion of AI-run micro-companies competing in markets we didn’t know existed.
Moonshot Mate Salim Ismail nailed the implications: “Before this really has time to penetrate, you’re going to have drone deliveries of food like this. And it’ll obviate a lot of this.” The speed of change is compressing so fast that business models are obsolete before they scale.
But here’s the legal nightmare: who is liable when an AI-run company commits fraud, breaches a contract, or harms a customer? Our legal system has no framework for autonomous corporate entities with no human decision-makers.
The contrarian pushback? These aren’t “real companies” in any meaningful sense – they’re automated outreach bots with a corporate wrapper. They can’t innovate, build culture, or handle novel situations.
Maybe. But that’s what people said about e-commerce, social media, and ride-sharing. And now those “not real” business models dominate the economy.
The future Dave Blundin predicted is arriving: single-person conglomerates. One human overseeing dozens or hundreds of AI agents, each running a business. Think of it as a one-person private equity firm, except instead of buying companies, you’re spawning them.
And while all this was happening, the infrastructure wars reached a new level of intensity.
5/ $135 Billion in Seven Days: The Infrastructure Wars Go Nuclear
Three deals this week fundamentally reshaped the AI hardware supply chain:
Amazon’s $35 Billion OpenAI Bet: Amazon made a conditional offer to invest $35 billion in OpenAI, but here’s the twist: 70% of the funding only triggers if OpenAI reaches AGI or goes public. This dwarfs Microsoft’s $13 billion investment and marks the first time a major tech deal is tied directly to achieving artificial general intelligence.
Who defines AGI? If OpenAI gets to declare when AGI is achieved, they control when $35 billion unlocks. The definition of superintelligence just became a financial negotiation, not a scientific one. We’ve financialized the singularity.
The deal also requires OpenAI to use Amazon’s Trainium 2 chips for training, positioning Amazon as the exclusive third-party cloud host for OpenAI’s automated AI workers. Amazon missed the frontier AI boat while backing Anthropic. Now they’re paying a premium to deal themselves back into the game.
Meta’s $100 Billion AMD Breakup: Meta committed over $100 billion to AMD in a historic bet to break free from NVIDIA dependency. This includes Meta buying 6 gigawatts of AI compute and taking a 10% equity stake in AMD.
Jensen Huang’s margins at NVIDIA are so high they’re almost unsustainable. Meta is betting that AMD can close the gap. The challenge? AMD’s software ecosystem (ROCm) still significantly lags NVIDIA’s CUDA. You can buy all the chips you want, but if developers don’t want to use them, you’ve just locked in a $100 billion anchor.
The contrarian take from Alex Wissner-Gross, on compute diversification more broadly: “Meta going to Google for TPUs is a sign NVIDIA’s monopoly is finally cracking. The diversification of compute suppliers is healthy for the industry and could bring costs down for everyone.”
CoreWeave’s 110% Revenue Growth: CoreWeave raised $8.5 billion for data centers backed by a $14.2 billion contract from Meta. Their Q4 revenue grew 110% year-over-year. But here’s the risk: CoreWeave’s entire business model is built on a handful of mega-customers. If Meta shifts strategy, that 110% growth could evaporate overnight.
Together, these three deals represent over $135 billion committed in seven days. That’s not venture capital. That’s infrastructure-scale capital reshaping the supply chain in real time.
6/ When Smaller Models Win: The Alibaba Efficiency Revolution
Alibaba just released Qwen 3.5 Medium: a 35-billion-parameter model that outperforms its 235-billion-parameter predecessor, Qwen 3, on benchmarks. Let that sink in. A model roughly one-seventh the size is beating the larger version.
This signals a fundamental shift: brute-force scaling is no longer the only path to performance. Model efficiency is advancing faster than model scale. And that has massive implications for the AI infrastructure buildout.
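Some back-of-the-envelope math on why parameter count drives infrastructure cost. The sketch below assumes standard 16-bit weights (2 bytes per parameter) and roughly 80 GB of memory per H100-class GPU, and it ignores KV cache, activations, and any mixture-of-experts savings; the GPU figure is a rough assumption, not a spec sheet.

```python
# Back-of-the-envelope serving footprint for a dense checkpoint.
# Assumptions: 16-bit (2-byte) weights; ~80 GB per H100-class GPU;
# KV cache, activations, and MoE sparsity savings all ignored.
import math

BYTES_PER_PARAM = 2   # fp16 / bf16
GPU_MEMORY_GB = 80    # one H100-class accelerator, roughly

for name, params_billion in [("Qwen 3 (235B)", 235), ("Qwen 3.5 Medium (35B)", 35)]:
    weights_gb = params_billion * BYTES_PER_PARAM  # 1e9 params x 2 bytes = 2 GB per billion
    gpus = math.ceil(weights_gb / GPU_MEMORY_GB)
    print(f"{name}: ~{weights_gb} GB of weights -> at least {gpus} GPU(s) just to hold them")

# Qwen 3 (235B): ~470 GB of weights -> at least 6 GPU(s) just to hold them
# Qwen 3.5 Medium (35B): ~70 GB of weights -> at least 1 GPU(s) just to hold them
```

Seven times fewer weights is the difference between fitting on a single card and needing a multi-GPU node: commodity serving versus data-center exotica.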
Alibaba has released three open-weight models in 2026 alone, running circles around Mistral and Meta. They’re proving that you don’t need frontier-scale compute to achieve frontier-scale results.
The contrarian take from our podcast: “This is bad news for big-compute incumbents. If smaller, cheaper models keep closing the gap, the massive data center investments by U.S. hyperscalers could become stranded assets. The DeepSeek/Qwen efficiency path may win.”
The counter-argument? Benchmarks don’t equal real-world usefulness. Chinese models lack the trust and ecosystem that drive actual enterprise adoption outside China. We’ll see.
And speaking of efficiency, Google just made image generation absurdly cheap…
7/ Nano Banana 2: When Professional Images Cost $0.045
Google released Nano Banana 2 (Gemini 3.1 Flash Image) this week, and it’s a market killer: $0.045 per 4K image with professional quality and full object fidelity. It supports up to 14 objects with high detail and runs 3-5x faster than its predecessor.
AI-generated visuals are now cheaper than stock photography. That’s not hyperbole; it’s math.
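Here’s the math, spelled out. The $0.045 figure comes from the announcement above; the $10 stock-license price is an assumed illustrative figure, not a quoted rate.

```python
# The stock-photo comparison, concretely. The $0.045 figure is from the
# announcement above; the $10 stock license is an illustrative assumption.
AI_COST_PER_IMAGE = 0.045     # Nano Banana 2, 4K output
STOCK_COST_PER_IMAGE = 10.00  # assumed typical per-image stock license

images = 1_000  # a mid-size marketing campaign
print(f"AI-generated: ${AI_COST_PER_IMAGE * images:,.2f}")     # $45.00
print(f"Stock photos: ${STOCK_COST_PER_IMAGE * images:,.2f}")  # $10,000.00
print(f"{STOCK_COST_PER_IMAGE / AI_COST_PER_IMAGE:.0f}x cheaper")  # 222x
```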
The elephant? This effectively destroys the market for a huge swath of commercial photographers, illustrators, and stock image platforms. And nobody in Google’s announcement is talking about that displacement.
The contrarian view: cheap, abundant AI images will devalue visual content to near zero, which paradoxically makes authentic human-created art more premium and valuable—not less. Scarcity creates value. When everyone can generate perfect images for pennies, the one-of-a-kind human piece becomes the luxury good.
Maybe. But tell that to the stock photographers whose business model just evaporated overnight.
8/ What This All Means: The Training Wheels Are Off
So… let me connect the dots.
This wasn’t just a busy news week. This was a phase transition. We moved from “Can AI replace knowledge work?” to “How fast will humans become optional?”
We moved from “Should we slow down for safety?” to “Safety is whatever the competition does.” We moved from “Will AI need massive infrastructure?” to “$135 billion committed in seven days.”
The training wheels came off. And there’s no putting them back on.
Here’s what you need to understand:
For Entrepreneurs: The opportunities are staggering. AI-run companies, enterprise agent marketplaces, power infrastructure, efficiency-optimized models—these are all white space markets where first movers will dominate. The companies being built right now will define the next decade.
For Investors: Follow the infrastructure. Chips, data centers, energy, robotics—this is where capital is flowing at unprecedented scale. The Magnificent Seven already represent $20 trillion of the $50 trillion U.S. public market. That’s going to grow.
For Policy Makers: Speed matters more than caution. The states and countries that move fastest to approve data centers, energy projects, and AI deployments will capture trillions in economic value. The ones that hesitate will watch it flow elsewhere.
For Workers: The transition from AI co-pilot to AI replacement is happening faster than anyone predicted. If your job involves routine knowledge work, you have 2-3 years to position yourself as irreplaceable or transition to something AI can’t do yet.
And if you’re worried about safety, regulation, or societal disruption… I hear you.
These are real concerns. But the velocity mismatch between technology (advancing weekly) and institutions (updating every few years) means the transition is happening whether we’re ready or not.
The question isn’t whether AI becomes enterprise infrastructure, replaces workers, and reshapes the economy. The question is whether we shape that transition intelligently or let it happen to us.
The Bottom Line
We just watched voluntary AI safety governance collapse, $135 billion reshape the hardware supply chain, and AI transition from tool to autonomous agent—all in seven days.
This isn’t hype. This isn’t speculation. This is civilization-scale transformation happening at machine speed.
The race didn’t just accelerate this week. It went terminal. And the only question now is whether you’re positioned to capture the value of what comes next.
I’ll see many of you at the Abundance Summit this coming week (live or on the livestream), where we’ll dive even deeper into these trends with Eric Schmidt, Dara Khosrowshahi, and the frontier thinkers building this future.
The training wheels are off. Let’s see how fast we can go.

