Episodes

4 days ago
A federal judge just told the Pentagon it cannot punish Anthropic for insisting on AI guardrails. Judge Rita Lin's injunction was unusually blunt, and it may matter more than any model launch this week. This episode covers what the ruling means now that AI vendors may be able to push back on autonomous weapons and mass surveillance clauses without being frozen out of public-sector work, plus the immediate business stakes for defence buyers already using Claude.
Then there's OpenAI's very different week. Sora is being shut down after reportedly burning around $1 million a day in inference costs - with peak daily costs reaching $15 million - against just $2.1 million in lifetime revenue. That retreat connects to a wider pivot: enterprise capability, data centres, IPO optics, and the upcoming Spud model. The question worth asking is what happens when cost pressure and thinner safety oversight arrive at the same time.
Also in this episode: Shopify making AI commerce the default across 5.6 million stores. A Duke and Federal Reserve-backed CFO survey projecting over half a million AI-related job cuts in 2026. Google's TurboQuant breakthrough, which could cut large language model memory needs by 6x and challenge assumptions about endless GPU demand. Google's free Gemini personalisation push. And Mistral's €830 million infrastructure bet near Paris.

Wednesday Mar 25, 2026
NVIDIA just made its biggest move yet to become the layer your whole AI stack runs on - and that changes how companies buy, build and govern AI right now.
This week's lead story centres on GTC 2026, where Jensen Huang unveiled Vera Rubin, the Groq 3 LPX and Kyber, then backed it all with a staggering $1 trillion order target through 2027. The real signal isn't just faster chips. It's NVIDIA pushing from GPUs into inference, rack architecture, agent tooling and enterprise software partnerships with Adobe, Salesforce, SAP, ServiceNow and Cisco.
Also in this week's roundup: the White House's new AI legislation framework and its call for federal preemption over state AI laws, why that could simplify compliance for national businesses while weakening state-led protections, and why teams still can't count on one national rulebook any time soon.
There's also an update on Anthropic versus the Pentagon - a lot has happened since last week - with fresh court filings challenging the government's security narrative, and Judge Rita Lin saying the Pentagon's blacklisting of Anthropic looks like punishment.
Plus: Meta's potential $27 billion Nebius deal, showing the AI compute build-out is still accelerating. FedEx's AI Education and Literacy programme to build common AI fluency across the enterprise. And NVIDIA's H200 restart for China, underlining how geopolitics is now shaping compute access as much as product roadmaps.
And the rest of the roundup: Encyclopedia Britannica suing OpenAI - not just for copyright, but for trademark infringement when ChatGPT hallucinates facts and puts Britannica's name on them. Donald Knuth, arguably the most respected computer scientist alive, publishing a paper called "Claude's Cycles" after an AI solved a graph theory problem he'd been stuck on for weeks. Nearly a billion dollars flowing into AI robotics in a matter of days, with Mind Robotics and Rhoda AI both raising massive rounds. And Elon Musk admitting xAI needs rebuilding from the foundations up, just as SpaceX eyes what could be the biggest IPO in history.

Sunday Mar 22, 2026
This week I'm doing something a bit different. Instead of reacting to a product launch or a funding round, I'm digging into the Stanford Emerging Technology Review 2026, specifically its AI chapter. It's co-chaired by Condoleezza Rice, Jennifer Widom, and Amy Zegart, produced across Stanford's School of Engineering, the Hoover Institution, and the Institute for Human-Centered AI, and it's deliberately trying to inform rather than advocate.
I pull out the parts I think deserve serious attention from technology leaders: the gap between AI's foundational promise and its current operational fragility, the overhyped narrative around agents, the hollowing out of the public research pipeline, and the governance and legal shifts that should already be affecting decisions your organisation is making today.
You can find the Stanford Emerging Technology Review 2026 here: https://setr.stanford.edu/

Wednesday Mar 18, 2026
The most important AI story this week might be a court fight. Anthropic’s legal battle with the Pentagon has escalated into what could become a precedent-setting test of whether the US government can punish an AI company for pushing safety guardrails, and the reaction from inside the industry is arguably the bigger signal. I break down Anthropic’s two federal lawsuits, the emergency motion to pause the Pentagon’s supply chain risk designation, the claim that the move could wipe out hundreds of millions or even billions in 2026 revenue, and why support from more than 30 employees at OpenAI and Google DeepMind, including Jeff Dean, matters far beyond one blacklist dispute.
From there, the episode shifts to Microsoft’s launch of Copilot Cowork inside Microsoft 365, powered by Anthropic’s Claude Opus 4.6. This is not another single-app assistant story. It is Microsoft pushing AI towards orchestration across Outlook, Teams, Excel and PowerPoint, while also signalling that enterprise buyers want model choice rather than permanent dependence on one provider.
I also get into Oracle’s plan to cut 20,000 to 30,000 jobs to fund AI infrastructure, Block’s 40% workforce reduction and Jack Dorsey’s blunt AI-efficiency narrative, plus NVIDIA’s Vera Rubin architecture and NemoClaw open-source agent platform unveiled at GTC 2026. I round out with the FTC’s aggressive policy stance against state AI rules, Anthropic’s new Institute, accusations of Claude distillation attacks by Chinese labs, and why Enterprise Connect’s message was simple: prove the ROI, prove the guardrails, and prove the rollback plan.

Wednesday Mar 11, 2026
Anthropic’s Pentagon standoff did more than derail a $200 million contract. It turned AI ethics into a live commercial test, sent Claude to number one on Apple’s US App Store, and forced a much bigger question into the open: can trust become a business model in AI?
In this episode, I break down how Anthropic rejected Pentagon language allowing Claude to be used for any lawful use, why Dario Amodei pushed for explicit limits on domestic mass surveillance and fully autonomous weapons, and how the fallout escalated into a federal ban, a supply chain risk designation, and a very public consumer backlash against ChatGPT. I then contrast that with OpenAI’s launch of GPT-5.4, where the real story is not the branding but computer use: a model that can read your screen, control mouse and keyboard inputs, and move across messy enterprise systems like a junior operator rather than a chatbot.
The episode also unpacks Google’s Gemini 3.1 Flash-Lite pricing move and what commodity economics could mean for AI features, China’s 15th Five-Year Plan and its state-led push into AI, quantum and robotics, Netflix’s acquisition of Ben Affleck’s InterPositive and the rise of AI as invisible production leverage, and Meta’s $600 billion infrastructure bet with AMD. Add in AI-enabled cyberattacks on FortiGate devices, new state laws in Oregon, Utah and Vermont, and Gartner’s $2.52 trillion AI spending forecast, and this becomes a sharp 20-minute briefing on where AI strategy, policy and business reality are colliding right now.

Monday Mar 09, 2026
One year after vibe coding entered the conversation, what has actually changed for software teams? In this episode of The AI Breakdown, I look past the hype and ask a simpler question: are AI coding tools genuinely making developers faster, or are they just creating more output, more review, and more hidden complexity?
Drawing on recent research, industry data, and practical experience, this episode explores where AI is helping, where the productivity gains are less clear, why trust remains low, and what all of this means for code quality, junior developers, and the future shape of engineering teams. The conclusion is not that AI coding is overhyped, and not that software development has been solved, but that the real opportunity lies in moving beyond vibe coding toward a more disciplined model of AI-first engineering.

Wednesday Mar 04, 2026
This week I'm unpacking OpenAI's record-breaking $110 billion raise and what Amazon and NVIDIA's involvement tells us about a partner landscape that's shifting faster than most people realise. I also dig into Anthropic's $30 billion Series G, and why it's time to take that one seriously as a strategic bet.
Then there's Apple quietly admitting it can't build AI fast enough, handing Siri's core logic to Google Gemini. Also, the hyperscaler spending numbers are extraordinary, and I explain why the energy and infrastructure story is just as important as what's happening at the model layer.
Plus: a reality check on Microsoft Copilot's 3.3% penetration, the Snowflake and OpenAI data gravity play, Samsung's push to put Gemini on 800 million devices, and what a protest march through London's tech hub on a Saturday morning tells us about where the regulatory conversation is heading.

Thursday Feb 26, 2026
This week on The AI Breakdown: OpenAI enlists McKinsey, BCG, Accenture, and Capgemini to push its Frontier agent platform into enterprises. The Pentagon issues an ultimatum to Anthropic over military use of Claude, threatening to designate the company a "supply chain risk." Claude Code hits $2.5 billion in annualised revenue while a new security tool wipes billions off cybersecurity stocks in a single session. The "SaaSpocalypse" deepens as nearly $1 trillion in software market value evaporates. Spotify reveals its best engineers haven't written a line of code since December. Plus: Google launches Gemini 3.1 Pro, India hosts a $200 billion AI summit, Perplexity ditches ads entirely, and OpenAI closes in on a $100 billion funding round at an $850 billion valuation.

Sunday Feb 22, 2026
700 million people now use AI every week. Are we keeping up with the risks?
Over 100 experts from 30+ countries just published the most comprehensive global assessment of AI risk ever produced. In this episode, I break down the International AI Safety Report 2026 — what AI can actually do today, the three categories of risk every business needs to understand, why some AI systems now behave differently when they know they're being tested, and the research that's changed how I think about my own AI use.
Read the full report: https://internationalaisafetyreport.org/

Wednesday Feb 18, 2026
This week on The AI Breakdown, Anthropic just raised $30 billion at a $380 billion valuation, making it the second-largest private funding round in tech history. Meanwhile, OpenAI dropped GPT-5.3-Codex-Spark, their first model running on Cerebras hardware instead of NVIDIA, pushing past 1,000 tokens per second and redefining what "fast" means for AI-assisted coding.
But the story of the week might be the one that got less attention: researchers caught an infostealer exfiltrating the entire identity of an OpenClaw AI agent - tokens, cryptographic keys, behavioural guidelines, and private memory files. It's a stark preview of what happens when agents become high-value targets.
Beyond the headlines, we dig into Claude Cowork landing on Windows, Google quietly shipping Gemini-powered audio summaries in Docs, Zoom pushing deeper into agentic workflows, Microsoft wiring up new Copilot connectors, Slack's rebuilt Slackbot, and Oracle Health rolling out AI clinical note-drafting across the NHS. Plus, OpenAI hired the founder of OpenClaw, and what that tells us about the race to own the agent layer.
