Microsoft puts competing AI models inside one enterprise workflow, while the regulator overseeing nuclear power for AI data centers quietly lost more than 400 people.
🤖 Microsoft's Copilot Cowork: GPT Drafts, Claude Fact-Checks in One Enterprise Workflow
Decoded: Microsoft launched Copilot Cowork through its Frontier Program on March 30, bringing Anthropic's Claude into Copilot's agentic toolkit for long-running, multi-step tasks. The release includes an upgraded Researcher agent for structured information gathering and a new Critique feature: GPT-4o drafts the initial research output, then Claude runs an automated accuracy and editing pass before the final result is delivered to the user. The dual-model architecture makes Copilot Cowork the first enterprise AI product from a major vendor to officially deploy competing models as complementary agents within a single workflow. Copilot Cowork is currently available to Frontier Program early-access customers; a broader Microsoft 365 rollout timeline has not been disclosed. (Reuters, The Verge, Microsoft official blog, March 30, 2026)
Why it matters: Microsoft (MSFT) is the largest commercial distributor of OpenAI's models, through Azure and Copilot, and now distributes Anthropic's Claude as well; Copilot Cowork is the first product to use both inside a single task. Rather than picking one model winner, Microsoft is positioning itself as model-agnostic enterprise infrastructure: who wins the AI model competition matters less than who controls the enterprise workflow layer those models run inside. For buyers, a product where one model's draft is automatically stress-tested by a competing model addresses a core enterprise trust gap in AI output quality. The early-access rollout structure suggests Microsoft is validating multi-model workflows before a full Microsoft 365 Copilot redesign, consistent with its pattern of staged Frontier launches.
🏛️ Silicon Valley's Nuclear Push for AI Power Runs Into a Hollowed-Out Safety Regulator
Decoded: Microsoft, Google, and Amazon are backing next-generation nuclear reactors to power hyperscale AI data centers, positioning nuclear as the only energy source that can deliver 24/7 gigawatt-scale power at the site density AI infrastructure requires. But the Nuclear Regulatory Commission — the U.S. agency responsible for approving and overseeing nuclear reactors — has lost more than 400 employees in recent months, largely from its safety and licensing staff, following Trump administration workforce reductions. Former NRC Chair Allison Macfarlane told ProPublica: "The regulator is no longer an independent regulator — we do not know whose interests it is serving." The NRC staffing losses arrive as AI data center construction has driven a surge in nuclear permitting applications, creating a structural mismatch between application volume and the agency's capacity to process them. (The Verge, ProPublica, March 30, 2026)
Why it matters: Constellation Energy (CEG) and Vistra (VST) have appreciated significantly on the thesis that AI hyperscalers will restart and extend U.S. nuclear capacity to power data center growth. That thesis depends on a functioning regulatory pathway. A hollowed-out NRC introduces two specific risks: delayed permitting approvals that extend project timelines and capital deployment schedules for both hyperscalers and nuclear operators, and reduced safety oversight that raises the probability of an incident capable of triggering a political or market reversal of nuclear's resurgence. Microsoft's Three Mile Island agreement and Amazon's Susquehanna deal are already in the permitting pipeline. The NRC's reduced capacity is now a direct variable in AI infrastructure timelines, one most AI capex models have not yet incorporated.
Stay decoded. See you tomorrow.
— The Get AI Decoded Team
Enjoyed this article?
Subscribe free — AI news decoded for investors, every morning.
No spam. Unsubscribe anytime. Privacy Policy