Nvidia opens its interconnect ecosystem to rival custom silicon as Microsoft commits $7 billion to dedicated power generation for its Texas AI campus.
🗄️ Nvidia Opens NVLink Fusion to Marvell Custom Chips — Turning Its Interconnect Into an AI Platform
Decoded: Nvidia and Marvell announced on March 31 that Marvell is joining the NVLink Fusion ecosystem — the rack-scale platform Nvidia introduced to let third-party silicon integrate into Nvidia AI infrastructure. Under the agreement, Marvell provides custom XPUs (accelerators designed for specific AI inference workloads) and NVLink Fusion-compatible scale-up networking components, while Nvidia contributes its Vera CPU, ConnectX NICs, BlueField DPUs, NVLink interconnect, and Spectrum-X switches. Hyperscalers building on Nvidia's rack-scale architecture can therefore deploy Marvell custom silicon alongside Nvidia components in a single integrated platform. Bank of America raised its Marvell price target following the announcement; Stifel reiterated Buy with a $120 target; William Blair reiterated Outperform. (Nvidia press release, Marvell press release, March 31, 2026)
Why it matters: Nvidia is deliberately opening NVLink — previously a moat that locked customers into all-Nvidia configurations — to a competitor's custom silicon. The logic is platform economics: hyperscalers increasingly need custom XPUs for specific inference workloads alongside general-purpose GPU clusters, and if Nvidia refuses to accommodate this, those workloads migrate to non-Nvidia architectures entirely. By making NVLink Fusion the standard for mixed-silicon AI racks, Nvidia retains its position as the infrastructure backbone — interconnect, NICs, CPUs, switches — even when the accelerator is not an H100 or B200. For Marvell (MRVL), NVLink Fusion compatibility unlocks a significantly larger addressable market: custom XPUs built for hyperscaler inference customers can now integrate into Nvidia-native infrastructure without a separate rack architecture. The analyst actions — a price-target raise and two reiterated buy-equivalent ratings — reflect a view that Marvell's AI revenue opportunity expanded materially on a single announcement.
💡 Microsoft, Chevron, and Engine No. 1 Sign Exclusive $7B Texas Power Deal for AI Data Center Campus
Decoded: Microsoft, Chevron, and activist fund Engine No. 1 entered an exclusivity agreement on March 31 for power generation and supply to underpin a large AI data center campus in West Texas, according to Reuters and Bloomberg. The deal involves a $7 billion energy complex delivering electricity directly to Microsoft's AI infrastructure development. Engine No. 1 — the activist fund that won three Exxon board seats in 2021 by pushing energy transition strategy — is the capital and development partner on the power side. Chevron is the energy operator. Microsoft is the anchor power offtake customer. No generation technology has been specified publicly. (Reuters, Bloomberg, March 31, 2026)
Why it matters: Microsoft (MSFT) is executing the same private-power-generation model Meta applied to its Louisiana campus — bypassing the grid and funding dedicated energy infrastructure for AI data center campuses. The Chevron (CVX) partnership is structurally notable: the oil major's involvement signals industrial-scale gas or hybrid generation at the capacity required for hyperscale AI compute. For Engine No. 1, the deal represents a pivot from activist shareholder to AI infrastructure developer — monetizing energy expertise and utility relationships at a moment when power supply is the primary bottleneck in the AI build cycle. At $7 billion, the commitment matches the largest dedicated power deals in the current AI infrastructure wave and signals that Microsoft's AI capital deployment is accelerating into Q2 2026.
Stay decoded. See you tomorrow.
— The Get AI Decoded Team