Broadcom just forecast $100 billion in AI chip revenue for next year — from chips it doesn't even fully design or fabricate.
The Company Running the Hyperscaler Chip Assembly Line
Decoded: Broadcom operates as a custom silicon partner. When Google wanted to build its own AI processor rather than pay Nvidia's prices, it turned to Broadcom's engineers to help translate that design into a chip architecture that could actually be manufactured. The resulting product — Google's Tensor Processing Unit, or TPU — runs inference across Google Search, YouTube recommendations, and Google Cloud. The same model applies to Meta's MTIA accelerator, which handles ranking and recommendations across Facebook and Instagram at a scale that would cost billions in Nvidia GPU spend annually.
On March 4, Broadcom reported fiscal Q1 2026 results that made this business case undeniable: total revenue of $19.31 billion, up 29% year over year, beating analyst estimates. AI revenue — driven entirely by custom accelerators and AI networking — hit $8.4 billion, up 106% from a year earlier. On the post-earnings call, CEO Hock Tan delivered the headline projection: "Our visibility in 2027 has dramatically improved. Today, we have line of sight to achieve AI revenue from chips in excess of $100 billion in 2027." (Reuters, CNBC, March 4, 2026)
Broadcom's stock rose nearly 5% in extended trading that night — adding more than $42 billion in market value — before continuing higher the following session. The company now has six confirmed major custom silicon customers, with OpenAI joining Google, Meta, and others in placing large-scale orders.
Why Hyperscalers Are Building Their Own Chips — and Why Broadcom Captures the Wave
The shift to custom AI silicon isn't about engineering pride. It's about cost, efficiency, and strategic independence. Nvidia's GPUs are powerful but general-purpose — built to run many AI tasks well. A hyperscaler operating one specific model at billion-user scale can design a chip tuned precisely for that workload, cut unit costs substantially, and reduce dependency on a single, high-margin supplier.
Big Tech firms including Alphabet, Microsoft, Amazon, and Meta are expected to collectively spend more than $630 billion on AI infrastructure in 2026, according to Reuters. Every dollar directed toward custom silicon is a dollar not flowing to Nvidia at full margin. That's the structural pressure Nvidia CEO Jensen Huang was responding to at GTC when he pivoted messaging toward inference — the workload where Nvidia holds its largest performance advantage over custom ASICs.
Why it matters: Broadcom is uniquely positioned to benefit whether the custom chip trend accelerates or stabilizes. Its engineers work at the critical translation layer between a hyperscaler's chip concept and the physical design that TSMC can actually fabricate. Hock Tan disclosed that Broadcom will deliver the equivalent of one gigawatt's worth of Google TPUs for Anthropic's AI compute in 2026, rising to three gigawatts in 2027. OpenAI's first custom processor — a $10 billion order placed in December — is set to deploy at over one gigawatt scale in 2027. Melius Research analysts estimate Broadcom now has visibility into approximately 10 gigawatts of total AI chip demand in 2027 — equivalent to the electricity needs of more than 8 million U.S. households. (Reuters, March 5, 2026)
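The households comparison checks out as rough arithmetic. A minimal back-of-envelope sketch, assuming an average U.S. household uses about 10,600 kWh of electricity per year (an EIA-style estimate, not a figure from the article):

```python
# Back-of-envelope check: 10 GW of continuous demand vs. U.S. household electricity use.
# Assumption: ~10,600 kWh per household per year (approximate EIA average, not from the article).
GW_OF_DEMAND = 10
HOURS_PER_YEAR = 8760
AVG_HOUSEHOLD_KWH_PER_YEAR = 10_600

# 1 GW = 1,000,000 kW, so annual energy in kWh:
annual_kwh = GW_OF_DEMAND * 1_000_000 * HOURS_PER_YEAR
households = annual_kwh / AVG_HOUSEHOLD_KWH_PER_YEAR
print(f"{households / 1e6:.1f} million households")  # roughly 8.3 million
```

Running 10 gigawatts around the clock works out to roughly 8.3 million households' worth of annual electricity, consistent with the "more than 8 million" figure cited.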
For Q2 FY2026, Broadcom guided for $22 billion in total revenue — well above the $20.56 billion analyst consensus — with AI chip revenue alone expected to reach $10.7 billion in the quarter. The company also authorized a $10 billion share repurchase program through the end of 2026. Hock Tan confirmed Broadcom has fully secured leading-edge wafer capacity at TSMC to hit its 2027 targets — removing a supply chain risk that has clouded other semiconductor forecasts this year. (Reuters, March 4–5, 2026)
Bottom Line
Broadcom is the picks-and-shovels play for the custom silicon wave. Nvidia retains dominant GPU market share and will for the foreseeable future — but the fastest-growing segment of AI semiconductor demand is custom chips designed by hyperscalers actively trying to reduce that dependence. Broadcom is the engineering partner that makes those designs manufacturable.
The key risk is customer concentration: Google is Broadcom's largest custom silicon client by far, and a slowdown in Alphabet's AI capex would register directly in AVGO results. But the addition of OpenAI as a sixth customer — combined with Meta's continued MTIA commitment and Anthropic's multi-gigawatt TPU orders — signals real diversification in progress.
Broadcom trades at a premium to the broader chip sector. With AI revenue up 106% year over year and a credible CEO-level forecast of $100 billion by 2027, that premium reflects something real. This isn't the AI chip story most investors are following. It may be the most important one they're not.
— The AI Decoded Team