OpenAI-Broadcom Custom AI Chip Partnership
Partnership details
- Deal value: $10 billion order commitment from OpenAI to Broadcom [1]
- Production timeline: mass production begins in 2026; prototype validation in Q4 2025 [1]
- Chip focus: AI inference optimization, not training workloads [1]
- Manufacturing: TSMC 3nm process node fabrication [2]
- Packaging: Broadcom's 3.5D XDSiP technology enabling 6,000mm² of silicon [3]
- Memory: SK hynix HBM3E with 12-layer stacks, 1.18 TB/s bandwidth [4]
OpenAI's partnership with Broadcom represents a shift toward custom silicon for inference workloads. The chip will be used internally by OpenAI, not sold to external customers.
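The inference focus is easiest to see with a back-of-the-envelope bandwidth calculation: during autoregressive decoding, each generated token must stream the model's weights out of HBM, so memory bandwidth, not raw compute, typically caps throughput. A minimal sketch under stated assumptions — the 70B-parameter model size and FP16 precision are hypothetical; the per-stack bandwidth and stack count are the figures cited above:

```python
# Back-of-the-envelope: memory-bandwidth-bound inference throughput.
# Model size and precision below are illustrative assumptions, not chip specs.

def tokens_per_second(model_params_b: float, bytes_per_param: int,
                      hbm_bandwidth_tbs: float) -> float:
    """Decode-throughput ceiling when every token streams all weights.

    model_params_b    -- model size in billions of parameters (assumed)
    bytes_per_param   -- 2 for FP16/BF16, 1 for INT8/FP8
    hbm_bandwidth_tbs -- aggregate HBM bandwidth in TB/s
    """
    bytes_per_token = model_params_b * 1e9 * bytes_per_param
    return hbm_bandwidth_tbs * 1e12 / bytes_per_token

# Cited figures: 1.18 TB/s per HBM3E stack, 12 stacks per package.
aggregate_tbs = 1.18 * 12  # ~14.2 TB/s per package

# Hypothetical 70B-parameter model in FP16: ~101 tokens/s per package.
print(tokens_per_second(70, 2, aggregate_tbs))
```

Halving bytes per parameter (e.g., via FP8 quantization) or adding bandwidth raises this ceiling proportionally, which is one reason custom inference silicon concentrates on the memory system.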
Announcement and market impact
September 2025 disclosure
On September 4, 2025, sources confirmed OpenAI as Broadcom's fourth custom AI chip customer [1]. Broadcom CEO Hock Tan had announced the $10 billion order during Q3 2025 earnings without naming the client.
Market response:
- Broadcom shares: +4.5% after-hours [1]
- Semiconductor equipment suppliers gained
- Nvidia faced customer-concentration questions
Rationale
OpenAI CEO Sam Altman in August 2025: "We are prioritizing compute in light of the increased demand from GPT-5," with plans to "double our compute fleet over the next 5 months." [1]
Custom chip objectives:
- Reduce inference operational costs
- Optimize for ChatGPT computational patterns
- Decrease single-supplier dependency
- Control the hardware stack
Technical architecture
Chip specifications
| Component | Specification | Supplier | Notes |
|---|---|---|---|
| Process node | TSMC 3nm | TSMC | 3nm mass production node [2] |
| Packaging | 3.5D XDSiP | Broadcom | 6,000mm² silicon, 12 HBM stacks [3] |
| Memory | HBM3E | SK hynix | 12-layer, 1.18 TB/s bandwidth [4] |
| Interconnect | Jericho3-AI | Broadcom | 26 petabits/sec, 32K-chip scaling [5] |
| Architecture | Custom ASIC | OpenAI/Broadcom | Inference-optimized design |
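The interconnect row implies a per-chip fabric budget. A rough check, assuming "32K" means 32,000 chips (the exact cluster size is not specified in the source):

```python
# Rough fabric math from the table's Jericho3-AI figures.

FABRIC_PBPS = 26        # cited: 26 petabits/sec total fabric bandwidth
CLUSTER_CHIPS = 32_000  # cited as "32K"; the exact count is an assumption

# Total bits/s divided across the cluster, expressed in Gb/s per chip.
per_chip_gbps = FABRIC_PBPS * 1e15 / CLUSTER_CHIPS / 1e9
print(f"{per_chip_gbps:.1f} Gb/s per chip")  # ~812.5 Gb/s at full scale
```

That is on the order of a single 800G Ethernet port per chip at full cluster scale, consistent with Broadcom's Ethernet-fabric positioning.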
Broadcom's role
Broadcom provides [6]:
- Silicon-proven HBM controller and PHY IP
- 2.5D/3D packaging expertise
- High-speed networking interfaces
- Supply chain management
- Design-to-manufacturing coordination
The model parallels Broadcom's Google TPU relationship: Broadcom manages implementation while the customer defines the architecture.
3.5D XDSiP packaging
Broadcom's eXtreme Dimension System in Package (XDSiP) specifications [3]:
- Face-to-face chiplet stacking with hybrid copper bonding
- 12 HBM memory stacks per package
- 6,000mm² total silicon area
- 2.5D interposer combined with 3D stacking
XDSiP production begins in early 2026.
Supply chain ecosystem
Tier 1: primary partners
Direct partners
Dependencies
Concentrated supplier dependencies:
- ASML: EUV lithography monopoly for sub-7nm nodes (31% of revenue from TSMC) [8]
- Synopsys/Cadence: EDA tools (no alternatives at scale)
- Shin-Etsu/SUMCO: ~60% of the silicon wafer market [9]
See Broadcom supply chain ecosystem for details.
Comparison with industry precedents
Hyperscaler custom silicon
| Company | Custom chip | Partner | Focus | Status |
|---|---|---|---|---|
| Google | TPU v5 | Broadcom | Training & inference | Production |
| Amazon | Trainium2 | Annapurna Labs | Training | 2025 launch |
| Meta | MTIA v2 | Internal | Inference | Production |
| Microsoft | Maia 100 | Internal | Training | Testing |
| OpenAI | Unnamed | Broadcom | Inference | 2026 target |
Nvidia response strategy
Nvidia advantages:
- CUDA software ecosystem
- Development cycle speed
- Economies of scale
- Proven reliability
Custom chips can nonetheless achieve 2-10x cost/performance improvements for specific workloads.
Production timeline
2025 milestones
- Q1: Design finalization and tapeout
- Q2: Prototype fabrication at TSMC
- Q3: Validation and testing phase
- Q4: Pre-production runs, yield optimization
2026 targets
- Q1: Mass production begins
- Q2: Volume ramp to thousands of units per month
- Q3: Full deployment in OpenAI infrastructure
Economic implications
Cost structure analysis
Custom silicon economics:
- Upfront costs: ~$500M for design and NRE
- Per-chip cost: $10,000-15,000 at volume
- TCO reduction: 40-60% vs. Nvidia H100 for inference [10]
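These figures combine into a simple amortization model: the one-time design/NRE spend is spread over the production volume and added to the marginal chip cost. A sketch using the article's numbers; the volumes tried and the midpoint unit cost are assumptions:

```python
# Sketch: amortizing one-time NRE over volume to get an all-in per-chip cost.
# NRE and unit-cost range come from the article; volumes are assumptions.

NRE_USD = 500e6         # cited: ~$500M design and NRE
UNIT_COST_USD = 12_500  # assumed midpoint of the cited $10,000-15,000 range

def all_in_cost(volume: int) -> float:
    """Per-chip cost including amortized one-time engineering spend."""
    return UNIT_COST_USD + NRE_USD / volume

for volume in (100_000, 500_000, 1_000_000):
    print(f"{volume:>9,} chips -> ${all_in_cost(volume):,.0f}/chip")
```

At the volumes implied by the deal (hundreds of thousands of chips), the NRE adds well under $5,000 per chip, so the economics hinge mostly on the marginal unit cost.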
Market dynamics
The $10 billion deal represents:
- 667,000-1,000,000 chips over a multi-year period (at $10,000-15,000 per chip)
- TSMC 3nm capacity allocation
- Broadcom custom silicon revenue stream
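The chip-count range above follows directly from dividing the deal value by the cited per-chip cost range:

```python
# Check on the implied chip count: $10B order divided by per-chip cost.

DEAL_USD = 10e9
low, high = 10_000, 15_000  # cited per-chip cost range at volume

# Higher unit cost -> fewer chips, and vice versa.
print(f"{DEAL_USD / high:,.0f} to {DEAL_USD / low:,.0f} chips")
```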
HSBC projects that Broadcom's custom chip business will grow faster than Nvidia's GPU business in 2026 [1].
Risks and challenges
Technical risks
- First-generation custom design complexity
- Software ecosystem development
- Integration with OpenAI infrastructure
Supply chain risks
- TSMC capacity constraints at the 3nm node
- HBM memory shortage through 2025 [4]
- Single points of failure (ASML EUV, TSMC foundry)
Competitive risks
- Nvidia's innovation pace
- Other hyperscalers' custom silicon progress
- Technology obsolescence risk
Implications
For OpenAI
- Reduced Nvidia dependency for inference
- Lower TCO for ChatGPT operations
- Hardware roadmap control
- Hardware-software co-design
For the industry
- Validation of the custom ASIC model
- Semiconductor verticalization
- Pressure on the GPU business model
- Growing importance of advanced packaging
For Broadcom
- Strengthened position in AI custom silicon
- $10 billion in revenue starting 2026
- Competitive positioning
- Validation of packaging investments
Future developments
Expected evolution
- Second-generation chip: 2028-2029
- Potential expansion to training workloads
- Possible external commercialization after 2030
- Vertical integration of the AI stack
Technology roadmap
- 2nm process by 2028
- Packaging beyond 3.5D
- Optical interconnects
- Chiplet standardization
References
[2] Broadcom. (2024, December 5). Broadcom delivers industry's first 3.5D F2F technology for AI XPUs.
[4] AnandTech. (2025). SK hynix reports that 2025 HBM memory supply has nearly sold out.
[5] Broadcom. (2025). Broadcom unveils Jericho3-AI, the industry's highest performance Ethernet fabric.
[8] The Motley Fool. (2025). 2 growth stocks that could win big from TSMC's $40 billion spending plan.
[9] Electronics and You. (2025). Top 10 global silicon wafer manufacturers 2025.
[10] Investing.com. (2024, June 12). Broadcom rides on AI chip demand to deliver upbeat revenue forecast.