OpenAI-Broadcom Custom AI Chip Partnership

Partnership Details

  • Deal value: $10 billion order commitment from OpenAI to Broadcom [1]

  • Production timeline: mass production begins in 2026, prototype validation in Q4 2025 [1]

  • Chip focus: AI inference optimization, not training workloads [1]

  • Manufacturing: TSMC 3nm process node fabrication [2]

  • Packaging: Broadcom’s 3.5D XDSiP technology enabling 6,000 mm² of silicon [3]

  • Memory: SK hynix HBM3E with 12-layer stacks, 1.18 TB/s bandwidth [4]

OpenAI’s partnership with Broadcom represents a shift toward custom silicon for inference workloads. The chip will be used internally by OpenAI, not sold to external customers.

Announcement and Market Impact

September 2025 Disclosure

On September 4, 2025, sources confirmed OpenAI as Broadcom’s fourth custom AI chip customer [1]. Broadcom CEO Hock Tan had announced the $10 billion order during Q3 2025 earnings without naming the client.

Market response:

  • Broadcom shares: +4.5% in after-hours trading [1]
  • Semiconductor equipment suppliers also gained
  • Nvidia faced customer concentration questions

Rationale

OpenAI CEO Sam Altman said in August 2025: “We are prioritizing compute in light of the increased demand from GPT-5,” with plans to “double our compute fleet over the next 5 months.” [1]

Custom chip objectives:

  • Reduce inference operational costs
  • Optimize for ChatGPT computational patterns
  • Decrease single-supplier dependency
  • Control the hardware stack

Technical Architecture

Chip Specifications

Component | Specification | Supplier | Notes
Process node | TSMC 3nm | TSMC | 3nm mass production node [2]
Packaging | 3.5D XDSiP | Broadcom | 6,000 mm² silicon, 12 HBM stacks [3]
Memory | HBM3E | SK hynix | 12-layer, 1.18 TB/s bandwidth [4]
Interconnect | Jericho3-AI | Broadcom | 26 petabits/sec, 32K chip scaling [5]
Architecture | Custom ASIC | OpenAI/Broadcom | Inference-optimized design
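
As a quick sanity check on the table, the sketch below combines the published figures into two derived numbers: aggregate HBM bandwidth per package and the per-chip share of the Jericho3-AI fabric. It assumes the 1.18 TB/s figure applies per HBM3E stack and that the 26 petabit/sec fabric is shared evenly across a 32,768-chip cluster; both are working assumptions, not disclosed specifications.

```python
# Sanity check on the published chip figures (assumptions noted inline).

HBM_STACKS_PER_PACKAGE = 12        # per 3.5D XDSiP package [3]
BANDWIDTH_PER_STACK_TBS = 1.18     # TB/s, assumed to be per HBM3E stack [4]

FABRIC_CAPACITY_PBPS = 26          # petabits/sec, Jericho3-AI [5]
CLUSTER_CHIPS = 32_768             # "32K chip scaling" [5]

# Aggregate HBM bandwidth per package (TB/s)
package_bw_tbs = HBM_STACKS_PER_PACKAGE * BANDWIDTH_PER_STACK_TBS

# Even per-chip share of fabric bandwidth (petabits/s -> gigabits/s)
per_chip_fabric_gbps = FABRIC_CAPACITY_PBPS * 1_000_000 / CLUSTER_CHIPS

print(f"HBM bandwidth per package: ~{package_bw_tbs:.1f} TB/s")
print(f"Fabric bandwidth per chip: ~{per_chip_fabric_gbps:.0f} Gb/s")
```

Under these assumptions, each package would see roughly 14 TB/s of memory bandwidth and each chip roughly 800 Gb/s of fabric bandwidth.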

Broadcom’s Role

Broadcom provides: [6]

  • Silicon-proven HBM controller and PHY IP
  • 2.5D/3D packaging expertise
  • High-speed networking interfaces
  • Supply chain management
  • Design-to-manufacturing coordination

The model parallels Broadcom’s Google TPU relationship: Broadcom manages implementation while the customer defines the architecture.

3.5D XDSiP Packaging

Broadcom’s eXtreme Dimension System in Package (XDSiP) specifications: [3]

  • Face-to-face chiplet stacking with hybrid copper bonding
  • 12 HBM memory stacks per package
  • 6,000 mm² total silicon area
  • 2.5D interposer with 3D stacking

XDSiP production begins in early 2026.
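
To give a feel for how the 6,000 mm² budget relates to the 12 HBM stacks, here is a rough area breakdown. The ~110 mm² per-stack footprint is a hypothetical placeholder in line with current HBM die sizes, not a Broadcom-published XDSiP figure.

```python
# Illustrative silicon area budget for one 3.5D XDSiP package.
# The per-stack HBM footprint is an assumed placeholder, not a disclosed spec.

TOTAL_SILICON_MM2 = 6_000     # total silicon per package [3]
HBM_STACKS = 12               # HBM stacks per package [3]
HBM_FOOTPRINT_MM2 = 110       # assumed footprint of one HBM stack

hbm_area_mm2 = HBM_STACKS * HBM_FOOTPRINT_MM2
compute_area_mm2 = TOTAL_SILICON_MM2 - hbm_area_mm2  # logic, base dies, I/O

print(f"HBM area:              ~{hbm_area_mm2:,} mm^2")
print(f"Compute and base dies: ~{compute_area_mm2:,} mm^2")
```

On this rough split, most of the package area is left for the stacked compute and base dies, which is the point of the face-to-face 3.5D approach.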

Supply Chain Ecosystem

Tier 1: Primary Partners

Direct Partners

  • TSMC: 3nm wafer fabrication, CoWoS packaging support [2]

  • SK hynix: HBM3E memory supply, 46-49% market share [4]

  • ASE Technology: 40-50% of outsourced CoWoS-S packaging [7]

  • Amkor: additional packaging capacity, Arizona facility [7]

Dependencies

Concentrated supplier dependencies:

  • ASML: EUV lithography monopoly for sub-7nm nodes (31% of revenue from TSMC) [8]
  • Synopsys/Cadence: EDA tools (no alternatives at scale)
  • Shin-Etsu/SUMCO: ~60% of the silicon wafer market [9]

See Broadcom Supply Chain Ecosystem for details.

Comparison with Industry Precedents

Hyperscaler Custom Silicon

Company | Custom chip | Partner | Focus | Status
Google | TPU v5 | Broadcom | Training & inference | Production
Amazon | Trainium2 | Annapurna Labs | Training | 2025 launch
Meta | MTIA v2 | Internal | Inference | Production
Microsoft | Maia 100 | Internal | Training | Testing
OpenAI | Unnamed | Broadcom | Inference | 2026 target

Nvidia Response Strategy

Nvidia advantages:

  • CUDA software ecosystem
  • Development cycle speed
  • Economies of scale
  • Proven reliability

Custom chips can nonetheless achieve 2-10x cost/performance improvements for specific workloads.

Production Timeline

2025 Milestones

  • Q1: design finalization and tape-out
  • Q2: prototype fabrication at TSMC
  • Q3: validation and testing phase
  • Q4: pre-production runs, yield optimization

2026 Targets

  • Q1: mass production begins
  • Q2: volume ramp to thousands of units/month
  • Q3: full deployment in OpenAI infrastructure

Economic Implications

Cost Structure Analysis

Custom silicon economics (a rough worked example follows this list):

  • Upfront costs: ~$500M for design and NRE
  • Per-chip cost: $10,000-15,000 at volume
  • TCO reduction: 40-60% vs. Nvidia H100 for inference [10]
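
A minimal sketch of how these estimates combine, assuming the ~$500M NRE is amortized over a multi-year volume in the hundreds of thousands of units; the volume figure is illustrative, not from the deal.

```python
# Effective per-chip cost once NRE is amortized over volume.
# Inputs are the estimates above plus an illustrative volume assumption.

NRE_USD = 500e6                          # ~$500M design and NRE (estimate)
UNIT_COST_RANGE_USD = (10_000, 15_000)   # per-chip cost at volume (estimate)
ASSUMED_VOLUME = 700_000                 # illustrative multi-year chip volume

for unit_cost in UNIT_COST_RANGE_USD:
    effective_cost = unit_cost + NRE_USD / ASSUMED_VOLUME
    print(f"${unit_cost:,}/chip -> ~${effective_cost:,.0f}/chip with NRE amortized")
```

At volumes of this scale the NRE adds only a few hundred dollars per chip, so the per-unit cost and the claimed 40-60% TCO advantage dominate the economics.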

Market Dynamics

The $10 billion deal represents:

  • 667,000-1,000,000 chips over a multi-year period at $10,000-15,000 per chip (derivation sketched below)
  • TSMC 3nm capacity allocation
  • A new custom silicon revenue stream for Broadcom
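
The chip-count range in the first bullet follows directly from dividing the order commitment by the per-chip cost estimate, assuming the full $10 billion goes to chips at those unit prices.

```python
# Implied chip volume from the $10B commitment at the quoted unit costs.

ORDER_COMMITMENT_USD = 10e9

for unit_cost in (15_000, 10_000):
    implied_chips = ORDER_COMMITMENT_USD / unit_cost
    print(f"At ${unit_cost:,}/chip: ~{implied_chips:,.0f} chips")
```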

HSBC projects that Broadcom’s custom chip business will grow faster than Nvidia’s GPU business in 2026 [1].

Risks and Challenges

Technical Risks

  • First-generation custom design complexity
  • Software ecosystem development
  • Integration with OpenAI infrastructure

Supply Chain Risks

  • TSMC capacity constraints at the 3nm node
  • HBM memory shortage through 2025 [4]
  • Single points of failure (ASML EUV, TSMC foundry)

Competitive Risks

  • Nvidia’s innovation pace
  • Other hyperscalers’ custom silicon progress
  • Technology obsolescence risk

Implications

For OpenAI

  • Reduced Nvidia dependency for inference
  • Lower TCO for ChatGPT operations
  • Hardware roadmap control
  • Hardware-software co-design

For the Industry

  • Custom ASIC model validation
  • Semiconductor verticalization
  • GPU business model pressure
  • Growing importance of advanced packaging

For Broadcom

  • Strengthened position in AI custom silicon
  • $10 billion in revenue starting 2026
  • Competitive positioning
  • Packaging investment validation

Future Developments

Expected Evolution

  • Second-generation chip: 2028-2029
  • Potential expansion into training workloads
  • Possible external commercialization post-2030
  • Vertical integration of the AI stack

Technology Roadmap

  • 2nm process by 2028
  • Packaging beyond 3.5D
  • Optical interconnects
  • Chiplet standardization

References

[1] Financial Times. (2025, September 4). OpenAI set to start mass production of its own AI chips with Broadcom.

[2] Broadcom. (2024, December 5). Broadcom delivers industry’s first 3.5D F2F technology for AI XPUs.

[3] Tom’s Hardware. (2024, December 5). Broadcom unveils gigantic 3.5D XDSiP platform for AI XPUs — 6,000 mm² of stacked silicon with 12 HBM modules.

[4] AnandTech. (2025). SK hynix reports that 2025 HBM memory supply has nearly sold out.

[5] Broadcom. (2025). Broadcom unveils Jericho3-AI, the industry’s highest performance Ethernet fabric.

[6] SemiAnalysis. (2023, August 30). Broadcom’s Google TPU revenue explosion, networking boom, VMware integration.

[7] TrendForce. (2025, November 4). TSMC’s CoWoS prices may rise 20%; ASE and Amkor compete for outsourcing orders.

[8] The Motley Fool. (2025). 2 growth stocks that could win big from TSMC’s $40 billion spending plan.

[9] Electronics and You. (2025). Top 10 global silicon wafer manufacturers 2025.

[10] Investing.com. (2024, June 12). Broadcom rides on AI chip demand to deliver upbeat revenue forecast.
