Nvidia delivered Q4 revenue of US$68bn (+73% YoY) and crossed US$100bn in annual profit, but the most important signal for the semiconductor supply chain isn't the headline numbers — it's what Nvidia revealed about where AI demand is heading next. CEO Jensen Huang's three-word thesis on the analyst call — "Compute equals revenues" — tells you the industry is crossing from building AI capabilities to deploying them at scale. That transition from training to inference changes who benefits and how across the entire chip supply chain.
For the past two years, the AI story has been about training: building ever-larger models that required massive GPU clusters. Now, companies are actually using these models — generating revenue from AI agents, coding assistants, and enterprise applications. Huang pointed to Claude Code, Claude Cowork, and OpenAI Codex as evidence, calling adoption "skyrocketing." When AI shifts from training to usage, the number of chips needed multiplies because inference happens everywhere, all the time, across millions of end users — not just in a handful of concentrated training clusters.
That shift matters enormously for TSMC (TSM). Training demand was concentrated among a few hyperscalers building massive clusters; inference demand is broader, spanning cloud providers, enterprises, and sovereign AI projects. More customers ordering more chips means higher total wafer demand at TSMC, even if individual chip orders are smaller. TSMC captures the volume regardless of which Nvidia architecture wins — Blackwell on N4P today, Vera Rubin on N3 tomorrow.
Vera Rubin, which began sampling this week, promises 10x lower inference costs than Blackwell. That step-change in economics could unlock entirely new use cases and customers who couldn't justify AI deployment before. For ASE Technology (ASX US), Vera's design introduces a different memory configuration: Huang described Vera as "the only data center CPU that supports LPDDR5," which means a fundamentally different package substrate and memory interface compared with Blackwell's HBM-centric design. ASE faces a 12-18 month window running both packaging architectures simultaneously: high utilization, but also higher capex as it tools up for the new configuration.
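The market-expansion logic behind a 10x cost reduction can be made concrete with a back-of-envelope sketch: an AI feature is deployable only when serving cost falls below the revenue it earns per query. The dollar figures below are illustrative assumptions; only the 10x ratio comes from Nvidia's claim.

```python
# Toy economics of the inference-cost cliff. All dollar amounts are
# hypothetical, not Nvidia disclosures; only the 10x ratio is sourced.

def viable(revenue_per_1k_queries: float, cost_per_1k_queries: float) -> bool:
    """True if a feature earns more per 1,000 queries than it costs to serve."""
    return revenue_per_1k_queries > cost_per_1k_queries

blackwell_cost = 0.50              # assumed $ per 1,000 queries on Blackwell
rubin_cost = blackwell_cost / 10   # Nvidia's claimed 10x reduction

# A thin-monetization use case (say $0.10 per 1,000 queries) flips from
# uneconomic to profitable purely on the hardware transition.
print(viable(0.10, blackwell_cost), viable(0.10, rubin_cost))
```

Every use case sitting between the old and new cost floors becomes addressable volume, which is why a cost reduction at the chip level shows up as unit demand across the supply chain.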
Huang also signalled that Nvidia is coming for the CPU market alongside GPUs. His argument, stripped of the technical jargon: as AI moves from training to real-world usage, the regular processors sitting next to the GPUs become a bottleneck. Nvidia believes its own CPUs — purpose-built for AI workloads — are fundamentally better suited than Intel's or AMD's general-purpose designs. If Nvidia gains share in the CPU socket of inference servers, that's incremental wafer demand at TSMC and incremental packaging at ASE, on top of the GPU volumes.
CFO Colette Kress noted hyperscalers account for "a little over 50% of Data Center revenue," with the top two customers at 36%. But the non-hyperscaler segment — AI model-makers, enterprises, sovereigns — is "a very fast-growing area." That diversification routes through different server OEMs, which is where Hon Hai (2317 TT) captures incremental demand as Dell, HPE, and Supermicro scale AI server production. Every percentage point of Nvidia revenue that shifts from hyperscaler direct-build to OEM channels benefits Foxconn's server assembly margins.
Nvidia's $20B deal to acquire Groq's inference chip assets (announced December 2025) reinforces the same direction — Nvidia is building a full-stack inference platform, not just selling GPUs. Groq's specialized inference chips are currently fabbed at Samsung, but if Nvidia integrates Groq's technology into future Nvidia silicon, those wafers migrate to TSMC — another incremental demand signal.
The inference push extends beyond the data center. Nvidia and MediaTek are jointly launching the N1 personal AI computer chip in H1 2026, with Dell and Lenovo among the first OEM partners. This takes Nvidia's inference platform from cloud to desktop and deepens MediaTek's (2454 TT) role as a strategic Nvidia partner well beyond its traditional mobile business — a signal that AI inference is becoming pervasive enough to justify dedicated silicon at every level of computing.
Hyperscaler CapEx expectations are now "approaching $700 billion" for 2026, up $120bn since January. That flows through to Delta Electronics (2308 TT) for power supplies and thermal management in AI data centers. China remains zero-revenue for Nvidia despite H200 approvals, and nearly 9 gigawatts of Blackwell infrastructure is now deployed globally. (25 Feb 2026)
Linked stocks: NVDA, TSM, ASX, 2317.TW, 2308.TW
Sources: Bloomberg, Reuters, NYT, Fortune
Chinese AI firm DeepSeek is deliberately withholding pre-release access to its upcoming flagship model from Nvidia and other US chipmakers, breaking the standard industry practice where AI labs share models ahead of launch so chip designers can tune hardware-software interaction for performance. This is a meaningful escalation from last week's story about DeepSeek training on smuggled H100s.
The implications are structural. If Chinese AI labs optimize exclusively for domestic hardware (Huawei Ascend, Cambricon), it creates a parallel ecosystem where Nvidia GPUs are suboptimal for running Chinese frontier models. This is the mirror image of US export controls: instead of America restricting chip sales to China, China restricts model access to American chipmakers. The decoupling is now bidirectional.
Huang's own words from the earnings call provide the counterpoint: "To sustain its leadership position in AI compute, America must engage every developer and be the platform of choice for every commercial business, including those in China." He also warned that "our competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long term." DeepSeek's move directly undermines Nvidia's engagement vision.
For TSMC (TSM), this is a wash near-term, since it manufactures for both ecosystems, but long-term it fragments the design optimization loop. If Chinese AI chips diverge architecturally from CUDA, TSMC's process technology advantage becomes the only bridge between the two worlds, which could paradoxically strengthen its strategic positioning. For Global Unichip (3443 TT), the TSMC-affiliated ASIC design house, a bifurcated AI chip world means more custom silicon projects from both sides: each ecosystem needs its own optimized chips, potentially doubling the design pipeline. (25 Feb 2026)
Linked stocks: NVDA, TSM, INTC, 3443.TW
Sources: Reuters (exclusive)
Google has signed a multibillion-dollar deal to supply Meta with AI chips, marking the first time one major tech giant has bought custom-designed processors from another. This is a significant shift: until now, hyperscalers built custom chips for their own internal use. Google selling its TPUs to Meta signals that custom AI silicon has matured from an internal experiment into a product business — and that creates new competitive dynamics for Nvidia and new demand patterns across the chip supply chain.
Why this matters beyond the headline: if Meta diversifies even 10-15% of its AI workload toward Google TPUs, it reduces Meta's dependence on Nvidia and gives both hyperscalers more negotiating leverage on GPU pricing. CFO Kress acknowledged on Nvidia's call that hyperscalers are "about 50% of revenue" while insisting the non-hyperscaler segment is growing faster — but this deal suggests the hyperscaler half is becoming more competitive, not less.
For TSMC (TSM), this is neutral-to-positive. Google's TPUs are manufactured at TSMC on advanced 3nm and 4nm nodes, and they require the same advanced packaging (CoWoS interposer technology) as Nvidia's GPUs. Adding Meta's TPU volume to Google's existing allocation tightens TSMC's already-constrained advanced packaging capacity further. Every packaging slot allocated to Google TPUs is one fewer available for Nvidia, AMD, or Broadcom — the packaging bottleneck becomes a strategic choke point.
ASE Technology (ASX US) benefits from the same dynamic: more custom AI chip designs from more customers means more advanced packaging and testing demand across the board. Global Unichip (3443 TT), the TSMC-affiliated ASIC design services house, is positioned to capture the trend as hyperscalers increasingly need custom chip design expertise; the Google-Meta deal validates that custom silicon at scale is now the norm, not the exception.
For Delta Electronics (2308 TT), the diversification of AI chip architectures in data centers creates more complex power delivery requirements. A data center running both Nvidia GPUs and Google TPUs needs more sophisticated power management than a homogeneous GPU farm — different chips draw power differently, and that complexity is a tailwind for Delta's high-end server power supply business.
The second-order signal: if hyperscalers are now willing to buy chips from each other, the addressable market for custom AI silicon expands dramatically. MediaTek (2454 TT), which has been building ASIC design capabilities for data center customers, sees validation that the market for non-Nvidia AI chips at scale is real and growing. (26 Feb 2026)
Linked stocks: GOOGL, META, NVDA, TSM, ASX, 3443.TW, 2308.TW, 2454.TW
Sources: The Information via Reuters
TSMC is pressuring Japanese suppliers to localize electroplating-additive production inside Taiwan, signaling a new phase of supply chain internalization, according to Digitimes. The world's largest contract chipmaker, now valued above $2 trillion, is reducing dependence on imported process chemicals by demanding suppliers build capacity on-island.
Read alongside the parallel Digitimes report on accelerating US semiconductor reshoring, a pattern emerges: the world's two largest chip economies are simultaneously pulling supply chains inward. The US is reducing reliance on Taiwan for finished chips; Taiwan is reducing reliance on Japan for process materials. Both moves increase resilience but raise costs.
For our Taiwan coverage universe, this is a tailwind for domestic suppliers and a headwind for margins. Delta Electronics (2308 TT) benefits as power infrastructure demand grows with new chemical processing facilities. Taiwan's semiconductor ecosystem becomes more vertically integrated but less cost-efficient — a structural shift from the lean, globally distributed model that drove the industry's first $2T company. (24-26 Feb 2026)
Linked stocks: TSM, 2308.TW, 2344.TW
Cloud AI applications are creating unexpected demand for 8-inch (200mm) wafers, the "legacy" technology supposedly in secular decline, as AI data centers require massive quantities of power management ICs, sensors, and analog chips that run on older, proven manufacturing processes rather than cutting-edge nodes. This is the clearest sign yet that the AI infrastructure buildout is a broad-based capex supercycle, not a narrow GPU story.
Think of it this way: every AI server rack built around Nvidia's latest GPUs also needs dozens of supporting chips — voltage regulators that manage power delivery, temperature sensors that prevent overheating, analog chips that handle signal conversion. None of these require TSMC's most advanced manufacturing. They need reliable, mature capacity on 200mm wafers — the kind of production that semiconductor analysts had written off as permanently declining.
UMC (UMC US) is the most direct beneficiary. The company derives the majority of its revenue from mature 28nm-and-above nodes and has been fighting utilization headwinds as smartphone demand weakened. If AI-driven demand for power and analog chips stabilizes 8-inch utilization above 80%, UMC's earnings trajectory inflects without requiring any investment in expensive advanced nodes — that's pure margin leverage on existing fabs. The market has been pricing UMC as a secular loser to TSMC's leading-edge dominance; AI-driven mature node demand is the counter-narrative.
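The margin-leverage point can be made concrete with a toy fab model: foundry costs are mostly fixed (depreciation, staffing), so incremental wafers from higher utilization fall disproportionately to profit. The capacity, pricing, and cost figures below are illustrative assumptions, not UMC disclosures.

```python
# Toy model of operating leverage at a mature-node fab: fixed costs
# dominate, so a modest utilization gain lifts margin far more than
# revenue. All inputs are hypothetical, not UMC figures.

def operating_margin(utilization: float, capacity_wafers: int,
                     price_per_wafer: float, fixed_cost: float,
                     variable_cost_per_wafer: float) -> float:
    wafers = int(capacity_wafers * utilization)
    revenue = wafers * price_per_wafer
    profit = revenue - fixed_cost - wafers * variable_cost_per_wafer
    return profit / revenue

# Same fab, two utilization scenarios (monthly figures, assumed):
low = operating_margin(0.65, 100_000, 1500.0, 60_000_000.0, 400.0)
high = operating_margin(0.85, 100_000, 1500.0, 60_000_000.0, 400.0)
print(round(low, 3), round(high, 3))
```

Under these assumptions a 20-point utilization gain more than doubles operating margin, which is the mechanics behind "pure margin leverage on existing fabs": no new capex, just fuller loading.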
Winbond (2344 TT) captures a different angle: specialty DRAM and NOR flash for edge AI devices, IoT sensors, and the embedded controllers inside every server's baseboard management system. These aren't glamorous chips, but every AI server needs them, and Winbond is one of the few remaining independent suppliers. Macronix (2337 TT) benefits similarly through NOR flash demand for firmware storage in networking equipment and server management — the connective tissue of AI data centers that analysts overlook when counting only GPUs.
Himax Technologies (HIMX US) adds another dimension. While known for display driver ICs, Himax has been pivoting toward WiseEye AI sensing chips for edge applications — smart cameras, industrial sensors, autonomous systems — all manufactured on mature process nodes. If the 8-inch wafer demand revival extends to edge AI inference, Himax's AI sensing business gets a structural tailwind from the same infrastructure buildout driving GPU demand at the leading edge. The AI story isn't just about the biggest, most expensive chips — it's increasingly about the ecosystem of smaller chips that make everything else work. (26 Feb 2026)
Linked stocks: UMC, 2344.TW, 2337.TW, HIMX
Sources: DigiTimes
Dell's fiscal 2027 revenue forecast beat estimates, driven explicitly by AI server demand, confirming the AI infrastructure buildout is broadening beyond hyperscalers into enterprise — exactly the trend Kress highlighted on Nvidia's call when she noted non-hyperscaler customers are "a very fast-growing area." But the supply chain implications run deeper than a single OEM's guidance.
Dell is the largest enterprise AI server vendor by revenue, and Hon Hai/Foxconn (2317 TT) is its primary assembly partner for server platforms. Dell's guidance beat translates almost mechanically into higher order visibility for Hon Hai's enterprise server division. More importantly, enterprise AI servers tend to carry better margins for the assembler than hyperscaler custom builds — hyperscalers negotiate ruthlessly on assembly fees, while enterprise customers buy more standardized configurations with better ASPs. Hon Hai's server revenue mix shifting toward enterprise AI is a margin expansion story, not just a volume story.
ASE Technology (ASX US) captures the packaging layer. Enterprise AI servers are component-dense: each rack contains GPUs, CPUs, HBM memory stacks, networking ASICs, power management ICs, and dozens of analog chips — all requiring some form of advanced or traditional packaging. As AI server complexity increases, the component count per rack grows, and ASE's packaging and testing revenue scales with that complexity. Dell guiding above street suggests ASE's second-half 2026 loading assumptions may be conservative.
The enterprise angle also matters for Silicon Motion (SIMO US) and Phison Electronics (8299 TW). Enterprise AI servers need high-performance SSD storage with specific endurance and latency profiles for model loading, checkpointing, and inference caching. Unlike hyperscalers who design custom storage controllers, enterprise customers buy off-the-shelf NVMe SSDs powered by Silicon Motion and Phison controllers. Dell selling more AI servers means more enterprise SSD demand flowing directly to SIMO and Phison's OEM customers — Western Digital, Micron, and Solidigm.
Delta Electronics (2308 TT) rounds out the supply chain: every Dell AI server rack needs high-efficiency power supplies, and Delta is the dominant supplier of server PSUs to the major OEMs. Dell's guidance beat is a leading indicator for Delta's power supply order book in Q3-Q4 2026. (26 Feb 2026)
Linked stocks: DELL, 2317.TW, ASX, SIMO, 8299.TW, 2308.TW
Sources: Reuters