On this page of StockholderLetter.com we present the latest annual shareholder letter from ADVANCED MICRO DEVICES INC (ticker: AMD). Reading current and past AMD shareholder letters can offer useful insight into the company's investment thesis.
2025 ANNUAL REPORT
BUILDING THE COMPUTE
FOUNDATION FOR THE AI ERA
Dear Shareholders,
Artificial intelligence is redefining modern computing and driving one of the most consequential technology transitions in history. The scale of computing infrastructure required to power this transformation is unprecedented, and the systems being built today will shape how the world advances science, discovers new medicines, designs products and manages energy for decades to come.
Meeting this challenge requires a new generation of computing platforms that integrate CPUs, GPUs, networking, adaptive compute and software at massive scale. As models and workloads advance, they demand open, flexible architectures that provide the freedom to deploy AI today while evolving for what comes next. AMD is building the high-performance compute foundation for this new era of AI, from hyperscale data centers and enterprise deployments to PCs and the intelligent edge.
Against this backdrop, 2025 was a defining year for AMD. We delivered record financial results, with revenue increasing 34% to $34.6 billion, and record profitability. These results reflect disciplined execution, deep customer partnerships and a broad portfolio of leadership products that continue to expand AMD's role at the center of the global AI infrastructure buildout.
Highlights included:
• Record AMD Instinct™ GPU revenue driven by the ramp of our MI350 Series accelerators and expanding customer engagements for next-generation AI platforms, including our multi-generational strategic agreements with OpenAI and, most recently, Meta.
• Record server CPU share as adoption of AMD EPYC™ processors accelerated across cloud and enterprise deployments.
• Record client processor revenue and continued share gains for our AMD Ryzen™ portfolio.
• A second consecutive year of record design wins in our adaptive and embedded portfolio.
• Completion of the acquisition of ZT Systems, expanding our systems capabilities and accelerating development of rack-scale AI infrastructure solutions.
At AMD, long-term leadership is built on product excellence, relentless
execution and strong ecosystem partnerships. These principles fueled our
momentum in 2025 and continue to guide our growth.
FY2025 Financial Highlights
• Revenue: $34.6B
• YoY Revenue Growth: 34%
• Gross Margin: 50%
• Operating Income: $3.7B
• Segment revenue: Data Center $16.6B; Client and Gaming $14.6B; Embedded $3.5B
2025: Record Revenue (FY2023: $22.7B; FY2024: $25.8B; FY2025: $34.6B)
Strong revenue and earnings growth reflect broad-based demand for our high-performance AI portfolio.
DATA CENTER AND AI INFRASTRUCTURE
The rapid expansion of AI workloads is reshaping the architecture of modern data centers. Training, inference, data processing and emerging agentic systems are driving unprecedented demand for high-performance and energy-efficient compute.
AMD's Data Center segment delivered record performance in 2025, with revenue increasing 32% year-over-year to $16.6 billion. Growth was driven by strong demand for both general-purpose and AI computing as we expanded EPYC CPU share and rapidly scaled our Instinct GPU deployments.
EPYC: Expanding CPU Leadership
CPUs are a critical foundation of modern computing infrastructure, powering cloud services, hosting business-critical applications and enabling large-scale AI workloads. As AI scales and agentic systems accelerate, their role is expanding, orchestrating complex workloads, optimizing memory and driving system-level performance and efficiency.
Adoption of our 5th Generation EPYC "Turin" processors accelerated across hyperscale cloud providers and enterprise customers throughout the year. Over the past two years, the number of publicly available EPYC-powered cloud instances has nearly doubled.
Enterprise adoption also reached an important inflection point. The number of large businesses deploying EPYC on-prem more than doubled in 2025, with major deployments across leading technology, financial services, retail, automotive and media companies.
Data Center Revenue Growth: $6.5B (FY2023), $12.6B (FY2024), $16.6B (FY2025)
As a result, we exited 2025 with record server share, reinforcing that EPYC CPUs are the processors of choice for the modern data center based on their leadership performance and total cost of ownership.
Looking ahead, our 6th Generation EPYC "Venice" processors are built on our next-generation "Zen 6" architecture and designed to extend AMD's leadership in performance, efficiency and TCO across cloud, enterprise, AI and supercomputing workloads. "Venice" is on track to launch in 2026, and we are seeing record demand as customers plan deployments later this year.
Instinct: Scaling AI Compute
Demand for AI accelerators continued to grow rapidly in 2025. Eight of the world's top ten AI companies now use AMD Instinct accelerators for production workloads.
We launched the MI350 Series GPUs in June, delivering a 35x improvement¹ in inference performance compared to the prior generation and enabling a new class of large-scale AI deployments. Since launch, cloud providers including Meta, Oracle and others have expanded availability of MI350-based infrastructure, and a growing number of next-generation AI cloud providers have scaled Instinct-powered systems to deliver on-demand compute to AI-native developers and enterprises worldwide.
The MI400 family expands our Instinct portfolio and delivers a step-function improvement in performance across large-scale training and inference, high-performance computing workloads and enterprise AI deployments, enabling customers to deploy AI infrastructure across a wide range of environments.
Helios: Rack-Scale AI Infrastructure
As AI clusters grow larger and more complex, innovation must extend beyond chips to full rack-scale systems.
The acquisition of ZT Systems expanded AMD's expertise in rack-scale system design and accelerated development of AMD Helios, our most comprehensive AI infrastructure platform to date.
Helios integrates next-generation Instinct MI455X GPUs, EPYC "Venice" CPUs and Pensando networking into a unified rack-scale platform optimized for large-scale AI training and inference deployments. By tightly integrating compute, networking and system design, Helios delivers a step change in performance and efficiency at rack scale.
Built on Meta's Open Wide Rack specification, Helios reflects AMD's commitment to open standards and ecosystem collaboration. There is significant customer momentum around Helios.
We announced a multi-generation agreement with OpenAI to deepen co-development across our hardware and software roadmaps and deploy six gigawatts of Instinct GPUs to power AI infrastructure. Oracle announced plans to launch the first publicly available AI supercluster powered by MI450 Series GPUs.
We also recently expanded our close partnership with Meta to accelerate their AI infrastructure with large-scale deployments of Instinct GPUs and EPYC CPUs across multiple product generations. Initial shipments supporting the first gigawatt deployment are scheduled to begin in the second half of 2026 and will leverage our Helios rack-scale architecture with a custom MI450-based Instinct GPU and our EPYC "Venice" CPU.
We are working closely with lead customers, supply chain and ecosystem partners to ensure a smooth ramp for the MI400 series and Helios, and we are on track to begin production shipments in the second half of 2026.
ROCm: The Software Foundation for AMD AI
Software converts hardware leadership into adoption at scale. In 2025, we significantly strengthened our AMD ROCm™ open software stack, accelerating our release cadence to deliver rapid performance optimizations, expanded developer tools and day-zero support for new frontier models.
ROCm now provides out-of-the-box support for more than two million models on the Hugging Face platform and saw a tenfold increase in downloads during the year, reflecting rapidly growing developer adoption. We also introduced ROCm 7, our most comprehensive release to date, and expanded our collaboration with top AI ecosystem partners including Hugging Face, PyTorch, vLLM and SGLang.
These advancements make it easier than ever for developers and enterprises to build, deploy and scale AI workloads on AMD platforms.
Letter continued in the full PDF (3/27/2026).