AMD Presents: Advancing AI

AMD

130 min, 9 sec

AMD's event highlighted the acceleration of the AI era, showcasing new AI solutions, partner collaborations, and a vision for AI across cloud, enterprise, and personal computing.

Summary

  • AMD introduced the MI300X and MI300A accelerators, now in production and adopted by industry leaders such as Microsoft, Oracle, and Meta, along with major OEMs and ODMs.
  • The ROCm 6 software platform was presented, focusing on expanding the ecosystem of AI developers and making AMD Instinct GPUs widely accessible (a brief developer-side sketch follows after this list).
  • AMD emphasized the importance of an open, high-performance AI infrastructure, discussing system architecture and networking collaborations for scale-out AI solutions.
  • Ryzen processors with integrated NPUs were showcased, extending AMD's compute vision and AI leadership into the client segment, aiming to make AI ubiquitous.
  • AMD announced the new Strix Point Ryzen processors with second-generation XDNA architecture, delivering a significant performance boost for client AI experiences.
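
As a developer-side illustration of the ROCm point above (a minimal sketch, assuming a ROCm-enabled PyTorch build; not something shown at the event), AMD Instinct GPUs surface through the same torch.cuda device interface that CUDA builds use, so existing code paths typically run unchanged:

```python
# Minimal sketch: detecting an AMD Instinct GPU from PyTorch on a ROCm build.
# Assumes a ROCm-enabled PyTorch install; on such builds the familiar
# torch.cuda namespace is backed by HIP/ROCm, so CUDA-style code runs as-is.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")  # maps to the ROCm/HIP backend on AMD GPUs
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU visible; falling back to CPU")

# A trivial workload to confirm the device is usable.
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.shape, y.device)
```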

Chapter 1

Introduction to AI Innovations and Partnerships

11:56 - 6 min, 11 sec

AMD begins the event by introducing their initiatives and partnerships in AI.

  • AMD's event opens with a focus on AI solutions and the excitement around new products and industry collaborations.
  • The event promises to unveil AI accelerators and share news on partnerships with industry leaders.

Chapter 2

Launch of MI300X AI Accelerator

18:06 - 5 min, 0 sec

AMD announces the launch of the MI300X, the world's highest performance accelerator for generative AI.

  • The MI300X is built on AMD's new CDNA 3 data center architecture, optimized for AI training and inference workloads.
  • MI300X combines a new compute engine with support for sparsity and the latest data formats, industry-leading memory capacity, and advanced packaging technologies.
  • Compared to the previous generation, CDNA 3 delivers more than three times higher performance for key AI data types such as FP16 and BF16 (a minimal usage sketch follows after this list).
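
To ground the data-type bullet above, here is a minimal, illustrative sketch of BF16 inference using PyTorch autocast; the tiny model and shapes are placeholders chosen for this summary, not AMD's benchmark configuration:

```python
# Illustrative only: low-precision (BF16) inference of the kind CDNA 3 targets.
# The small model and shapes are placeholders, not AMD's benchmark setup.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # "cuda" maps to ROCm/HIP on AMD GPUs

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.GELU(),
    nn.Linear(4096, 4096),
).to(device).eval()

x = torch.randn(8, 4096, device=device)

# Autocast runs matmul-heavy ops in bfloat16 while keeping numerically
# sensitive ops in float32; swap dtype=torch.float16 to exercise FP16 instead.
with torch.inference_mode(), torch.autocast(device_type=device, dtype=torch.bfloat16):
    y = model(x)

print(y.dtype, y.shape)
```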

Chapter 3

MI300X Performance and Ecosystem Expansion

23:06 - 13 min, 8 sec

AMD discusses the performance of MI300X and its ecosystem expansion.

  • MI300X's advanced features lead to substantial performance gains in AI workloads, with 1.2 to 1.6 times better performance than the competition in various AI models.
  • AMD outlines its strategic priorities for AI, including delivering a broad portfolio of GPUs, CPUs, and adaptive computing solutions, and expanding partnerships with cloud providers, OEMs, and software developers.

Chapter 4

MI300X and AI Infrastructure in the Cloud

36:14 - 13 min, 5 sec

AMD highlights the role of MI300X in AI infrastructure within cloud computing.

  • MI300X is positioned as a key component in the cloud for generative AI training and inference, offering better performance and efficiency for large-scale AI deployments.
  • AMD notes the rapid growth of the data center AI accelerator market and the increasing demand for AI infrastructure, emphasizing the importance of the cloud in enabling AI advancements.

Chapter 5

AI Innovations for Enterprise with Dell Technologies

49:19 - 12 min, 17 sec

Dell Technologies discusses their AI solutions for the enterprise.

  • Dell outlines the challenges enterprises face with data growth and the need for AI solutions that can bring intelligence to data across various industries.
  • The company introduces new configurations for PowerEdge servers equipped with AMD's MI300X accelerators and emphasizes the importance of open networking products for AI fabric solutions.

Chapter 6

MI300X Integration and AI Solutions with Supermicro

61:36 - 12 min, 33 sec

Supermicro shares their integration and AI solutions featuring MI300X.

  • Supermicro highlights their modularized design approach, which allows for rapid product development and delivery of AI solutions featuring MI300X.
  • The company provides air-cooled and liquid-cooled rack-scale plug-and-play solutions to accommodate various power requirements and simplify customer deployment.

Chapter 7

Lenovo's AI Vision and Client AI Collaboration with AMD

74:09 - 10 min, 48 sec

Lenovo discusses their AI vision and collaboration with AMD on client AI.

  • Lenovo outlines their AI vision across devices, services, and infrastructure, emphasizing the need for AI solutions that are simple, versatile, and energy-efficient.
  • The company announces the integration of AMD's MI300A accelerators into their ThinkSystem platform and discusses the growing importance of AI in scientific research and supercomputing applications.

Chapter 8

Networking for AI Infrastructure with Industry Leaders

84:57 - 15 min, 2 sec

AMD and industry leaders discuss the importance of networking for AI infrastructure.

  • A panel with Arista, Broadcom, and Cisco emphasizes the critical role of Ethernet in AI networking due to its scalability and open standards.
  • The panelists discuss the future of AI networking and the role of the Ultra Ethernet Consortium in shaping that future.

Chapter 9

High-Performance Computing and AI Convergence

99:58 - 6 min, 16 sec

AMD explores the convergence of high-performance computing (HPC) and AI.

  • The MI300A APU is introduced for HPC and AI, combining AMD's CPU and GPU technologies into a single package, leading to transformative performance for applications like OpenFOAM and CosmoFlow.
  • AMD's focus on power efficiency is highlighted, with the MI300A offering twice the HPC performance per watt compared to competitors.

Chapter 10

El Capitan Supercomputer Collaboration with HPE

106:13 - 6 min, 18 sec

AMD and HPE discuss their collaboration on the El Capitan supercomputer.

  • AMD and HPE share their successful journey in building the Frontier supercomputer and the upcoming El Capitan, which is expected to be one of the world's most capable AI systems for scientific research.
  • El Capitan will pair the MI300A with the HPE Cray EX supercomputer platform, integrating AI and HPC to enable groundbreaking research in various scientific fields.

Chapter 11

Client AI Innovations and Partnership with Microsoft

112:32 - 14 min, 34 sec

AMD and Microsoft discuss innovations in client AI and their partnership.

  • AMD highlights the importance of NPUs in PCs and demonstrates Ryzen AI-enabled processors with integrated NPUs.
  • The partnership with Microsoft focuses on enabling AI experiences on PCs, with Microsoft's vision for the Windows AI ecosystem and the integration of cloud and local inferencing (a sketch of local inference on the NPU follows below).
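
As a rough sketch of what local NPU inferencing can look like from the application side (assuming the Ryzen AI software stack exposes the NPU to ONNX Runtime through a Vitis AI execution provider, and with model.onnx standing in for a model you supply), the code below prefers the NPU provider and falls back to the CPU when it is unavailable:

```python
# Hedged sketch: local inference with ONNX Runtime, preferring an NPU-backed
# execution provider and falling back to CPU. "VitisAIExecutionProvider" is the
# provider the Ryzen AI stack is assumed to register; "model.onnx" is a
# placeholder for a model you supply.
import numpy as np
import onnxruntime as ort

preferred = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Using providers:", session.get_providers())

# Feed a dummy input matching the model's first input signature.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims to 1
x = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: x})
print("Output shapes:", [o.shape for o in outputs])
```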