Q - Vivek Arya
Thank you, Lisa. And for my follow-up, I would love your perspective on the news from DeepSeek recently, right? And there are kind of two parts to that. One is, once you heard the news, do you think that should make us more confident or more conservative about the semiconductor opportunity going forward? Like is there something so disruptive in what they have done that reduces the overall market opportunity?
And then within that, have your views about GPU versus ASIC, and how that share develops over the next few years, evolved in any way at all? Thank you.
A - Lisa Su
Yeah, great. Thanks for the question, Vivek. Yeah, I think it's been a pretty exciting first few weeks of the year. I think the DeepSeek announcements, the Allen Institute announcements, as well as some of the Stargate announcements, all speak to just how fast the rate and pace of innovation in the AI world is.
So specifically relative to DeepSeek, look, we think that innovation on the models and the algorithms is good for AI adoption. The fact that there are new ways to bring about training and inference capabilities with less infrastructure actually is a good thing because it allows us to continue to deploy AI compute in a broader application space and drive more adoption.
I think from our standpoint, we're also big believers in open source. And from that standpoint, having open source models and looking at the rate and pace of adoption there is pretty amazing. That is how we expect things to go.
So to the overall question of how we should feel about it: we feel bullish about the overall cycle. Similarly, some of the infrastructure investments that were announced with OpenAI and Stargate, and the efforts to build massive infrastructure for next-generation AI, all indicate that AI is certainly on the very steep part of the curve. As a result, we should expect a lot more innovation.
And then on the ASIC point, let me address that because I think that is also a place where there’s a good amount of discussion. I have always been a believer that you need the right compute for the right workload. With AI, given the diversity of workloads—large models, media models, small models, training, inference—whether you’re talking about broad foundational models or very specific models, you’re going to need all types of compute. That includes CPUs, GPUs, ASICs, and FPGAs.
Relative to our $500 billion-plus TAM in the long term, we’ve always considered ASICs as a part of that market. But my belief is that, given how much change is still happening in AI algorithms, ASICs will remain a smaller part of that TAM because they are more workload-optimized for specific tasks. In contrast, GPUs will enable significant programmability and adjustments to all of these evolving algorithm changes.
When I look at the AMD portfolio, it really covers all of these areas—CPUs, GPUs—and we’re also involved in a number of ASIC conversations as well, as customers want a comprehensive compute partner.