AMD Details How To Run Disruptive DeepSeek AI On Your Ryzen Or Radeon PC
by
Zak Killian
—
Wednesday, January 29, 2025, 11:58 AM EST
Much of the discussion around upstart Chinese AI firm DeepSeek's technology has centered on the idea that it can be deployed using considerably less powerful hardware than is typically required for useful language models. That means you can run it directly on your home PC, no internet connection required. This isn't exactly novel in and of itself, but DeepSeek's R1-Distill model is indeed the first "reasoning" model released to the public for use on whatever hardware you like.
If the hardware you like happens to be a recent Radeon GPU or Ryzen AI processor, you're in for a remarkably easy setup process. First, make sure you have the absolute latest graphics driver from AMD. That means the 25.1.1 Optional driver, which, in our testing, absolutely will not come down through Adrenalin, even with Optional drivers enabled. You'll have to head to AMD's site and download it the old-fashioned way.
Deepseek is well-suited to math questions in particular.
Once you have that installed, simply head to this link to download the Ryzen AI version of LM Studio. LM Studio isn't created by AMD and is not exclusive to AMD hardware, but this particular version comes pre-configured to work with AMD's CPUs and GPUs, and should give you pretty decent performance on any of them, though CPU-based AI computation is pretty sluggish compared to GPU.
This screen lets you easily download new models to try out.
Now, LM Studio has an option built into the software to browse and download models; you can see it in the screenshot above. Type in the model you want (find it in the chart below) and, in theory, it should be as simple as clicking Download. In practice, we weren't able to get model downloads in LM Studio working, and had to download the GGUF weights ourselves from HuggingFace. If you have to go this route, make sure you hit the console and run "lms import <your model name here with full path>" (you'll have to launch LM Studio once first).
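If the built-in downloader fails for you too, the fallback step can be sketched like this. Note that the model filename and save location below are assumed examples, not files the article specifically names; substitute the full path to whichever GGUF variant you actually downloaded. The snippet only constructs and prints the import command rather than running it, since `lms` is only available after LM Studio has been launched at least once:

```shell
# Assumed example path to a GGUF file saved from HuggingFace; the real
# filename depends on which R1-Distill variant and quantization you grabbed.
MODEL_PATH="$HOME/models/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf"

# LM Studio must be launched once first so the `lms` CLI is bootstrapped.
# The import command takes the full path to the weights file. We only
# build and echo the command here; run it yourself once the file exists.
IMPORT_CMD="lms import $MODEL_PATH"
echo "$IMPORT_CMD"
```

After the import completes, the model shows up in LM Studio's model list like any other locally installed model.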
The specific versions of DeepSeek you should grab for each supported hardware.
Once you've got the model installed in LM Studio, it's as simple as loading it using the button at the top and then chatting away. Even if you have a powerful GPU like our Radeon RX 7800 XT, which is capable of producing more than 40 tokens per second (far faster than anyone can read), you'll still have to wait anywhere from 5 to 50+ seconds for the model to think before it answers. You can actually unfurl the reasoning box and see the model's thought process, too, which is pretty enlightening sometimes.
In a tweet, AMD claims LLM competitiveness against NVIDIA's finest... as of last week.
As we noted above, LM Studio is not exclusive to AMD hardware, but if you're not using red-team gear, you may want to download the standard version of LM Studio instead of the AMD Ryzen version. We're not sure if it really matters, but better safe than sorry. You can head to AMD's blog post if you want more detailed instructions on how to set it up.