AMD is losing the AI battle, and it's time to worry

Both AMD and Nvidia make some of the best graphics cards on the market, but it’s hard to deny that Nvidia is generally in the lead. I’m not just talking about the big difference in market share. In this generation, Nvidia is the one with the giant GPU that beats every other card, while AMD has no answer to the RTX 4090 right now.

Another thing AMD doesn’t have a solid answer to right now is artificial intelligence. Even though I switched to AMD for personal use, it’s hard to ignore the facts: Nvidia is winning the AI battle. Why is there such a stark difference, and will it become more of an issue for AMD down the line?

It’s not all about gaming

The RX 7900 XTX.
AMD

Most of us buy graphics cards based on two things: budget and gaming capabilities. AMD and Nvidia both know that the vast majority of their high-end consumer cards end up in gaming rigs, although professionals buy them too. Still, gamers and casual users make up the bulk of this market segment.

For years, the GPU landscape was all about Nvidia, but over the past few generations AMD has made great strides, so much so that it’s now trading blows with Nvidia. Although Nvidia sits at the top with the RTX 4090, AMD’s two RDNA 3 flagships (the RX 7900 XTX and RX 7900 XT) are powerful graphics cards that often outperform comparable Nvidia offerings while also being cheaper than the RTX 4080.

If we pretend the RTX 4090 doesn’t exist, comparing the RTX 4080 and 4070 Ti to the RX 7900 XTX and XT suggests things are going well for AMD right now, at least when it comes to gaming.

And then we come to ray tracing and AI workloads, and that’s where AMD falls off a cliff.

The GeForce RTX logo is displayed on the side of a graphics card.

There’s no way to sugarcoat this: Nvidia is simply better at running AI workloads than AMD right now. That’s not really an opinion; it’s closer to a fact. And it’s not the only trick Nvidia has up its sleeve.

Tom’s Hardware recently tested AI inference on Nvidia, AMD, and Intel cards, and the results weren’t at all friendly to AMD.

To compare the GPUs, the tester benchmarked them in Stable Diffusion, an AI image generator. Read the source article if you want all the technical details that went into setting up the benchmarks, but long story short, Nvidia outperformed AMD, and Intel’s Arc A770 did so badly that it barely deserves a mention.

Even getting Stable Diffusion to run on anything other than an Nvidia GPU seems to be quite a challenge, but after some trial and error, the tester was able to find projects that more or less suited each GPU.
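For a sense of what such a test involves, here’s a minimal sketch of a Stable Diffusion timing run in Python using Hugging Face’s diffusers library. The checkpoint, prompt, and step count are illustrative assumptions of mine, not the exact configuration Tom’s Hardware used.

```python
# Minimal Stable Diffusion inference timing sketch (illustrative only).
# Assumes the Hugging Face diffusers library and a GPU build of PyTorch.
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical checkpoint choice
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # PyTorch's ROCm build reuses the "cuda" device name on AMD

prompt = "a photograph of an astronaut riding a horse"
steps = 50

start = time.perf_counter()
image = pipe(prompt, num_inference_steps=steps).images[0]
elapsed = time.perf_counter() - start

print(f"{steps} steps in {elapsed:.1f}s ({steps / elapsed:.2f} iterations per second)")
image.save("benchmark.png")
```

A real benchmark would generate many images and average the timings; a single run like this is noisy, but it shows the shape of the test.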

After testing, the upshot was that Nvidia’s RTX 30-series and RTX 40-series cards both performed quite well (albeit after some tweaking for the latter). AMD’s RDNA 3 lineup also held up, but last-gen RDNA 2 cards were fairly mediocre. Even so, AMD’s best card was miles behind Nvidia in these benchmarks, showing that Nvidia is simply faster and better at tackling AI-related tasks.

Nvidia cards are the gold standard for professionals who need a GPU for AI or machine learning workloads. Some buy one of the consumer cards, while others opt for a workstation model instead, such as the confusingly named RTX 6000, but the fact remains that AMD is often not even on the radar when such platforms are being built.

RX 7900 XT and RX 7900 XTX performance in Cyberpunk 2077 with ray tracing.

Let’s not overlook the fact that Nvidia also has a strong lead over AMD in areas such as ray tracing and Deep Learning Super Sampling (DLSS). In our own benchmarks, we found that Nvidia still leads the charge in ray tracing, but at least Team Red seems to be taking steps in the right direction.

This generation of GPUs is the first in which the ray tracing gap is closing; in fact, AMD’s RX 7900 XTX outperforms Nvidia’s RTX 4070 Ti in that regard. However, Nvidia’s Ada Lovelace GPUs have another advantage in the form of DLSS 3, a technology that uses AI to generate entire frames instead of just upscaling individual pixels. Once again, AMD is falling behind.

Nvidia has a long history with AI

Nvidia GeForce RTX 4090 graphics card.
Jacob Roach / Digital Trends

AMD and Nvidia graphics cards are architecturally very different, so a direct comparison is impossible. One thing we do know, however, is that Nvidia’s cards are optimized for AI at the architectural level, and have been for years.

Nvidia’s latest GPUs are built around Compute Unified Device Architecture (CUDA) cores, while AMD cards use Compute Units (CUs) and Stream Processors (SPs). Nvidia also has Tensor Cores that accelerate deep learning algorithms, and with Tensor Core sparsity, they help the GPU skip unnecessary computations. This reduces the time the GPU needs for certain tasks, such as training deep neural networks.
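To make that concrete, here’s a small sketch of how a framework such as PyTorch routes matrix math onto those Tensor Cores. The two `allow_tf32` flags are real PyTorch settings; the matrix sizes are arbitrary, and this only illustrates the opt-in mechanism, not Nvidia’s internal scheduling.

```python
# Sketch: steering PyTorch matrix math onto Tensor Cores on a recent Nvidia GPU.
import torch

# Allow TF32 on Ampere and newer GPUs: FP32 matmuls and convolutions then run
# on Tensor Cores with reduced-precision math instead of plain CUDA cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# FP16 tensors take the Tensor Core path automatically for matmul-heavy work.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

c = a @ b                 # dispatched to Tensor Core kernels by the cuBLAS library
torch.cuda.synchronize()  # wait for the GPU before trusting any timing
print(c.shape)
```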

CUDA cores are one thing, but Nvidia has also built a parallel computing platform of the same name, which is exclusive to Nvidia graphics cards. CUDA libraries let programmers harness the power of Nvidia GPUs to run machine learning algorithms much faster.

The development of CUDA is what really sets Nvidia apart from AMD. While AMD didn’t really have a good alternative, Nvidia invested heavily in CUDA, and in turn, most AI advancements over the past few years have been made using CUDA libraries. AMD’s best bet right now is OpenCL, but most experts say it’s not quite on par with CUDA.
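The ergonomics gap is easy to demonstrate. With a CUDA-backed framework, a GPU operation is typically a one-line library call; with raw OpenCL, you write and compile your own kernel even for something trivial. Here’s a minimal sketch of an OpenCL vector addition through the pyopencl bindings, assuming a working OpenCL driver is installed:

```python
# Sketch: adding two arrays with raw OpenCL via pyopencl.
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()  # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is written in OpenCL C and compiled at runtime.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```

Multiply that boilerplate across an entire deep learning stack, and it becomes clear why researchers gravitated toward CUDA’s ready-made libraries instead.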

AMD has been doing some work on its own alternatives, but that’s fairly new when compared to Nvidia’s years of experience. AMD’s Radeon Open Compute Platform (ROCm) enables developers to accelerate compute and machine learning workloads. As part of this ecosystem, it quite recently launched a project called GPUFORT.

GPUFORT is AMD’s effort to help developers transition from Nvidia cards to AMD’s own GPUs. Unfortunately for AMD, Nvidia’s CUDA libraries enjoy much wider support from the most popular deep learning frameworks, such as TensorFlow and PyTorch.
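There is one concrete bright spot worth noting: PyTorch’s ROCm builds expose AMD GPUs through the same torch.cuda namespace that the CUDA builds use, so a lot of existing code runs unchanged. A quick sketch for checking which stack you’re actually running on:

```python
# Sketch: detecting whether PyTorch is backed by Nvidia's CUDA or AMD's ROCm/HIP.
# On ROCm builds, torch.cuda.* maps to HIP and torch.version.hip is set;
# on CUDA builds, torch.version.hip is None and torch.version.cuda is set.
import torch

if not torch.cuda.is_available():
    print("No supported GPU found")
elif torch.version.hip is not None:
    print(f"AMD GPU via ROCm/HIP {torch.version.hip}: "
          f"{torch.cuda.get_device_name(0)}")
else:
    print(f"Nvidia GPU via CUDA {torch.version.cuda}: "
          f"{torch.cuda.get_device_name(0)}")
```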

Despite AMD’s attempts to catch up, the gap only widens every year as Nvidia continues to dominate the AI and ML landscape.

Time to catch up

Nvidia and AMD CEOs are shown side-by-side in a split-screen view.

Nvidia’s investment in AI was certainly wise. It has left Nvidia with a thriving line of gaming GPUs alongside a powerful lineup of cards capable of handling AI and ML tasks. AMD is not there yet.

Although AMD seems to be trying to catch up on the software side, with still-unused AI cores sitting on its latest GPUs, it lacks the software ecosystem that Nvidia has built.

AMD, however, plays a crucial role as Nvidia’s only serious competitor. I can’t deny that AMD has made great strides in both the GPU and CPU markets over the past few years. It has managed to climb out of irrelevance and become a solid alternative to Intel, making some of the best processors you can buy right now. Its graphics cards are now competitive too, even if chiefly for gaming. On a personal level, I leaned toward AMD instead of Nvidia because I disagree with Nvidia’s pricing approach over the last two generations. Still, none of that makes up for AMD’s lack of presence in AI.

It’s obvious from programs like ChatGPT that AI is here to stay, but it’s also present in countless other things that go unnoticed by most PC users. In a gaming PC, AI works in the background, performing tasks such as real-time optimization and anti-cheat measures in games. Non-gamers see plenty of AI on a daily basis too, as it powers ubiquitous chatbots, voice-enabled personal assistants, navigation apps, and smart home devices.

As AI increasingly permeates our daily lives and the tasks we ask of our computers keep growing in complexity, GPUs will be expected to keep pace. AMD has a tough job ahead, but if it doesn’t take AI seriously, it could be doomed to never catch up.
