How AMD's Mantle Will Redefine Gaming, AMD Hardware Not Required

One of the major planks of AMD's APU13 developer conference has been an in-depth discussion of its next-generation API, Mantle. Mantle, which first debuted at the company's Hawaii unveil in late September, has been billed as a high-performance alternative to DirectX 11. Prior to today, AMD had mostly discussed Mantle in broad terms, without giving much detail on the nuts and bolts of what it offers.

Thanks to new information released at APU13, we can give you a better idea of what Mantle offers, what games will support it, and how it could shape gaming in years to come.

First, the big one. According to multiple sources, including Johan Andersson from DICE, Mantle is a thin abstraction layer that sits on top of the hardware, not an AMD-specific product. There's no reason NVIDIA couldn't use Mantle in future products and, not surprisingly, multiple speakers at the event expressed interest in seeing that happen at some point in the future.

Solving DirectX's Small Batch Problem

One of the issues plaguing DirectX development for years has been the fact that the API itself imposes a great deal of CPU overhead in certain scenarios. This is exacerbated if the developer launches a great many small batches of triangles for rendering. Every batch of draw calls consumes additional CPU time, so the goal is to group draw calls as efficiently as possible.

AMD Mantle Draw Calls Per Second Target
AMD's Mantle aims to target 100K draw calls per second.

According to AMD, you can reasonably hit 4-5K draw calls in a given scenario. Really great programmers may hit as high as 10K, briefly, but even that's tiny when you consider that the PS3 and Xbox 360 can regularly field 20-30K draw calls. With Mantle, AMD wants to close that gap.

According to AMD, Mantle gives developers the ability to fine-tune their own applications for maximum throughput -- partly by giving developers more flexibility in where workloads are executed.

AMD Mantle Thread Workload Processing
Mantle's improved application resource management

What the image above shows is a series of application threads (left side) being queued for execution across the CPU and GPU. Workloads are being shifted to specific targets depending on where they'd be optimally executed. Mantle is designed to explicitly allow asynchronous compute scheduling so that the GPU can simultaneously run graphics and non-graphics workloads, or share data across the CPU-GPU link thanks to HSA.

One of the other ideas behind Mantle is that of expanding parallelism. Under DirectX and OpenGL, CPU0 might be handling game compute, CPU1 sets up rendering, and CPU2 handles the driver setup and data passing. Using Mantle, CPU0 handles the CPU-centric computation, but CPU1-CPUx (maximum multi-threading) are all dedicated to the render path with no need to tie up cores with driver interfaces.

According to Marc, CPUs are actually powerful enough that they can occasionally serve as offload engines for GPU rendering on both consoles and PCs.

The Multi-GPU Question

There are two basic ways of doing multi-GPU rendering -- split-frame rendering (SFR) and alternate frame rendering (AFR). Split-frame rendering means two GPUs work on exactly the same frame but split it into top and bottom halves, while AFR hands Frame 1 to GPU 0, Frame 2 to GPU 1, and so on.

Mantle changes this approach by treating multiple GPUs exactly the same way it treats single GPUs.

Mantle to eliminate micro-stutter in multi-GPU rendering through better load balancing

To Mantle, more GPUs simply mean more queues to dispatch workloads to. This moves the load-balancing question from the frame to the queue and should help eliminate the microstutter problems that can plague multi-GPU configurations.

So what about performance?

This is where we have to frustrate you a bit. AMD's Mantle discussions are long on exposition and very, very short on demos. "Short," as in, there have been no head-to-head demos. The performance figures we've seen batted around have ranged from 20% to 50%.

My own sense is that Mantle may be a major bullet point for selling high-end video cards, but it's actually most important for APUs and other low-end discrete solutions. The reason is simple: the difference between a steady 100 FPS and 150 FPS is all but invisible. The difference between a steady 20 FPS and 30 FPS? That's the difference between playable and unplayable.

While it's disappointing not to have numbers to show you at this juncture, we can report that multiple developers, including DICE and Nixxes, came to APU13 to talk up the benefits of Mantle and why they're excited to see it on future systems. In addition, DICE's Johan Andersson keynoted the final day, alongside presentations from developers like Nixxes (Deus Ex: Human Revolution, Thief, Tomb Raider, and Hitman: Absolution).

Battlefield 4 is still expected to be the first title to add Mantle support, with the final patch dropping in 4-6 weeks. It's not clear at this juncture if later titles will have Mantle from the beginning or if contractual obligations will require it to be added at a later date. Either way, we'll be able to evaluate the product in final form in the not-too-distant future.

Via:  AMD APU13
RBloch one year ago

In before the nvidia fan boys....

MiguelBoulet one year ago

:poop: NVIDIA and Intel the best yo

ZackTemple one year ago

Amd hardware not required? That means nvidia can use it too?

MiguelBoulet one year ago

yes! haha

basroil3 one year ago

"Amd hardware not required? That means nvidia can use it too?"

No, it just means that the developers will have to program for a dozen architectures if they choose to support any other vendors (vs 2-3 architectures that are nearly identical with AMD).

Having watched the keynote on Mantle, all I can take away from it is that Mantle is just AMD saying to developers "screw driver improvement and error checking, you guys do the work that we are supposed to do" and then telling the press "developers will love it because they have more control!"

FJakimowicz one year ago

man... it's so sad to see talented people like basroil3 wasting their time here. I would like to see him at the helm of DICE or Nixxes to guide them through the world of software development.

Have you ever heard of compilers? An instruction in a language can be compiled to specific hardware -- maybe using a new driver layer for Mantle...

Joel H one year ago

I don't blame you for being skeptical on Mantle. I'm not 100% sold on Mantle. Or rather, I'm sold that Mantle offers definite benefits, but not on whether or not it will catch on in the broad market. Plenty of GPU initiatives have offered benefits, but not become industry standards.

RickLaRose 11 months ago

"Having watched the keynote on Mantle, all I can take away from it is that Mantle is just AMD saying to developers "screw driver improvement and error checking, you guys do the work that we are supposed to do" and then telling the press "developers will love it because they have more control!""

That's called conjuring up your own biases.

Driver improvements won't help. The issue is with DirectX's 40% overhead, and drivers are not going to fix that problem. DirectX is also highly serial in nature, which doesn't really take advantage of the parallel nature of modern multicore CPUs and GPUs (when used as stream compute units).

Yes, it means less emphasis on the driver. Why? Because tweaks to the driver were made in order to attempt to work around the DirectX limitations. One of the drawbacks of this has been long loading times, as shaders are recompiled every time you launch a game -- compiled in order to best suit the graphics hardware at hand (tweaks and workarounds for DirectX limitations included).

I welcome Mantle. It's about time imo.

KDaniels one year ago

"Having watched the keynote on Mantle, all I can take away from it is that Mantle is just AMD saying to developers "screw driver improvement and error checking, you guys do the work that we are supposed to do" and then telling the press "developers will love it because they have more control!""

I understand what you are saying, but all they are doing is providing options. Why have all the overhead when this could improve performance by up to 50%? We're not talking a 1 or 2% difference; that is a substantial difference. Even 25% is a huge difference. It's not like they are forcing developers to use this technology. It's there if, let's say... I don't know... they want to make a product more efficient, providing a better overall user experience and getting the most out of your hardware. Now why would we want that...

Dave_HH one year ago

You guys are too funny. It will be interesting to see if this takes for AMD. The premise seems interesting at least.
