Acer Predator BiFrost Arc A770 OC Graphics Card Review: Intel On-Board

colorfulcardphoto2
As with all synthetic benchmarks, 3DMark scores don't necessarily correlate to real gaming performance. However, besides serving as capable stress tests, these benchmarks also give us an idea of the peak performance characteristics of a given graphics card. We've run three 3DMark tests today: the venerable Fire Strike in its 2560×1440 "Extreme" variant, the taxing Time Spy DirectX 12 benchmark, and the future-looking Speed Way benchmark, which makes heavy use of DirectX 12 Ultimate technologies, including ray tracing.

UL 3DMark Fire Strike Extreme DirectX 11 Benchmarks

screen 3dm fse
Fire Strike was the primary benchmark in the original version of the current 3DMark app. Back in 2013, it was a ludicrously taxing benchmark that really put the hurt on even the mightiest systems. Ten years later, the basic 1080p Fire Strike test is a solved problem even for integrated GPUs. For that reason, we've bumped the benchmark to its Fire Strike Extreme preset, which raises the render resolution to 2560×1440 and makes it a somewhat more suitable challenge for these cards.

chart 3dm fse

Interestingly, most of the cards in our arsenal today perform in roughly the same ballpark. The GeForce RTX 3060 falls behind the rest of the pack, but otherwise, the most powerful GPU on paper pulls out ahead, while the GeForce RTX 4060 Ti and the Predator BiFrost Arc A770 are neck-and-neck in this DirectX 11 benchmark. This is a good result for the Arc A770, and an encouraging one, as Intel struggled with DX11 performance early on.

UL 3DMark Time Spy DirectX 12 Benchmarks

screen 3dm ts
Time Spy, on the other hand, was released in 2016. It leverages the DirectX 12 API to make much better use of both CPU and GPU resources than Fire Strike, which is how it manages to pack dioramas of every previous 3DMark game test into the museum that the character walks through. Time Spy is still a demanding test, and we elected to use the default preset because it renders at 2560×1440 resolution.

chart 3dm ts

Well, how about that. The Acer Predator BiFrost Arc A770 tops the chart in our Time Spy test, even if only ever-so-slightly. It offers competitive performance in this benchmark against both NVIDIA's latest Ada Lovelace architecture and the Radeon RX 6750 XT, a card that really lives in a higher product tier considering its original MSRP. This is good stuff.

UL 3DMark Speed Way DX12 Ultimate Benchmarks

screen 3dm sw
Speed Way is UL's latest addition to the 3DMark family. The tool already has a couple of ray-tracing benchmarks, most notably the DXR Feature Test and the Port Royal benchmark, but those are laser-focused on RT performance and don't correlate with real games very well. The Speed Way test, in contrast, is meant to be a more representative demonstration of a graphics card's ability to run current-generation ray-traced games at 2560×1440 resolution.

chart 3dm sw

NVIDIA's RTX 3060 Ti and RTX 4060 Ti leap out ahead of the pack, as expected when it comes to ray-tracing performance. NVIDIA has been a pioneer in that space, and it's no surprise that its cards are the best for this kind of workload. However, if we ignore those two front-runners, the Predator BiFrost Arc A770 takes the lead ahead of AMD's potent Radeon RX 6750 XT. Overall, not a bad showing.

Real-World Game Test Discussion: To Upscale Or Not To Upscale?

Before we get into our real game tests on the next page, we wanted to talk a little about game testing in 2023 and the proliferation of advanced upscaling techniques. We're of course talking about NVIDIA's DLSS, AMD's FSR, and Intel's XeSS. Now, NVIDIA wasn't strictly first to market with this idea; console games had been flirting with the concept of upscaling from a lower-resolution image long before the green guys debuted their Deep Learning Super Sampling with the GeForce RTX 20 series cards in 2018.

However, at this point, NVIDIA is widely acknowledged as a leader in the space. Its DLSS Super Resolution feature offers near-native image quality with a huge jump in performance, owing to the internal render resolution being much lower, in terms of pixel count, than the final output image. AMD's solution has its advantages, most notably that it can run on anything and doesn't require hardware acceleration, but it is generally considered to offer somewhat inferior image quality compared to NVIDIA's AI-powered approach.
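To put rough numbers on that pixel-count reduction, here's a quick sketch using typical per-axis scale factors for the Quality and Performance modes of these upscalers (roughly 0.67x and 0.5x; the exact factors vary by vendor and mode, so treat these as approximations rather than any specific vendor's values):

```python
# Rough illustration of why upscaling saves so much GPU work: the internal
# render resolution shrinks quadratically with the per-axis scale factor.
# Scale factors below are typical/approximate, not vendor-exact values.

OUTPUT = (2560, 1440)
PRESETS = {
    "Native":      1.00,
    "Quality":     0.67,   # ~2/3 per axis is typical for "Quality" modes
    "Performance": 0.50,   # half resolution per axis
}

native_pixels = OUTPUT[0] * OUTPUT[1]
for name, scale in PRESETS.items():
    w, h = int(OUTPUT[0] * scale), int(OUTPUT[1] * scale)
    print(f"{name:12s} {w}x{h:>5}  ->  {w * h / native_pixels:.0%} of native pixel count")
```

In other words, a Performance-mode frame at 2560×1440 output is rendered with only about a quarter of the pixels of a native frame, which is where the bulk of the frame-rate gain comes from; the upscale pass itself then adds a small cost on top.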

xess 70games

When Intel first announced the Xe-HPG graphics architecture and the Arc GPUs, well before they actually shipped, the company also teased XeSS, its own AI-powered upscaling solution. XeSS made all of the same promises as DLSS at the time: huge performance gains thanks to rendering games at a lower resolution and then letting AI upscale the result. XeSS was slower to penetrate the market, but it has now found its way into over 70 games, including big names like Call of Duty, Hogwarts Legacy, Forza Horizon 5, and F1 2023.

We've tested XeSS pretty extensively, and the conclusion we've come to is that, at least on Arc GPUs, it is very similar to DLSS at this point. We should also note that XeSS works on other companies' graphics cards, but there it uses a different algorithm that relies on DP4a math instead of the Arc GPUs' own XMX tensor units. That algorithm, in order to preserve the speed boost, isn't as effective at upscaling as the native XMX version on Arc. If you've tried XeSS on your GeForce or Radeon card and came away unimpressed, that's probably why.

cyberpunkcomparo1
DLSS on top, XeSS on bottom, Quality on right, Performance on left.

Here, we've compared XeSS against DLSS in the Quality and Performance modes. You can click the image to go to the gallery and see the full-resolution screenshots. You can see that the image quality is broadly equivalent. XeSS, even with sharpening disabled, still seems to have a mild sharpening effect, and it typically gives a cleaner-looking resolve as a result, whereas DLSS is a bit soft. However, DLSS seems to be a little better at preserving certain details, particularly in the lighting in these Hogwarts Legacy examples:

hogwartscomparo1
DLSS on top, XeSS on bottom, Quality on right, Performance on left.

But what about performance? There is a cost associated with the upscaling performed by NVIDIA's DLSS, AMD's FSR, and Intel's XeSS. DLSS and XeSS run on dedicated matrix math units within the GPU, so their cost is generally lower, but it isn't zero, and it depends primarily on the output resolution, not the input resolution.

chart scaling

We compared XeSS against DLSS in Hogwarts Legacy using the GeForce RTX 4060 Ti, which is generally a bit faster than the Arc A770, so keep that in mind as you look at the results. The RTX 4060 Ti isn't very far ahead of the Arc A770 when playing Hogwarts Legacy at native resolution, and that's mostly down to its inferior memory bandwidth; the card is bottlenecked by its narrow memory bus at 2560×1440, which is why it gains a lot more performance from DLSS Super Resolution here, if we ignore the Frame Generation (FG) result for a moment.

The Arc A770 still gains a nice performance bump from XeSS, pushing the game from "a little sluggish" territory all the way up to "smooth". It might look like the jump from Quality to Performance isn't worth it, but the increase in the 1% low frame rate is actually surprisingly noticeable, as it allows our FreeSync monitor to stay within its 30-144 Hz VRR window. As we discussed above, dropping to Performance really doesn't look bad, especially at 2560×1440.

Both NVIDIA and AMD have their own frame generation solutions now, although we haven't tested AMD's FSR3 yet to know if it's any good. Frame generation increases a game's displayed frame rate by interpolating new frames between rendered ones. That adds input lag, but in theory, the higher frame rate helps to mitigate it somewhat. It's a worthwhile technique, and a real technology advantage for NVIDIA right now. Intel doesn't have a frame generation solution at the moment, but we would be surprised if there isn't an "XeFG" by the time Battlemage rolls around.
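To make that trade-off concrete, here's a deliberately simplified, back-of-the-envelope sketch (our own illustration, not any vendor's actual pipeline) of what interpolation-based frame generation does to displayed frame rate versus latency. The exact latency penalty varies by implementation and frame pacing, so the numbers below are ballpark only:

```python
# Simplified model of interpolation-based frame generation: one generated frame
# is inserted between each pair of real frames, and the newest real frame is
# held back while the in-between frame is created and shown. Real implementations
# differ; this only illustrates the rate-versus-latency trade-off.

def frame_gen_estimate(base_fps: float, gen_cost_ms: float = 1.0):
    real_frame_ms = 1000.0 / base_fps
    # Displayed frame rate roughly doubles (real + generated frames).
    displayed_fps = 2 * base_fps
    # Assumed latency penalty: roughly half a real frame of extra delay
    # (estimates range from half to a full frame) plus the generation cost.
    added_latency_ms = 0.5 * real_frame_ms + gen_cost_ms
    return displayed_fps, added_latency_ms

for fps in (40, 60, 90):
    out_fps, extra_ms = frame_gen_estimate(fps)
    print(f"{fps} fps base -> ~{out_fps} fps displayed, ~{extra_ms:.1f} ms added latency")
```

The takeaway from this toy model is that the latency penalty shrinks as the base frame rate rises, which is why frame generation tends to feel better when the game is already running reasonably well before it's enabled.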

upscaling example
Would you believe this was rendered at 1280×720 before upscaling?

So with all this data in hand, the question becomes: should you play at native resolution, or make use of upscaling? Our take is this: if you have a GeForce or Arc GPU, and the game you're playing supports your vendor's upscaler, then you should probably at least be using the Quality setting. As long as you're not CPU-limited, it will give you significantly improved frame rates with very little loss of visual quality.

This isn't universal; in games with very long draw distances you may want to avoid using upscaling, because rendering at a lower resolution absolutely does reduce fine detail at a distance, even with DLSS or XeSS. Likewise, in games that already run at high frame rates on your system, you may simply prefer the cleaner resolve of native-resolution rendering, which is totally fair.

Despite all that, we mostly don't use smart upscalers in our testing here, and the reason is that the various algorithms don't give identical results. The visual difference is small, but very real, and as a result, benchmarking with these scalers enabled is not an apples-to-apples comparison. To avoid complications from trying to say things like "well, FSR2 needs [x] mode to hit the same image quality as DLSS at [y] preset", we simply focused on native resolutions. That doesn't mean you should do the same, though.
