Hackers Discover Wii U's Processor Design and Clock Speed
Now, Wii and PS3 hacker Hector Martin (aka Marcan) has answered some of these questions and raised a few others. According to him, the Wii U's CPU is a triple-core design clocked at 1.24GHz. Marcan identifies the base design as a PowerPC 750, which makes a great deal of sense. Nintendo used PowerPC 750-derived processors in both the GameCube and the Wii; retaining that architecture for the Wii U would simplify backwards compatibility and game programming.
So how similar is the new Wii U CPU (codename: Espresso) to Broadway (Wii) and Gekko (GameCube)? That's not yet clear. Broadway was essentially Gekko at a higher clock speed; IBM's high-level executive overviews of the two chips are word-for-word identical, save for different diagram locations. We know that Espresso's total die size is 32.76mm sq, compared to 156mm sq for the GPU. Broadway, on 90nm, was 18.9mm sq.
It's absolutely possible that Espresso is a triple-core, die-shrunk Broadway at a higher clock speed and with more cache. There have been rumors that the chip might incorporate out-of-order execution capabilities, but there's no definitive word on this yet. Marcan implies that the new chip is out-of-order, describing Espresso as "a saner core than the P4esque stuff in 360/PS3."
That's a problematic comparison. While it's true that Cell, NetBurst, and Xenon all shared certain design characteristics, such as a deep pipeline, there were enormous differences in how these processors handled multi-threading, instruction scheduling, and branch prediction. Rather than getting buried in a discussion of the differences (particularly with Cell, which has always been its own distinct animal), I'll just say that treating the comparison as "Wii U = Core 2 = Good" is a drastic oversimplification.
The Questionable Metric of Current Game Performance
One thing we can say is that the Wii U's CPU is significantly different from Cell or Xenon. Combine that with the fact that many of the game studios now developing for the Wii U may not have worked on the platform before, and it's suddenly a lot clearer why developers might be having trouble wringing maximum performance out of the core. The Xbox 360 and PS3 are, at this point, well-known quantities. The Wii U isn't.
What we know about the Wii U, at this point, is that its CPU is capable of executing three threads simultaneously (one per core). Espresso's IPC (instructions per clock) is likely higher than Xenon's or Cell's, but those chips' much higher 3.2GHz clock speeds will help compensate for their lower per-clock efficiency.
Into that mess, we toss in the GPU, which is reportedly clocked at 550MHz. Some have favored the Radeon HD 4000 series as a basis for the part; I still think a low-end Radeon 5000, like Redwood Pro, makes more sense. That GPU was built on 40nm, measured 104mm sq, clocked in at 650MHz, and had a 39W TDP. The gap between the Wii U GPU's 156mm sq die and Redwood Pro's 104mm sq would account for the 32MB of EDRAM cache we know the Wii U offers.
Visual comparisons between the three systems show the Wii U equaling -- but not generally exceeding -- the other, more established consoles.
A triple-core Broadway CPU at 1.24GHz would have a theoretical peak of 14.79GFLOPS. A Redwood-class GPU at 550MHz has a peak execution rate of roughly 350-430 GFLOPS, depending on stream processor count. Nintendo may have propped up a relatively weak CPU with considerably more GPU horsepower. That's another place where the Radeon 5000 family would be a considerable improvement: the 4000-series RV740 supported only OpenCL 1.0, while Redwood supports OpenCL 1.1.
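The arithmetic behind those peak figures can be sketched in a few lines. The per-cycle FLOP counts are assumptions for illustration, not numbers from the article: 4 FLOPs/cycle assumes the 750-family's paired-single fused multiply-add, and 2 FLOPs/cycle per Radeon stream processor assumes one multiply-add per lane; 320 and 400 are the stream processor counts of Redwood LE and Redwood Pro.

```python
# Back-of-the-envelope theoretical peaks for Espresso and a Redwood-class GPU.
# Assumed (not from the article): 4 single-precision FLOPs/cycle per 750-class
# core (paired-single FMA), 2 FLOPs/cycle per Radeon stream processor (FMA).

def cpu_peak_gflops(cores, clock_ghz, flops_per_cycle=4):
    """Peak GFLOPS = cores * clock (GHz) * FLOPs issued per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

def gpu_peak_gflops(stream_processors, clock_ghz, flops_per_cycle=2):
    """Peak GFLOPS = shader ALUs * clock (GHz) * FLOPs per ALU per cycle."""
    return stream_processors * clock_ghz * flops_per_cycle

# Espresso: 3 cores at 1.24GHz -- close to the article's 14.79GFLOPS figure,
# which implies a slightly different effective clock or issue rate.
print(round(cpu_peak_gflops(3, 1.24), 2))   # 14.88

# Redwood at 550MHz: the 320-400 SP spread brackets the 350-430 GFLOPS range.
print(round(gpu_peak_gflops(320, 0.55)))    # 352
print(round(gpu_peak_gflops(400, 0.55)))    # 440
```

The takeaway of the sketch is the roughly 25x gap between CPU and GPU peaks, which is what the "propped up a weak CPU with GPU horsepower" argument rests on.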
What all this means is that writing good games for the Wii U may well require developers to adopt new practices. It's absolutely fair to compare current Wii U games against the peak of the Xbox 360 / PS3 versions, but we suspect the platform has untapped potential at this early stage -- just as every other console always has.