IBM Launches Octal-Core POWER7; Releases Benchmark Results - HotHardware

The x86 architecture has increasingly dominated the server market over the past decade, but there's still a market for mainframe-class, big-iron servers. At present, Intel is challenging old guards Sun and IBM with a mixture of Nehalem-based Xeons and Itanium processors, with the octal-core Nehalem-EX waiting in the wings. IBM isn't waiting for Nehalem-EX or Intel's new Itanium to hit the market before taking action of its own; Big Blue launched its POWER7 architecture on Monday. At 567mm2 with 32MB of on-die L3 cache, the new CPU is something of a beast.



Each POWER7 chip is divided into eight cores, each with its own L2 cache. Each core can handle four threads, for a total of 32 threads in flight at any given time. The CPU is kept fed by one or two memory controllers (depending on the chip), with each controller handling four channels of DDR3-1066; IBM claims a total of 100GB/s of sustained memory bandwidth. The chip is designed to scale up to 32 sockets, for a total of 256 cores and 1,024 threads in flight simultaneously.

One of POWER7's most distinctive features is its enormous L3 cache. Rather than using conventional SRAM, IBM opted to build the L3 out of eDRAM, which allowed the company to design a much larger cache with a lower transistor count than it otherwise could have. As we previously noted, POWER7 packs 1.2B transistors, measures 567mm2, and carries 32MB of L3. By comparison, Intel's two-billion-transistor Tukwila (launched two days ago) looks comparatively stodgy. At present, the only Tukwila listed on Intel's website is the 65nm Itanium 9330, a quad-core/eight-thread processor at 1.46GHz. It's unclear how much cache Tukwila actually has; reports around the web specify 30MB, but Intel's product sheet on the 9330 currently lists 20MB of cache.
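The scaling figures above are straightforward multiplication; a quick sketch (variable names are ours, inputs taken from the article) shows how they line up:

```python
# Back-of-the-envelope check of the POWER7 scaling figures quoted above.
# All input numbers come straight from the article; names are ours.
CORES_PER_CHIP = 8
THREADS_PER_CORE = 4   # four threads per core (SMT)
MAX_SOCKETS = 32

threads_per_chip = CORES_PER_CHIP * THREADS_PER_CORE
total_cores = MAX_SOCKETS * CORES_PER_CHIP
total_threads = total_cores * THREADS_PER_CORE

print(threads_per_chip)  # 32 threads in flight per chip
print(total_cores)       # 256 cores in a full 32-socket system
print(total_threads)     # 1,024 hardware threads simultaneously
```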

POWER7 also compares favorably with the upcoming Nehalem-EX as far as transistor count and die complexity are concerned. There's no questioning the formidability of an octal-core Nehalem, but that CPU packs a whopping 2.3 billion transistors, 24MB of cache, and a die that could measure as large as 700mm2. We won't know exactly how these three architectures stack up against each other until more data is available, but POWER7 clearly packs a punch. Whether it's good enough to grab market share from Itanium, or to hold off the ever-present threat of x86 encroachment, remains to be seen. SPARC, for the record, is still technically in the fight; Sun is planning a 16-core/128-thread Niagara processor built on 40nm bulk silicon. Given the company's steady loss of market share and the recent Oracle acquisition, however, it's not clear whether Sun will continue to be a player in this market.

IBM has released SPEC benchmark data for certain POWER7 configurations; it's available here.
I don't know if I really see the advantages here. A CPU of this size would require significantly more energy to run, as well as more energy to cool, from what I'm gathering. I get that it runs 8 cores, but to what advantage? We are obviously talking about massive server systems here. If the 8-core chip uses more resources than a comparable (performance-wise) Intel setup, and the Intel setup comes in under some cost threshold, then why bother?

I do see that it runs above 4GHz under warranty, which is interesting. But unless, as a 64-bit server part, it can outperform an Intel chip (or a Sun or AMD unit, for that matter) while using fewer resources, what's the point? I could see it if it were a 128-bit chip or something, but it's not. Big iron is losing its footing to some degree over concerns like performance per watt, as well as other factors such as power draw and outside resources.

If I have 16 to 128 servers, how much are they going to cost me to run, house, and cool? In today's information-flooded world that is a very valid, and very common, concern. I have worked in data warehouses all over the Southeast; they hold that many units per 20- or 30-foot room, and they are gargantuan buildings. Of course, these chips may be comparably energy efficient, and moving from 3.6GHz to 4.1GHz across a stack of ten or more CPUs is a considerable throughput advantage. But does that justify the implementation and operating costs as the stack grows, at least at some point? That is how the buyers of a CPU like this are going to look at it!
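The run-and-cool question above is easy to make concrete. Here's a rough annual power-cost sketch; every figure in it (per-server wattage, PUE, electricity rate) is an illustrative assumption of ours, not from the article:

```python
# Rough annual electricity-cost estimate for a fleet of servers,
# including cooling overhead. All constants are assumed for illustration.
SERVER_WATTS = 500     # assumed average draw per server under load
PUE = 1.8              # assumed power usage effectiveness (cooling/overhead)
RATE_PER_KWH = 0.10    # assumed electricity price in USD per kWh
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(num_servers: int) -> float:
    """Yearly electricity cost in USD, cooling included."""
    kilowatts = num_servers * SERVER_WATTS * PUE / 1000
    return kilowatts * HOURS_PER_YEAR * RATE_PER_KWH

# The 16-to-128-server range mentioned above:
for n in (16, 128):
    print(n, "servers:", round(annual_power_cost(n)), "USD/year")
```

Even at these modest assumptions the yearly bill runs from five to six figures, which is why perf-per-watt dominates the purchasing math at this scale.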

Well, with its ability to support 32 threads, and with the movement toward virtualization, this chip can be a very formidable opponent. In theory you can run 32 virtual servers off one chip.

You might ask what's the point of that?

Well, previously... if you needed 32 servers to do many different things, you had to have one chip per server... and then another motherboard... and then more RAM... and then another HDD, and a rack, and a KVM, and so on. It gets very expensive... and when it's all said and done you run monitoring tools and see that only 10-20% of the hardware is actually being utilized. But it was necessary, because you had to have those separate servers. With virtualization you can condense a whole warehouse of server racks into just one rack, but to do that you need a chip that can handle all of those threads. If a single array of CPUs can handle 1,024 threads, that's pretty friggin' impressive, and it's going to be very efficient. If it can virtualize more servers than the Intel, AMD, and Sun chips, then it will be both a cost-effective and an energy-efficient/eco-friendly solution to implement.
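The consolidation argument above can be sketched with simple arithmetic. The utilization figures below are assumptions (the 10-20% range comes from the comment; the 60% ceiling and per-VM sizing are ours), while the 32-threads-per-chip number is from the article:

```python
# Consolidation sketch: if standalone servers average ~15% CPU use,
# how few POWER7 chips could host them all as VMs?
import math

AVG_UTILIZATION = 0.15     # assumed average load of a standalone server
TARGET_UTILIZATION = 0.60  # assumed safe ceiling on the consolidated host
THREADS_PER_CHIP = 32      # from the article: 8 cores x 4 threads

def chips_needed(num_servers: int, threads_per_vm: int = 1) -> int:
    """Chips needed to absorb the aggregate load, capacity permitting."""
    # Load-based bound: total demand vs. what one chip can safely carry.
    load_chips = num_servers * AVG_UTILIZATION / TARGET_UTILIZATION
    # Capacity bound: at least one hardware thread per VM.
    capacity_chips = num_servers * threads_per_vm / THREADS_PER_CHIP
    return math.ceil(max(load_chips, capacity_chips))

print(chips_needed(32))  # the 32-server example from the comment
```

Under these assumptions, the 32 lightly loaded servers collapse onto a handful of chips rather than 32 separate boxes, which is the whole economic case the comment is making.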

To further back that up... I took a quick look at IBM's website, and they claim support for up to 1,000 virtual machines running AIX, IBM i, and Linux.

That's a rather large number of virtual machines.
