AMD Demonstrates Next Generation 28nm Graphics Processor


Top 10 Contributor
Posts 26,407
Points 1,192,920
Joined: Sep 2007
ForumsAdministrator
News Posted: Wed, Oct 5 2011 6:37 PM

At the Fusion 2011 event taking place in Taipei, Taiwan, AMD’s Corporate Vice President and General Manager of the Graphics Division, Matt Skynner, showed off a working, next-gen mobile GPU, manufactured using TSMC’s 28nm process node.

To anyone who follows AMD's and NVIDIA's typical GPU cadence, it should come as no surprise that 28nm GPUs are on the horizon, but seeing working silicon in action in a large public venue, running an actual game, is obviously a good sign.


Next-Gen AMD 28nm Mobile GPU In Action

Although they were just shown publicly today, AMD has actually had 28nm GPU samples up and running for a while now. In fact, the pictures you see here are of one of AMD's mobile reference platforms, and the GPU sitting under that heatsink is a 28nm part. We took these pics almost a month ago.

We’ll have more news regarding AMD’s 28nm GPUs soon enough. For now, enjoy the pictures. And for your reading pleasure, the full press release is below.


AMD Demonstrates Next Generation 28nm Graphics Processor at Fusion 2011

TAIPEI, Taiwan – October 5, 2011 – At Fusion 2011, AMD (NYSE:AMD) today demonstrated its next generation graphics processor, based on the cutting-edge 28 nm process technology. The demonstration was delivered by Corporate Vice President and General Manager of AMD’s Graphics Division, Matt Skynner, as part of his keynote titled, “Enabling the Best Visual Experience.” Skynner demonstrated a notebook-based version of AMD’s 28 nm next-generation graphics processor delivering a smooth, high-resolution game experience while playing BioWare’s popular role-playing title, Dragon Age 2.

“AMD strives to be at the forefront of every key inflection point in graphics technology, as demonstrated by our leadership in everything from process node transitions, to adoption of the latest graphics memory,” said Skynner. “Our pace-setting transition to the 28nm process node, coupled with new innovations in our underlying graphics architecture, is already generating excitement among the ODM community here in Taipei this week.”
 

Top 50 Contributor
Posts 3,236
Points 37,910
Joined: Mar 2010
AKwyn replied on Wed, Oct 5 2011 6:47 PM

Wow... Those 28nm GPUs are sweet looking. And it certainly makes AMD look more confident in its ability to produce the next generation of graphics cards, one that I'm guessing will be even faster than the 6950 I bought a while back...

If it does turn out to be faster, that sucks for me, but it's also a good thing, since its imminent release will bring 6950 prices down to a point where I can buy a second one, plus an 800W power supply to go with it.

 

"The future starts with you; now start posting more!"

Top 100 Contributor
Posts 1,120
Points 12,940
Joined: Jun 2011
Location: East Coast

"The next generation GPU battle is going to be epic. Nvidia will hopefully have support for more than two monitors natively. And hope AMD challenges Nvidia on the $500+ range. Although , the 6970 frame rates difference between it and the 580 is no that far behind, so it represent a great value. Anyhow, looking forward to 2012 GPU line ups."

Top 50 Contributor
Posts 3,236
Points 37,910
Joined: Mar 2010
AKwyn replied on Wed, Oct 5 2011 7:18 PM

Wheatley:
"The next generation GPU battle is going to be epic. Nvidia will hopefully have support for more than two monitors natively.

Meh. NVIDIA can't get their stuff together when it comes to making 28nm GPUs. I mean, ATI was first with DX11 cards, and AMD has certainly been working on this for a long time. We waited 3 months for Fermi and look how that came out: a disappointment. Only with a redesign of Fermi was it able to reclaim the performance crown, while looking more appealing to consumers with more efficient performance and power numbers.

NVIDIA is going to have to work really, really hard to regain my confidence. 28nm for NVIDIA should not mean hot, inefficient, and power-hungry like Fermi was. With the 3 months they've given themselves, NVIDIA must not disappoint.

 

"The future starts with you; now start posting more!"

Top 50 Contributor
Posts 2,865
Points 29,645
Joined: Mar 2011
Location: United States, Connecticut

That is an interesting board, with the slot on the side that the card plugs into. It almost looks like it should be in a small form factor server.

Not Ranked
Posts 78
Points 510
Joined: Apr 2011

I do have to say I'm leaning toward ATI more every day now, seeing how AMD turned them around. Growing up I was an NVIDIA fanboy, back when ATI kept a monopoly on their cards and charged their fans a ton; their business plan was the same as Apple's. But it's nice to see them develop the tech and charge a fee, but allow other companies to actually manufacture the cards, putting them in a more competitive price range.

Top 10 Contributor
Posts 8,694
Points 104,420
Joined: Apr 2009
Location: Shenandoah Valley, Virginia
MembershipAdministrator
Moderator

The important question is... is this a PCIe 3.0 part?

Intel boards are ready and waiting for PCIe 3.0 video cards as we speak. You can buy them now...

Where is support for next-gen PCIe 3.0 graphics on AMD chipsets? And ~where~ are my damn PCIe 3.0 video cards? :)

Dogs are great judges of character, and if your dog doesn't like somebody being around, you shouldn't trust them.

Not Ranked
Posts 23
Points 265
Joined: Mar 2008
turtle replied on Wed, Oct 5 2011 9:36 PM
Yes, it is 'supposedly' PCIe 3.0. Cypress was supposed to have it included, but it was cut to reduce die size. I would assume the transistors will be allocated in the 28nm products.

As for AMD's chipsets/motherboards... that does seem quite enigmatic, and it almost makes the inclusion for GPUs more of a gimmick in the checkbox battle with nVIDIA. One can only assume AMD knows the bandwidth of PCIe 3.0 will not be crucial for some time, which is probably a safe bet.

It's also important to remember that Intel has 16 physical lanes for graphics (2.0 currently, 3.0 supposedly for Ivy Bridge's chipset), which are split into two x8 slots when both slots are occupied. AMD's chipset has 32 2.0 lanes, which can run as either two true x16 slots or four x8. So, in essence, one could say AMD already has similar bandwidth with 2.0 slots to what Intel will have with 3.0.
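
For anyone who wants to sanity-check that lane math, here's a rough sketch using the standard per-lane rates (PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b). Figures are approximate and one direction only:

    # Approximate one-direction PCIe bandwidth (bytes/s per lane):
    # PCIe 2.0: 5 GT/s * 8/10 encoding / 8 bits    =  500 MB/s per lane
    # PCIe 3.0: 8 GT/s * 128/130 encoding / 8 bits = ~985 MB/s per lane
    PER_LANE = {2: 5e9 * (8 / 10) / 8, 3: 8e9 * (128 / 130) / 8}

    def bw_gb_s(lanes, gen):
        """Approximate one-direction bandwidth in GB/s for a PCIe link."""
        return lanes * PER_LANE[gen] / 1e9

    print(bw_gb_s(16, 2))  # x16 PCIe 2.0           -> 8.0 GB/s
    print(bw_gb_s(8, 3))   # x8 PCIe 3.0            -> ~7.9 GB/s (nearly identical)
    print(bw_gb_s(32, 2))  # AMD: 32 lanes of 2.0   -> 16.0 GB/s
    print(bw_gb_s(16, 3))  # Intel: 16 lanes of 3.0 -> ~15.8 GB/s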

Sidenote: this has probably been mentioned somewhere before regarding this photo, but the HSF contact gives away the die size of the chip, doesn't it (offset to the right)? It seems fairly large. If that's a 40mm fan, which I think it is, the contact appears roughly the size of the motor, or a little smaller (300-400mm2?). Probably what most expected for the high-end. And it could be running at low voltage/speeds, and/or on early drivers, so it's not indicative of anything whatsoever; it's just showing working hardware. But I thought I'd throw that out there... it doesn't look like a chip that would typically be found in a laptop, if that assumption is correct. For all I know, the fan could be smaller than 40mm.
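
Putting rough numbers on that guess (every figure here is an assumption: a 40mm fan frame with a hub about half its width):

    # Hypothetical die-size estimate from the photo; every figure is a guess.
    fan_frame_mm = 40.0            # assumed fan frame width
    hub_mm = fan_frame_mm / 2      # the hub/motor is typically about half the frame
    for side_mm in (17.5, hub_mm): # contact patch "a little smaller" than the hub
        print(f"{side_mm}mm square -> {side_mm ** 2:.0f} mm^2")
    # 17.5mm square -> 306 mm^2
    # 20.0mm square -> 400 mm^2  (the 300-400mm2 ballpark above)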

Top 50 Contributor
Posts 2,361
Points 48,680
Joined: Apr 2000
Location: United States, Connecticut
ForumsAdministrator
MembershipAdministrator
Marco C replied on Wed, Oct 5 2011 10:56 PM

@Turtle - I think you're looking too deeply into what the size of the cooler may or may not mean. I have seen that exact cooler on two or three generations of mobile GPU (if you look at the shroud, it still has an ATI logo). And on a couple of desktop prototypes as well. It's entirely more likely some guys in the QA and perf labs just slapped whatever coolers were nearby onto the MXM module, just to get the thing tested. :) Trust me on that one!

Marco Chiappetta
Managing Editor @ HotHardware.com

Follow Marco on Twitter

Not Ranked
Posts 2
Points 35
Joined: Oct 2011

@TaylorKarras

The specs are out on the Radeon 7000 series. So they can be "looked up" and comparisons made.

For example: AMD rates the 6950 at 2253 gigaflops of graphics processing power, with max power consumption of 200 watts. The (future) Radeon 7850 will be about 6% more powerful (2394 gigaflops) but use only 90 watts (max), so that would be a good efficiency option. But if you want more graphical power, then you may be interested in the (future) Radeon 7950. It will be about 53% more powerful than your card (2253 vs. 3456 gigaflops) but burn only 150 watts.
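
A quick sketch of where those percentages come from, using the rumored figures quoted above:

    # Relative throughput and efficiency from the rumored specs quoted above.
    cards = {
        "Radeon 6950":          (2253, 200),  # (gigaflops, max watts)
        "Radeon 7850 (future)": (2394, 90),
        "Radeon 7950 (future)": (3456, 150),
    }
    base = cards["Radeon 6950"][0]
    for name, (gflops, watts) in cards.items():
        print(f"{name}: {100 * (gflops / base - 1):+.0f}% vs 6950, "
              f"{gflops / watts:.1f} GFLOPS per watt")
    # Radeon 6950:          +0% vs 6950, 11.3 GFLOPS per watt
    # Radeon 7850 (future): +6% vs 6950, 26.6 GFLOPS per watt
    # Radeon 7950 (future): +53% vs 6950, 23.0 GFLOPS per watt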

In the near future there won't really be a need for that 800 watt PSU, unless you want to match the graphical power of the future 7000 series using a sledgehammer approach (CrossFire) and burn/waste hundreds of watts. Better to sell the 6950 to someone wanting to go CrossFire than to be the buyer in this case. Since 32nm turned out to be a flop, the jump from 40nm to 28nm is substantial.

Also, I wouldn't count on nVidia to repeat the Fermi mistakes. But the cards should be very close. After all, TSMC (Taiwan Semiconductor Manufacturing Company) will be building both the Radeon 7000 series and the nVidia 600 series.

Oh, I thought I'd better mention the (future) Radeon 7970: 4096 gigaflops and 190 watts. All the cards mentioned are slated to have 2 gigs of memory. And yes, they're PCIe 3.0 and backwards compatible with PCIe 2.1.

Top 200 Contributor
Posts 467
Points 3,710
Joined: Feb 2011

Hopefully yields will improve, but this is good to see.

Top 50 Contributor
Posts 2,383
Points 31,065
Joined: Nov 2010
Location: Crystal Lake, IL
rrplay replied on Wed, Oct 5 2011 11:26 PM

Marco C:

...on a couple of desktop prototypes as well. It's entirely more likely some guys in the QA and perf labs just slapped whatever coolers were nearby onto the MXM module, just to get the thing tested. :) Trust me on that one!

Makes perfect sense! I can just see the guys: "Hey, let's slap this on! Looks close enough; let's see what we get."

Yep, works for me!

"Don't Panic ! 'cause HH got's your back!"

Top 50 Contributor
Posts 3,236
Points 37,910
Joined: Mar 2010
AKwyn replied on Thu, Oct 6 2011 12:39 AM

Specifications7:
In the near future there won't really be a need for that 800 watt PSU, unless you want to match the graphical power of the future 7000 series using a sledgehammer approach (CrossFire) and burn/waste hundreds of watts. Better to sell the 6950 to someone wanting to go CrossFire than to be the buyer in this case. Since 32nm turned out to be a flop, the jump from 40nm to 28nm is substantial.

I have a 45nm CPU and a 40nm GPU, and most of my hardware is not new enough to take advantage of the substantial jumps in power that these new technologies are offering. So I'm just going to stick with what I have, wait until these new technologies are established and until I have the money to build the system I want, and then buy the dream system I've always wanted, which will be either an Ivy Bridge or a Bulldozer, depending on which processor does well for me.

 

"The future starts with you; now start posting more!"

Top 150 Contributor
Posts 754
Points 8,520
Joined: Mar 2011
Location: Phoenix
LBowen replied on Thu, Oct 6 2011 11:18 AM

This is exciting for what it means for the future, but thanks to HotHardware I won't have to upgrade for a long time :)

Nice to know that when I do, prices will have gone down with the introduction of new tech.

"I have the power!!"

Top 100 Contributor
Posts 1,076
Points 11,645
Joined: Jul 2009
Joel H replied on Thu, Oct 6 2011 3:57 PM

Specifications7,

32nm wasn't a flop. AMD is using 32nm SOI for its CPUs and 28nm bulk silicon for its GPUs. GlobalFoundries initially intended to deploy a 32nm bulk silicon node, but opted to skip it for what would normally be called a half-node shrink (28nm).
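
For a rough sense of scale of those node steps (idealized scaling; real designs never shrink perfectly):

    # Idealized area scaling: area goes with the square of the linear feature size.
    for old, new in ((40, 32), (32, 28), (40, 28)):
        print(f"{old}nm -> {new}nm: area scales by {(new / old) ** 2:.2f}x")
    # 40nm -> 32nm: 0.64x (a full-node step)
    # 32nm -> 28nm: 0.77x (the half-node step)
    # 40nm -> 28nm: 0.49x (the combined jump these GPUs are making)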
