Intel Expects Strong Demand For Sandy Bridge; Hints at Early Launch

This post has 9 Replies | 0 Followers

Top 10 Contributor
Posts 25,825
Points 1,167,165
Joined: Sep 2007
ForumsAdministrator
News Posted: Wed, Jul 14 2010 5:10 PM
One additional tidbit of information from Intel's recent conference call offers a tantalizing hint about the upcoming Core i7 architecture, codenamed Sandy Bridge. Officially, Intel isn't expected to launch SB until the end of Q4, but the chip is ramping well enough that Santa Clara might opt to pull the launch forward slightly and ship for revenue earlier.

For those of you who aren't up on Intel's latest codenames, Sandy Bridge is a "tock" in Intel's tick-tock model. Intel splits its processor releases along these lines: a 'tick' refers to the launch of a new process technology, while a 'tock' is a new CPU design. In this case, Intel isn't focusing on clock speeds (early projections have Sandy Bridge shipping at 2.8 - 3.4GHz without Turbo Boost), but on increased execution and power efficiency. The CPU's major new feature that addresses both concerns is its support for AVX (Advanced Vector Extensions). Think of AVX as a standard SIMD instruction set pumped full of monster-creating radiation and Vitamin X. Intel is predicting that a CPU making full use of AVX instructions could be up to 4x faster in double-precision FPU workloads compared to traditional x87 code. Even if we toss that number out as a theoretical maximum, AVX could still easily deliver a sizeable boost to FPU performance.
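To put the 4x figure in context: a 256-bit AVX register holds four 64-bit doubles, so one vector instruction can retire the work of four scalar operations. The sketch below is a plain-Python model of that data-parallel idea, purely illustrative; it is not actual AVX code, and the names are our own.

```python
# Illustrative model of AVX's theoretical 4x double-precision speedup.
# A 256-bit AVX register holds 256 / 64 = 4 doubles, so one vector add
# covers four scalar additions' worth of work per "instruction".

REGISTER_BITS = 256   # AVX ymm register width
DOUBLE_BITS = 64      # IEEE 754 double precision

lanes = REGISTER_BITS // DOUBLE_BITS  # 4 doubles per vector instruction

def scalar_add(a, b):
    """One element per 'instruction' -- the x87-style scalar model."""
    return [x + y for x, y in zip(a, b)]

def vector_add(a, b, width=lanes):
    """`width` elements per 'instruction' -- the AVX-style vector model."""
    out = []
    for i in range(0, len(a), width):
        # one simulated vector instruction covers `width` lanes at once
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [0.5] * 8
assert scalar_add(a, b) == vector_add(a, b)
# 8 elements: 8 scalar "instructions" vs 8 / 4 = 2 vector ones -> the 4x ceiling
```

Real workloads rarely hit that ceiling, which is why the article treats 4x as a theoretical maximum.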



Other changes include the fact that Sandy Bridge-derived CPUs will include the GPU on-die (as opposed to on-package, as with the current lineup of Clarkdale/Arrandale processors). Cache sizes will stay the same (256K L2 / 8MB L3), but in this case the L3 cache will actually be shared with the integrated GPU. How much of a performance boost we can expect from this is unclear; even if the GPU had priority over the L3 cache, 8MB of L3 makes for an awfully small frame buffer. It's also possible that Intel will introduce other improvements to how the CPU passes data between its various cores. Intel isn't talking much about its plans at this point, but it's possible the CPU manufacturer will launch Sandy Bridge a bit early to take the wind squarely out of AMD's Llano. Unlike Nehalem, which first launched at the high end of the midrange and then graduated to lower price segments very slowly, Sandy Bridge should hit multiple price points as soon as it drops. With Llano, AMD is targeting both high frequency and low power consumption; Intel's SB architecture could make it even harder for Sunnyvale to hit its performance goals, as if existing Clarkdale/Arrandale processors weren't tough enough to beat.
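For a sense of scale on the shared-L3 point: a single 32-bit frame buffer at common resolutions already approaches the whole 8MB cache. The figures below are simple arithmetic, not Intel specifications.

```python
# How much of an 8MB L3 would a single 32bpp frame buffer consume?
L3_BYTES = 8 * 1024 * 1024   # Sandy Bridge's shared L3
BYTES_PER_PIXEL = 4          # 32-bit color

for width, height in [(1280, 720), (1680, 1050), (1920, 1080)]:
    fb = width * height * BYTES_PER_PIXEL
    print(f"{width}x{height}: {fb / 2**20:.1f} MiB "
          f"({100 * fb / L3_BYTES:.0f}% of L3)")
# 1920x1080 alone is ~7.9 MiB -- nearly the entire cache for one frame,
# before the CPU cores get any of it.
```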

We expect AMD's Llano will target relatively high frequencies on both GPU and CPU (while maintaining reasonable performance). Since Llano is designed to be flexible (2-4 cores on-die, in addition to the GPU), it's logical to think Sunnyvale will continue its tactic of challenging Intel's superior IPC (instructions per clock cycle) with additional cores. This philosophy has paid off fairly well for the Phenom II X6 product series; a lineup of quad-core Llano parts at 3GHz priced against Intel's dual-core chips could keep AMD squarely in the game even if it doesn't put the CPU manufacturer over the top.
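AMD's cores-versus-IPC tradeoff can be framed with the usual back-of-the-envelope model: throughput is roughly cores x IPC x clock. The IPC values below are invented placeholders for illustration, not measured figures for either chip.

```python
# Toy throughput model: cores x IPC x clock, for perfectly parallel work.
# IPC numbers here are made up for illustration, not benchmark results.

def peak_throughput(cores, ipc, ghz):
    """Billions of instructions per second, assuming ideal scaling."""
    return cores * ipc * ghz

amd_quad = peak_throughput(cores=4, ipc=1.0, ghz=3.0)    # hypothetical Llano quad
intel_dual = peak_throughput(cores=2, ipc=1.5, ghz=3.0)  # hypothetical SB dual
print(amd_quad, intel_dual)  # 12.0 vs 9.0 -- more cores can offset lower IPC
# ...but only when the workload actually scales across all four cores.
```

That last caveat is the catch: single-threaded software sees only the per-core numbers, which is where Intel's IPC lead shows.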
Top 500 Contributor
Posts 283
Points 2,780
Joined: Sep 2004
Location: Canada, Nunavut
Chainzsaw replied on Wed, Jul 14 2010 5:46 PM

This looks like it could be a great piece of laptop hardware for people who don't do heavy tasks. Having the GPU on the CPU die should translate into pretty good laptop battery life. Not to mention it should be much easier to cool laptops that need only one decent cooler, thus reducing weight.

If there is one thing AMD needs to catch up on, it's laptop architecture. They have been trailing since the Core launch.

Current computer:

Asus G73JH-A1

Core i7 720QM

Mobility Radeon 5870 1GBVRAM

8GB Ram

1 X OCZ Vertex 2 120GB SSD

1 X 500GB HDD

17.3" FHD (1080p) Panel

Blu Ray Drive (reader)

HDMI Out

Top 50 Contributor
Posts 3,236
Points 37,910
Joined: Mar 2010
AKwyn replied on Wed, Jul 14 2010 6:40 PM

Please, the only thing I'm waiting for is whether or not Sandy Bridge will increase my gaming potential and decrease the bottleneck for DX11 graphics cards.

 

"The future starts with you; now start posting more!"

Top 50 Contributor
Posts 3,093
Points 38,115
Joined: Aug 2003
Location: Texas
acarzt replied on Wed, Jul 14 2010 8:30 PM

What bottleneck?

Will there also be a version without the on-die GPU? Really I could live without it...

Top 10 Contributor
Posts 4,820
Points 45,680
Joined: Feb 2008
Location: Kennesaw
rapid1 replied on Thu, Jul 15 2010 9:45 AM

Yeah acarzt; as far as GPUs go, on-die graphics, whether on the motherboard or in these upcoming processors, has always seemed like a lackluster implementation to me. The one thing about these parts that really seems strange is pairing different process technologies for the CPU and GPU; I say that because of both the heat and the energy usage in that scenario.

As far as a bottleneck on a DX11 card goes, that's largely a myth. There may be some, but since the DX10/10.1 generation it has been minimized significantly by both the added cores and the faster memory on the PCI Express bus. I know that, personally, I would not buy any on-die setup, except maybe in a laptop that would see no major GPU activity.

I could, however, see it being a desirable combination if these integrated GPUs could work in SLI or CrossFire with discrete hardware in a laptop. On a desktop it seems pointless. Of course, we here are generally on the more demanding side of computer users, which I am sure weighs heavily on my opinion. I also consider this something of a sham job on the less educated or general computer user to a point, though I can see it being an advantage for them as well, provided it offers seamless (auto-switching and/or combined) usage.

As far as that goes, unless Intel starts co-developing with NVIDIA or something, I see no reality in that picture. ATI, of course, can do it internally, so AMD would have the greater strength there, I guess.

OS:Win 7 Ultimate 64-bit
MB:ASUS Z87C
CPU:Intel(R) Core(TM) i7 4770 ***
GPU:Geforce GTX 770 4GB
Mem:***ingston 16384MB RAM
Top 10 Contributor
Posts 8,572
Points 103,130
Joined: Apr 2009
Location: Shenandoah Valley, Virginia
MembershipAdministrator
Moderator
realneil replied on Thu, Jul 15 2010 1:37 PM

Someone I know, who knows, thinks that any bottleneck having to do with DX11 lies in the way it's implemented by those who write the software and games that use it. Developers' code just hasn't caught up to advances in the technology so far, and they haven't written to all of its capabilities, according to him. He's a fart smeller (smart feller) and I believe what he says.

Dogs are great judges of character, and if your dog doesn't like somebody being around, you shouldn't trust them.

Top 100 Contributor
Posts 1,070
Points 11,570
Joined: Jul 2009
Joel H replied on Thu, Jul 15 2010 1:47 PM

Chainz,

Expect Llano to improve the AMD situation in mobile vs Intel, the same way the shift to 45nm Turion parts did. It made a definite difference, but it didn't really give AMD battery-life parity. AMD will counter the difference with price differences and (probably) improved GPU performance/quality.

TaylorKarras:

The *only* time you see a bottleneck in DX11 that's processor-related is if you have an absolutely top-end GPU setup. If you've got dual 5970s, tri-5870s, or three GTX 480s, then maybe you see a CPU barrier in real-world tests. If this is actually occurring, the smart thing to do is turn on AA / AF. 4x SSAA will bring even a pair of SLI GTX 480s running a 4.2GHz Gulftown to 50 fps or so in modern DX11 games.
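The SSAA suggestion above comes down to pixel counts: 4x supersampling renders four times the pixels before downsampling, pushing the load back onto the GPU. Simple arithmetic, not benchmark data:

```python
# 4x SSAA renders the scene at 2x width and 2x height, then downsamples,
# so the GPU shades 4x the pixels -- moving the bottleneck off the CPU.
def ssaa_pixels(width, height, factor=4):
    return width * height * factor

native = 1920 * 1080                 # 2,073,600 pixels per frame
with_ssaa = ssaa_pixels(1920, 1080)  # 8,294,400 pixels per frame
print(with_ssaa // native)           # 4x the shading work per frame
```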

I'd expect AMD's Hybrid technology (onboard GPU + expansion slot) to pop up again here.

Top 50 Contributor
Posts 3,093
Points 38,115
Joined: Aug 2003
Location: Texas
acarzt replied on Thu, Jul 15 2010 6:53 PM

My thoughts exactly, Joel. In all of the benchmarks I've seen, I did not see any CPU bottlenecks :-P

Not Ranked
Posts 1
Points 20
Joined: Jul 2010
TDigitalis replied on Fri, Jul 16 2010 12:32 PM

If you want to see what Sandy Bridge CPUs can do and you think visiting the IDF would be a good idea if it weren’t so expensive, you can always try to win tickets, flights and hotel accommodation by registering at Intel’s IT Galaxy, here http://itgalaxy.intel.co.uk/idf2010competition/default.aspx?iid=IT_Galaxy-016

Top 50 Contributor
Posts 3,093
Points 38,115
Joined: Aug 2003
Location: Texas
acarzt replied on Sat, Jul 17 2010 6:00 AM

lol.. nice plug.

Those types of conferences tend to be pretty boring... I'd rather sit at home and wait for some other poor sap to weed through the boring stuff and provide us the juicy goodness :-D
