Items tagged with Larrabee

New details on Intel's upcoming 14nm Xeon Phi (codenamed Knights Landing) suggest that the chip giant is targeting a huge increase in performance, throughput, and total TFLOP count with the next-gen MIC (Many Integrated Core) card. Knights Landing will be the first ground-up redesign of Intel's MIC architecture -- the original Knights Ferry card was a repurposed Larrabee GPU, while the current Knights Corner-based MIC still has texture units integrated on-die, left over from its GPU roots. RealWorldTech has published an exposé on the upcoming architecture, blending what we know of the new design with some intelligent speculation about its overall structure and capabilities. Knights Landing will... Read more...
At the International Supercomputing Conference today, Intel announced that Knights Corner, the company's first commercial Many Integrated Core (MIC) product, will ship in 2012. The descendant of the processor formerly known as Larrabee also gets a new brand name -- Xeon Phi. The idea behind Intel's new push is that the highly efficient Xeon E5 architecture (eight-core Sandy Bridge on 32nm) fuels the basic x86 cluster, while the Many Integrated Core CPUs that grew out of the failed Larrabee GPU offer unparalleled performance scaling and break new ground. The challenges Intel is trying to surmount are considerable. We've successfully pushed from teraflops to petaflops,... Read more...
Nvidia isn't happy with what it sees as the free pass Intel's upcoming Many Integrated Core (MIC) architecture has gotten on the software front, and it's taken to the blogosphere to challenge it. The post begins with a lengthy discussion of what Nvidia is calling its "hybrid architecture," in which a CPU and GPU get together for great fun and massive execution of properly distributed workloads. The post is conveniently timed to land just before the Texas Advanced Computing Center's (TACC) joint symposium with Intel on highly parallel computing, which kicks off next week. What Nvidia takes issue with, according to the blog, is the idea that using an x86-compatible product like Knights Corner,... Read more...
At the supercomputing conference SC2011 today, Intel offered up performance details of its upcoming Xeon E5 processors and demoed its Knights Corner many integrated core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Santa Clara has been shipping the new chips to "a small number of cloud and HPC customers" since September. The new E5 family is based on the same core as the 3960X Intel launched yesterday, but the company has been surprisingly slow to ramp the CPUs for mass production. Rajeeb Hazra, general manager of the Intel Datacenter and Connected Systems Group, stated that demand for the new chips was stronger than anything Intel had seen before.... Read more...
Nvidia and HP have developed a limited edition GPU Starter Kit meant to provide a drop-shipped means for anyone interested in developing for HPC applications. The term 'starter kit' is very nearly a misnomer, as the package deal provides a system more than sufficient to get the ball rolling. The system contains eight ProLiant SL390 G7 servers packed full of 24 M2070 GPUs and 16 CPUs, and it's preconfigured with CUDA 4.0. The servers, presumably loaded with quad-cores, offer a respectable 32 cores of additional CPU power in addition to the copious amounts of GPU performance. The M2070 GPU that's included in the package is a Fermi-based part with 6GB of RAM per GPU. According to Nvidia, the $99,000... Read more...
After Intel canceled Larrabee and announced it would repurpose the project for high-performance computing, little was said of what would happen to the company's various gaming-related IPs. It's therefore somewhat surprising to hear that Havok, the physics SDK developer Intel bought several years ago, has recently acquired Trinigy and that company's Vision Engine. The Vision Engine is a cross-platform development environment that supports Windows (DX9-11), the Xbox 360, PS3, Wii, and the upcoming PlayStation Vita; iOS and Android support are both supposedly coming soon. The company claims that the engine is optimized to take advantage of multithreading on both x86 and non-x86 processors and includes... Read more...
Larrabee, Intel's once-vaunted, next-generation graphics card died years ago, but the CPU technology behind the would-be graphics card has lived on. Intel discussed the future of MIC/Knights Corner today. After Larrabee was officially canceled, Intel repurposed the design and seeded development kits to appropriate market segments. MIC cards won't start shipping until the 22nm Knights Corner chip is launched, but even the Knights Ferry prototypes offer tantalizing hints at what future performance might resemble. Like Larrabee, Knights Corner (and future MIC products in general) utilizes cores based on Intel's original Pentium architecture (P54C). Modifications include complete cache coherency,... Read more...
Six months ago, we covered a story in which Nvidia's chief scientist, Bill Dally, made a number of sweeping claims regarding the superiority of GPUs. Six months later he's again attacking traditional microprocessors with another broad series of accusations. As before, in our opinion, he uses far too broad a brush. Dally's basic claim is that modern CPUs are held back by legacy design. That's not particularly controversial, but he doesn't stop there. Referring to modern CPUs, Dally says: They have branch predictors that predict a branch every cycle whether the program branches or not -- that burns gobs of power. They reorder instructions to hide memory latency. That burns a lot of power. They carry... Read more...
If you're a fan of GPGPU computing, this is turning out to be an interesting week. At SC10 in New Orleans, Intel has been demoing and discussing its Knights Ferry development platform. Knights Ferry, which Intel refers to as a MIC (Many Integrated Core) platform, is the phoenix rising from the ashes of Larrabee. Future MIC products (Knights Ferry is a development prototype; the first commercial product will be called Knights Corner) will mesh x86 compatibility with a level of parallelism typically found only in cluster nodes. Intel's Knights Ferry contains 32 independent x86 cores with quad HyperThreading, fits into a PCIe 2.0 slot, and offers up to 2GB of GDDR5 memory per card.... Read more...
When the Federal Trade Commission (FTC) settled its investigation of Intel, one of the stipulations of the agreement was that Intel would continue to support the PCI Express standard for the next six years. Intel agreed to all the FTC's demands (without actually admitting that it did anything wrong), but Intel's upcoming Oak Trail Atom platform presented something of a conundrum. Oak Trail was finalized long before the FTC and Intel began negotiating, which means Santa Clara could have been banned from shipping the platform. The FTC and Intel have jointly announced an agreement covering Oak Trail that allows Intel to sell the platform without adding PCIe support—for now. Come 2013, all... Read more...
At the GPU Technology Conference today, the CEO of NVIDIA, Jen-Hsun Huang, unveiled a new CUDA initiative, dubbed CUDA-x86. As the name implies, the new framework will allow developers to write CUDA code natively for x86. Don't confuse this announcement with the PhysX issues we discussed last month—when we spoke to NVIDIA back then we were told that certain legacy performance issues would be addressed in the next major version of the PhysX SDK. Porting CUDA to x86 is a smart move for NVIDIA given Intel's own intentions towards the high performance computing (HPC) market. One of the core advantages of Intel's hardware will be the fact that it's based on the ubiquitous x86 standard—something... Read more...
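For context, here's a minimal sketch of what ordinary CUDA source looks like -- a vector-add kernel plus its host-side launch. Nothing below comes from NVIDIA's announcement; the point is simply that CUDA-x86 proposes to compile this same kind of unmodified code for x86 CPUs rather than requiring an NVIDIA GPU.

    // Minimal CUDA vector add: illustrative only, not NVIDIA's example code.
    // The kernel and <<<...>>> launch syntax are standard CUDA; the idea that
    // this exact file would retarget to x86 under CUDA-x86 is our assumption.
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;                              // device-side buffers
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // 256 threads per block
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);                     // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Whether any given real-world kernel ports cleanly -- and performs acceptably on CPU cores -- is exactly the sort of question the new toolchain will have to answer.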
At the time of this writing, the FTC's investigation into Intel's alleged monopolistic abuses is on hold as the government attempts to negotiate a settlement with the CPU and chipset manufacturer. If these negotiations don't result in a deal by July 22, the case returns to court, with arguments currently scheduled to begin on September 15. Intel is no stranger to these sorts of lawsuits; between AMD and the EU, the CPU giant has been battling such allegations for years. The lawsuit between NVIDIA and Intel, however, rests on different points than the AMD/Intel allegations. Here, the battle is over whether or not Intel's already-negotiated agreements with NVIDIA give the latter permission to produce... Read more...
When we last checked in on Project Offset, the visually impressive game was facing an uncertain future. Intel recently released an update on PO's development status, but unfortunately it's not what we were hoping for. Having completely abandoned Larrabee as a GPU product, Intel saw no further reason to keep the dev team around. When queried, Intel told BigDownload the following: Intel purchased Offset Software to improve our game development knowledge-base and to further Intel's visual computing technology development expertise, helping the company offer robust products, support, and tools to customers. With the recent changes in our product roadmap, some of the resources and technologies from... Read more...
We suppose even the best laid plans can fall apart, and it seems that one of Intel's most promising endeavors is no longer active as of today. In a new post, the company's own Bill Kircos addresses Intel's stance on graphics-related programs, giving vague updates on a broad variety of topics. But one area wasn't vague at all. When speaking about Larrabee, which the company has been talking about and showcasing for many years now, he noted that Intel is "executing on a business opportunity derived from the Larrabee program and Intel research in many-core chips." He follows by saying that this "server product line expansion is optimized for a broader range of highly parallel workloads in... Read more...
When Intel announced its plans to develop a discrete graphics card capable of scaling from the consumer market to high-end GPGPU calculations, it was met with a mixture of scorn, disbelief, interest, and curiosity. Unlike the GPUs on display at SIGGRAPH in 2008 (or any of the current ones, for that matter), Larrabee was a series of in-order x86 cores connected by a high-bandwidth bus. In theory, Larrabee would be more flexible than any GPU from ATI or NVIDIA; Intel predicted its new GPU would begin an industry transition from rasterization to real-time raytracing (RTRT). Larrabee's original GPU core: a bit of CPU here, a dash of GPU there... Larrabee parts were supposed to ship in 2010, but last December... Read more...
If you've never seen the work coming from the dev team behind Project Offset, the game engine Intel bought several years back, you really ought to take a look. While the game has been under development for over five years, Intel bought the firm two years ago and devoted a significant amount of energy toward positioning Project Offset as the showcase engine for what Larrabee could do. With Larrabee, if you recall, Intel was pushing the idea that real-time raytracing (RTRT) could replace traditional rasterization in 3D gaming. ATI and NVIDIA never took too kindly to the idea; the result was quite a bit of back-and-forth posturing about what could and couldn't be done with next-generation hardware.... Read more...
In the latest installment of our weekly video podcast, we're sitting down with our friends from TechVi to talk about the Zotac HD-ND01 MAG nettop, six high performance PCs that are perfect for gaming, Intel's cancellation of its first discrete GPU in years, and the New York Times "video games to avoid" list, which reads more like a "best of" this holiday season...    Show Notes: 0:53 — Zotac HD-ND01 3:00 — Six Affordable Gaming PCs 7:00 — Intel Cancels Retail Versions Of Gen 1 Larrabee GPU 8:07 — New York Times "Games to Avoid"... Read more...
Intel formally announced today that its controversial and much-hyped Larrabee GPU will not hit retail stores in 2010. The company declined to speculate on when retail Larrabee-based GPUs will be available, but stated that the current generation of products will be used for in-house development and sampled to relevant partners. Intel declined to give a specific reason for the delay, saying only that product development was behind where the company had hoped it would be by now. The project's current status is now a bit unclear, but we know Intel hasn't completely killed the project. Instead, software (and possibly hardware) development kits will be made available over the next year to interested... Read more...
During today's high performance computing workstation and server breakout at IDF, Intel took the opportunity to provide a live demonstration of its Larrabee technology in a high performance workstation showcase. Though we're not completely sure we weren't just seeing a demonstration of the Larrabee instruction set on a high-end server of some sort (we weren't able to see inside the box), Intel did indeed provide a demo of its technology running a raytracing demo that we've seen in the past -- a port of id's Quake Wars artwork, fully raytraced and running at a relatively high resolution from what we could tell. Intel's Larrabee Many-Core - Block Diagram; Intel's Prototype Larrabee... Read more...
NVIDIA has built its brand and reputation as a GPU designer since the company was founded in 1993, but recent comments by the company have implied that it believes platforms like Tegra and ION will be key revenue generators in the future. We've previously discussed NVIDIA's ongoing emphasis on the GPU as a massively parallel processor capable of handling workloads and programs far outside the realm of video games, but to date, reviewers and analysts alike have treated Tegra as more of a side project than a future core competency. The two core components of NVIDIA's mobile strategy: ION and Tegra. Given how difficult the last twelve months have been for NVIDIA, it's easy to wonder... Read more...
It's pretty obvious that NVIDIA understands the opportunity that's in front of them, but just in case anyone is oblivious, Research and Markets is making sure the point is clear. With the CPU becoming less and less important (and the GPU, hardware encoders, and broadband speeds gaining in importance), there's a huge opening for fringe processing companies such as ATI and NVIDIA to really capitalize in areas where Intel and AMD have long dominated. The report states that at least three things are happening at the same time: most growth in computing is in the mobile space (Apple iPhones, laptops, netbooks, etc.); visual or 3D computing is becoming a mainstream feature and not just for games; and... Read more...