Intel Demos Knights Ferry Development Platform, Tesla Scores With Amazon

If you're a fan of GPGPU computing, this is turning out to be an interesting week. At SC10 in New Orleans, Intel has been demoing and discussing its Knights Ferry development platform. Knights Ferry, which Intel refers to as a MIC (Many Integrated Core) platform, is the phoenix rising from the ashes of Larrabee. Future MIC products (Knights Ferry is a development prototype; the first commercial product will be called Knights Corner) will mesh x86 compatibility with a level of parallelism typically found only in cluster nodes.


Intel's Knights Ferry

Knights Ferry contains 32 independent x86 cores with four-way Hyper-Threading, fits into a PCIe 2.0 slot, and offers up to 2GB of GDDR5 memory per card. Each of the 32 cores has a 64KB L1 cache (32KB data, 32KB instruction) and a 256KB L2 cache. Knights Corner, when it eventually launches, will be built on 22nm technology and offer "more than 50" processing cores. Sixty-four cores in total would seem a logical target, but Intel may be hedging its prediction in order to improve yields when the device arrives. For the moment, Intel sees Knights Corner as a complementary product rather than a replacement for conventional Xeon servers, and believes it will demonstrate a tremendous performance improvement in certain parallel workloads.
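Intel's pitch rests heavily on that x86 compatibility: in principle, ordinary threaded x86 code should carry over to a MIC part with little more than a recompile. The snippet below is a minimal, hypothetical sketch of the kind of standard OpenMP loop Intel is talking about; it is not Knights Ferry sample code, just an illustration of the programming model being promised.

```c
/* Minimal OpenMP sketch: standard parallel x86 C code of the sort Intel
 * argues should map onto a many-core MIC part with a recompile.
 * Illustrative only; not Knights Ferry sample code. */
#include <omp.h>
#include <stdio.h>

#define N (1 << 20)

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* One loop, spread across however many hardware threads exist:
     * 4 on a quad-core desktop, 128 on a 32-core, 4-thread MIC card. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("threads available: %d, c[42] = %f\n",
           omp_get_max_threads(), c[42]);
    return 0;
}
```

The same source builds with any OpenMP-capable compiler (for example, `gcc -fopenmp`), which is precisely the portability argument Intel is making against CUDA-specific code.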

Nvidia, meanwhile, has reasons of its own to cheer. Earlier this week, Amazon announced that it would begin offering a new type of Elastic Compute Cloud (EC2) service dubbed Cluster GPU Instances. According to Nvidia, Amazon's decision to sell GPU computing in bite-sized pieces will open it up to organizations that couldn't otherwise afford it.

"With Amazon Cluster GPU Instances, our customers now have the power of high performance computing, the efficiency and speed of GPUs and the highly available, scalable and affordable cloud environment our customers have come to expect from AWS," said Peter De Santis, general manager of Amazon EC2. "We're excited to help our customers access the raw power of GPU technology and look forward to the innovation this will enable."

"The ability to run a larger number of more detailed simulations, with an on-demand pricing model and the scalability of Amazon EC2, enables companies to build better, safer, more reliable products," said Andy Keane, general manager, Tesla business at NVIDIA. "GPU supercomputing, through AWS, gives users a flexible computing facility that allows them to scale their computing needs based on user demand."

It's going to be a few years yet before Intel and NV square off over the computational capabilities of their respective architectures. We expect Intel will push Knights Corner's x86 compatibility as a major feature while NV focuses on CUDA performance, capabilities, and the years it's spent building relationships with current GPGPU customers. Thus far, Intel is playing its cards close to its chest; Nvidia has plenty of time to improve Fermi, but Santa Clara isn't giving its competitors much to target as far as performance is concerned.

As for AMD, the company has done little more than wave a handkerchief in acknowledgement of GPGPU computation. We're guessing that the company is prioritizing profitability first, GPGPU second.