Items tagged with HPC

The Heterogeneous System Architecture (HSA) Foundation is making waves this week with the announcement of the HSA Specification v1.0. The HSA 1.0 spec is aimed at ushering in a new wave of heterogeneous computing devices that efficiently harness the power of both CPUs and GPUs. HSA is destined to impact not only high-performance computing (HPC) and desktop platforms, as one would expect, but also mobile devices including smartphones, tablets, and notebooks. HSA will make it easier for programmers to “efficiently apply the hardware resources in today’s complex systems-on-chip (SOCs),”... Read more...
Intel today made a splash at the International Supercomputing Conference in Leipzig, Germany, by revealing new details about its next-generation Xeon Phi processor technology. You may better recognize Xeon Phi by its codename, Knights Landing, which we covered in some detail earlier this year. No matter what you call it, this represents a significant leap in high-performance computing (HPC) that will deliver up to three times the performance of previous generations while consuming less power. A big part of the reason for this is the construction of a new high-speed interconnect technology called... Read more...
Like any smart company, NVIDIA is always looking for new markets and segments to dig into, and the company is doing just that with a push into high-performance computing (HPC). NVIDIA announced that its Tesla GPUs are being used to bring ARM64-based servers to a new level of performance. There are several vendors using the Tesla GPU accelerators in their ARM64 servers, including Cirrascale, E4 Computer Engineering, and Eurotech. “Featuring Applied Micro X-Gene ARM64 CPUs and NVIDIA Tesla K20 GPU accelerators, the new ARM64 servers will provide customers with an expanded range of efficient,... Read more...
Netflix has long chased after methods of improving its movie recommendation algorithms, once even awarding a $1M prize to the team that could substantially improve on the then-current design. As part of that process, the company has been researching neural networks. Conventional neural networks are typically built on vast CPU clusters, often with several thousand cores in total. Netflix decided to go with something different, and built a neural network based on GPU cores. In theory, GPUs could be ideal for building neural nets -- they offer huge numbers of cores already linked... Read more...
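To make the appeal concrete, here is a minimal CUDA sketch of a single fully connected neural-network layer, with each GPU thread evaluating one output neuron. It is purely illustrative and not Netflix's code; the kernel name (forward_layer), layer sizes, and initialization values are all assumptions made for this example.

#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

// One fully connected layer: each thread computes one output neuron by
// summing its weighted inputs and applying a sigmoid activation.
__global__ void forward_layer(const float *weights, const float *inputs,
                              const float *bias, float *outputs,
                              int n_inputs, int n_outputs)
{
    int neuron = blockIdx.x * blockDim.x + threadIdx.x;
    if (neuron >= n_outputs) return;
    float sum = bias[neuron];
    for (int i = 0; i < n_inputs; ++i)
        sum += weights[neuron * n_inputs + i] * inputs[i];
    outputs[neuron] = 1.0f / (1.0f + expf(-sum));  // sigmoid
}

int main(void)
{
    const int n_inputs = 1024, n_outputs = 512;   // illustrative layer sizes
    float *w, *x, *b, *y;
    cudaMallocManaged(&w, (size_t)n_outputs * n_inputs * sizeof(float));
    cudaMallocManaged(&x, n_inputs * sizeof(float));
    cudaMallocManaged(&b, n_outputs * sizeof(float));
    cudaMallocManaged(&y, n_outputs * sizeof(float));
    for (int i = 0; i < n_inputs; ++i) x[i] = 0.5f;
    for (int i = 0; i < n_outputs * n_inputs; ++i) w[i] = 0.01f;
    for (int i = 0; i < n_outputs; ++i) b[i] = 0.0f;

    // Launch enough 256-thread blocks to cover every output neuron.
    forward_layer<<<(n_outputs + 255) / 256, 256>>>(w, x, b, y, n_inputs, n_outputs);
    cudaDeviceSynchronize();
    printf("output[0] = %f\n", y[0]);

    cudaFree(w); cudaFree(x); cudaFree(b); cudaFree(y);
    return 0;
}

Because every output neuron gets its own thread, a GPU with thousands of cores can keep an entire layer in flight at once, which is exactly the property the article is pointing at.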
New details on Intel's upcoming 14nm Xeon Phi (codenamed Knights Landing) suggest that the chip giant is targeting a huge increase in performance, throughput, and total TFLOP count with the next-gen MIC (Many Integrated Core) card. Knights Landing will be the first ground-up redesign of Intel's MIC architecture -- the original Knights Ferry card was a repurposed Larrabee GPU, while the current Knights Corner-based MIC still has texture units integrated on-die left over from its GPU roots. RealWorldTech has published an exposé on the upcoming architecture, blending what we know of the new design... Read more...
Last month, Intel brought us out to the Texas Advanced Computing Center (TACC) in Austin to brief us on its latest and greatest foray into high-performance computing (HPC) and exascale-level processing performance. For Intel, years of heady talk about parallelism and exascale computing have finally come to fruition. Intel is bringing to market a pair of Xeon Phi coprocessor offerings in 2013, the 3100 family and the 5110P, and we’ve got the full scoop for you here... Intel’s Exascale HPC Revolution and Xeon Phi... Read more...
Last month, Intel brought us out to the Texas Advanced Computing Center (TACC) in Austin to brief us on its latest and greatest foray into high-performance computing (HPC) and exascale-level processing performance. Parallel Computing and the Road to Exascale: There are mountains of problems to be solved and myriad insights to be gained, in fields from the sciences to national security, that require HPC and highly parallel processing to tackle effectively and efficiently. Parallel processing is what the HPC space is all about, and when large amounts of data can... Read more...
At the supercomputing conference SC2011 today, Intel offered up performance details of its upcoming Xeon E5 processors and demoed its Knights Corner Many Integrated Core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Santa Clara has been shipping the new chips to "a small number of cloud and HPC customers" since September. The new E5 family is based on the same core as the 3960X Intel launched yesterday, but the company has been surprisingly slow to ramp the CPUs for mass production. Rajeeb Hazra, general manager of the Intel Datacenter and Connected... Read more...
Last month, we discussed the split between IBM and the NCSA (National Center for Supercomputing Applications) over the highly ambitious 'Blue Waters' project. Blue Waters was the name of a planned supercomputer that would've been entirely water-cooled and included as many as 524,288 CPU cores. The disintegration of the deal came as some surprise, given the amount of work that'd already been done on the project. New details have come to light on why the University of Illinois and IBM ultimately parted ways over the project. One of the major issues appears to have been the clock speeds of the CPUs.... Read more...
NVIDIA has just announced the addition of Steve Scott as CTO of the company’s Tesla business unit. Steve Scott was the chief architect of the Cray X1 and was involved in the design of the Cray XT, Cray XE and "Cascade" systems as well. According to his bio, Steve Scott holds 27 U.S. patents in the areas of interconnection networks, cache coherence, synchronization mechanisms and scalable parallel architectures and has also served on numerous program committees and as an associate editor for the IEEE Transactions on Parallel and Distributed Systems. He served 19 years at Cray, the last six... Read more...
Four years ago almost to the day, the National Science Board handed the National Science Foundation a mandate to build the most powerful petaflop-class supercomputer in the world. The NSF announced in turn that the system would be based on IBM's Power7 processor technology. One of the features that won IBM the contract was the fact that the cluster wouldn't require message passing and could theoretically be programmed as a single system. As of today, Blue Waters is officially canceled. The news comes less than a month after IBM announced it would begin commercial shipments of the individual nodes... Read more...
New CUDA 4.0 Release Makes Parallel Programming Easier: Unified Virtual Addressing, GPU-to-GPU Communication and Enhanced C++ Template Libraries Enable More Developers to Take Advantage of GPU Computing. SANTA CLARA, Calif. -- Feb. 28, 2011 -- NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications using NVIDIA GPUs. The NVIDIA CUDA 4.0 Toolkit was designed to make parallel programming easier and enable more developers to port their applications to GPUs. This has resulted in three main features: NVIDIA GPUDirect 2.0 Technology – Offers... Read more...
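For readers curious what the GPUDirect 2.0 and Unified Virtual Addressing bullet points look like in practice, here is a minimal host-side sketch using the CUDA runtime's peer-to-peer calls (cudaDeviceCanAccessPeer, cudaDeviceEnablePeerAccess, cudaMemcpyPeer). The structure, buffer names, and sizes are illustrative assumptions, not NVIDIA's sample code.

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int devCount = 0;
    cudaGetDeviceCount(&devCount);
    if (devCount < 2) {
        printf("Peer-to-peer demo needs at least two GPUs.\n");
        return 0;
    }

    const size_t bytes = 1 << 20;  // 1 MB illustrative buffer
    float *buf0 = NULL, *buf1 = NULL;

    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Ask whether device 0 can read/write device 1's memory directly,
    // and enable the peer mapping if the hardware supports it.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
    }

    // With Unified Virtual Addressing, a single call moves the buffer from
    // GPU 0 to GPU 1; with peer access enabled the copy goes directly over
    // PCIe without staging through host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

    cudaSetDevice(1);
    cudaDeviceSynchronize();
    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}

If the two GPUs cannot reach each other directly, the runtime stages the copy through host memory instead, so the same call still completes, just more slowly.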
A little earlier today, in a jam-packed meeting room in the Venetian Hotel in Las Vegas, we spent about an hour listening to NVIDIA CEO Jen-Hsun Huang speak about the massive influx of mobile computing devices over the last few years and NVIDIA’s plans to better infiltrate the burgeoning market moving forward. During his address, Mr. Huang spoke almost exclusively about the company’s Tegra 2 processor and its capabilities and performance, although he also dropped a bombshell to close his talk about NVIDIA’s “Project Denver” -- more on that one in a bit. While discussing... Read more...
Amazon has long touted its EC2 (Elastic Compute Cloud) as a flexible service for companies that need a certain amount of server time to test programs or features, but don't want to invest the time and effort themselves. Now, the company has added HPC (High Performance Computing) capabilities that are typically targeted towards large-scale enterprise or university buildouts. These are precisely the sorts of organizations that typically can afford to invest time/money, but Amazon is targeting potential customers that might be restrained either by a lack of available CPU time or those that... Read more...