Items tagged with CUDA

Computex and E3 are both in the rear view mirror, but that does not mean the usual participants have nothing left to announce. NVIDIA proved otherwise on Monday by announcing support for ARM processors. This, according to NVIDIA, provides a new path to build extremely energy-efficient, AI-enabled exascale supercomputers. "Supercomputers are the essential instruments of scientific discovery, and achieving exascale supercomputing will dramatically expand the frontier of human knowledge," said Jensen Huang, founder and CEO of NVIDIA. "As traditional compute scaling ends, power will limit all supercomputers. The combination of NVIDIA’s CUDA-accelerated computing and ARM’s energy-efficient... Read more...
Netflix has long chased after methods of improving its movie recommendation algorithms, once even offering a $1M prize to any team that could substantially improve on the then-current design. As part of that process, the company has been researching neural networks. Conventional neural networks are built on large CPU clusters, often with several thousand cores in total. Netflix decided to go with something different, and built a neural network based on GPU cores. In theory, GPU cores could be ideal for building neural nets -- they offer huge numbers of cores already linked by fast internal interconnects, backed by relatively large pools of onboard memory. Whether or not the... Read more...
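The case for GPUs here follows directly from the math a neural network runs: every neuron in a layer computes the same weighted sum over the same inputs, so each one can be handed to its own GPU thread. Below is a minimal CUDA sketch of a single fully connected layer's forward pass; it is purely illustrative, not Netflix's code, and all names and sizes in it are invented for the example.

// Illustrative CUDA sketch only -- not Netflix's implementation. Each GPU thread
// computes one output neuron of a fully connected layer: a weighted sum of the
// inputs followed by a sigmoid activation. Weights and inputs are left
// uninitialized here; real code would load trained values.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void forwardLayer(const float* W, const float* in, float* out,
                             int numInputs, int numOutputs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per output neuron
    if (i >= numOutputs) return;

    float acc = 0.0f;
    for (int j = 0; j < numInputs; ++j)
        acc += W[i * numInputs + j] * in[j];         // weighted sum over all inputs

    out[i] = 1.0f / (1.0f + expf(-acc));             // sigmoid activation
}

int main()
{
    const int numInputs = 1024, numOutputs = 512;
    float *W, *in, *out;
    cudaMalloc(&W,   (size_t)numInputs * numOutputs * sizeof(float));
    cudaMalloc(&in,  numInputs  * sizeof(float));
    cudaMalloc(&out, numOutputs * sizeof(float));

    int threads = 256;
    int blocks  = (numOutputs + threads - 1) / threads;
    forwardLayer<<<blocks, threads>>>(W, in, out, numInputs, numOutputs);
    cudaDeviceSynchronize();
    printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(W); cudaFree(in); cudaFree(out);
    return 0;
}

A layer with thousands of neurons keeps thousands of such threads busy at once, which is exactly the property that makes GPU cores attractive for this workload.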
The supercomputing conference SC13 kicks off this week, which means we'll be seeing a great deal of information related to multiple initiatives and launches from all the major players in High Performance Computing (HPC). Nvidia is kicking off its own event with the launch of a new GPU and a strategic partnership with IBM. For those of you who follow the consumer market, the GPU is going to look fairly familiar. K40 -- GK110 Goes Full Fat Just as the GTX 780 Ti was the full consumer implementation of the GK110 GPU, the new K40 Tesla card is the supercomputing / HPC variant of the same core architecture. The K40 picks up additional clock headroom and implements the same variable clock speed... Read more...
Bolstered by the recent mad dash by consumers and manufacturers alike toward mobile computing, ARM has truly become too large to ignore. ARM has benefited the most from mobile device sales, proving that it's a capable architecture and a worthy competitor to x86 silicon, so it shouldn't come as a shock that NVIDIA equipped its CUDA 5.5 Release Candidate (RC) with ARM support. CUDA 5.5 is the first version of the parallel computing platform and programming model to play nice with ARM, meaning it natively supports GPU-accelerated computing on systems built around ARM chips. It will also make it easier and faster for developers to port applications over. Other than ARM support, CUDA 5.5 brings... Read more...
While Adobe is busily showing off its latest and greatest wares at NAB 2013 this week, AMD is banging the drum for the major upgrades that it helped bake into the next version of Adobe Premiere Pro software. While Mac users have of late enjoyed the graphical power of OpenCL with either NVIDIA or AMD graphics cards, Windows users were forced to use NVIDIA’s CUDA only. Now, however, AMD and Adobe have enabled OpenCL support on Windows systems with AMD APUs and GPUs. “Our customers require powerful systems that enable them to work quickly and efficiently. While we already support OpenCL on the Mac, today’s announcement gives creative professionals the opportunity to tap into the... Read more...
When dual-core phones first hit the smartphone market several years ago, Tegra 2 was perfectly positioned across the smartphone and tablet market. Tegra 3 has done extremely well in tablets this past year; Nvidia has won high-profile designs with everyone from Google to Microsoft. In the past few months, however, it's become clear that Tegra 3's quad-core, Cortex-A9 CPUs and older GPU technology wouldn't be able to compete with the latest designs from Apple or Qualcomm. A leaked slide from a Chinese site has shed light on what Team Green plans to answer with, and the next-generation SoC is pretty darn sexy. Here's what Wayne (aka Tegra 4) purportedly looks like: The new Tegra 4 packs 72 GPU cores... Read more...
Nvidia's GPU Technology Conference (GTC) kicked off this afternoon with company CEO Jen-Hsun Huang delivering the signature keynote. Nvidia typically uses the keynote to announce new projects, technologies, and initiatives, or to demonstrate new architectures, but today's event was something of a let-down. The event started off positively enough with a discussion of the GTX 690 and some performance highlights from Kepler's GK104 debut. Jen-Hsun followed with a discussion of how Kepler moves the bar forward compared to Fermi, and is capable of handling scientific workloads that dwarf those of its predecessor. He also discussed a pair of new technologies -- HyperQ and Dynamic Parallelism. Both... Read more...
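Dynamic Parallelism is the ability of a kernel already running on the GPU to launch further kernels itself, without a round trip through the CPU. A minimal, hedged sketch of the pattern follows; it assumes a device of compute capability 3.5 or later, and the kernel names are invented for the example.

// Minimal sketch of CUDA Dynamic Parallelism: a parent kernel launches a child
// kernel directly on the device. Requires compute capability 3.5+ and
// relocatable device code, e.g.: nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt
#include <cstdio>
#include <cuda_runtime.h>

__global__ void childKernel(int parentThread)
{
    printf("child thread %d spawned by parent thread %d\n",
           threadIdx.x, parentThread);
}

__global__ void parentKernel()
{
    // Each parent thread decides on the GPU, at run time, how much extra
    // work to spawn -- no CPU involvement needed.
    childKernel<<<1, 4>>>(threadIdx.x);
}

int main()
{
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();    // wait for parents and their children to finish
    return 0;
}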
Nvidia and HP have developed a limited edition GPU Starter Kit meant to provide a drop-shipped means for anyone interested in developing HPC applications. The term 'starter kit' is very nearly a misnomer, as the package deal provides a system more than sufficient to get the ball rolling. The system contains eight ProLiant SL390 G7 servers packed with 24 M2070 GPUs and 16 CPUs, and it comes preconfigured with CUDA 4.0. The servers, presumably loaded with quad-cores, offer a respectable 32 cores of additional CPU power in addition to the copious amounts of GPU performance. The M2070 GPU that's included in the package is a Fermi-based part, with 6GB of RAM per GPU. According to Nvidia, the $99,000... Read more...
Larrabee, Intel's once-vaunted, next-generation graphics card died years ago, but the CPU technology behind the would-be graphics card has lived on. Intel discussed the future of MIC/Knight's Corner today. After Larrabee was officially canceled, Intel repurposed the design and seeded development kits to appropriate market segments. MIC cards won't start shipping until the 22nm Knight's Corner chip is launched, but even the Knight's Ferry prototypes offer tantalizing hints at what future performance might resemble. Like Larrabee, Knight's Corner (and future MIC products in general) utilizes a CPU based on Intel's original Pentium architecture (P54C). Modifications include complete cache coherency,... Read more...
AMD's GPU solutions have come a long way since the company acquired ATI. The combined companies have competed very well against Nvidia for the past several years, at least at the consumer level. When it comes to HPC/GPGPU tools, however, Nvidia has had the market all to itself. Granted, the GPGPU market hasn't exactly exploded, but Nvidia has sunk a great deal of effort into developing PhysX and CUDA. AMD has announced a new suite of programming tools it plans to use to woo developers in the burgeoning field. "AMD is working closely with the developer community to make it easier to bring the benefits of heterogeneous computing to consumers, enabling next-generation system features like vivid... Read more...
Heather Mackey of Nvidia has written a new blog post discussing the company's hardware emulation equipment, thus affording us an opportunity to discuss a little-mentioned aspect of microprocessor development. Although we'll be discussing Nvidia products in particular, both software tools (aka, simulation) and hardware emulation are vital to all microprocessor design firms, including Intel, AMD, Via/Centaur, and ARM. There's a significant difference between software simulation and hardware emulation. In simulation, software tools are used to simulate the logic of a new design or product respin. The advantage to simulation is that most such tools are mature, flexible, and inexpensive. The... Read more...
AMD is hosting its first AMD Fusion Developer Summit (AFDS) this year, from June 13-16. The conference will focus on OpenCL and Llano's performance capabilities under various related usage models. Sunnyvale is billing the event as a chance to participate and learn from experts, all in accordance with the company's belief that: "the computing industry is quietly undergoing a revolution bigger than any change it has seen since the semiconductor was first introduced: the rise of GPUs and APUs." Llano's die (rotated 90 degrees in this case) shows the DX11 cores to the left, occupying a significant chunk of the die overall. One interesting twist is that the keynote address will be given by Jem Davies, currently... Read more...
New CUDA 4.0 Release Makes Parallel Programming Easier: Unified Virtual Addressing, GPU-to-GPU Communication and Enhanced C++ Template Libraries Enable More Developers to Take Advantage of GPU Computing. SANTA CLARA, Calif. -- Feb. 28, 2011 -- NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications using NVIDIA GPUs. The NVIDIA CUDA 4.0 Toolkit was designed to make parallel programming easier, and enable more developers to port their applications to GPUs. This has resulted in three main features: NVIDIA GPUDirect 2.0 Technology – Offers support for peer-to-peer communication among GPUs within a single server or workstation. This enables... Read more...
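The GPUDirect 2.0 item is the easiest to see in code: once two GPUs in the same machine are allowed to access each other, a buffer can be copied directly between them instead of being staged through host memory. The sketch below uses the standard CUDA runtime peer-to-peer calls; the buffer names and sizes are arbitrary choices for the example.

// Hedged sketch of the GPU-to-GPU copy path CUDA 4.0 exposes. With unified
// virtual addressing and peer access enabled, cudaMemcpyPeer moves data
// between two devices without bouncing through system memory.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) { printf("this demo needs two GPUs\n"); return 0; }

    const size_t bytes = 64 << 20;   // 64 MB test buffer
    float *buf0 = NULL, *buf1 = NULL;

    cudaSetDevice(0); cudaMalloc(&buf0, bytes);
    cudaSetDevice(1); cudaMalloc(&buf1, bytes);

    // Check and enable direct access from device 0 to device 1's memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
    }

    // Copy the buffer from device 0 to device 1 directly.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    printf("peer copy status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaSetDevice(0); cudaFree(buf0);
    cudaSetDevice(1); cudaFree(buf1);
    return 0;
}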
Six months ago, we covered a story in which Nvidia's chief scientist, Bill Dally, made a number of sweeping claims regarding the superiority of GPUs. Six months later he's again attacking traditional microprocessors with another broad series of accusations. As before, in our opinion, he uses far too broad a brush. Dally's basic claim is that modern CPUs are held back by legacy design. That's not particularly controversial, but he doesn't stop there. Referring to modern CPUs, Dally says: "They have branch predictors that predict a branch every cycle whether the program branches or not -- that burns gobs of power. They reorder instructions to hide memory latency. That burns a lot of power. They carry... Read more...
For the past 3.5 years or so, NVIDIA has ardently advocated the GPU as a computational platform capable of solving almost any problem. One topic the company hasn't targeted, however, is the tremendous performance advantage the GPU could offer malware authors. The idea that a graphics card could double as a security hole isn't something we've heard before, but according to a paper by Giorgos Vasiliadis, Michalis Polychronakis and Sotiris Ioannidis, it's an attack vector whose popularity could boom in coming years. The trio argues that all the computational hardware that makes the GPU such an ideal fit for certain types of scientific or graphical workloads could (and will) deliver equal benefits... Read more...
At the GPU Technology Conference today, the CEO of NVIDIA, Jen-Hsun Huang, unveiled a new CUDA initiative, dubbed CUDA-x86. As the name implies, the new framework will allow developers to write CUDA code natively for x86. Don't confuse this announcement with the PhysX issues we discussed last month—when we spoke to NVIDIA back then we were told that certain legacy performance issues would be addressed in the next major version of the PhysX SDK. Porting CUDA to x86 is a smart move for NVIDIA given Intel's own intentions towards the high performance computing (HPC) market. One of the core advantages of Intel's hardware will be the fact that it's based on the ubiquitous x86 standard—something... Read more...
CUDA. Performance increases. GPUs. NVIDIA. Tesla Compute Cluster. Somehow or other, all of those are interconnected in NVIDIA's latest announcement, in which the company has revealed Parallel Nsight support for Visual Studio 2010 along with up to 300% performance boosts in CUDA toolkit libraries. The announcement really boils down to two new versions of industry-leading development tools: Parallel Nsight and the CUDA Toolkit. If you aren't aware, Parallel Nsight is described as the "only integrated development environment for creating GPU-accelerated applications for a range of desktop and supercomputing platforms," and version 1.5 now includes support for Microsoft Visual Studio 2010, Tesla... Read more...
NVIDIA has just taken the wraps off an entire line-up of Fermi-based GeForce GT and GTX 400M mobile GPUs—seven in total—and revealed a number of notebook design wins from major OEMs using the GPUs. Like their desktop-targeted counterparts, the mobile GeForce GT and GTX 400M series GPUs leverage technology from NVIDIA’s Fermi architecture, which debuted in the GF100 GPU at the heart of the company’s flagship GeForce GTX 480. GeForce GT and GTX 400M series GPUs are DirectX 11 compatible and support all of NVIDIA’s “Graphics Plus” features, including PhysX, 3D Vision, CUDA, Verde drivers, and 3DTV Play. All of the GeForce GT and GTX 400M GPUs support NVIDIA’s... Read more...
Not long ago, we reviewed the entire FirePro workstation graphics card lineup from ATI. With the V8800, our testing revealed considerable performance gains over the previous generation V8750, coupled with a lower price point. Surely, that's a combination consumers can appreciate, especially those looking to upgrade sooner rather than later. But, at the time, the market was not yet settled as we anxiously awaited a response to ATI's FirePro products from NVIDIA. Thankfully, the wait is over: a new series of professional graphics cards from NVIDIA based on the company's Fermi architecture has just arrived, and we've got the high-end Quadro 6000 and Quadro 5000 cards in-house... Read more...
Not long ago, we reviewed the entire FirePro workstation graphics card lineup from ATI. With the V8800, our testing revealed considerable performance gains over the previous generation V8750, coupled with a lower price point. Surely, that's a combination consumers can appreciate, especially those looking to upgrade sooner rather than later. But, at the time, the market was not yet settled as we anxiously awaited a response to ATI's FirePro products from NVIDIA. Thankfully, the wait is over: a new series of professional graphics cards from NVIDIA based on the company's Fermi architecture has just arrived. Three new models arrive today to bolster the... Read more...
Over the past four years, NVIDIA has made a great many claims regarding how porting various types of applications to run on GPUs instead of CPUs can tremendously improve performance by anywhere from 10x-500x. Intel, unsurprisingly, sees the situation differently, but has remained relatively quiet on the issue, possibly because Larrabee was going to be positioned as a discrete GPU. The recent announcement that Larrabee has been repurposed as an HPC/scientific computing solution may therefore be partially responsible for Intel ramping up an offensive against NVIDIA's claims regarding GPU computing. At the International Symposium On Computer Architecture (ISCA) this week, a team from Intel presented... Read more...
Earlier this week, we covered news that a California PS3 owner, Anthony Ventura, had filed a class action lawsuit against Sony, alleging that the company's decision to terminate the PS3's Linux support via firmware update constituted a false/deceptive marketing practice. While most PS3 owners never took advantage of the system's Linux capabilities, "Other OS" functionality is critical to the universities and institutions that have deployed PS3 clusters as high-performance compute farms. We talked with several project leads on the impact of Sony's decision, and what it means for low-cost supercomputing programs. Blunderingly, Sony Nukes PS3 Supercomputing... Read more...