Items tagged with CUDA

Netflix has long chased after methods of improving its movie recommendation algorithms, once even awarding a $1M prize to the team that could substantially improve on the then-current design. As part of that process, the company has been researching neural networks. Conventional neural networks are built on vast CPU clusters, often with several thousand cores in total. Netflix decided to go with something different, and built a neural network based on GPU cores. In theory, GPU cores could be ideal for building neural nets -- they offer huge numbers of cores already linked... Read more...
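To make the appeal concrete, here is a minimal sketch of the kind of work a GPU-based neural net parallelizes: one CUDA thread per output neuron, each computing its weighted sum and activation independently. The kernel name, layer sizes, and sigmoid activation below are our own illustrative assumptions, not Netflix's actual code.

    #include <cmath>
    #include <cuda_runtime.h>

    // One thread per output neuron: each thread forms the weighted sum of
    // every input feeding its neuron, then applies a sigmoid activation.
    // Thousands of neurons map naturally onto thousands of GPU cores.
    __global__ void forwardLayer(const float *weights, const float *inputs,
                                 float *outputs, int numInputs, int numOutputs)
    {
        int neuron = blockIdx.x * blockDim.x + threadIdx.x;
        if (neuron >= numOutputs) return;

        float sum = 0.0f;
        for (int i = 0; i < numInputs; ++i)
            sum += weights[neuron * numInputs + i] * inputs[i];

        outputs[neuron] = 1.0f / (1.0f + expf(-sum));  // sigmoid
    }

A launch such as forwardLayer<<<(numOutputs + 255) / 256, 256>>>(...) evaluates every neuron in the layer concurrently, which is exactly the sort of embarrassingly parallel arithmetic GPUs excel at.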
The supercomputing conference SC13 kicks off this week, which means we'll be seeing a great deal of information related to multiple initiatives and launches from all the major players in High Performance Computing (HPC). Nvidia is kicking off its own event with the launch of a new GPU and a strategic partnership with IBM. For those of you who follow the consumer market, the GPU is going to look fairly familiar. K40 -- GK110 Goes Full Fat: Just as the GTX 780 Ti was the full consumer implementation of the GK110 GPU, the new K40 Tesla card is the supercomputing / HPC variant of the same core architecture.... Read more...
Bolstered by the recent mad dash of consumers and manufacturers alike toward mobile computing, ARM has truly become too large to ignore. ARM has benefited the most from mobile device sales, proving that it's a capable architecture and a worthy competitor to x86 silicon, so it shouldn't come as a shock that NVIDIA equipped its CUDA 5.5 Release Candidate (RC) with ARM support. CUDA 5.5 is the first version of the parallel computing platform and programming model to play nice with ARM, meaning it natively supports GPU-accelerated computing on systems built around ARM chips. It will also make it... Read more...
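The practical significance is that ordinary CUDA source needs no changes to target an ARM host: the host-side API calls and kernel syntax are identical. A minimal vector-add program like the sketch below (our own illustration, not NVIDIA's sample code) should build for either x86 or ARM hosts with the CUDA 5.5 toolchain; the exact cross-compilation flags depend on the toolchain setup.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *hA = new float[n], *hB = new float[n], *hC = new float[n];
        for (int i = 0; i < n; ++i) { hA[i] = float(i); hB[i] = 2.0f * i; }

        // Explicit copies: CUDA 5.5 predates unified (managed) memory.
        float *dA, *dB, *dC;
        cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);
        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

        printf("c[42] = %.1f\n", hC[42]);  // expect 126.0

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        delete[] hA; delete[] hB; delete[] hC;
        return 0;
    }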
While Adobe is busily showing off its latest and greatest wares at NAB 2013 this week, AMD is banging the drum for the major upgrades that it helped bake into the next version of Adobe Premiere Pro software. While Mac users have lately enjoyed the graphical power of OpenCL with either NVIDIA or AMD graphics cards, Windows users were limited to NVIDIA’s CUDA. Now, however, AMD and Adobe have enabled OpenCL support on Windows systems with AMD APUs and GPUs. “Our customers require powerful systems that enable them to work quickly and efficiently. While we already support OpenCL... Read more...
When dual-core phones first hit the smartphone market several years ago, Tegra 2 was perfectly positioned across the smartphone and tablet market. Tegra 3 has done extremely well in tablets this past year; Nvidia has won high-profile designs with everyone from Google to Microsoft. In the past few months, however, it's become clear that Tegra 3's quad-core Cortex-A9 CPUs and older GPU technology wouldn't be able to compete with the latest designs from Apple or Qualcomm. A leaked slide from a Chinese site has shed light on what Team Green plans to answer with, and the next-generation SoC is pretty... Read more...
Nvidia's GPU Technology Conference (GTC) kicked off this afternoon with company CEO Jen-Hsun Huang delivering the signature keynote. Nvidia typically uses the keynote to announce new projects, technologies, and initiatives, or to demonstrate new architectures, but today's event was something of a let-down. The event started off positively enough with a discussion of the GTX 690 and some performance highlights from GK104's (Kepler's) debut. Jen-Hsun followed with a discussion of how Kepler raises the bar compared to Fermi, and is capable of handling scientific workloads that dwarf those... Read more...
Nvidia and HP have developed a limited-edition GPU Starter Kit meant to provide a drop-shipped means for anyone interested in developing HPC applications. The term 'starter kit' is very nearly a misnomer, as the package deal provides a system more than sufficient to get the ball rolling. The system contains eight ProLiant SL390 G7 servers packed with 24 M2070 GPUs and 16 CPUs, and it comes preconfigured with CUDA 4.0. The servers, presumably loaded with quad-cores, offer a respectable 32 cores of additional CPU power in addition to the copious amounts of GPU performance. The M2070 GPU that's included... Read more...
Larrabee, Intel's once-vaunted, next-generation graphics card, died years ago, but the CPU technology behind the would-be graphics card has lived on. Intel discussed the future of MIC/Knight's Corner today. After Larrabee was officially canceled, Intel repurposed the design and seeded development kits to appropriate market segments. MIC cards won't start shipping until the 22nm Knight's Corner chip is launched, but even the Knight's Ferry prototypes offer tantalizing hints at what future performance might resemble. Like Larrabee, Knight's Corner (and future MIC products in general) utilizes a... Read more...
AMD's GPU solutions have come a long way since the company acquired ATI. The combined company has competed very well against Nvidia for the past several years, at least at the consumer level. When it comes to HPC/GPGPU tools, however, Nvidia has had the market all to itself. Granted, the GPGPU market hasn't exactly exploded, but Nvidia has sunk a great deal of effort into developing PhysX and CUDA. AMD has announced a new suite of programming tools it plans to use to woo developers in the burgeoning field. "AMD is working closely with the developer community to make it easier to bring the benefits... Read more...
Heather Mackey of Nvidia has written a new blog post discussing the company's hardware emulation equipment, thus affording us an opportunity to discuss a little-mentioned aspect of microprocessor development. Although we'll be discussing Nvidia products in particular, both software tools (i.e., simulation) and hardware emulation are vital to all microprocessor design firms, including Intel, AMD, Via/Centaur, and ARM. There's a significant difference between software simulation and hardware emulation. In simulation, software tools are used to simulate the logic of a new design or product respin.... Read more...
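For a sense of what "simulating the logic" means, consider a one-bit full adder. A software simulator evaluates functions like this for every signal in the design, cycle by cycle, long before any silicon exists. The toy below is our own illustration of the idea, written as plain host code, not anyone's actual design flow.

    #include <cstdio>

    struct AdderOut { int sum, carry; };

    // The Boolean equations of a one-bit full adder, written as plain code.
    AdderOut fullAdder(int a, int b, int cin)
    {
        AdderOut out;
        out.sum   = a ^ b ^ cin;
        out.carry = (a & b) | (cin & (a ^ b));
        return out;
    }

    int main()
    {
        // Exhaustively "simulate" the design across its whole truth table.
        printf(" a b cin | sum carry\n");
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b)
                for (int cin = 0; cin <= 1; ++cin) {
                    AdderOut o = fullAdder(a, b, cin);
                    printf(" %d %d  %d  |  %d    %d\n",
                           a, b, cin, o.sum, o.carry);
                }
        return 0;
    }

Hardware emulation, by contrast, maps the same logic onto dedicated reprogrammable hardware, so the design runs orders of magnitude faster than a software model can manage.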
AMD is hosting its first AMD Fusion Developer Summit (AFDS) this year, from June 13-16. The conference will focus on OpenCL and Llano's performance capabilities under various related usage models. Sunnyvale is billing the event as a chance to participate and learn from experts, all in accordance with the company's belief that "the computing industry is quietly undergoing a revolution bigger than any change it has seen since the semiconductor was first introduced: the rise of GPUs and APUs." Llano's die (rotated 90 degrees, in this case); the DX11 cores are to the left and occupy a significant chunk of... Read more...
New CUDA 4.0 Release Makes Parallel Programming Easier: Unified Virtual Addressing, GPU-to-GPU Communication, and Enhanced C++ Template Libraries Enable More Developers to Take Advantage of GPU Computing. SANTA CLARA, Calif. -- Feb. 28, 2011 -- NVIDIA today announced the latest version of the NVIDIA CUDA Toolkit for developing parallel applications using NVIDIA GPUs. The NVIDIA CUDA 4.0 Toolkit was designed to make parallel programming easier and enable more developers to port their applications to GPUs. This has resulted in three main features: NVIDIA GPUDirect 2.0 Technology – Offers... Read more...
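To illustrate the two headline features, here is a minimal sketch (ours, not NVIDIA's sample code) of GPU-to-GPU communication under CUDA 4.0. With unified virtual addressing the runtime can tell which device owns each pointer, and GPUDirect 2.0 peer access lets one GPU's memory be copied straight to another's. The sketch assumes two GPUs at device IDs 0 and 1.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int canAccess = 0;
        // Ask whether GPU 0 can address GPU 1's memory directly.
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) {
            printf("No peer access between GPU 0 and GPU 1\n");
            return 0;
        }

        const size_t bytes = 1 << 20;
        float *buf0, *buf1;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);   // flags argument must be 0
        cudaMalloc(&buf0, bytes);
        cudaMemset(buf0, 0, bytes);

        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);

        // Direct GPU-to-GPU copy; no staging through host memory.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }

With UVA in place, a plain cudaMemcpy with cudaMemcpyDefault would also work here, since the runtime infers each pointer's location on its own.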
Six months ago, we covered a story in which Nvidia's chief scientist, Bill Dally, made a number of sweeping claims regarding the superiority of GPUs. Six months later he's again attacking traditional microprocessors with another broad series of accusations. As before, in our opinion, he uses far too broad a brush. Dally's basic claim is that modern CPUs are held back by legacy design. That's not particularly controversial, but he doesn't stop there. Referring to modern CPUs, Dally says: "They have branch predictors that predict a branch every cycle whether the program branches or not -- that burns... Read more...
For the past 3.5 years or so, NVIDIA has ardently advocated the GPU as a computational platform capable of solving almost any problem. One topic the company hasn't targeted, however, is the tremendous performance advantage the GPU could offer malware authors. The idea that a graphics card could double as a security hole isn't something we've heard before, but according to a paper by Giorgos Vasiliadis, Michalis Polychronakis, and Sotiris Ioannidis, it's an attack vector whose popularity could boom in coming years. The trio argues that all the computational hardware that makes the GPU such an ideal... Read more...