Items tagged with GPGPU

For the past 3.5 years or so, NVIDIA has ardently advocated the GPU as a computational platform capable of solving almost any problem. One topic the company hasn't targeted, however, is the tremendous performance advantage the GPU could offer malware authors. The idea that a graphics card could double as a security hole isn't something we've heard before, but according to a paper by Giorgos Vasiliadis, Michalis Polychronakis, and Sotiris Ioannidis, it's an attack vector whose popularity could boom in the coming years. The trio argues that all the computational hardware that makes the GPU such an ideal fit for certain scientific and graphical workloads could (and will) deliver equal benefits... Read more...
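Among the paper's demonstrations is GPU-assisted unpacking: the malware body ships encoded, and only the GPU ever runs the decoder, keeping that logic out of reach of CPU-side analysis and emulation tools. Here's a minimal CUDA sketch of the idea; the kernel, buffer contents, and single-byte XOR key are invented for illustration and aren't taken from the paper:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical sketch of GPU-side payload decoding: each thread
// XOR-decodes one byte, so the decode logic never runs on the CPU.
__global__ void xorDecode(unsigned char *buf, int len, unsigned char key)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < len)
        buf[i] ^= key;
}

int main(void)
{
    // "hello" XOR-encoded with the (invented) key 0x47
    unsigned char payload[] = { 0x2F, 0x22, 0x2B, 0x2B, 0x28 };
    const int len = sizeof(payload);
    unsigned char *d_buf;

    cudaMalloc(&d_buf, len);
    cudaMemcpy(d_buf, payload, len, cudaMemcpyHostToDevice);
    xorDecode<<<1, 256>>>(d_buf, len, 0x47);
    cudaMemcpy(payload, d_buf, len, cudaMemcpyDeviceToHost);
    cudaFree(d_buf);

    printf("%.*s\n", len, payload);  // prints "hello"
    return 0;
}
```

A scanner that only watches CPU instructions never sees the decode step; by the time the payload is readable, the suspicious logic has already executed on hardware most analysis tools ignore.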
At the time of this writing, the FTC's investigation into Intel's alleged monopolistic abuses is on hold as the government attempts to negotiate a settlement with the CPU and chipset manufacturer. If these negotiations don't result in a deal by July 22, the case returns to court, with arguments currently scheduled to begin on September 15. Intel is no stranger to these sorts of lawsuits; between AMD and the EU, the CPU giant has been battling such allegations for years. The lawsuit between NVIDIA and Intel, however, rests on different points than the AMD/Intel allegations. Here, the battle is over whether or not Intel's already-negotiated agreements with NVIDIA give the latter permission to produce... Read more...
Earlier this week, we covered news that a California PS3 owner, Anthony Ventura, had filed a class action lawsuit against Sony, alleging that the company's decision to terminate the PS3's Linux support via firmware update constituted a false/deceptive marketing practice. While most PS3 owners never took advantage of the system's Linux capabilities, "Other OS" functionality is critical to the universities and institutions that have deployed PS3 clusters as high-performance compute farms. We talked with several project leads about the impact of Sony's decision and what it means for low-cost supercomputing programs. Read more...
Bill Dally, chief scientist at NVIDIA, has written an article at Forbes alleging that traditional CPU scaling and Moore's Law are dead, and that parallel computing is the only way to maintain historic performance scaling. With six-core processors now available for $300, Dally's remarks are certainly timely, but his conclusions are a bit premature. Will The Real Moore's Law Please Stand Up And/Or Die Already? [Image: Moore's original representation of his now-famous law] Dally claims Moore's Law is dead because "CPU performance no longer doubles every 18 months." This is little more than a straw man; Moore's Law states that the number of transistors that could be built within a chip for minimal... Read more...
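For reference, the distinction Dally blurs can be written out directly. The two-year doubling period below follows Moore's own 1975 revision and is the commonly cited value, not a number from Dally's article; the 18-month figure belongs to the popular performance corollary, a claim Moore never made:

```latex
% Moore's Law proper: transistor count N, doubling roughly every 2 years
\[
  N(t) = N_0 \cdot 2^{t/2}
\]
% The claim Dally attacks: performance P doubling every 18 months
\[
  P(t) = P_0 \cdot 2^{2t/3}
\]
% The first can keep holding even while the second fails, which is
% exactly the straw-man distinction the article draws.
```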
When Intel announced its plans to develop a discrete graphics card capable of scaling from the consumer market to high-end GPGPU calculations, it was met with a mixture of scorn, disbelief, interest, and curiosity. Unlike the GPUs on display at SIGGRAPH in 2008 (or any of the current ones, for that matter), Larrabee was a series of in-order x86 cores connected by a high-bandwidth bus. In theory, Larrabee would be more flexible than any GPU from ATI or NVIDIA; Intel predicted its new GPU would begin an industry transition from rasterization to real-time ray tracing (RTRT). [Image: Larrabee's original GPU core. A bit of CPU here, a dash of GPU there...] Larrabee parts were supposed to ship in 2010, but last December... Read more...
When it comes to hardware-accelerated PhysX and the future of GPGPU computing, AMD and NVIDIA are the modern-day descendants of the Hatfields and McCoys. Both companies attended GDC last week, where a completely predictable war broke out over PhysX, physics, developer payoffs, and gamer interest in PhysX (or the lack thereof). The brouhaha kicked off with comments from Richard Huddy, AMD's senior manager of developer relations, who said: "What I’ve seen with physics, or PhysX rather, is that Nvidia create a marketing deal with a title, and then as part of that marketing deal, they have the right to go in and implement PhysX in the game...I’m not aware of any GPU-accelerated PhysX code which... Read more...
Back in late September of last year, NVIDIA disclosed some information regarding its next-generation GPU architecture, codenamed "Fermi". At the time, actual product names and detailed specifications were not disclosed, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance, and broader compatibility with computational applications was discussed. We covered much of the early information regarding Fermi in this article. Just to recap some of the more pertinent details found there, the GPU codenamed Fermi will feature over 3 billion transistors and be produced using TSMC's 40nm process. If you remember, AMD's RV870, which is... Read more...
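Since the comparison the teaser cuts off is a simple one, here's the back-of-the-envelope version; the RV870 figure of roughly 2.15 billion transistors is our assumption based on AMD's published specifications, not part of NVIDIA's disclosure:

```latex
\[
  \frac{N_{\text{Fermi}}}{N_{\text{RV870}}} \approx
  \frac{3.0 \times 10^{9}}{2.15 \times 10^{9}} \approx 1.4
\]
```

In other words, Fermi packs roughly 40 percent more transistors than the largest 40nm GPU AMD had shipped at the time.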
Intel formally announced today that its controversial and much-hyped Larrabee GPU will not hit retail stores in 2010. The company declined to speculate on when retail Larrabee-based GPUs will be available, but stated that the current generation of products will be used for in-house development and sampled to relevant partners. Intel offered no specific reason for the delay, saying only that product development was behind where the company had hoped it would be by now. The project's status is now a bit unclear, but we know Intel hasn't completely killed it. Instead, software (and possibly hardware) development kits will be made available over the next year to interested... Read more...
If you've followed the early announcements concerning Fermi, NVIDIA's next-generation GPU architecture, you should already be aware that the new GPU core is both an evolution of the existing GT200 architecture and a significant new design in its own right. NVIDIA made it clear early on that it wasn't going to be talking about GeForce products at the conference this year, instead discussing Fermi as a Tesla successor and a future high-end engine primed to drive the GPGPU industry. [Image: So that's 16 times 32...carry the four...] While it carries many of the same features as the GT200 series, Fermi is distinctly its own animal. NVIDIA's Fermi whitepaper describes the new architecture... Read more...
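The image caption's arithmetic refers to the whitepaper's headline configuration: 16 streaming multiprocessors of 32 CUDA cores each, for 512 cores in total. To make the compute-first positioning concrete, here's a minimal CUDA sketch of the data-parallel pattern those cores exist to run; the SAXPY kernel, buffer sizes, and launch geometry are our illustration, not anything from NVIDIA's whitepaper:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Minimal sketch of the data-parallel style a compute-first GPU targets:
// SAXPY (y = a*x + y), one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 4.0
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

Each of the million elements gets its own thread; the hardware's job is to keep as many of those threads in flight as possible, which is the metric Fermi's wider core count is built to improve.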
NVIDIA has built its brand and reputation as a GPU designer since the company was founded in 1993, but recent comments by the company have implied that it believes platforms like Tegra and ION will be key revenue generators in the future. We've previously discussed NVIDIA's ongoing emphasis on the GPU as a massively parallel processor capable of handling workloads and programs far outside the realm of video games, but to date, reviewers and analysts alike have treated Tegra as more of a side project than a future core competency. [Image: The two core components of NVIDIA's mobile strategy: ION and Tegra] Given how difficult the last twelve months have been for NVIDIA, it's easy to wonder... Read more...
AMD will be making an ATI Catalyst v9.5 hotfix driver package available for Windows Vista (32-bit and 64-bit) users that features a new ATI Stream transcoding runtime. The new runtime enables faster transcoding with lower CPU usage in ATI Stream-compatible applications that leverage the power of a GPU for some computing tasks, such as ATI's own AVIVO Video Converter. In addition to the new driver release, AMD is also announcing that ATI Stream support is coming, or is already available, for a number of third-party applications. [Image: ATI Stream Transcoding Runtime, Supported Products] ATI Stream technology is available on a number of Radeon and FireGL based products... Read more...
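ATI Stream applications are written against AMD's own stack (Brook+/CAL, and later OpenCL), so the sketch below is only an illustration of the general pattern such a runtime offloads: thousands of independent per-pixel operations move to the GPU while the CPU handles the serial bitstream work. The kernel, names, and fixed-point BT.601 weights are ours, not AMD's, and CUDA stands in here purely for readability:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Per-pixel work of the kind a GPU transcoding runtime offloads:
// RGB-to-luma conversion, one thread per pixel, BT.601 weights in
// 8-bit fixed point (77 + 150 + 29 = 256, so >> 8 stays in [0, 255]).
__global__ void rgbToLuma(const uchar4 *rgb, unsigned char *luma,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    uchar4 p = rgb[y * width + x];  // p.x = R, p.y = G, p.z = B
    luma[y * width + x] =
        (unsigned char)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
}

int main(void)
{
    const int w = 64, h = 64;
    uchar4 *d_rgb;
    unsigned char *d_luma;
    cudaMalloc(&d_rgb, w * h * sizeof(uchar4));
    cudaMalloc(&d_luma, w * h);
    cudaMemset(d_rgb, 0xFF, w * h * sizeof(uchar4));  // all-white test frame

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    rgbToLuma<<<grid, block>>>(d_rgb, d_luma, w, h);

    unsigned char first;
    cudaMemcpy(&first, d_luma, 1, cudaMemcpyDeviceToHost);
    printf("luma of white pixel: %d\n", first);  // expect 255
    cudaFree(d_rgb); cudaFree(d_luma);
    return 0;
}
```

Because every pixel converts in parallel while the CPU only orchestrates, the result is exactly the "faster transcoding with lower CPU usage" the teaser describes.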