Adobe (Cont.), HSA Roadmap
Malloy then discussed the relationship between software and hardware developers. He believes hardware vendors have reached a point where we now have a performance surplus, and he described how software developers are leveraging that surplus. He went on to stress the importance of certain architectures and, in particular, of having a unified address space.
Next, Malloy talked about software and platform considerations, saying that we need a higher level of abstraction than current platforms offer in order to attract more mainstream developers to heterogeneous computing. He wasn't sure, however, what those abstractions should be.
Heterogeneous System Architecture (HSA)
Phil Rogers, AMD Corporate Fellow, then came out on stage to talk about the company’s Heterogeneous System Architecture, or HSA. He drew some parallels between what AMD is doing with HSA and what Mr. Malloy wants to see come to heterogeneous computing, and said that with HSA, AMD is bringing that platform to software developers.
Mr. Rogers then went on to outline the HSA roadmap. He described the physical integration that took place in 2011, when the CPU and GPU were put on the same silicon with a unified memory controller, and the optimizations that followed in 2012. He then explained that in 2013, architectural integration will take place: there will be a unified address space for the GPU and CPU, the GPU will use pageable system memory via CPU pointers, and memory will be fully coherent between the CPU and GPU. The ultimate goal is to bring all of the processors in a system into unified, coherent memory.
Rogers’ presentation continued with a look at application areas with abundant parallel workloads. He talked about natural UIs with touch, gesture, and voice; biometric recognition; and augmented reality, where graphics, audio, and other digital information are overlaid on an actual environment. He also talked about “content everywhere,” streaming media, and other areas that could benefit from heterogeneous processing.
The keynote continued with a facial recognition demo and a discussion of how complex the algorithms are to actually detect a face in an HD image, and of the workloads associated with them. Rogers explained that in the vast majority of the stages of a facial recognition pipeline, a GPU has a huge performance advantage over a CPU, but that the CPU can be faster in some stages. The best solution is to leverage the GPU for some stages and the CPU for others, which is what heterogeneous computing is all about.