“We want to make it a lot easier for AI researchers to share techniques and technologies,” wrote Facebook engineers Kevin Lee and Serkan Piantino in a blog post. “As with all hardware systems that are released into the open, it's our hope that others will be able to work with us to improve it. We believe that this open collaboration helps foster innovation for future designs, putting us all one step closer to building complex AI systems that bring this kind of innovation to our users and, ultimately, help us build a more open and connected world.”
Big Sur packs a tremendous amount of power under the hood, including up to eight NVIDIA Tesla M40 GPUs. Each Tesla M40 boasts 3072 CUDA cores, 12GB of GDDR5 memory, 288GB/sec of memory bandwidth and up to 7 teraflops of single-precision performance. Big Sur also takes full advantage of NVIDIA’s Tesla Accelerated Computing Platform, which helps it deliver twice the performance of Facebook’s previous-generation solution.
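For a sense of scale, multiplying the per-GPU figures above by the eight-GPU maximum gives the peak totals of a fully populated chassis. This is a back-of-the-envelope sketch using the article's numbers; real workloads won't sustain these theoretical peaks:

```python
# Aggregate peak figures for a Big Sur chassis fully populated with
# eight Tesla M40s (per-GPU numbers as quoted in the article).
NUM_GPUS = 8
CUDA_CORES_PER_GPU = 3072
MEMORY_GB_PER_GPU = 12
BANDWIDTH_GB_S_PER_GPU = 288
TFLOPS_FP32_PER_GPU = 7

print(f"CUDA cores:       {NUM_GPUS * CUDA_CORES_PER_GPU}")           # 24576
print(f"GDDR5 memory:     {NUM_GPUS * MEMORY_GB_PER_GPU} GB")         # 96 GB
print(f"Memory bandwidth: {NUM_GPUS * BANDWIDTH_GB_S_PER_GPU} GB/s")  # 2304 GB/s
print(f"Peak FP32:        {NUM_GPUS * TFLOPS_FP32_PER_GPU} TFLOPS")   # 56 TFLOPS
```

In other words, a maxed-out Big Sur box offers roughly 56 teraflops of single-precision compute and well over 2 TB/s of combined memory bandwidth on paper.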
Facebook uses all of this power to perform seemingly mundane (to us) operations like identifying your friends’ faces in photos and even selecting which content is best suited for your tastes in your News Feed. But other usage scenarios extend to image and speech recognition along with natural language processing.
“Most of the major advances in these areas move forward in lockstep with our computational ability, as faster hardware and software allow us to explore deeper and more complex systems,” the team continues.