Did you know that there's an actual place in the world called the San Diego Supercomputer Center? Let's just say you may have found your next place of employment, huh? It sounds like a pretty awesome venue, and it just announced the launch of what is believed to be the largest academic-based cloud storage system in the U.S., designed specifically for researchers, students, academics, and industry users who need stable, secure, and cost-effective storage and sharing of digital information, including extremely large data sets.
Michael Norman, director of SDSC, had this to say: "We believe that the SDSC Cloud may well revolutionize how data is preserved and shared among researchers, especially massive datasets that are becoming more prevalent in this new era of data-intensive research and computing. The SDSC Cloud goes a long way toward meeting federal data sharing requirements, since every data object has a unique URL and could be accessed over the Web."
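That "unique URL per object" model is the key to the Web-sharing claim: if every stored object maps to one stable address, sharing a dataset is just sharing a link. As a minimal sketch of the idea (the endpoint, account, and naming scheme below are my own placeholders, not SDSC's actual API):

```python
# Sketch: building a stable, shareable Web URL for each stored object.
# The base endpoint and path layout are assumptions for illustration only.
from urllib.parse import quote

def object_url(base: str, container: str, name: str) -> str:
    """Map a container and object name to one stable URL.

    Percent-encodes unsafe characters (spaces, etc.) but keeps '/' so
    pseudo-directory names inside the object name stay readable.
    """
    return f"{base}/{quote(container)}/{quote(name)}"

# A researcher could hand out this link instead of shipping the data:
url = object_url("https://cloud.example.edu/v1/AUTH_lab",
                 "genomes", "run 42/reads.fastq")
print(url)  # https://cloud.example.edu/v1/AUTH_lab/genomes/run%2042/reads.fastq
```

The point is that access control and distribution ride on ordinary HTTP, which is what makes the federal data-sharing angle plausible.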
SDSC's new Web-based system is 100% disk-based and interconnected by high-speed 10 gigabit Ethernet switching technology, providing extremely fast read and write performance. With an initial raw capacity of 5.5 petabytes – one petabyte equals one quadrillion bytes of storage capacity, or the equivalent of about 250 billion pages of text – the SDSC Cloud has sustained read rates of 8 to 10 gigabytes (GB) per second that will continually improve as more nodes and storage are added. That's akin to reading the entire contents of a 250GB laptop drive in less than 30 seconds.
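The laptop comparison checks out with a little back-of-envelope arithmetic, using the figures from the announcement:

```python
# Back-of-envelope check of the article's throughput claim.
PB = 10**15                 # one petabyte = one quadrillion bytes
capacity = 5.5 * PB         # initial raw capacity

read_rate = 10e9            # upper end of the 8-10 GB/s sustained read rate
laptop_drive = 250e9        # a 250 GB laptop drive, in bytes

seconds = laptop_drive / read_rate
print(seconds)              # 25.0 -> under 30 seconds, as claimed
```

At the lower 8 GB/s figure it works out to about 31 seconds, so "less than 30 seconds" assumes the top of the quoted range.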
Moreover, the SDSC Cloud is scalable by orders of magnitude to hundreds of petabytes, with aggregate performance and capacity both scaling almost linearly with growth. Full details about the new SDSC Cloud can be found online. The SDSC Cloud leverages the infrastructure designed for a high-performance parallel file system by using two Arista Networks 7508 switches, providing 768 total 10 gigabit (Gb) Ethernet ports for more than 10 Tbit/s of non-blocking, IP-based connectivity. The switches are configured using multi-chassis link aggregation (MLAG) for both performance and failover.
Pretty amazing stuff, and while it's reserved for the academic world right now, the trickle-down effect for technology like this can't happen soon enough.