Intel unveiled a number of new data center initiatives this week as part of its broad product strategy to redefine some of its market goals. In the past, Intel's efforts in this area have boiled down to giving customers more of what they wanted, where "more" was defined as higher clock speeds or more processing cores. That's not to imply that Intel didn't do a great deal of work on software optimization, compiler tools, or CPU fabric design -- it absolutely did -- but the pitches used to be simpler, and more hardware-focused.
That's changing slowly, thanks to the growing difficulty of improving microprocessors at a rapid pace on the one hand and the popularity of low-power solutions on the other. That might seem paradoxical, but the upshot is that the modest improvements Intel can offer year-on-year based strictly on hardware aren't enough to drive robust sales or excite the sort of people who get excited about data center announcements. As a result, Santa Clara has also focused on finding ways to expand the utility of its low-power Atom servers, including the upcoming Avoton Atom products, which are built on the same 22nm Silvermont architecture as Bay Trail.
Intel isn't just pushing Avoton as a low-power solution that'll compete with products from ARM, but as the linchpin of a system for software-defined networking and software-defined storage. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (which decides where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch.
Software-defined networking replaces this model by using software to manage traffic (OpenFlow, in the example diagram below) and monitoring it from a central controller. Intel is moving towards such a model and talking it up as an option because it moves control away from specialized hardware baked into expensive routers made by companies that aren't Intel, and towards centralized technology Intel can bake into the CPU.
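The control-plane/data-plane split described above can be sketched in a few lines of code. This is purely illustrative, not a real OpenFlow API: the class names, rule format, and port numbers are all invented for the example. The point is the division of labor: one central Controller makes every routing decision (the control plane), while the switches just look up match-to-port rules someone else installed (the data plane).

```python
# Illustrative sketch of the SDN control/data plane split. None of these
# names correspond to a real OpenFlow implementation; they only show the
# division of labor between a central controller and dumb switches.

class Switch:
    """Data plane only: forwards packets by flow-table lookup, no local logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_rule(self, dst, port):
        # Rules come from the controller, never from the switch itself.
        self.flow_table[dst] = port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return f"{self.name}: no rule, ask controller"
        return f"{self.name}: {packet['dst']} -> port {port}"


class Controller:
    """Control plane: one central place that decides routes for every switch."""
    def __init__(self, switches):
        self.switches = switches

    def set_route(self, dst, port_map):
        # Push one policy decision out to each switch's flow table.
        for sw in self.switches:
            sw.install_rule(dst, port_map[sw.name])


s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.set_route("10.0.0.5", {"s1": 2, "s2": 7})

print(s1.forward({"dst": "10.0.0.5"}))  # s1: 10.0.0.5 -> port 2
print(s2.forward({"dst": "10.0.0.5"}))  # s2: 10.0.0.5 -> port 7
```

In the traditional model the article describes, the `Controller` logic would be duplicated inside every switch's firmware; here it lives in ordinary software on a commodity CPU, which is exactly the shift Intel is betting on.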
The concept can apparently be applied to more than just networking -- Intel is also talking up the idea of "Storage as a service" and believes it can use the same flexible software model to allocate resources on demand rather than statically partitioning resources within each server cluster.
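The storage version of the idea comes down to the same contrast. The sketch below is hypothetical (these classes are invented for illustration, not anything Intel ships): a statically partitioned cluster can refuse a request even while other partitions sit empty, whereas a software-defined pool allocates from shared capacity on demand.

```python
# Hypothetical illustration of static partitioning vs. on-demand
# allocation from a shared pool. Capacities and tenant names are made up.

class StaticPartition:
    """Each tenant gets a fixed slice of capacity up front, used or not."""
    def __init__(self, total_gb, tenants):
        self.quota = {t: total_gb // len(tenants) for t in tenants}
        self.used = {t: 0 for t in tenants}

    def allocate(self, tenant, gb):
        # Fails once the tenant's own slice is full, even if the
        # other slices are sitting empty.
        if self.used[tenant] + gb > self.quota[tenant]:
            return False
        self.used[tenant] += gb
        return True


class SoftwareDefinedPool:
    """One shared pool; capacity goes wherever demand actually is."""
    def __init__(self, total_gb):
        self.free = total_gb

    def allocate(self, tenant, gb):
        if gb > self.free:
            return False
        self.free -= gb
        return True


static = StaticPartition(300, ["web", "db", "logs"])  # 100 GB per tenant
pool = SoftwareDefinedPool(300)

print(static.allocate("db", 150))  # False: exceeds db's fixed 100 GB slice
print(pool.allocate("db", 150))    # True: the shared pool still has room
```

The real systems involved are far more complicated (replication, QoS, failure domains), but this is the basic inefficiency that on-demand allocation is meant to eliminate.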
If You Can't Beat Facebook...
Much of Intel's about-face on these topics has to do with Facebook's own push to commoditize the data center. For decades, servers have maintained high profit margins partly by holding to architectural divisions and structures that enforce rigid product deployments. The advent of blade servers and virtualization may have changed the nature of the server business to a degree, but neither development threatened the need for expensive high-end storage servers with dedicated ASICs or huge switches with application-specific routing rules. Facebook's OpenCompute project, with its focus on lowering costs and increasing flexibility, threatens both.
Intel obviously wants to be on the right side of that trend, which means integrating the technologies and capabilities that will ensure its CPUs remain at the center of any product shift.