|Near Threshold Voltage, Claremont|
|Intel's presentations at the International Solid State Circuits Conference (ISSCC) this year are focused on one of the biggest problems facing modern CPU designers—how to improve power efficiency without sacrificing compute performance. Intel isn't just tackling this problem through conventional process shrinks and smaller dies, however; the company detailed multiple new approaches. First up is Claremont, Intel's first chip built to run on Near Threshold Voltage (NTV) technology.
The "threshold voltage" is the gate voltage at which a transistor switches from off to on. Normally, chips run at supply voltages well above that threshold; the wide margin between the two states keeps transistors from activating when they aren't supposed to. An NTV processor is able to operate much closer to the on/off point, and the result is a significant level of power savings.
Claremont is a bog-standard Intel Pentium that's been transplanted from its original 0.8µm process (that's 800nm) to a 32nm process. Intel didn't set out to label Claremont a solar-powered processor; the demo shot below was simply meant to show that the chip could run on extremely small amounts of power. The CPU's operating parameters indicate such uses are an option; Claremont idles at 280mV at 3MHz and draws just 737mW of power at 915MHz and 1.2V.
For a pertinent example of why this matters, consider the tweaked graph below. The data is from our coverage of Medfield, Intel's first smartphone processor, but we've updated the original with hard figures rather than simply showing a trend line. Power consumption increases so sharply as frequency rises because higher clock speeds require higher voltages, and voltage has a roughly quadratic impact on power consumption.
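To make the voltage effect concrete, here's a back-of-envelope sketch using the standard dynamic-power relation (P ≈ α·C·V²·f) and Claremont's two quoted operating points. This is an illustrative estimate only: it ignores leakage, and it assumes the activity factor and capacitance are the same at both points so they cancel in the ratio.

```python
# Dynamic CPU power scales roughly as f * V^2 (P = a * C * V^2 * f);
# in a ratio between two operating points, a and C cancel out.
f_full, v_full = 915e6, 1.20   # Claremont at full speed: 915MHz, 1.2V
f_idle, v_idle = 3e6,   0.28   # near-threshold idle: 3MHz, 280mV

ratio = (f_full / f_idle) * (v_full / v_idle) ** 2
print(f"predicted dynamic-power gap: ~{ratio:,.0f}x")
```

The frequency term alone accounts for a 305x gap; the quadratic voltage term multiplies that by another ~18x, which is why dropping the supply voltage toward threshold pays off so dramatically at idle.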
It's not clear yet whether NTV would improve power consumption at maximum frequency or whether its benefits are mainly confined to lower power modes, but the impact on mobile devices would be substantial. Much of what makes Medfield a huge step forward for Intel is the chip's ability to minimize its power consumption and rapidly return to standby mode once computational tasks are complete. Smartphones spend the overwhelming majority of their time, upwards of 90 percent, in standby or low-power modes, and that's where NTV would deliver further improvements.
Intel's next step? Rethinking radio.
|Fully Digital Radio|
|Wireless radios, including those integrated into existing smartphones and laptops, are a mixture of digital and analog circuitry. Analog circuits are inherently harder to scale to new process technologies and have tended to lag digital deployments; until 2011, Intel's most advanced analog designs were built on 65nm. The digital radios typically available today look like this:
As you can see, the majority of the radio's circuitry is still analog, even in the second example. Intel wants to change that, and is demonstrating a 32nm digital radio this year, including a completely digital RF transmitter. "We are getting close to having the complete kit of digital RF building blocks for these radios," says Justin Rattner, Intel's chief technology officer. By moving to a completely digital implementation and treating radio as a compute problem, Intel can apply the same scaling laws and leverage its enormous manufacturing expertise.
The bigger news is that the company has integrated an on-die 2.4GHz WiFi radio into an existing dual-core Atom design.
The chip, codenamed Rosepoint, isn't going to come to market in its current form, but it's a functional device. There are still challenges to overcome, like the need to shield the CPU and radio from each other's interference without compromising the performance of either. Intel's goal, however, is to eventually move the entire radio, including the antenna, into a single unified SoC.
|Rethinking the FPU, Conclusion|
|The last Intel unveil we'll be focusing on is the company's new variable-precision floating point unit (FPU). The FPU handles all math involving a decimal point (floating point); conventional x86 CPUs have an FPU capable of performing single-precision (32-bit) and double-precision (64-bit) operations. Current CPUs run every operation through the FPU's full-width datapath; the unit assumes, by default, that it must be as accurate as it possibly can be. If you're calculating spacecraft trajectories or measuring subatomic particles, such accuracy is vital. If you're rendering Skyrim or updating Facebook, it isn't.
The variable-precision FPU Intel is showing off can adjust how precise it is depending on the needs of the software in question. Because it only uses full precision when necessary, the FPU consumes up to 50% less energy than a conventional unit without sacrificing performance or accuracy. Again, this isn't a logic unit intended for a near-term shipping product, but it's an interesting example of how power efficiency can be improved by ensuring that workloads are optimally distributed and managed. Back when FPUs were first designed and implemented, there was no way to tell what sort of work was being performed; the only safe option was to assume a universal worst-case scenario. That's no longer the case.
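Intel hasn't published the unit's interface, but the underlying trade-off is easy to illustrate in software: round-tripping a double through IEEE 754 single-precision storage keeps roughly seven significant decimal digits, which is plenty for shading a pixel but not for plotting an orbit. A minimal sketch using only Python's standard library:

```python
import struct

def to_single(x: float) -> float:
    """Round-trip a Python double through 32-bit IEEE 754 storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

pi_double = 3.141592653589793       # full double precision
pi_single = to_single(pi_double)    # ~7 significant digits survive
error = abs(pi_double - pi_single)

print(f"single-precision pi: {pi_single!r}")
print(f"absolute error:      {error:.1e}")   # on the order of 1e-7
```

A variable-precision FPU exploits exactly this gap: when an error on the order of 1e-7 is acceptable, there's no need to burn energy computing the extra ~29 bits of mantissa that double precision carries.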
Intel's other presentations at ISSCC this year will discuss Ivy Bridge and additional products that use the company's 22nm Tri-Gate transistor technology, new SRAM designs that use NTV to lower operating voltages, and an NTV SIMD implementation that shows how the technology can be applied to graphics units. NTV is at the heart of much of Intel's next-generation power-optimization work, but the company isn't betting everything on a single technology. Its discussion of digital radio implementation and variable-precision FPUs is evidence that improved power efficiency is being chased from every angle.