Intel Unleashes 56-Core Xeon, Optane DC Memory, Agilex FPGAs To Accelerate AI And Big Data
Intel Unveils Agilex FPGAs And Ethernet 800 Series Controllers And Adapters
There are a variety of FPGA solutions currently on the market, but Intel Agilex will offer a number of unique advantages when paired with Intel Xeon Scalable processors, in that some models will offer cache and memory coherency to better accelerate big data analytics and massive databases.
Agilex FPGAs will be manufactured using Intel’s 10nm process node; the first devices are slated to be available in Q3 of this year, though some select partners will likely gain access to the technology a bit earlier. By leveraging Intel EMIB (Embedded Multi-Die Interconnect Bridge) and chiplet technologies, Agilex FPGAs can be tailored for a wide array of workloads and target markets.
Intel claims Agilex FPGAs will offer up to 40% higher performance with up to 40% lower power than previous-gen products. Depending on the model / family, they will support DDR4, DDR5, HBM2, and Optane DC Persistent Memory technologies, and offer up to 40 TFLOPS of DSP performance and up to 112G transceiver data rates. The configurable DSPs support low-precision INT8 through INT2 configurations, and Intel also points out that Agilex FPGAs are the only ones to support hardened BFLOAT16 and FP16 compute for machine learning and intelligent sensor computing.
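For readers unfamiliar with BFLOAT16, it is simply a float32 with the low 16 mantissa bits dropped, which keeps float32's dynamic range at reduced precision. The minimal C sketch below illustrates the format itself with a software conversion; it is only a conceptual example and does not reflect how Agilex's hardened DSP blocks actually implement the math.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Convert an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
 * (sign, 8-bit exponent, 7-bit mantissa). Rounding here is a simple
 * round-to-nearest approximation; hardware implementations may differ. */
static uint16_t float_to_bfloat16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);  /* round to nearest even */
    return (uint16_t)((bits + rounding) >> 16);
}

/* Expand a bfloat16 back to float32 by zero-filling the low mantissa bits. */
static float bfloat16_to_float(uint16_t b)
{
    uint32_t bits = (uint32_t)b << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159f;
    uint16_t bf = float_to_bfloat16(x);
    printf("%f -> 0x%04X -> %f\n", x, bf, bfloat16_to_float(bf));
    return 0;
}
```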
There will initially be three series of Agilex FPGAs: the F-Series (entry level), the I-Series (mid-range), and the M-Series (high-end). The Agilex F-Series will not offer cache and memory coherence with 2nd Generation Intel Xeon Scalable processors, but will support 58G transceiver data rates, PCIe Gen4 connectivity, and DDR4 memory. The I- and M-Series are similar in terms of transceiver data rates, PCIe connectivity, and support for cache / memory coherence, but the higher-end M-Series adds support for additional memory types, including HBM. All of the Agilex FPGAs feature a quad-core Arm Cortex-A53 implementation.
Networking Advancements With Intel 800 Series
Intel Ethernet 800 series controllers (codenamed Columbiaville) will offer some unique features and innovations, designed not only to improve peak throughput, but also to reduce latency and improve predictability.
In large-scale data centers, reducing variability in application response time across the network improves throughput and reduces latency. By extension, doing so would allow more servers to be added to better parallelize tasks and / or support more end-users with existing hardware.
Intel claims Ethernet 800 Series controllers reduce variability and increase predictability through the use of Dynamic Device Personalization (DDP) and Application Device Queues (ADQ). Dynamic Device Personalization, or DDP, is available in previous-gen Ethernet 700 series products, but has been improved in the 800 series. DDP and ADQ essentially enable a programmable pipeline through the controller. In lieu of using only source and destination addresses, DDP allows the parser to look deeper into packets and detect a defined protocol header and additional inner header to more specifically direct traffic. ADQ adds dedicated lanes / queues and rate limiting, along with application-specific queuing and steering. Leveraging DDP and ADQ in essence enables packets to reach their final destination more directly, over dedicated and manageable pathways.
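ADQ setup itself happens at the driver and traffic-class level, but applications generally also need to know which NIC receive queue their connections land on so they can keep processing on a matching thread. As a rough, hedged illustration of that queue-awareness on Linux (not Intel's own tooling), the sketch below uses the standard SO_INCOMING_NAPI_ID socket option to report the receive queue servicing a connection; the port number is arbitrary and error handling is omitted for brevity.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56   /* Linux value; exposed by newer kernel headers */
#endif

int main(void)
{
    /* Minimal TCP server: accept one connection and report which NIC
     * receive queue (NAPI ID) its packets are arriving on. */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);            /* arbitrary example port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    int conn = accept(srv, NULL, NULL);
    char buf[1];
    read(conn, buf, sizeof buf);            /* NAPI ID is populated once traffic arrives */

    unsigned int napi_id = 0;
    socklen_t len = sizeof napi_id;
    if (getsockopt(conn, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) == 0)
        printf("connection is serviced by NAPI ID (queue) %u\n", napi_id);

    close(conn);
    close(srv);
    return 0;
}
```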
According to Intel’s data, some workloads show a greater-than-50% increase in predictability, with lower overall latency, higher throughput, and improved CPU utilization.