Enfabrica Reveals the Fastest ...
Business Fortune
21 November, 2024
Industry-first 3.2 Terabit/second SuperNIC enables massive 500K+ GPU cluster scale with unmatched resiliency, utilization, and operator control.
Enfabrica Corporation, the industry leader in high-performance networking silicon for accelerated computing and artificial intelligence (AI), today announced at Supercomputing 2024 (SC24) the general availability of its ground-breaking 3.2 Terabit/sec (Tbps) Accelerated Compute Fabric (ACF) SuperNIC chip and pilot systems. Compared to existing GPU-attached network interface controller (NIC) products on the market, the ACF solution delivers four times the bandwidth and multipath resiliency, along with multi-port 800-Gigabit Ethernet connectivity to GPU servers. Initial production of the Enfabrica silicon is scheduled for the first calendar quarter of 2025. The announcement strengthens Enfabrica's position in the rapidly expanding AI infrastructure sector and underscores its leadership in the future of GPU compute networking.
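The headline figures are internally consistent: four 800-Gigabit Ethernet ports aggregate to 3.2 Tbps. A minimal sketch of that arithmetic, where the port count and per-port speed are illustrative assumptions inferred from the announcement's figures rather than a published product specification:

```python
# Illustrative arithmetic only: port count and per-port speed are
# assumptions inferred from the article's figures, not a product spec.
PORT_SPEED_GBPS = 800   # one 800-Gigabit Ethernet port
NUM_PORTS = 4           # assumed multi-port configuration

aggregate_tbps = PORT_SPEED_GBPS * NUM_PORTS / 1000
print(f"Aggregate bandwidth: {aggregate_tbps} Tbps")  # prints 3.2 Tbps
```

The same reasoning explains the "four times the bandwidth" claim relative to today's common 800-Gbps GPU-attached NICs.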
The AI "SuperNIC" has become a rapidly expanding silicon product category, logically connecting GPUs or accelerators over a high-performance scale-out network in an AI data center. Massive training, inference, and retrieval-augmented generation (RAG) workloads for frontier AI models demand the highest performance, resilience, and efficiency of data flow, and Enfabrica is the first company in the industry to build a SuperNIC chip from the ground up for these requirements.
Today marks a turning point for Enfabrica, according to CEO Rochan Sankar. Following a successfully secured and significant Series C funding round, the company will ramp its ACF SuperNIC silicon and make it available to customers in early 2025. From the beginning, Enfabrica's goal has been to use a hardware and software co-design strategy to create category-defining AI networking silicon that its clients love, satisfying both the software developers and the system architects who plan, implement, and efficiently sustain AI compute clusters at scale, and who will determine the future course of AI infrastructure.