Edgecore Networks, the leader in open networking solutions, has announced the DCS560, an 800G-optimized switch that delivers an Ethernet-based fabric for demanding AI/ML workloads. The DCS560 is a compact 2RU system with 51.2 Tbps of switching capacity and 64 x 800G ports, and is available in two variants offering either OSFP800 or QSFP-DD800 interfaces.
The advent of generative AI has driven a dramatic increase in the size of AI/ML clusters and in their network bandwidth requirements. With far more compute nodes and accelerators, AI networks now demand high-capacity systems in a flat architecture that can move vast volumes of data with minimal latency and maximum throughput. In response, Ethernet-based fabrics are being rapidly adopted to shorten job completion times.
Edgecore's 51.2 Tbps system, built on the Broadcom StrataXGS® Tomahawk® 5 series, combines a high-radix configuration with easy deployment, making it a strong foundation for an Ethernet fabric. Despite its compact 2RU footprint, the system provides power redundancy, fan tray redundancy, and a wide environmental operating range, delivering the five-nines availability required for data center cloud applications. A load-balanced port mapping design eliminates the need for flyover cables, ensuring known-good system quality and reliability while still allowing customers to customize port assignments. The platform's two variants, with OSFP800 or QSFP-DD800 interfaces, accommodate flexible deployment needs and support passive copper DACs on all ports as well as long-distance ZR+ optics.
Both variants provide high-radix connectivity for accelerators and compute nodes in a flat architecture, reducing latency and power consumption and enabling networks to scale out sustainably to meet the ever-expanding requirements of AI/ML workloads.
Disclaimer: The information contained in each press release posted on this site was factually accurate on the date it was issued. While these press releases and other materials remain on the Company's website, the Company assumes no duty to update the information to reflect subsequent developments. Consequently, readers of the press releases and other materials should not rely upon the information as current or accurate after their issuance dates.