Cisco is pushing deeper into AI infrastructure with the launch of its Silicon One G300 networking chip, a 102.4 terabit-per-second ASIC designed to power the next wave of hyperscale and sovereign AI data centres. Unveiled at Cisco Live EMEA in Amsterdam, the G300 will sit at the heart of new Cisco Nexus 9000 and Cisco 8000 Ethernet systems engineered for training, inference and real-time agentic AI workloads. The company says the platform can support gigawatt-scale AI clusters and improve job completion times by 28% through increased effective GPU utilisation across large, distributed jobs.
Full-Stack AI Networking: Silicon, Systems and Optics
Cisco’s pitch is that the network is becoming part of the AI compute stack itself, not just a transport layer. The G300 is built to handle bursty, tightly coupled AI traffic while maintaining predictable performance, using features such as intelligent load balancing, fully shared buffering and real-time telemetry to minimise the packet loss and congestion that can stall large training runs. The chip is highly programmable, allowing operators to adjust capabilities post-deployment, and embeds security directly in silicon to protect AI workloads without adding latency.
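The article does not detail how the G300's "intelligent load balancing" works, but the problem it targets is well known: classic ECMP pins each flow to one link by hashing its headers, so a handful of large, synchronised AI flows can collide on the same link while others sit idle. The toy sketch below (all flows and addresses are hypothetical, and the hash scheme is the generic ECMP approach, not Cisco's) illustrates that imbalance:

```python
import hashlib
import random

def ecmp_link(flow, n_links):
    """Classic ECMP: hash the flow tuple once, pin the whole flow to one link."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return digest[0] % n_links

random.seed(7)
n_links, n_flows = 8, 16
# Hypothetical equal-sized flows: (src, dst, sport, dport) tuples.
flows = [("10.0.0.%d" % random.randint(1, 254),
          "10.0.1.%d" % random.randint(1, 254),
          random.randint(1024, 65535), 4791) for _ in range(n_flows)]

load = [0] * n_links
for f in flows:
    load[ecmp_link(f, n_links)] += 1

fair_share = n_flows / n_links
print("per-link flow counts:", load)
print("worst link carries %.1fx its fair share" % (max(load) / fair_share))
```

With few, large flows the worst link typically ends up well above its fair share, which is why AI fabrics move toward finer-grained rebalancing and deep shared buffers to absorb the resulting bursts.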
New G300-powered Nexus 9000 and Cisco 8000 systems provide 102.4 Tbps of switching capacity in both air-cooled and fully liquid‑cooled configurations. Combined with advanced optics, Cisco claims these designs deliver nearly 70% better energy efficiency than previous generations at equivalent bandwidth. The systems are aimed at hyperscalers, “neocloud” operators, sovereign cloud providers, service providers and large enterprises looking to build dense AI clusters.
To complete the stack, Cisco introduced 1.6 Tbps OSFP optics for 100T-class switches and 800G Linear Pluggable Optics (LPO) that shift DSP functions into the switch silicon. The company says this reduces optical module power consumption by about 50% and overall switch power by up to 30%, directly addressing power and cooling constraints in large AI data centres.
Competing for the AI Data Centre Fabric
The G300 puts Cisco into more direct competition with networking silicon vendors such as Broadcom and NVIDIA, both of which are expanding Ethernet offerings for AI use cases. While Broadcom’s Tomahawk Ultra targets ultra‑low‑latency 51.2 Tbps switching for HPC and AI scale‑up, Cisco is betting on higher throughput, shared-buffer architectures and tight silicon–optics integration as AI clusters grow to hundreds of thousands of GPUs.
Alongside the hardware, Cisco has upgraded its Nexus One platform to provide a unified management plane that spans silicon, systems, optics and software for AI networks. Executives argue that as AI models get larger and more distributed, data movement between GPUs becomes the real bottleneck—and that networks optimised end‑to‑end for AI traffic will be as critical to performance and economics as the GPUs themselves.
