By Malik Arshad, President and CEO, Canoga Perkins
As cloud computing swallowed enterprise workloads over the past decade, something subtle but consequential happened: innovation in networking shifted dramatically away from enterprises and into the hands of hyperscalers.
While the cloud divisions of companies like Google and Amazon raced ahead with custom silicon and hyper-efficient fabrics, the enterprise networks of other organizations largely stood still, content to repackage commodity silicon and ride the coattails of legacy architectures.
Now, with enterprise artificial intelligence (AI) pushing the boundaries of what infrastructure must support—low latency, deterministic traffic patterns, massive throughput—enterprises are waking up to a hard truth: their networks are no longer built for what’s next.
Enterprise Networks Stopped Innovating
There is a general sentiment in the enterprise networking world that innovation in campus networks has stalled.
As enterprises increasingly relied on public cloud providers for computing and storage, their own networking infrastructures became less central, often stagnating or merely scaling incrementally rather than transforming meaningfully.
Enterprise networks have predominantly relied on merchant silicon networking chips produced by a small number of vendors. While this ecosystem offers economies of scale and broad compatibility, it results in incremental upgrades rather than architectural breakthroughs. The innovation tends to center around new software overlays or management tools rather than rethinking the hardware or fundamental data plane behavior.
A key example of stalled innovation is the fate of P4, a domain-specific language initiated in 2013 to bring programmable, protocol-independent packet forwarding to networking. P4 promised to transform how networks are managed, enabling operators to define how switches process packets (and redefine switch operation to support new protocols) without needing to redesign silicon.
P4 is now an open standard effort managed by the Linux Foundation, which absorbed the technology and community from the Open Networking Foundation. At its height, the P4 movement had a dedicated processor in the Intel Tofino family (from its acquisition of Barefoot Networks) and an SDK for one of the market-leading FPGAs. These P4 devices enabled deep visibility and fine-grained control in the data plane, features critical for modern workloads like telemetry, AI, and microservices-based applications.
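The core idea P4 popularized is that packet processing becomes a pipeline of programmable match-action tables rather than fixed-function logic baked into silicon. A toy sketch of that idea in Python (an illustration only; real P4 programs compile to switch ASICs such as Tofino, and the table and action names below are invented for this example):

```python
# Toy model of a P4-style match-action pipeline (illustration only;
# real P4 compiles to hardware data planes, not Python).

class MatchActionTable:
    """Exact-match table mapping header fields to (action, args)."""
    def __init__(self, key_fields, default_action):
        self.key_fields = key_fields
        self.entries = {}
        self.default_action = default_action

    def add_entry(self, key, action, **args):
        # The control plane populates entries at run time.
        self.entries[key] = (action, args)

    def apply(self, pkt):
        key = tuple(pkt[f] for f in self.key_fields)
        action, args = self.entries.get(key, (self.default_action, {}))
        action(pkt, **args)

# Actions the control plane can bind to table entries.
def forward(pkt, port):
    pkt["egress_port"] = port

def drop(pkt, **_):
    pkt["egress_port"] = None

# Operators define forwarding behavior without redesigning the "silicon".
ipv4_table = MatchActionTable(key_fields=["dst_ip"], default_action=drop)
ipv4_table.add_entry(("10.0.0.2",), forward, port=7)

pkt = {"dst_ip": "10.0.0.2"}
ipv4_table.apply(pkt)
print(pkt["egress_port"])  # 7
```

Supporting a new protocol here means adding key fields and actions, not new hardware, which is exactly the flexibility P4 promised for real data planes.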
However, Intel’s 2024 decision to wind down its Tofino product line cast doubt on the commercial viability of P4-based switches for enterprise networks. While hyperscalers and research institutions embraced P4 for its flexibility, widespread enterprise adoption lagged—due in part to integration complexity, a shortage of developer expertise, and the industry’s inertia around established networking paradigms.
Enterprise AI Will Stress Enterprise Networks
Enterprise AI workloads impose aggressive new requirements on networking infrastructure, requirements that traditional enterprise networks struggle to meet. Enterprise AI will touch a wide range of business operations, including manufacturing, supply chain, and defect detection. The emergence of agentic AI will further drive the need for ultra-low-latency networking.
Many of these applications depend on AI training and inference across clustered GPU workloads, which require extremely low-latency, high-throughput interconnects as well as deterministic behavior for time-sensitive data exchanges. Typical enterprise networks, built for general-purpose traffic, rarely provide the tightly coupled, low-jitter environments that AI workloads thrive on.
This gap has become increasingly visible as enterprises attempt to scale AI beyond proof-of-concept into production environments. Legacy networks introduce unpredictable delays, suboptimal routing paths, and congestion, which degrade AI performance.
Bounded Latency Needed for Enterprise AI
To meet these challenges, enterprises need to pivot toward AI-optimized networking solutions that deliver bounded latency.
Bounded latency networks can be programmed so that the total delay experienced by data traversing the network is guaranteed not to exceed a predetermined time value. Depending on the network technology used, bounded latency networks rely on deterministic networking and time-sensitive networking (TSN) to enforce those latency levels.
Deterministic networking operates at IP Layer 3 and combines per-flow guarantees, resource reservation, and congestion control to provide guaranteed latency and low packet loss for specific data flows.
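The resource-reservation piece amounts to admission control: a flow gets a per-flow guarantee only if the link can still honor every reservation already made. A minimal sketch of that idea (my own simplification, not a standards-defined algorithm; the 80% capacity budget is an assumed headroom, not a mandated figure):

```python
# Sketch of the admission-control idea behind deterministic networking:
# admit a flow only if total reservations stay within a capacity budget.

def admit(flow_rate_mbps, reserved_mbps, link_mbps, headroom=0.8):
    """Return True if the flow fits under the reservation budget.
    headroom=0.8 is an assumed safety margin, not a standard value."""
    return reserved_mbps + flow_rate_mbps <= link_mbps * headroom

reserved = 0.0
for rate in [200, 300, 250, 150]:  # requested flow rates, Mb/s
    if admit(rate, reserved, link_mbps=1000):
        reserved += rate           # reservation accepted
    # rejected flows would fall back to best-effort treatment

print(reserved)  # 750.0 (the 150 Mb/s flow exceeds the 800 Mb/s budget)
```

Flows that pass admission can then be scheduled with guaranteed latency; flows that do not are kept out of the deterministic class rather than degrading it.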
TSN extends standard Ethernet with time synchronization, traffic shaping and scheduling, and redundancy to support real-time, deterministic communication.
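As a rough illustration of what "bounded" means in practice, the worst-case delay under a TSN time-aware shaper (IEEE 802.1Qbv-style gating) can be estimated with back-of-the-envelope arithmetic. This is a deliberately simplified model (one frame per cycle, no backlog within the traffic class), not a full network-calculus bound:

```python
# Simplified worst-case delay bound for an 802.1Qbv-style gate schedule.
# Worst case: a frame arrives just as its gate closes and must wait out
# the rest of the cycle before it can be transmitted.

def worst_case_delay_us(cycle_us, window_us, frame_bits, link_mbps):
    tx_us = frame_bits / link_mbps   # bits / (Mbit/s) gives microseconds
    assert tx_us <= window_us, "frame must fit within the open window"
    gate_closed_us = cycle_us - window_us
    return gate_closed_us + tx_us

# Example: 500 us cycle, 100 us window reserved for the AI traffic
# class, 1500-byte frame on a 1 Gb/s link.
bound = worst_case_delay_us(cycle_us=500, window_us=100,
                            frame_bits=1500 * 8, link_mbps=1000)
print(f"{bound:.1f} us")  # 412.0 us
```

The point is that the bound is known in advance: no matter how unlucky the arrival timing, the frame departs within 412 microseconds, which is the kind of guarantee best-effort enterprise Ethernet cannot make.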
Both technologies are well suited for enterprise AI, industrial, automotive, aerospace and automation applications.
Conclusion
For organizations that have held off on bringing new technology to their enterprise networks, the time is now to invest in the network so that it can support the demands of enterprise AI.
The failure to keep pace with the latency demands of enterprise AI is a strategic liability. If enterprise networks continue to rely on legacy designs and merchant silicon without meaningful architectural innovation, they risk being sidelined in the next wave of digital transformation. By investing in bounded latency fabrics and adopting design philosophies borrowed from hyperscalers, enterprises can reclaim control over their infrastructure destiny. The question is no longer whether AI will transform the enterprise; it's whether the network will be ready when it does.