Converge Digest


Google Links Data Centers with Software-defined Networking

April 16, 2012

WAN economics to date have not made Google happy, said Urs Hoelzle, SVP of Technical Infrastructure and Google Fellow, speaking at the Open Networking Summit 2012 in Santa Clara, California. Ideally, the cost per bit should fall as the network scales, but that has not held true for a backbone as massive as Google’s. At this scale the network demands more expensive hardware and manual management of very complex software. The goal, Hoelzle argued, should be to manage the WAN as a fabric rather than as a collection of individual boxes — something current equipment and protocols do not allow. Google’s ambition is to build a WAN that is higher performance, more fault tolerant and cheaper.

Some notes from his presentation:

  • Google currently operates two WAN backbones. I-Scale is the Internet facing backbone that carries user traffic. It must have bulletproof performance. G-Scale is the internal backbone that carries traffic between Google’s data centers worldwide. The G-Scale network has been used to experiment with SDN.
  • Google chose to pursue SDN in order to separate hardware from software. This enables it to choose hardware based on necessary features and to choose software based on protocol requirements.
  • SDN provides logically centralized network control. The goal is to be more deterministic, more efficient and more fault tolerant.
  • SDN enables better centralized traffic engineering, such as the ability for the network to converge quickly to the target optimum after a link failure.
  • Deterministic behavior should simplify capacity planning versus over-provisioning for worst-case variability.
  • The SDN controller uses modern server hardware, giving it more flexibility than conventional routers.
  • Switches are virtualized with real OpenFlow and the company can attach real monitoring and alerting servers. Testing is vastly simplified.
  • The move to SDN is really about picking the right tool for the right job.
  • Google’s OpenFlow WAN activity really started moving in 2010. Less than two years later, Google is now running the G-Scale network on OpenFlow-controlled switches. 100% of its production data center to data center traffic is now on this new SDN-powered network.
  • Google built its own OpenFlow switch because none were commercially available. The switch was built from merchant silicon and has scaled to hundreds of nonblocking 10GE ports.
  • Google’s practice is to simplify every software stack and hardware element as much as possible, removing anything that is not absolutely necessary.
  • Multiple switch chassis are used in each domain.
  • Google is using open source routing stacks for BGP and ISIS.
  • The OpenFlow-controlled switches look like regular routers. BGP/ISIS/OSPF now interface with the OpenFlow controller to program the switch state.
  • All data center backbone traffic is now carried by the new network. The old network is turned off.
  • Google started rolling out centralized traffic engineering in January.
  • Google is already seeing higher network utilization and gaining the benefit of flexible management of end-to-end paths for maintenance.
  • Over the past six months, the new network has seen a high degree of stability with minimal outages.
  • The new SDN-powered network is meeting the company’s SLAs.
  • It is still too early to quantify the economics.
  • A key benefit is the unified view of the network fabric — higher QoS awareness and predictability.
  • The OpenFlow protocol is really barebones at this point, but it is good enough for real world networks at Google scale.
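The centralized traffic-engineering idea in the notes above can be sketched in a few lines. This is an illustrative toy, not Google's actual system: a controller holds the full topology of the fabric, computes shortest paths over that global view, and on a link failure simply recomputes — converging to the new optimum because all state is in one place. All names here are assumptions for illustration.

```python
import heapq

class CentralController:
    """Toy logically-centralized controller holding a global WAN topology."""

    def __init__(self, links):
        # links: dict mapping (node_a, node_b) -> cost, treated as bidirectional
        self.links = dict(links)

    def _neighbors(self, node):
        for (a, b), cost in self.links.items():
            if a == node:
                yield b, cost
            elif b == node:
                yield a, cost

    def shortest_path(self, src, dst):
        # Plain Dijkstra over the controller's global view of the fabric.
        dist, prev, visited = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node in visited:
                continue
            visited.add(node)
            if node == dst:
                break
            for nbr, cost in self._neighbors(node):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        if dst not in dist:
            return None
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))

    def link_failed(self, a, b):
        # Drop the failed link; the next path computation converges on the
        # new optimum automatically because all state is centralized.
        self.links.pop((a, b), None)
        self.links.pop((b, a), None)

ctrl = CentralController({("dc1", "dc2"): 1, ("dc2", "dc3"): 1, ("dc1", "dc3"): 5})
print(ctrl.shortest_path("dc1", "dc3"))   # ['dc1', 'dc2', 'dc3']
ctrl.link_failed("dc1", "dc2")
print(ctrl.shortest_path("dc1", "dc3"))   # ['dc1', 'dc3']
```

The contrast with a conventional distributed routing protocol is that no per-box convergence dance is needed: the failure is removed from one global data structure and every subsequent path decision reflects it.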
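The note that BGP/ISIS/OSPF now interface with the OpenFlow controller to program switch state can also be sketched. In an OpenFlow design, routes learned from the routing stack are translated into flow entries (match plus actions) that the controller pushes into each switch's forwarding table. The function and data shapes below are assumptions for illustration, not Google's code or any real OpenFlow library API.

```python
import ipaddress

def routes_to_flow_entries(routes):
    """Translate (prefix, out_port) routes into OpenFlow-style flow entries,
    ordered longest-prefix-first to mirror flow-table priority matching."""
    entries = []
    for prefix, out_port in routes:
        net = ipaddress.ip_network(prefix)
        entries.append({
            "match": {"ipv4_dst": str(net)},
            "priority": net.prefixlen,           # longer prefix wins
            "actions": [{"output": out_port}],
        })
    return sorted(entries, key=lambda e: -e["priority"])

table = routes_to_flow_entries([("10.0.0.0/8", 1), ("10.1.0.0/16", 2)])
print(table[0]["match"]["ipv4_dst"])  # 10.1.0.0/16 -- more specific entry first
```

The key property is that the switch itself holds only the resulting match/action table; the route computation and the policy behind it live in the controller.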

The Open Networking Summit plans to post video of the conference following the event.

http://www.opennetsummit.org/

Tags: Blueprint columns, Conference, Data Center, Google, OpenFlow, Packet Systems, SDN
Staff
© 2025 Converge Digest - A private dossier for networking and telecoms.