Converge Digest
Friday, April 10, 2026

NVIDIA Launches BlueField-4 DPU

October 30, 2025
in Data Centers

At GTC Washington, D.C., NVIDIA introduced the BlueField-4 Data Processing Unit (DPU), a purpose-built accelerator for gigascale AI factories. The new chip delivers up to 800 Gbps of network throughput and integrates 64 Arm Neoverse V2 cores, a PCIe Gen 6 ×16 host interface, and a 128 GB LPDDR5 memory subsystem. It fuses compute, storage, and security acceleration within a single platform designed to handle trillion-token workloads.

BlueField-4 combines the NVIDIA Grace CPU with ConnectX-9 SuperNIC networking to provide six times more compute power than BlueField-3, enabling AI factories up to four times larger. It supports both Ethernet and InfiniBand, operating at up to 200 G SerDes per lane, with port options scaling to eight splits per device. The DPU introduces native service function chaining, programmable data-path acceleration, and real-time AI threat detection, all powered by NVIDIA’s DOCA microservices framework for software-defined infrastructure.
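Service function chaining means steering each packet through an ordered pipeline of network functions (firewall, NAT, telemetry, and so on). The sketch below is a purely conceptual illustration of that idea in plain Python — the function names and packet representation are invented for illustration and are not the DOCA API, which exposes this through hardware-accelerated microservices.

```python
from typing import Callable, List, Optional

Packet = dict  # illustrative packet: header fields held in a plain dict

def firewall(pkt: Packet) -> Optional[Packet]:
    """Drop packets from a blocked source; pass the rest on."""
    return None if pkt.get("src") == "10.0.0.66" else pkt

def nat(pkt: Packet) -> Optional[Packet]:
    """Rewrite the destination address (simplified NAT step)."""
    pkt["dst"] = "192.168.1.10"
    return pkt

def run_chain(pkt: Packet,
              chain: List[Callable[[Packet], Optional[Packet]]]) -> Optional[Packet]:
    """Apply each service function in order; a None result drops the packet."""
    for fn in chain:
        pkt = fn(pkt)
        if pkt is None:
            return None
    return pkt

chain = [firewall, nat]
print(run_chain({"src": "10.0.0.5", "dst": "203.0.113.7"}, chain))   # forwarded, dst rewritten
print(run_chain({"src": "10.0.0.66", "dst": "203.0.113.7"}, chain))  # dropped by firewall
```

On a DPU the same pipeline runs in programmable data-path hardware rather than host software, which is what frees CPU cycles for the tenant workload.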

Security is anchored by the Advanced Secure Trusted Resource Architecture, which provides zero-trust tenant isolation, secure boot with hardware root of trust, encrypted firmware updates, and device attestation via SPDM 1.1. The platform includes built-in cryptographic engines for AES-GCM 128/256 and AES-XTS 256/512, plus IPsec and TLS acceleration for data-in-motion. BlueField-4 also integrates 512 GB on-board SSD, 114 MB L3 cache, and support for GPUDirect RDMA and GPUDirect Storage for low-latency data access.
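Secure boot with a hardware root of trust reduces, at its core, to comparing a freshly computed digest of the firmware image against a trusted reference value before execution is allowed. The toy sketch below shows only that digest-comparison concept using Python's standard library; the firmware bytes and "golden" digest are invented stand-ins, and real implementations verify cryptographic signatures, not bare hashes.

```python
import hashlib

# Hypothetical firmware image and a "golden" digest anchored in the root of trust.
firmware = b"bluefield-fw-v1.0"  # stand-in bytes; a real image is a signed binary
golden_digest = hashlib.sha256(firmware).hexdigest()

def verify_firmware(image: bytes, expected: str) -> bool:
    """Recompute the image digest and compare it to the trusted value."""
    return hashlib.sha256(image).hexdigest() == expected

print(verify_firmware(firmware, golden_digest))           # True: boot proceeds
print(verify_firmware(b"tampered-image", golden_digest))  # False: boot halts
```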

Key Specifications

  • Throughput: Up to 800 Gbps (200 G SerDes per lane; supports Ethernet and InfiniBand)
  • Compute: 64 × Arm Neoverse V2 cores; 114 MB shared L3 cache
  • Memory: 128 GB LPDDR5 DRAM; 512 GB on-board SSD
  • Interface: PCIe Gen 6 ×16 with SocketDirect support
  • Acceleration Engines: 16 programmable data-path cores (256 threads)
  • Storage: BlueField SNAP elastic block storage with NVMe-oF and NVMe/TCP acceleration
  • Security: AES-GCM 128/256 and AES-XTS 256/512 crypto; secure boot; device attestation
  • Networking: RDMA / RoCE v2, Spectrum-X Ethernet, in-network computing, MPI accelerations
  • Management: Integrated BMC with 1 GbE OOB port; Redfish and MCTP management protocols
  • Form Factors: PCIe and VR NVL144
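A quick back-of-the-envelope check shows how the headline numbers above fit together — 800 Gbps of network bandwidth against the PCIe Gen 6 ×16 host link that must carry it. The per-lane PCIe figure below is an approximation, not a quoted spec:

```python
# 800 Gbps of line rate expressed in bytes per second.
network_gbps = 800
network_GBps = network_gbps / 8        # 100.0 GB/s of traffic

# PCIe Gen 6 runs 64 GT/s per lane; usable bandwidth is roughly
# 8 GB/s per lane per direction (approximate, ignoring protocol overhead).
pcie_x16_GBps = 8 * 16                 # ~128 GB/s per direction

# Four 200G SerDes lanes aggregate to the 800G line rate.
lanes_200g = network_gbps // 200

print(network_GBps, pcie_x16_GBps, lanes_200g)  # 100.0 128 4
```

The takeaway: the ×16 Gen 6 link (~128 GB/s each way) has just enough headroom to feed 100 GB/s of line-rate traffic to the host, which is why the jump from Gen 5 matters at this speed.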

Adoption Across the Ecosystem

  • Server and Storage Builders: Cisco, Dell, HPE, IBM, Lenovo, Supermicro, VAST Data, WEKA, and DDN plan BlueField-4 integration in next-generation AI storage and compute systems.
  • Cybersecurity Partners: Palo Alto Networks, Check Point, F5, Cisco, and Trend Micro are developing real-time AI runtime security and zero-trust protection.
  • Cloud and AI Providers: CoreWeave, Crusoe, Lambda, Oracle Cloud Infrastructure, Akamai, Together.ai, and xAI are adopting DOCA to accelerate networking and enhance multi-tenant security.
  • Infrastructure Software Vendors: Red Hat, Canonical, SUSE, Nutanix, Mirantis, Rafay, and Spectro Cloud are integrating BlueField-4 into AI-ready clouds.
  • Systems Integrators: Accenture, Deloitte, and World Wide Technology are preparing enterprise and government deployments.
  • Availability: Early access expected in 2026 as part of NVIDIA Vera Rubin AI systems.

“It’s purpose-built as the end-to-end engine for a new class of AI storage platforms, bringing acceleration to the foundation of AI data pipelines for efficient processing and breakthrough performance at scale,” said Itay Ozery, Senior Director of Product Marketing for Data Center at NVIDIA.

🌐  Analysis: BlueField-4 strengthens NVIDIA’s full-stack approach to AI infrastructure, extending its control from GPUs to DPUs and networking. With 800 Gbps links and Grace CPU integration, NVIDIA is positioning itself at the center of the AI data-center stack—challenging AMD Pensando and Intel’s IPU programs as hyperscalers scale out multi-tenant AI fabrics.

Tags: Nvidia

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
