Jabil introduced its new J422G rackmount servers designed for AI, machine learning, and high-performance computing workloads. Built on sixth-generation Intel Xeon processors, the 2U dual-socket systems support up to four 600W GPUs and align with Open Compute Project (OCP) design principles for scalability and sustainability. General availability begins in November 2025.
The new J422G servers target hyperscale, fintech, and large language model workloads that require high-density compute and flexible accelerator integration. Jabil emphasized that the systems are designed for both efficiency and interoperability across mature server ecosystems, supporting workload-optimized accelerators and cloud-scale deployments. The launch follows Jabil’s $500 million investment in a new Salisbury, North Carolina facility dedicated to AI and cloud infrastructure manufacturing, slated to open by mid-2026.
At the OCP Global Summit 2025 in San Jose, Jabil is showcasing a full lineup including:
- J421A-G: an OCP-compliant AmpereOne reference design for large-scale AI inference.
- J322OR: a 2U all-flash storage system built on Open Rack v3 for hyperscale environments.
- Co-packaged optics (CPO) switch system: based on Marvell’s Teralynx silicon, integrated with liquid-to-chip cooling from Mikros Technologies.
“Jabil’s role in the AI hardware ecosystem goes beyond building servers. As our customers’ trusted engineering-led manufacturing partner, we help hyperscalers grow with confidence and speed starting from integration,” said Ed Bailey, Chief Technology Officer, Intelligent Infrastructure at Jabil.
🌐 Analysis:
Jabil’s move deepens its participation in the AI data center supply chain, extending beyond contract manufacturing into reference platform design for OCP ecosystems. The integration of co-packaged optics and direct liquid cooling underscores its intent to compete in thermally demanding AI cluster builds, an area currently dominated by ODMs such as Wiwynn, Quanta, and Foxconn. The Salisbury site investment signals Jabil’s readiness to meet rising U.S. demand for localized AI infrastructure manufacturing.