Converge Digest

Axelera AI Unveils Metis M.2 Max for Edge AI and LLM Inference

September 8, 2025
in Semiconductors

Axelera AI launched the Metis M.2 Max, a new addition to its Metis AI processor family, designed to handle compute-intensive inference workloads at the edge. The module delivers PCIe-level performance in the compact M.2 form factor, targeting large language models (LLMs), vision transformer networks, and other advanced AI applications. Shipments begin in Q4 2025 through Axelera’s webstore and channel partners.

The Metis M.2 Max doubles memory bandwidth over the existing Metis M.2, supports up to 16 GB of memory, and incorporates advanced thermal management and enhanced security. Customers can choose between a standard operating temperature range (-20°C to +70°C) and an extended industrial range (-40°C to +85°C). The new card provides a 33% performance uplift for convolutional neural networks and doubles throughput for LLMs and vision-language models while staying within an average 6.5 W power envelope.

Axelera designed the M.2 Max as a 2280 M-key module with an optional low-profile heatsink, reducing card height by 27% for tighter deployments. Built-in security features include firmware integrity protection and a Root of Trust for secure boot and upgrades. The module integrates with Axelera’s Voyager SDK, enabling simplified deployment of both proprietary and industry-standard AI models.

Key specifications:

  • PCIe-class AI acceleration in an M.2 2280 module
  • Up to 16 GB memory with double the bandwidth of prior generation
  • Performance boost: +33% CNNs, 2x tokens/sec for LLMs and VLMs
  • Power efficiency: 6.5 W average consumption
  • Industrial-grade option: operating from -40°C to +85°C
  • Security: Root of Trust, firmware integrity checks, secure boot and upgrades
  • 27% slimmer profile with optional heatsink for constrained environments
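As a rough illustration of what the 6.5 W average power envelope means in practice, the sketch below converts power and throughput into energy per inference. The 6.5 W figure comes from the announcement; the throughput numbers are hypothetical placeholders chosen for scale only, not published Axelera benchmarks.

```python
# Illustrative only: energy-per-inference arithmetic for an edge accelerator.
# AVG_POWER_W is taken from the announcement; the workload rates below are
# hypothetical placeholders, not measured Metis M.2 Max results.

AVG_POWER_W = 6.5  # average module power envelope, in watts


def energy_per_item_mj(power_w: float, items_per_sec: float) -> float:
    """Energy consumed per inference (or per generated token), in millijoules."""
    return power_w / items_per_sec * 1000.0


# Hypothetical workload rates, for scale only:
cnn_fps = 500.0        # frames/sec on a CNN (placeholder)
llm_tokens_sec = 40.0  # tokens/sec on a small LLM (placeholder)

print(f"CNN: {energy_per_item_mj(AVG_POWER_W, cnn_fps):.1f} mJ/frame")
print(f"LLM: {energy_per_item_mj(AVG_POWER_W, llm_tokens_sec):.1f} mJ/token")
```

On these assumed rates, a fixed power draw means each doubling of throughput (such as the claimed 2x tokens/sec uplift) halves the energy spent per token, which is the metric that matters for battery-backed and thermally constrained edge deployments.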

“We continue to set the price-performance ratio benchmark for the AI accelerator market,” said Fabrizio del Maffeo, CEO of Axelera AI. “Our goal is to make it possible for our customers to deploy transformative edge AI applications at scale.”

🌐 Analysis: Edge AI hardware is becoming more critical as LLMs and transformer models move beyond the data center. Axelera’s move to pack PCIe-level performance into the M.2 format reflects the demand for scalable, power-efficient inference across industrial, retail, and healthcare environments. The company is positioning Metis as an alternative to GPU-centric solutions from Nvidia and AMD, with a focus on cost and power efficiency at the edge. This launch follows Axelera’s strategy of expanding the Metis platform while strengthening security, a trend mirrored by competitors such as Hailo and Mythic in edge inference markets.

🌐 We’re tracking the latest developments in semiconductors. Follow our ongoing coverage at: https://convergedigest.com/category/semiconductors/


Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
