Converge Digest
Sunday, April 12, 2026


Cadence Unveils AI Co-Processor with 30% Area and 20% Power Gains

May 11, 2025
in Semiconductors

Cadence introduced a new category of AI silicon with the launch of the Tensilica NeuroEdge 130 AI Co-Processor (AICP), designed to complement NPUs and offload non-MAC AI tasks such as ReLU, sigmoid, and tanh. Targeting automotive, industrial, consumer, and mobile SoCs, the new processor offers more than 30% area savings and over 20% lower dynamic power consumption compared to existing DSPs, without sacrificing performance. The NeuroEdge 130 AICP is based on Cadence’s proven Vision DSP architecture and is supported by the NeuroWeave SDK for rapid model deployment.
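To make the MAC/non-MAC split concrete, here is a minimal, purely illustrative NumPy sketch (not Cadence code, and no claim about the AICP's actual programming model): the matrix multiply represents the MAC-dominated work an NPU handles, while the element-wise activations named in the announcement are the kind of non-MAC tasks a co-processor would take on.

```python
import numpy as np

def npu_stage(x, w):
    # MAC-dominated work: a matrix multiply is a chain of
    # multiply-accumulate operations, the NPU's specialty.
    return x @ w

def coprocessor_stage(y, kind="relu"):
    # Element-wise, non-MAC activations cited in the announcement,
    # the sort of work an AI co-processor would offload from the NPU.
    if kind == "relu":
        return np.maximum(y, 0.0)
    if kind == "sigmoid":
        return 1.0 / (1.0 + np.exp(-y))
    if kind == "tanh":
        return np.tanh(y)
    raise ValueError(f"unknown activation: {kind}")

# Toy pipeline: NPU computes the matmul, co-processor applies ReLU.
x = np.random.randn(4, 8)
w = np.random.randn(8, 16)
out = coprocessor_stage(npu_stage(x, w), "relu")
```

The function names and shapes here are hypothetical; the point is only that activations are cheap per element but poorly matched to MAC arrays, which is why a dedicated co-processor can save area and power.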

With its VLIW-based SIMD architecture, optimized ISA, and future-ready programmability, the AICP delivers a flexible platform to handle pre- and post-processing layers not suited to NPUs. The processor supports both in-house and third-party NPU IPs, ensuring broad compatibility. Cadence positioned the AICP as a key component in next-gen edge AI systems, enabling execution of multimodal and LLM-based agentic AI models with lower latency and power. The processor is ISO 26262-ready for safety-critical automotive deployments and has already garnered strong customer interest.

Early endorsements came from indie Semiconductor, MulticoreWare, and Neuchips, all of which cited the AICP’s efficiency and adaptability for ADAS, edge vision, and AI data center use cases. The processor’s lightweight AI library and compatibility with the TVM stack allow developers to optimize workloads while minimizing compiler overheads.

  • New AI co-processor class for pre/post-NPU task offloading.
  • 30% area savings, 20% dynamic power reduction vs. Vision DSPs.
  • Compatible with Cadence Neo NPUs and third-party NPU IPs.
  • Optimized for agentic and physical AI tasks like robotics, ADAS, healthcare.
  • Supported by Cadence’s NeuroWeave SDK and standalone AI library.
  • ISO 26262-ready and available now for integration in AI SoCs.

“Our customers asked for a small, efficient AI co-processor to future-proof their AI systems. The NeuroEdge 130 AICP meets that challenge head-on with leading performance and power efficiency,” said Boyd Phelps, SVP and GM of Cadence’s Silicon Solutions Group.


Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
