Converge Digest

Ceva expands Edge AI NPU family with Ceva-NeuPro-Nano

June 24, 2024
in Semiconductors

Ceva has extended its Ceva-NeuPro family of Edge AI NPUs with the Ceva-NeuPro-Nano NPUs. These highly efficient, self-sufficient NPUs are designed to deliver the power, performance, and cost efficiencies that semiconductor companies and OEMs need to integrate TinyML models into their SoCs for consumer, industrial, and general-purpose AIoT products. TinyML refers to deploying machine learning models on low-power, resource-constrained devices, bringing the power of AI to the Internet of Things (IoT).
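To make those resource constraints concrete, here is a rough back-of-envelope sketch of why TinyML models must be small and heavily quantized. The parameter count and SRAM budget are illustrative assumptions, not figures from Ceva:

```python
# Illustrative only: an MCU-class AIoT device might offer on the order
# of 256 KB of SRAM (assumed figure), so weight precision matters.

def model_footprint_bytes(num_params: int, bits_per_weight: int) -> int:
    """Approximate weight storage for a model at a given precision."""
    return num_params * bits_per_weight // 8

params = 100_000  # roughly a small keyword-spotting network (assumption)

fp32 = model_footprint_bytes(params, 32)  # 400,000 bytes -- exceeds budget
int8 = model_footprint_bytes(params, 8)   # 100,000 bytes -- fits

print(f"float32: {fp32 / 1024:.0f} KB, int8: {int8 / 1024:.0f} KB")
```

The same arithmetic is what drives the 4-bit and 8-bit integer data types listed among the NPU's features below.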

Driven by the increasing demand for efficient and specialized AI solutions in IoT devices, the market for TinyML is rapidly growing. According to ABI Research, by 2030, over 40% of TinyML shipments will be powered by dedicated TinyML hardware rather than all-purpose MCUs. The Ceva-NeuPro-Nano NPUs address the specific performance challenges of TinyML, aiming to make AI ubiquitous, economical, and practical for a wide range of use cases, including voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.

The new Ceva-NeuPro-Nano Embedded AI NPU architecture is fully programmable and efficiently executes neural networks, feature extraction, control code, and DSP code. It supports advanced machine learning data types and operators, including native transformer computation, sparsity acceleration, and fast quantization. The optimized, self-sufficient architecture enables superior power efficiency, a smaller silicon footprint, and optimal performance compared to existing processor solutions for TinyML workloads. Additionally, Ceva-NetSqueeze AI compression technology processes compressed model weights directly, achieving up to 80% memory footprint reduction, addressing a key bottleneck in the adoption of AIoT processors.
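Ceva-NetSqueeze itself is proprietary and its internals are not described in the announcement. As a hypothetical stand-in, the sketch below shows how combining low-bit quantization with weight sparsity (both mentioned as supported mechanisms) can plausibly yield memory reductions in the claimed range; the encoding scheme, matrix size, and sparsity threshold are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy float32 weight matrix with induced sparsity (near-zero weights pruned).
w = rng.normal(0, 0.5, size=(256, 256)).astype(np.float32)
w[np.abs(w) < 0.4] = 0.0

dense_fp32 = w.size * 4  # baseline: 4 bytes per float32 weight

# Hypothetical compressed format: a 1-bit presence bitmap for every
# position, plus 4-bit quantized codes for the nonzero weights only.
nonzero = np.count_nonzero(w)
compressed = w.size // 8 + (nonzero + 1) // 2  # bitmap bytes + packed nibbles

reduction = 1 - compressed / dense_fp32
print(f"nonzeros: {nonzero} of {w.size}, reduction vs float32: {reduction:.0%}")
```

The point of processing such a format directly, as the announcement describes, is that weights never need to be decompressed into a full dense buffer in on-chip memory.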

Key Features of Ceva-NeuPro-Nano NPUs:

  • Programmable for neural networks, feature extraction, control code, and DSP code.
  • Scalable performance with configurations up to 64 int8 MACs per cycle.
  • Supports advanced ML data types and operators, including 4-bit to 32-bit integers and native transformer computation.
  • Advanced mechanisms like sparsity acceleration, non-linear activation acceleration, and fast quantization.
  • Single-core design eliminates the need for a companion MCU for computational tasks.
  • Ceva-NetSqueeze technology reduces memory footprint by up to 80%.
  • Innovative energy optimization techniques, including automatic on-the-fly energy tuning and weight-sparsity acceleration.
  • Ceva-NeuPro Studio provides a unified AI stack and supports open AI frameworks like TensorFlow Lite for Microcontrollers and microTVM.
  • Fast time to market with a Model Zoo of pretrained and optimized TinyML models.
  • Optimized runtime libraries and application-specific software.
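The "64 int8 MACs per cycle" figure refers to multiply-accumulate units operating on 8-bit integers. A minimal software sketch of one such quantized dot product, accumulating into a wider int32 register as NPU MAC arrays typically do to avoid overflow (illustrative; not Ceva's hardware behavior):

```python
import numpy as np

def int8_dot(a: np.ndarray, b: np.ndarray) -> int:
    """Quantized dot product: int8 inputs widened to an int32 accumulator,
    mirroring the common MAC-array pattern (illustrative sketch)."""
    return int(np.dot(a.astype(np.int32), b.astype(np.int32)))

rng = np.random.default_rng(1)
x = rng.integers(-128, 128, size=64, dtype=np.int8)  # one "cycle" of 64 MACs
w = rng.integers(-128, 128, size=64, dtype=np.int8)
acc = int8_dot(x, w)
print(acc)
```

A configuration rated at 64 int8 MACs per cycle would, in effect, retire one such 64-element dot product every clock cycle.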
Tags: AI, Ceva

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
