Myrtle.ai has released support for its VOLLO® inference accelerator on Napatech’s NT400D1x SmartNICs, enabling machine learning inference with latencies below one microsecond. The joint solution is designed to meet ultra-low-latency demands in high-performance environments, including algorithmic trading, telecom infrastructure, cybersecurity, and network monitoring.
The VOLLO accelerator supports a broad spectrum of machine learning models, from LSTMs and CNNs to Random Forests and Gradient Boosting decision trees. By running inference directly on the SmartNIC, the solution eliminates the round trip to the host CPU that would otherwise add latency. This marks a key step in moving AI processing closer to the data source for real-time decision-making.
“We recognized that the latency leader in the STAC ML benchmarks could bring real value to our customers in the finance market as they increase their adoption of ML for auto trading,” said Jarrod J.S. Siket, Chief Product & Marketing Officer at Napatech. “The VOLLO compiler is designed to make it very easy for ML developers to use our SmartNICs and this really strengthens our portfolio of products and services.”
- Myrtle.ai’s VOLLO accelerator now supported on Napatech NT400D1x SmartNICs
- Enables <1µs ML inference latency for edge and inline use cases
- Supports LSTM, CNN, MLP, Random Forest, and Gradient Boosting models
- Target applications: finance, wireless, security, and network operations
- VOLLO compiler available at vollo.myrtle.ai
🌐 Why it Matters:
As real-time AI gains traction in financial trading and network security, Myrtle.ai and Napatech’s SmartNIC-based inference solution demonstrates a shift toward embedded, ultra-low-latency compute at the network edge. It underscores growing demand for ML capabilities that bypass traditional CPU/GPU bottlenecks.
🌐 We’re tracking the latest developments in AI infrastructure. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/