Dubai, UAE — SambaNova Systems today introduced its SN50 AI chip, positioning it as the fastest and most efficient solution for agentic AI workloads. The chip delivers speeds up to 5X faster than competing GPUs and offers enterprises a 3X lower total cost of ownership, enabling scalable, high-performance inference for autonomous AI agents.
To accelerate deployment, SambaNova announced a multi-year strategic collaboration with Intel, aimed at delivering cloud-scale AI inference solutions. The partnership will combine SambaNova’s full-stack AI systems with Intel’s CPUs, GPUs, and networking technologies, providing organizations with an alternative to GPU-centric solutions while optimizing performance, cost, and throughput.
Key Highlights of SN50
- Ultra-Low Latency & Instant AI Experiences: Real-time responsiveness supports next-generation enterprise applications, including voice assistants and agentic AI workflows.
- High Scale and Concurrency: Supports thousands of simultaneous AI sessions without performance degradation.
- Breakthrough Model Capacity: Three-tier memory architecture enables models with 10T+ parameters and 10M+ token context lengths for richer reasoning and outputs.
- Cost Efficiency: Resident multi-model memory and agentic caching reduce infrastructure costs and maximize ROI for enterprise deployments.
- Enhanced Compute & Networking: SN50 delivers 5X more compute per accelerator and 4X more network bandwidth than prior generations, linking up to 256 accelerators over a multi-terabyte-per-second interconnect.
According to SambaNova CEO Rodrigo Liang, “AI is no longer a contest to build the biggest model. With SN50 and our collaboration with Intel, the real race is about who can light up entire data centers with AI agents that respond instantly, scale efficiently, and do it at a cost that makes AI the most profitable engine in the cloud.”
SoftBank First to Deploy SN50 in Japan
SoftBank Corp. will be the first customer to integrate SN50 into its next-generation AI data centers in Japan. The deployment will power low-latency inference for enterprise and sovereign clients across the Asia-Pacific region, supporting both open-source and proprietary frontier models.
Hironobu Tamba, VP at SoftBank, said, “With SN50, we can deliver world-class AI services with the speed, resiliency, and sovereignty our customers expect — achieving GPU-level performance with far better economics and control.”
$350M+ Series E Financing
SambaNova also announced over $350 million in Series E funding to expand manufacturing, cloud capacity, and enterprise software integrations. The round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital, Battery Ventures, and accounts advised by T. Rowe Price Associates, Inc., among others.
Proceeds will be used to increase SN50 production, scale SambaCloud, and deepen software integrations, supporting enterprises in deploying production-ready agentic AI.
Built for Agentic Production
The SN50 leverages SambaNova’s Reconfigurable Dataflow Unit (RDU) architecture, enabling:
- Faster time-to-first-token and larger batch processing
- Support for enterprise-scale autonomous AI agents
- Integration with multi-model memory and agentic caching to optimize cost and efficiency
Peter Rutten, IDC Research VP, notes: “SN50 changes the tokenomics of AI inference at scale. By delivering high performance and throughput with air-cooled chips, SambaNova is redefining infrastructure for agentic AI.”
Looking Ahead
SambaNova’s collaboration with Intel will also focus on AI cloud expansion, integrated AI infrastructure, and go-to-market execution, creating a multi-billion-dollar market opportunity for AI inference solutions.
With record bookings and revenue in 2025, SambaNova is poised to play a key role in powering enterprise, sovereign, and multi-cloud AI deployments globally — bridging the gap between high-performance AI research and production-ready infrastructure.