US AI Chip Interconnect Firm Mixx Technologies Bags $33 Mn
- By Startup Story | December 1, 2025
Oversubscribed Series A Fuels Optical Engine Scaling For Switchless GPU Clusters
San Jose-based Mixx Technologies has raised $33 million in an oversubscribed Series A funding round led by Singapore’s ICM HPQC Fund, with participation from TDK Ventures, SystemIQ Capital, Applied Ventures, and others. The capital accelerates development and deployment of Mixx’s HBxIO™ optical engine—a co-packaged optics (CPO) solution enabling ultra-high radix connectivity for AI inference workloads and high-performance computing (HPC). Founded by Broadcom veterans Vivek Raghuraman (CEO) and Rebecca K. Schaevitz (Chief Product Officer), Mixx addresses critical data bottlenecks in datacenter-scale AI clusters.
Rack-To-Chip Optical Breakthrough
Mixx’s system-level approach merges photonics with advanced packaging, optimizing performance, power, latency, and reliability from rack to chip. The HBxIO engine supports switchless GPU clusters, delivering highly parallelized AI compute with unprecedented radix—flattening scale-up networking for next-gen workloads. Unlike competitors focused on point solutions, Mixx’s rack-to-chip philosophy bridges back-end and front-end networks, enabling sustainable scaling amid exploding AI demands.
TDK Ventures President Nicolas Sauvage hailed Mixx as “deep-tech innovation advancing the entire AI compute ecosystem,” emphasizing its role in efficient, low-latency infrastructure. Proceeds fund product acceleration, global footprint expansion, and R&D scaling, growing headcount from 25 to over 75 across IC design, photonics, and systems engineering.
Strategic Backing Validates Mission
ICM HPQC Fund’s lead investment underscores confidence in Mixx’s open-standards approach to cost-effective scaling: higher performance per dollar per watt. TDK Ventures brings materials expertise, SystemIQ Capital a sustainability focus, and Applied Ventures a semiconductor lineage. CEO Raghuraman stated: “We’re building the foundation for tomorrow’s AI-driven world, unlocking performance beyond current architectures.”
Backed by decades of combined experience at Intel, Corning, and Rockley Photonics, the team targets datacenter operators hitting copper and electrical limits at 1.6T speeds. HBxIO’s silicon-integrated design promises 10x bandwidth density with 50% power savings.
AI Infrastructure Tailwinds
Funding arrives amid forecasts of $200 billion-plus in AI capex for 2026, with interconnects consuming 30%+ of cluster power. Nvidia’s GB200 NVL72 racks demand 141TB/s of bandwidth; Mixx positions itself for CPO adoption projected to reach $20 billion by 2030 (Yole Group). Switchless topologies cut latency by 40%, enabling trillion-parameter models.
Mixx differentiates through a system-first methodology, optimizing at rack scale where rivals remain module-centric. Early prototypes demonstrate 128-port radix at 3.2T, targeting hyperscalers like Microsoft Azure and Google Cloud.
Roadmap And Competitive Edge
Immediate priorities include prototype deployments in Q2 2026, a volume ramp in H2, and co-design partnerships. Expansion plans eye fabs in Asia and Europe, while the U.S. headquarters anchors DoD and enterprise traction.
Key challenges remain photonic integration yields and thermal management at scale, which Mixx aims to mitigate through its IP portfolio and open-ecosystem compatibility.
This raise cements Mixx as an interconnect innovator, powering AI’s next scaling phase as optics replace copper. As datacenters evolve into million-GPU clusters, Mixx’s platform promises the efficiency to unlock exascale intelligence.