Investing in Mirai
by Hyung Kim
Why we invested in Mirai
Mirai is building the on-device inference infrastructure layer the market has been missing—and its timing couldn’t be more critical.
I first met Alexey and Dima when they were building Prisma and Reface. What stood out wasn’t just product instinct, but deep technical rigor: they built internal infrastructure that scaled. Now they’ve reunited to tackle something foundational—high-performance on-device inference.
AI has shifted from novelty to expectation. As voice assistants, real-time translation, code completion, and copilots proliferate, the assumption that all inference belongs in the cloud is breaking. Latency budgets are tighter. Privacy expectations are higher. And inference costs are no longer negligible—they’re material.
That’s why we invested in Mirai Tech.
Mirai builds on-device AI inference infrastructure for Apple Silicon, enabling developers to run modern models locally with production-grade performance. In internal benchmarks, Mirai delivers a 37% increase in generation speed and up to 59% faster prefill versus MLX on certain model-device pairings. On-device performance isn’t a marginal gain—it’s often the difference between instant and unusable.
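For readers who want a concrete picture of the baseline being compared against, the sketch below shows what local inference on Apple Silicon looks like with Apple's open-source mlx-lm package. It is illustrative only, not Mirai's API; the model repo is just one example, and the verbose output reports prompt processing (prefill) and generation speed, the two metrics cited above.

```python
# Illustrative only: local LLM inference on Apple Silicon using the MLX baseline
# mentioned above (pip install mlx-lm). This is not Mirai's API; the model repo
# below is one example from the mlx-community hub.
from mlx_lm import load, generate

# Download (on first run) and load a 4-bit quantized model plus its tokenizer.
model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

prompt = "Explain, in one sentence, why on-device inference reduces latency."

# verbose=True prints prompt (prefill) and generation tokens-per-second --
# the two measurements the benchmark comparison above refers to.
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```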
The Inflection Point
Specialized AI compute is now standard in consumer devices. The hardware is ready—especially on Apple Silicon—but the software stack lags behind.
On-device inference is a systems problem. Performance varies across chips, model architectures, device generations, and memory constraints. The “one runtime fits all” approach breaks down quickly. Mirai’s mission is to make on-device inference as accessible as cloud APIs—without requiring every team to master low-level optimization.
The Economics
As AI workloads become continuous—voice, code completion, real-time assistance—cloud costs compound. At scale, they directly constrain product ambition.
Shifting the right workloads to the device dramatically lowers per-inference costs while improving latency, reliability, privacy, and offline capability. The future isn’t purely cloud or purely on-device—it’s hybrid. But on-device will take a much larger share of interactive, real-time use cases than most expect. Mirai makes that shift practical.
The Platform Shift
This is also a power shift in the stack. For years, OS vendors dictated the pace of developer capability. On-device models change that dynamic. Developers can ship intelligence locally without waiting for OS primitives.
That creates space for new infrastructure layers above the OS—runtimes that evolve faster than iOS or Android release cycles. Mirai’s long-term vision aligns with that opportunity.
Why Now, Why Mirai
Infrastructure timing matters. The substrate is ready (AI chips everywhere), the pain is real (cost, latency, reliability), and demand is rising.
Mirai sits at that intersection, led by founders who have already built and scaled consumer AI products, and who are now applying that same rigor to the infrastructure layer the next era will depend on.
We’re excited to back Mirai as they help define what “AI-native” feels like when intelligence runs locally, instantly, and affordably.
・X (Twitter): @trymirai
・LinkedIn: Mirai Tech Inc.
・Founders: Dima Shvets (X: @dmitrshvets, LinkedIn), Alexey Moiseenkov (X: @Darkolorin, LinkedIn)
・Website: trymirai.com
・Media:
https://trymirai.com/blog/mirai-raises-10m-to-build-the-on-device-ai-capability-layer