Actalithic Blog
Updates, releases, and news from the Actalithic team. We build LocalLLM — a completely private, browser-native AI that runs on your own hardware.
LocalLLM is live — welcome to actalithic.github.io

After weeks of development, we're officially launching LocalLLM at actalithic.github.io — a fully browser-native AI assistant that runs entirely on your own hardware. No accounts. No API keys. No servers. Zero data ever leaves your device.

We built LocalLLM because we believe privacy-first AI shouldn't require a PhD to run. You open a webpage, pick a model, and within a few minutes you have a capable AI running completely offline on your GPU — powered by WebGPU and the incredible work of the MLC AI team.
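
If you're curious what "browser-native" means in practice, here is a minimal sketch of the kind of flow LocalLLM builds on, using MLC's open-source web-llm library directly. This is not LocalLLM's own code, and the model ID is just one of web-llm's prebuilt identifiers:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Fetch (or load from browser cache) the model weights and compile
// the WebGPU kernels. Repeat visits skip the download entirely.
const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// From here on, inference runs entirely on your GPU, in your tab.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
});
console.log(reply.choices[0].message.content);
```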

What we're launching with: Six carefully curated models spanning a wide range of sizes and capabilities — from the blazing-fast Llama 3.2 1B that runs on nearly any device, all the way up to Llama 3.3 70B for those with high-end hardware. We're starting lean. More models are coming soon.

ActalithicCore is our custom GPU+CPU hybrid engine mode. It intelligently splits model layers between your GPU VRAM and system RAM, letting you run models far larger than your VRAM would normally allow — without giving up GPU acceleration.
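
To give a feel for the idea, here's an illustrative sketch of how a hybrid split can be planned. This is not ActalithicCore's actual implementation; the names and byte estimates below are hypothetical:

```typescript
// Hypothetical sketch of the hybrid-split idea. planLayerSplit and
// SplitPlan are illustrative names, not part of ActalithicCore's API.

interface SplitPlan {
  gpuLayers: number; // layers kept resident in GPU VRAM
  cpuLayers: number; // layers offloaded to system RAM
}

// Fit as many transformer layers as possible into the VRAM budget
// and assign the remainder to system RAM.
function planLayerSplit(
  totalLayers: number,
  bytesPerLayer: number,
  vramBudgetBytes: number,
): SplitPlan {
  const gpuLayers = Math.min(
    totalLayers,
    Math.floor(vramBudgetBytes / bytesPerLayer),
  );
  return { gpuLayers, cpuLayers: totalLayers - gpuLayers };
}

// Illustrative numbers: an 80-layer model at roughly 0.5 GiB per
// quantized layer, on a GPU with 20 GiB of usable VRAM.
const plan = planLayerSplit(80, 0.5 * 2 ** 30, 20 * 2 ** 30);
console.log(plan); // { gpuLayers: 40, cpuLayers: 40 }
```

A real engine also has to budget for the KV cache, activations, and per-layer transfer costs, but the core trade-off is the same: layers that don't fit in VRAM trade some speed for the ability to run the model at all.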

This is just the beginning. We have a lot planned — more models, more engine features, and things we're not ready to talk about yet. Follow us on Bluesky and check back here for updates.

— The Actalithic Team