
RunanywhereAI / runanywhere-sdks

Production-ready toolkit to run AI locally

9,214 stars
269 forks
39 issues
C++ · Kotlin · C

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing RunanywhereAI/runanywhere-sdks in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/RunanywhereAI/runanywhere-sdks)

Repository Summary (README)

RunAnywhere — On-device AI for every platform. Run LLMs, speech-to-text, and text-to-speech locally — private, offline, fast.

Download: App Store (https://apps.apple.com/us/app/runanywhere/id6756506307) · Google Play (https://play.google.com/store/apps/details?id=com.runanywhere.runanywhereai)
Community: Discord (https://discord.gg/N359FBbDVd) · License: Apache 2.0

See It In Action

  • Text Generation — LLM inference — 100% on-device (docs/gifs/text-generation.gif)
  • Voice AI — STT → LLM → TTS pipeline — fully offline (docs/gifs/voice-ai.gif)
  • Image Generation — On-device diffusion model (docs/gifs/image-generation.gif)
  • Visual Language Model — Vision + language understanding on-device (docs/gifs/visual-language-model.gif)

What is RunAnywhere?

RunAnywhere lets you add AI features to your app that run entirely on-device:

  • LLM Chat — Llama, Mistral, Qwen, SmolLM, and more
  • Speech-to-Text — Whisper-powered transcription
  • Text-to-Speech — Neural voice synthesis
  • Voice Assistant — Full STT → LLM → TTS pipeline (sketched below)

No cloud. No latency. No data leaves the device.
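
To make the voice-assistant flow concrete, here is a minimal Kotlin sketch of how the three stages could chain together. The interface and method names (`transcribe`, `chat`, `speak`) are illustrative assumptions, not the SDK's actual API; see the platform quick starts below for real calls.

// Hypothetical interfaces standing in for the SDK's STT, LLM, and TTS services;
// these names are illustrative, not the real RunAnywhere API.
interface SpeechToText { suspend fun transcribe(audio: ByteArray): String }
interface LanguageModel { suspend fun chat(prompt: String): String }
interface TextToSpeech { suspend fun speak(text: String) }

// One turn of the assistant: audio in, spoken answer out, all on-device.
suspend fun handleUtterance(
    audio: ByteArray,
    stt: SpeechToText,
    llm: LanguageModel,
    tts: TextToSpeech,
) {
    val userText = stt.transcribe(audio)  // 1. STT: audio -> text
    val reply = llm.chat(userText)        // 2. LLM: text -> response
    tts.speak(reply)                      // 3. TTS: response -> speech
}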


SDKs

| Platform | Status | Installation | Documentation |
| --- | --- | --- | --- |
| Swift (iOS/macOS) | Stable | Swift Package Manager | docs.runanywhere.ai/swift |
| Kotlin (Android) | Stable | Gradle | docs.runanywhere.ai/kotlin |
| Web (Browser) | Beta | npm | SDK README |
| React Native | Beta | npm | docs.runanywhere.ai/react-native |
| Flutter | Beta | pub.dev | docs.runanywhere.ai/flutter |

Quick Start

Swift (iOS / macOS)

import RunAnywhere
import LlamaCPPRuntime

// 1. Initialize
LlamaCPP.register()
try RunAnywhere.initialize()

// 2. Load a model
try await RunAnywhere.downloadModel("smollm2-360m")
try await RunAnywhere.loadModel("smollm2-360m")

// 3. Generate
let response = try await RunAnywhere.chat("What is the capital of France?")
print(response) // "Paris is the capital of France."

Install via Swift Package Manager:

https://github.com/RunanywhereAI/runanywhere-sdks
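
If you add the package manually rather than through Xcode, the dependency can go in Package.swift. This is a sketch: the product names and version pin below are assumptions, not taken from the repository; check the package manifest for the real values.

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // Package URL from above; the version is an assumption, pin to a real release.
        .package(url: "https://github.com/RunanywhereAI/runanywhere-sdks", from: "0.1.0"),
    ],
    targets: [
        .target(
            name: "MyApp",
            // Product names are assumptions; check the package manifest for the real ones.
            dependencies: [
                .product(name: "RunAnywhere", package: "runanywhere-sdks"),
                .product(name: "LlamaCPPRuntime", package: "runanywhere-sdks"),
            ]
        ),
    ]
)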

Full documentation → · Source code


Kotlin (Android)

import com.runanywhere.sdk.public.RunAnywhere
import com.runanywhere.sdk.public.extensions.*

// 1. Initialize
LlamaCPP.register()
RunAnywhere.initialize(environment = SDKEnvironment.DEVELOPMENT)

// 2. Load a model
RunAnywhere.downloadModel("smollm2-360m").collect { println("${it.progress * 100}%") }
RunAnywhere.loadLLMModel("smollm2-360m")

// 3. Generate
val response = RunAnywhere.chat("What is the capital of France?")
println(response) // "Paris is the capital of France."

Install via Gradle:

dependencies {
    implementation("com.runanywhere.sdk:runanywhere-kotlin:0.1.4")
    implementation("com.runanywhere.sdk:runanywhere-core-llamacpp:0.1.4")
}

Full documentation → · Source code


React Native

import { RunAnywhere, SDKEnvironment } from '@runanywhere/core';
import { LlamaCPP } from '@runanywhere/llamacpp';

// 1. Initialize
await RunAnywhere.initialize({ environment: SDKEnvironment.Development });
LlamaCPP.register();

// 2. Load a model
await RunAnywhere.downloadModel('smollm2-360m');
await RunAnywhere.loadModel('smollm2-360m');

// 3. Generate
const response = await RunAnywhere.chat('What is the capital of France?');
console.log(response); // "Paris is the capital of France."

Install via npm:

npm install @runanywhere/core @runanywhere/llamacpp

Full documentation → · Source code


Flutter

import 'package:runanywhere/runanywhere.dart';
import 'package:runanywhere_llamacpp/runanywhere_llamacpp.dart';

// 1. Initialize
await RunAnywhere.initialize();
await LlamaCpp.register();

// 2. Load a model
await RunAnywhere.downloadModel('smollm2-360m');
await RunAnywhere.loadModel('smollm2-360m');

// 3. Generate
final response = await RunAnywhere.chat('What is the capital of France?');
print(response); // "Paris is the capital of France."

Install via pub.dev:

dependencies:
  runanywhere: ^0.15.11
  runanywhere_llamacpp: ^0.15.11

Full documentation → · Source code


Web (Browser)

import { RunAnywhere, TextGeneration } from '@runanywhere/web';

// 1. Initialize
await RunAnywhere.initialize({ environment: 'development' });

// 2. Load a model
await TextGeneration.loadModel('/models/qwen2.5-0.5b-instruct-q4_0.gguf', 'qwen2.5-0.5b');

// 3. Generate
const result = await TextGeneration.generate('What is the capital of France?');
console.log(result.text); // "Paris is the capital of France."

Install via npm:

npm install @runanywhere/web

Full documentation → · Source code


Sample Apps

Full-featured demo applications demonstrating SDK capabilities:

| Platform | Source Code | Download |
| --- | --- | --- |
| iOS | examples/ios/RunAnywhereAI | App Store |
| Android | examples/android/RunAnywhereAI | Google Play |
| Web | examples/web/RunAnywhereAI | Build from source |
| React Native | examples/react-native/RunAnywhereAI | Build from source |
| Flutter | examples/flutter/RunAnywhereAI | Build from source |

Starter Examples

Minimal starter projects to get up and running with RunAnywhere on each platform:

| Platform | Repository |
| --- | --- |
| Kotlin (Android) | RunanywhereAI/kotlin-starter-example |
| Swift (iOS) | RunanywhereAI/swift-starter-example |
| Flutter | RunanywhereAI/flutter-starter-example |
| React Native | RunanywhereAI/react-native-starter-app |

Playground

Real-world projects built with RunAnywhere that push the boundaries of on-device AI. Each one ships as a standalone app you can build and run.

Android Use Agent

A fully on-device autonomous Android agent that controls your phone. Give it a goal like "Open YouTube and search for lofi music" and it reads the screen via the Accessibility API, reasons about the next action with an on-device LLM (Qwen3-4B), and executes taps, swipes, and text input -- all without any cloud calls. Includes a Samsung foreground boost that delivers a 15x inference speedup, smart pre-launch via Android intents, and loop detection with automatic recovery. Benchmarked across four LLMs on a Galaxy S24. Full benchmarks →
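
The core of such an agent is a perceive-reason-act loop. The Kotlin sketch below shows the shape of that loop; the `Action`, `ScreenReader`, and `OnDeviceLlm` types are hypothetical illustrations rather than the project's actual code, and a real implementation would run inside an android.accessibilityservice.AccessibilityService.

// Hypothetical types illustrating the perceive -> reason -> act loop.
sealed interface Action {
    data class Tap(val x: Int, val y: Int) : Action
    data class TypeText(val text: String) : Action
    object Done : Action
}

interface ScreenReader { fun describeScreen(): String }                 // accessibility tree -> text
interface OnDeviceLlm { suspend fun complete(prompt: String): Action }  // e.g. Qwen3-4B deciding the next step
interface GestureExecutor { fun perform(action: Action) }               // dispatchGesture / text input

suspend fun runAgent(
    goal: String,
    screen: ScreenReader,
    llm: OnDeviceLlm,
    executor: GestureExecutor,
    maxSteps: Int = 25,  // loop guard: bail out instead of cycling forever
) {
    repeat(maxSteps) {
        val state = screen.describeScreen()
        val action = llm.complete("Goal: $goal\nScreen: $state\nNext action?")
        if (action is Action.Done) return
        executor.perform(action)
    }
}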

On-Device Browser Agent

A Chrome extension that automates browser tasks entirely on-device using WebLLM and WebGPU. Uses a two-agent architecture -- a Planner that breaks down goals into steps and a Navigator that interacts with page elements -- with both DOM-based and vision-based page understanding. Includes site-specific workflows for Amazon, YouTube, and more. All AI inference runs locally on your GPU after the initial model download.
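
The two-agent split is a common pattern: one model call decomposes the goal into steps, another grounds each step in the current page. A minimal sketch of that control flow, in Kotlin for consistency with the other sketches here (the actual project is a TypeScript Chrome extension, and these interfaces are assumptions):

interface Planner { suspend fun plan(goal: String): List<String> }  // goal -> ordered steps
interface Navigator {
    // Executes one step against the live page and returns the new page state.
    suspend fun execute(step: String, pageState: String): String
}

suspend fun runBrowserAgent(goal: String, planner: Planner, navigator: Navigator) {
    var page = "about:blank"
    for (step in planner.plan(goal)) {        // Planner: decompose once up front
        page = navigator.execute(step, page)  // Navigator: act, observe, carry state forward
    }
}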

Swift Starter App

A full-featured iOS app demonstrating the RunAnywhere SDK's core AI capabilities in a clean SwiftUI interface. Includes LLM chat with on-device language models, Whisper-powered speech-to-text, neural text-to-speech, and a complete voice pipeline that chains STT, LLM, and TTS together with voice activity detection. A good starting point for building privacy-first AI features on iOS.

Linux Voice Assistant

A complete on-device voice AI pipeline for Linux (Raspberry Pi 5, x86_64, ARM64). Say "Hey Jarvis" to activate, speak naturally, and get responses -- all running locally with zero cloud dependency. Chains Wake Word detection (openWakeWord), Voice Activity Detection (Silero VAD), Speech-to-Text (Whisper Tiny EN), LLM reasoning (Qwen2.5 0.5B Q4), and Text-to-Speech (Piper neural TTS) in a single C++ binary.
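
The five stages form a straight pipeline in which each component gates the next. The Kotlin sketch below shows that control flow under stated assumptions: the stage interfaces are hypothetical stand-ins (the actual project is a single C++ binary), with comments mapping each one to the component named above.

// Hypothetical stage interfaces mirroring the pipeline described above.
interface WakeWord { suspend fun waitFor(phrase: String) }          // openWakeWord
interface Vad { suspend fun captureUtterance(): ByteArray }         // Silero VAD: record until silence
interface Stt { suspend fun transcribe(audio: ByteArray): String }  // Whisper Tiny EN
interface Llm { suspend fun reply(text: String): String }           // Qwen2.5 0.5B Q4
interface Tts { suspend fun say(text: String) }                     // Piper neural TTS

suspend fun assistantLoop(wake: WakeWord, vad: Vad, stt: Stt, llm: Llm, tts: Tts) {
    while (true) {
        wake.waitFor("hey jarvis")          // 1. block until the wake word
        val audio = vad.captureUtterance()  // 2. record speech, stop on silence
        val text = stt.transcribe(audio)    // 3. speech -> text
        tts.say(llm.reply(text))            // 4+5. reason, then speak the answer
    }
}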

OpenClaw Hybrid Assistant

A hybrid voice assistant that keeps latency-sensitive components on-device (wake word, VAD, STT, TTS) while routing reasoning to a cloud LLM via OpenClaw WebSocket. Supports barge-in (interrupt TTS by saying the wake word), waiting chimes for cloud response feedback, and noise-robust VAD with burst filtering. Built for scenarios where on-device LLMs are too slow but you still want private audio processing.
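
Barge-in boils down to keeping wake-word detection alive while TTS plays and cancelling playback the moment the wake word fires. A minimal Kotlin coroutine sketch, assuming hypothetical `WakeWordDetector` and `TtsPlayer` components (the project's actual implementation differs):

import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

interface WakeWordDetector { suspend fun awaitWakeWord() }
interface TtsPlayer { suspend fun speak(text: String) }  // must be cancellable mid-playback

// Speak a reply, but stop immediately if the user says the wake word.
suspend fun speakWithBargeIn(tts: TtsPlayer, wake: WakeWordDetector, reply: String) =
    coroutineScope {
        val speaking = launch { tts.speak(reply) }
        val interrupter = launch {
            wake.awaitWakeWord()  // wake word heard mid-playback
            speaking.cancel()     // barge-in: cut the TTS short
        }
        speaking.join()
        interrupter.cancel()      // reply finished without interruption
    }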


Features

Feature support across iOS, Android, Web, React Native, and Flutter (coverage varies by platform):

  • LLM Text Generation
  • Streaming (see the sketch below)
  • Speech-to-Text
  • Text-to-Speech
  • Voice Assistant Pipeline
  • Vision Language Models
  • Model Download + Progress
  • Structured Output (JSON) 🔜
  • Tool Calling
  • Embeddings
  • Apple Foundation Models
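
Streaming is listed above but has no example elsewhere in this README, so here is a minimal Kotlin sketch of token-by-token consumption. `chatStream` is a hypothetical name, not a confirmed SDK method; the toy `FakeModel` exists only so the sketch runs standalone.

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

// Hypothetical streaming surface: the SDK's real method name may differ.
interface StreamingChat {
    fun chatStream(prompt: String): Flow<String>  // emits one token/chunk at a time
}

// Toy implementation so the sketch runs standalone.
object FakeModel : StreamingChat {
    override fun chatStream(prompt: String): Flow<String> = flow {
        for (token in listOf("Paris ", "is ", "the ", "capital ", "of ", "France."))
            emit(token)
    }
}

fun main() = runBlocking {
    // Print tokens as they arrive instead of waiting for the full response.
    FakeModel.chatStream("What is the capital of France?")
        .collect { token -> print(token) }
    println()
}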

Supported Models

LLM (GGUF format via llama.cpp)

| Model | Size | RAM Required | Use Case |
| --- | --- | --- | --- |
| SmolLM2 360M | ~400MB | 500MB | Fast, lightweight |
| Qwen 2.5 0.5B | ~500MB | 600MB | Multilingual |
| Llama 3.2 1B | ~1GB | 1.2GB | Balanced |
| Mistral 7B Q4 | ~4GB | 5GB | High quality |

Speech-to-Text (Whisper via ONNX)

| Model | Size | Languages |
| --- | --- | --- |
| Whisper Tiny | ~75MB | English |
| Whisper Base | ~150MB | Multilingual |

Text-to-Speech (Piper via ONNX)

| Voice | Size | Language |
| --- | --- | --- |
| Piper US English | ~65MB | English (US) |
| Piper British English | ~65MB | English (UK) |

Repository Structure

runanywhere-sdks/
├── sdk/
│   ├── runanywhere-swift/          # iOS/macOS SDK
│   ├── runanywhere-kotlin/         # Android SDK
│   ├── runanywhere-web/            # Web SDK (WebAssembly)
│   ├── runanywhere-react-native/   # React Native SDK
│   ├── runanywhere-flutter/        # Flutter SDK
│   └── runanywhere-commons/        # Shared C++ core
│
├── examples/
│   ├── ios/RunAnywhereAI/          # iOS sample app
│   ├── android/RunAnywhereAI/      # Android sample app
│   ├── web/RunAnywhereAI/          # Web sample app
│   ├── react-native/RunAnywhereAI/ # React Native sample app
│   └── flutter/RunAnywhereAI/      # Flutter sample app
│
├── Playground/
│   ├── swift-starter-app/          # iOS AI playground app
│   ├── on-device-browser-agent/    # Chrome browser automation agent
│   ├── android-use-agent/          # On-device autonomous Android agent
│   ├── linux-voice-assistant/      # Linux on-device voice assistant
│   └── openclaw-hybrid-assistant/  # Hybrid voice assistant (on-device + cloud)
│
└── docs/                           # Documentation

Requirements

| Platform | Minimum | Recommended |
| --- | --- | --- |
| iOS | 17.0+ | 17.0+ |
| macOS | 14.0+ | 14.0+ |
| Android | API 24 (7.0) | API 28+ |
| Web | Chrome 96+ / Edge 96+ | Chrome 120+ |
| React Native | 0.74+ | 0.76+ |
| Flutter | 3.10+ | 3.24+ |

Memory: 2GB minimum, 4GB+ recommended for larger models
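
On Android, available memory can be checked before loading one of the larger models. This sketch uses the standard ActivityManager API; the 5 GB threshold comes from the Mistral 7B Q4 row in the table above, and the model IDs in the fallback are illustrative.

import android.app.ActivityManager
import android.content.Context

// Returns true if the device currently has at least `requiredBytes` of free RAM.
fun hasEnoughMemory(context: Context, requiredBytes: Long): Boolean {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val info = ActivityManager.MemoryInfo()
    am.getMemoryInfo(info)
    return info.availMem >= requiredBytes
}

// Example: guard a large-model load and fall back to a lightweight model.
// Model IDs are illustrative; use the IDs your app actually registered.
fun chooseModel(context: Context): String =
    if (hasEnoughMemory(context, 5L * 1024 * 1024 * 1024)) "mistral-7b-q4"
    else "smollm2-360m"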


Contributing

We welcome contributions. See our Contributing Guide for details.

# Clone the repo
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git

# Set up a specific SDK (example: Swift)
cd runanywhere-sdks/sdk/runanywhere-swift
./scripts/build-swift.sh --setup

# Run the sample app
cd ../../examples/ios/RunAnywhereAI
open RunAnywhereAI.xcodeproj

Support

Join the community on Discord: https://discord.gg/N359FBbDVd

License

Apache 2.0 — see LICENSE for details.