From concept to a voice-first assistant that lives on your phone — thinks with Claude, remembers with vectors, and acts on the real world.
Stock assistants handle timers and weather. Ask anything requiring real thought and they redirect you to a search page.
Every conversation starts cold. The assistant doesn't know your projects, your decisions, or what you told it yesterday.
Your email, calendar, contacts, and notes live in separate apps. No assistant ties them together into action.
An AI that knows you, remembers everything you've told it, can read your email, check your calendar, search the web, browse websites, and take action — all through voice.
Expo SDK 55 + React Native 0.83 — TypeScript throughout. Single codebase, native Android build via local Gradle. Registered as the system assistant app on Samsung S24 Ultra.
Claude Sonnet via Anthropic API with tool-use loop (up to 8 iterations). The brain that reasons, plans, and decides when to use tools vs. just respond.
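The tool-use loop described above can be sketched in a few lines. This is a minimal illustration, not Brain Link's actual code: `callModel`-style plumbing is abstracted into a `Model` function standing in for the Anthropic Messages API, and tool names are hypothetical.

```typescript
// Minimal sketch of an agentic tool-use loop, capped at 8 iterations.
// `Model` stands in for the Anthropic API call; tool names are illustrative.
type ToolCall = { name: string; input: Record<string, unknown> };
type ModelReply =
  | { type: "text"; text: string }
  | { type: "tool_call"; call: ToolCall };

type Model = (history: string[]) => ModelReply;
type Tools = Record<string, (input: Record<string, unknown>) => string>;

function runToolLoop(model: Model, tools: Tools, userMessage: string, maxIterations = 8): string {
  const history: string[] = [`user: ${userMessage}`];
  for (let i = 0; i < maxIterations; i++) {
    const reply = model(history);
    if (reply.type === "text") return reply.text; // model chose to answer directly
    const tool = tools[reply.call.name];
    const result = tool ? tool(reply.call.input) : `unknown tool: ${reply.call.name}`;
    history.push(`tool(${reply.call.name}): ${result}`); // feed the result back for the next turn
  }
  return "Reached tool-use iteration limit.";
}
```

The iteration cap is what keeps a misbehaving chain of tool calls from looping forever; eight turns is enough for multi-step tasks like "check calendar, then draft the email."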
Perplexity Sonar for live web queries. Desktop gateway for full browser automation via Playwright on the host machine.
expo-speech-recognition (on-device STT) → Claude → expo-speech (on-device TTS). Zero cloud latency on the voice layer. Echo-safe interruption support.
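Echo-safe interruption means the assistant must not mistake its own TTS output for user speech, while still letting the user barge in mid-sentence. A minimal sketch of the state logic, with the expo-speech and expo-speech-recognition event wiring abstracted away (the class and method names here are illustrative):

```typescript
// Tiny state machine for echo-safe barge-in: while the assistant is speaking,
// detected user speech stops TTS instead of being transcribed as input.
type VoiceState = "idle" | "listening" | "speaking";

class VoiceSession {
  state: VoiceState = "idle";
  constructor(private stopTts: () => void) {}

  startListening() { this.state = "listening"; }
  startSpeaking() { this.state = "speaking"; }

  // Called whenever on-device STT reports speech activity.
  onSpeechDetected(): "interrupt" | "transcribe" | "ignore" {
    if (this.state === "speaking") {
      this.stopTts();           // barge-in: cut the assistant off
      this.state = "listening"; // hand the floor back to the user
      return "interrupt";
    }
    return this.state === "listening" ? "transcribe" : "ignore";
  }
}
```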
Supabase + pgvector (Open Brain). Thoughts stored as vectors via Ollama nomic-embed-text running on a DigitalOcean droplet. Semantic search on every query.
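Under the hood, pgvector ranks rows by vector distance. The in-memory sketch below shows the cosine-similarity ranking that a semantic-search query performs; in the real system the embeddings come from nomic-embed-text via Ollama and the ranking happens inside Postgres, and `matchThoughts` is a hypothetical name:

```typescript
// In-memory illustration of pgvector semantic search:
// embed the query, then rank stored thought vectors by cosine similarity.
type Thought = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function matchThoughts(queryEmbedding: number[], thoughts: Thought[], topK = 3): Thought[] {
  return [...thoughts]
    .sort((x, y) =>
      cosineSimilarity(queryEmbedding, y.embedding) -
      cosineSimilarity(queryEmbedding, x.embedding))
    .slice(0, topK);
}
```

Because similar meanings produce nearby vectors, "what did I decide about X?" retrieves the stored decision even when the wording doesn't match.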
Native Google Sign-In with OAuth2 scopes for Gmail (read/send/compose), Calendar (read/events), and Contacts (People API).
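The Gmail, Calendar, and Contacts capabilities map onto Google's standard OAuth2 scope URLs. A sketch of the scope list such a sign-in would request (the exact set Brain Link uses may differ):

```typescript
// Standard Google OAuth2 scope identifiers for the capabilities above.
// The exact list requested by the app is illustrative.
const GOOGLE_SCOPES = [
  "https://www.googleapis.com/auth/gmail.readonly",    // read mail
  "https://www.googleapis.com/auth/gmail.send",        // send mail
  "https://www.googleapis.com/auth/gmail.compose",     // create drafts
  "https://www.googleapis.com/auth/calendar.readonly", // read calendar
  "https://www.googleapis.com/auth/calendar.events",   // create/update events
  "https://www.googleapis.com/auth/contacts.readonly", // People API contacts
];
```

Requesting narrow scopes (e.g. `calendar.events` rather than full `calendar`) keeps the consent screen honest about what the assistant can actually do.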
"Am I free tomorrow at 2?" → Checks Google Calendar, finds conflicts, suggests alternatives, and can book directly.
Brain Link silently captures important decisions and insights to Open Brain. Weeks later, you can ask "what did I decide about X?" and get an answer.
"Read my last texts from Sarah" — accesses SMS via a custom native Expo module. Search contacts via Google People API.
"Create a quick landing page mockup" — generates HTML, renders it in an in-app WebView modal with an Export button. Visual output, not just text.
Routes through a local desktop gateway to control Chrome via Playwright. Can read full pages, fill forms, and extract data from any website.
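The phone-to-desktop handoff is easiest to picture as a small command protocol: the app posts an action, and the gateway translates it into Playwright calls. A sketch of that dispatch layer with the browser abstracted behind an interface (action names and the `BrowserDriver` interface are hypothetical, not Playwright's API):

```typescript
// Illustrative command protocol between the phone app and the desktop gateway.
// `BrowserDriver` abstracts the Playwright browser; action names are hypothetical.
type GatewayCommand =
  | { action: "read_page"; url: string }
  | { action: "fill_form"; url: string; fields: Record<string, string> };

interface BrowserDriver {
  goto(url: string): Promise<void>;
  pageText(): Promise<string>;
  fill(selector: string, value: string): Promise<void>;
}

async function handleCommand(driver: BrowserDriver, cmd: GatewayCommand): Promise<string> {
  await driver.goto(cmd.url);
  if (cmd.action === "read_page") {
    return driver.pageText(); // extracted text goes back to Claude as a tool result
  }
  for (const [selector, value] of Object.entries(cmd.fields)) {
    await driver.fill(selector, value);
  }
  return "form filled";
}
```

Keeping the protocol to a few coarse actions means the model reasons about intent ("read this page") while the gateway owns the browser details.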
Registered as the Android assistant — double-press the side button to launch. No unlocking, no app drawer. Instant access.
Every line of Brain Link was written in collaboration with Claude Code. Architecture decisions, debugging, native module creation, API integration — all through the CLI.
Haiku → Sonnet: Upgraded when tool-use required deeper reasoning.
ElevenLabs → expo-speech: Moved to on-device TTS for zero latency and no API costs.
EAS Cloud → Local Gradle: Faster iteration, full control over native code.
Voice-first AI assistant running as a standalone APK on Samsung S24 Ultra
Claude Sonnet with chained tool-use reasoning (up to 8 iterations)
Persistent vector memory via Open Brain (Supabase + pgvector)
Gmail read/send, Calendar, Contacts, SMS
Perplexity web search + desktop browser automation
HTML artifact creation with in-app preview
Side-button double-press launch, echo-safe voice interrupt
"Hey Brain Link" via Picovoice Porcupine — always-listening, hands-free activation
Android foreground service for persistent audio when the screen is off
7am auto-notification with daily briefing — calendar, email summary, priorities
Background analysis — "you have a meeting in 30 minutes and haven't prepped"
Brain Link proves that the gap between "AI chatbot" and "personal AI" is just engineering — connecting the right APIs, building persistent memory, and making voice the primary interface. The tools exist. The future is about wiring them together.
github.com/mrmoe28/brain-link · brain.lock28.com