
Foxmayn AI CLI

How a Terminal-Native AI Chat Eliminated Browser Context-Switching for Developers

At a Glance

Challenge

Using AI assistants required leaving the terminal for web-based interfaces

Result

Full AI chat with streaming Markdown and 7 models — entirely in the terminal

Tech Stack

Go · Bubble Tea · OpenRouter

Status

In progress

Situation

Developers who live in the terminal had to break their workflow every time they needed to interact with an AI assistant. Web-based chat interfaces (ChatGPT, Claude, etc.) require opening a browser, navigating to the app, and context-switching away from the code they're working on. For quick questions, code reviews, or brainstorming, this overhead adds up — and copy-pasting between terminal and browser strips formatting and context.

The Challenge

Build a full-featured terminal chat application that brings multi-model AI conversations directly into the developer's existing workflow — with proper Markdown rendering, streaming responses, and no browser dependency.

What Was Built

  • Built an alt-screen TUI application in Go using Bubble Tea (Elm architecture) with a scrollable viewport, text input, and real-time streaming display.

  • Integrated OpenRouter's API for multi-model support: 7 models including Google Gemini, DeepSeek, Grok, and OpenAI variants — switchable via an in-app /model picker.

  • Implemented streaming response rendering with full Markdown support: code blocks, bold, italic, lists, and headers — all formatted with Lip Gloss directly in the terminal.

  • Added slash-command autocomplete (/model, /clear, /quit) for quick actions without leaving the conversation flow.

  • Configured hot-reload development via watchexec and simple .env-based API key management.

Results

Supported models

7 (Gemini, DeepSeek, Grok, OpenAI)

Response rendering

Streaming Markdown in terminal

Context switching

Before: Terminal → Browser → Terminal. After: never leaves the terminal.

Built with

Go + Bubble Tea (Elm architecture)

Developers can now chat with multiple AI models without leaving the terminal. Streaming Markdown rendering, in-app model switching, and slash commands keep the entire AI workflow within the same context where code is being written.

Key Achievement

Full TUI chat app with streaming Markdown-rendered responses, in-app model picker, and slash-command autocomplete supporting 7 models including Gemini, DeepSeek, and OpenAI variants.
