
AgentHippoAI Manual

IDE, Spotlight, Agent Anywhere, and CLI reference.

Last updated April 11, 2026

AgentHippoAI gives your team one place to build, observe, and ship agents. Instead of bouncing between a terminal for Claude Code, another tool for Codex, and a separate analytics stack, you get one workflow: the IDE, Spotlight, Agent Anywhere, and the agenthippo CLI.

Quick Start

For most users, the best path is simple:

  1. Open AgentHippoAI and click Use AI Features.
  2. Turn on Spotlight.
  3. Configure LiteLLM (Recommended).
  4. Sign in to OpenAI Codex (ChatGPT OAuth).
  5. Add openai-codex/gpt-5.3-codex to LiteLLM and expose it as litellm/gpt-5.3-codex.
  6. In Chat, switch between Agent Claude and Agent Codex while keeping the same model in the model picker.

That setup gives you the best of AgentHippoAI in one flow:

  • one IDE for Claude and Codex
  • one local model-routing layer
  • one analytics surface for cost and trace visibility
  • one path from development to deployment

If you only remember one setup principle from this manual, make it this:

Use LiteLLM as your shared model layer. It is the cleanest way to let both Claude and Codex use the same visible model in AgentHippoAI.
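To make "shared model layer" concrete: a LiteLLM proxy exposes an OpenAI-compatible HTTP endpoint, so one request shape serves both engines. The sketch below prints that request body and leaves the actual call commented out so it runs without the proxy; the address is the default local one shown later in this manual, and LITELLM_PROXY_KEY is a hypothetical name for the local proxy key.

```shell
# Illustrative sketch, not product output: one OpenAI-compatible request shape
# for both Agent Claude and Agent Codex, routed through the shared alias.
BODY='{"model": "litellm/gpt-5.3-codex", "messages": [{"role": "user", "content": "Hello"}]}'
echo "$BODY"
# Uncomment once the local proxy from this manual is running:
# curl -s http://127.0.0.1:4000/v1/chat/completions \
#   -H "Authorization: Bearer $LITELLM_PROXY_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

The only thing that changes between engines is the mode picker; the model name in the request stays the same.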

Spotlight

Spotlight is what makes AgentHippoAI feel like a serious agent platform instead of just another chat UI. It gives you trace visibility, accurate cost data, and a clean place to review how your agents actually behaved.

Why teams turn it on first:

  • product teams can see what agents really did
  • engineering teams can spot regressions faster
  • ops and finance teams get much better cost visibility
  • everyone has one source of truth for agent activity

To enable Spotlight:

  1. Click Enable LLM Spotlight or Setup Spotlight from the welcome flow, or run Agent Hippo: Setup Spotlight (OTEL + DuckDB).
  2. Start it with Agent Hippo: Start Spotlight (OTEL Collector) if it is not already running.
  3. Check the result with Agent Hippo: Show Spotlight Status.
  4. Open the dashboard with Agent Hippo: Open Spotlight Dashboard.
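For admins curious what the setup step wires together: the collector behind Spotlight follows the standard OTEL Collector schema. The fragment below is an illustrative sketch of that schema only, not the exact file AgentHippoAI generates; the port and the file export target are assumptions standing in for the DuckDB-backed store.

```yaml
# Illustrative OTEL Collector shape only; the file AgentHippoAI actually
# generates (otel-collector.yaml, listed under Advanced Notes) may differ.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 127.0.0.1:4318   # conventional local OTLP/HTTP port (assumption)
exporters:
  file:
    path: ./spotlight-traces.json  # stand-in for the DuckDB-backed store
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [file]
```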

What the status means:

  • Spotlight: Running (OTEL + LiteLLM configured) means the full stack is ready.
  • Spotlight: Partial (click to configure) usually means OTEL or LiteLLM still needs attention.

Language Models

Open Manage Language Models before you do anything fancy. This is the control center for providers, visibility, and model selection.

What matters in the table:

  • the API Key column shows Configure, Configured, or Update via LiteLLM
  • the eye icon controls whether a model is visible in the chat model picker
  • the model picker in Chat always includes Manage Models..., so users can jump back here quickly

For public and business users, there are two model paths that matter most:

Use case                               Best choice
Fastest direct Codex login             OpenAI Codex (ChatGPT OAuth)
Best overall AgentHippoAI workflow     LiteLLM (Recommended)
Same model in both Claude and Codex    LiteLLM (Recommended)

LiteLLM

LiteLLM (Recommended) is the preferred provider path in AgentHippoAI because it turns many providers into one clean local layer. That makes the product easier to manage, easier to observe in Spotlight, and easier to standardize across a team.

Why this is the recommended choice:

  • one endpoint for OpenAI, Anthropic, Ollama, and more
  • cleaner visibility inside Spotlight
  • easier model standardization across teams
  • one shared model alias for both Agent Claude and Agent Codex

To configure LiteLLM:

  1. Open Manage Language Models.
  2. Find LiteLLM (Recommended).
  3. Click Configure.
  4. Use the local base URL shown by AgentHippoAI. In a standard local setup this is usually http://127.0.0.1:4000.
  5. Save the local proxy key if prompted.

Once LiteLLM is configured, add the upstream model:

  • upstream provider: openai-codex
  • upstream model: openai-codex/gpt-5.3-codex
  • recommended visible alias: litellm/gpt-5.3-codex

That alias is the important part. It is the model name your users can pick in Chat when they want the same OpenAI Codex model to work through both engines.
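In LiteLLM's own configuration terms, an alias like this is a model_list entry. The fragment below uses LiteLLM's standard model_list schema as a sketch of what AgentHippoAI manages on your behalf; the real file it writes (litellm-otel.yaml) may differ, so treat this as illustration rather than something to hand-edit.

```yaml
# Sketch using LiteLLM's standard model_list schema; AgentHippoAI manages the
# actual routing file for you.
model_list:
  - model_name: gpt-5.3-codex            # surfaces in the IDE as litellm/gpt-5.3-codex
    litellm_params:
      model: openai-codex/gpt-5.3-codex  # upstream provider/model from this manual
```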

ChatGPT OAuth

OpenAI Codex (ChatGPT OAuth) is the right path for users who want Codex authenticated with ChatGPT rather than managing a separate OpenAI API-only setup.

To configure it:

  1. Open Manage Language Models.
  2. Find OpenAI Codex (ChatGPT OAuth).
  3. Click Configure.
  4. Choose Sign in with ChatGPT (native OAuth).

AgentHippoAI also offers Paste token manually, but native OAuth is the recommended option.

The direct Codex model you will usually see there is:

gpt-5.3-codex

Use that direct model if you are only working in Codex.

Use the LiteLLM alias if you want the same model to be available across both Agent Claude and Agent Codex.

Claude Subscription

If your team uses a Claude subscription rather than a raw Anthropic API key, choose:

Anthropic (Claude Subscription)

The setup flow is based on:

claude setup-token

In the IDE:

  1. Open Manage Language Models.
  2. Find Anthropic (Claude Subscription).
  3. Click Configure.
  4. Let AgentHippoAI capture the setup token or paste it manually.

Important for public docs:

  • users do not need to manage a plaintext Claude token file
  • AgentHippoAI stores the credential securely in the system keychain
  • the relevant runtime location is the isolated Claude runtime directory, not a user-facing token path

That means the product experience stays cleaner and safer for teams rolling this out broadly.

One Model, Two Engines

This is the setup many users care about most.

The mental model is:

  • the mode picker chooses the engine
  • the model picker chooses the model

To use one model with both engines:

  1. Open Chat.
  2. In the mode picker, choose Agent Claude.
  3. In the model picker, choose litellm/gpt-5.3-codex.
  4. Send a quick test prompt.
  5. Switch the mode picker to Agent Codex.
  6. Keep the same model choice, or re-select litellm/gpt-5.3-codex.
  7. Send the same test prompt again.

This was verified with the current AgentHippoAI setup: the shared alias litellm/gpt-5.3-codex worked with both Claude and Codex.

That is one of the clearest product advantages of AgentHippoAI:

  • you do not need separate editor workflows
  • you do not need separate model management habits
  • you can compare engines while keeping the model choice consistent

Agent Anywhere

Agent Anywhere is how AgentHippoAI stops being “just an IDE” and becomes something you can actually ship. It connects your agents to channels, team workflows, and deployment surfaces outside the editor.

Use Agent Anywhere when you want your agent to show up in places like:

  • Telegram
  • WhatsApp
  • Slack
  • Discord
  • a gateway web UI

Where to find it:

  • Welcome flow: Use Your Agent Anywhere
  • Command Palette: Agent Hippo: Install Agent Anywhere (Gateway)
  • Command Palette: Agent Hippo: Agent Anywhere Status
  • Command Palette: Agent Hippo: Open Agent Team

For business users, this matters because the same agent can move from:

  • internal development
  • team review
  • channel deployment
  • ongoing monitoring

without forcing a full rewrite of the workflow.

agenthippo CLI

The agenthippo CLI is the terminal companion to AgentHippoAI. Use it when you want the same agent workflow in scripts, CI, local automation, or lightweight services.

The three commands most users need first are:

agenthippo chat --workspace .
agenthippo ask "Summarize this repository" --workspace .
agenthippo serve --agent support-triage --workspace . --port 3000

Why teams use the agenthippo CLI:

  • quick local automation
  • CI or internal tooling
  • lightweight service endpoints
  • a clean bridge from IDE work to repeatable ops
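The CI use case above can be sketched as a small guarded wrapper. The command and its --workspace flag come from this manual; the guard and the output filename are illustrative choices, so the step becomes a no-op rather than a failure on machines where the CLI is not installed.

```shell
# CI-style wrapper around "agenthippo ask" (command from this manual).
# The guard keeps the step from failing where the CLI is absent.
if command -v agenthippo >/dev/null 2>&1; then
  agenthippo ask "Summarize this repository" --workspace . > repo-summary.txt
  echo "wrote repo-summary.txt"
else
  echo "agenthippo CLI not found; skipping summary step"
fi
```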

If you want messaging or gateway routing from a served agent, add:

agenthippo serve --agent support-triage --workspace . --gateway-bridge --bind telegram,whatsapp

For the full command list, run:

agenthippo --help

Advanced Notes

Most users never need these paths, but power users and admins sometimes do.

Path                                                  Why it matters
~/.agent-hippo/litellm-otel.yaml                      LiteLLM aliases and routing
~/.agent-hippo/.env                                   optional local env values for upstream keys
~/.agent-hippo/analytics/config/otel-collector.yaml   Spotlight OTEL configuration
workspace .claude/ and .codex/                        derived runtime material for the active workspace

If you are specifically looking for a “Claude Code token path”, the practical answer is:

  • AgentHippoAI stores subscription credentials securely
  • users normally do not manage a plaintext token file
  • the runtime Claude state is isolated for the active agent

Troubleshooting

A model is not visible in Chat

  1. Open Manage Language Models.
  2. Find the model row.
  3. Toggle the eye icon so the model is visible in the chat model picker.
  4. Reopen the model picker.

Codex works, but Claude does not use the same model

Make sure you are choosing the LiteLLM alias:

litellm/gpt-5.3-codex

Do not confuse it with the direct provider row:

gpt-5.3-codex

The first is the shared cross-engine choice. The second is the direct Codex model.

Spotlight shows partial status

Run:

  • Agent Hippo: Show Spotlight Status
  • Agent Hippo: Start Spotlight (OTEL Collector)

Then reopen Manage Language Models and confirm LiteLLM is still configured.

LiteLLM setup works, but a newly added model does not show up

Reopen Manage Language Models, then:

  1. confirm the alias exists
  2. confirm it is visible in the picker
  3. restart Spotlight if needed

Next Steps

If you want the full AgentHippoAI story after this setup, the best next pages are: