All four tools are now full agentic terminal coding agents. The real differentiation is model ecosystem, pricing, open-source status, and depth of specific features — not feature presence/absence.
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Advanced Logic / Thinking | ✅ Sonnet 4.5 default | ✅ GPT-5.4, GPT-5.3-Codex, codex-mini; adjustable reasoning | ✅ Gemini 2.5 Pro / 3 / 3.1 Pro | ✅ Sonnet 4.5 default + GPT-5.3, Gemini 3 Pro, Opus 4.6 |
| Running Code | ✅ Bash, runs scripts | ✅ Shell commands, file edits, test runs | ✅ Built-in shell commands, file ops | ✅ Commands, tests, builds |
| Bash Execution | ✅ Yes | ✅ 3 modes: read-only / auto / full-access + --yolo | ✅ Built-in run_shell_command | ✅ User confirms; --yolo for auto |
| Web Search | ✅ WebSearch + WebFetch tools | ✅ Built-in web_search tool | ✅ Google Search grounding + web_fetch | ✅ web_fetch tool (built-in MCP) |
| Image Input | ✅ Paste images in chat | ✅ Screenshots, wireframes, design specs | ✅ Multimodal: images, PDFs, clipboard paste | ✅ Yes (images sent to models) |
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Skills | ✅ Markdown-based skill files | ✅ AGENTS.md + custom commands | ✅ Agent Skills (.md, /skills install) | ✅ Agent Skills (.md), shared with Copilot cloud agent & VS Code |
| MCP Support | ✅ Native (stdio, SSE) | ✅ Native (stdio + streaming HTTP); can also run AS an MCP server | ✅ Native (stdio, http, sse) | ✅ GitHub MCP built-in + custom servers |
| Hooks / Lifecycle | ✅ PreToolUse, PostToolUse, Notification hooks | ✅ ExecPolicy rules for command control | ✅ Hooks for customizing behavior | ✅ PreToolUse, PostToolUse, sessionStart, subagentStart hooks |
| Plugins / Extensions | ✅ Plugin system | ❌ Not yet (AGENTS.md covers most use cases) | ✅ Extensions (/extensions install) | ✅ Plugins (/plugin install owner/repo) |
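Since all four support MCP natively, the configuration is typically a small declarative file. As a minimal sketch, assuming Claude Code's project-level .mcp.json format (the server name and directory argument are illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./src"]
    }
  }
}
```

The same stdio server can usually be registered with the other three CLIs as well; only the config file location and key names differ per tool.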
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Subagents | ✅ Isolated context + custom system prompts, tools, permissions | ✅ Via config.toml roles; parallel execution in git worktrees | ✅ Custom agents (.gemini/agents/), generalist agent for routing (experimental) | ✅ Built-in agents (explore, task, code review, plan) + custom .agent.md files |
| Parallel Agents | ✅ Agent Teams (Opus 4.6): teammates communicate via shared task list & direct messages | ✅ Multi-agent in parallel worktrees; orchestration via OpenAI Agents SDK | ✅ Subagents can run in parallel (experimental); YOLO mode by default | ✅ /fleet coordinates subagents in background; multiple sessions in VS Code |
| Cloud Delegation | ❌ No (local only) | ✅ codex cloud exec delegates to Codex Cloud sandbox; results applied locally | ❌ No (local only) | ✅ & prefix delegates to Copilot cloud agent; /resume switches between local & cloud |
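Custom subagents in the row above are typically defined as small Markdown files with metadata frontmatter. A minimal sketch, assuming Claude Code's .claude/agents/ layout (field names and contents are illustrative, not taken from the tools' documentation):

```markdown
---
name: test-runner
description: Runs the test suite and reports failures
tools: Bash, Read
---
You are a focused test runner. Run the project's tests,
then report only the failing cases with file and line.
```

Copilot's .agent.md files and Gemini's .gemini/agents/ entries follow the same general shape: a name, a tool allowlist, and a system prompt.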
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Project Context File | ✅ CLAUDE.md (hierarchical: global → project → subdirectory) | ✅ AGENTS.md (project root) | ✅ GEMINI.md (hierarchical: global → project → subdirectory) | ✅ .github/copilot-instructions.md + AGENTS.md |
| Persistent Memory | ✅ Subagent memory directories (~/.claude/agent-memory/); file-based memory tool | ✅ Session JSONL logs; config-based persistence | ✅ save_memory tool appends to ~/.gemini/GEMINI.md automatically | ✅ Repository memory (learns conventions cross-session); /chronicle for insights |
| Session Resume | ✅ /resume, --continue | ✅ codex resume, codex exec resume --last | ✅ --resume flag, /resume browser, /chat save/resume <tag> | ✅ --resume, --continue, /resume; sessions shared between CLI & VS Code |
| Auto-Compaction | ✅ Context compaction when approaching limit | ✅ Auto-compaction for long sessions | ✅ Context management with session limits | ✅ Auto-compaction at 95% of token limit |
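The project context files in the table above are plain Markdown read at session start. A minimal sketch of what a project-level CLAUDE.md, AGENTS.md, or GEMINI.md might contain (the contents are illustrative):

```markdown
# Project notes for the agent
- Build with npm run build; run tests with npm test
- TypeScript strict mode is on; do not introduce `any`
- Never commit directly to main; open a PR instead
```

Because the format is just Markdown conventions plus a well-known filename, a single file can often serve several of these tools at once (Copilot, for instance, also reads AGENTS.md).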
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Headless Run | ✅ -p flag | ✅ codex exec (dedicated non-interactive mode) | ✅ gemini -p "prompt" | ✅ -p / --prompt flag |
| Streaming from Headless | ✅ JSON output + stdout streaming | ✅ codex exec --json streams JSONL events (turns, commands, reasoning) | ✅ --output-format stream-json | ✅ --output-format=json JSONL stream |
| JSON Response | ✅ Via prompts | ✅ --json flag + --output-schema for validated structured JSON | ✅ --output-format json | ✅ --output-format=json |
| CI/CD Integration | ✅ Can be scripted | ✅ First-class: codex exec --full-auto with API key auth for CI | ✅ Can be scripted with headless mode | ✅ Can be scripted with -p + --autopilot |
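A headless run that streams JSONL (e.g. codex exec --json, or Claude's stream-json output) can be consumed one event per line from stdout. A minimal sketch in Python; the "type" field and the stand-in printf command are hypothetical, as each tool defines its own event schema:

```python
import json
import subprocess
from typing import Iterator


def stream_events(cmd: list[str]) -> Iterator[dict]:
    """Launch a headless agent CLI and yield one parsed event per JSONL line."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    assert proc.stdout is not None
    for line in proc.stdout:
        line = line.strip()
        if line:  # skip blank keep-alive lines
            yield json.loads(line)
    proc.wait()


def summarize(events: Iterator[dict]) -> dict:
    """Count events by their 'type' field (field name is hypothetical)."""
    counts: dict = {}
    for event in events:
        kind = event.get("type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts


# Stand-in command that emits two fake JSONL events instead of a real CLI:
fake_cmd = ["printf", '{"type": "command"}\n{"type": "message"}\n']
print(summarize(stream_events(fake_cmd)))
```

In CI, the same loop can gate a pipeline step, for example by failing the job when an error-typed event appears in the stream.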
| Feature | Claude Code | Codex CLI (OpenAI) | Gemini CLI (Google) | GitHub Copilot CLI |
|---|---|---|---|---|
| Open Source | ❌ No | ✅ Yes (Apache 2.0, built in Rust) | ✅ Yes (Apache 2.0) | ❌ No (public repo, proprietary) |
| Sandboxing | ✅ Permission modes (ask, auto-accept, etc.) | ✅ OS-level: macOS Seatbelt, Linux Landlock; network blocked by default | ✅ Sandboxed execution environment | ⚠️ Permission-based (no OS-level sandbox) |
| Multi-Model | ❌ Anthropic models only | ⚠️ OpenAI models only (+ local Ollama via --oss) | ❌ Google models only | ✅ Anthropic + OpenAI + Google models |
| Context Window | 200K tokens | ~192K tokens | Up to 1M tokens | Varies by model selected |
| Pricing | Anthropic API / Claude Pro subscription | Included with ChatGPT Plus/Pro/Business/Enterprise; or API key | Free tier: 1,000 req/day; paid via Google AI Pro or Vertex AI | Included with any Copilot plan (Free/Pro/Pro+/Business/Enterprise) |
| IDE Integration | ✅ VS Code extension | ✅ VS Code / Cursor / Windsurf extension + Desktop app | ✅ Gemini Code Assist (VS Code) — shared technology | ✅ Native in VS Code Chat; sessions sync between CLI & editor |
**Claude Code:** Most mature subagent system (Agent Teams with direct inter-agent messaging); strongest reasoning on complex tasks; Anthropic-only models.
**Codex CLI:** Best CI/CD story (codex exec + --output-schema for validated JSON); cloud delegation; OS-level sandboxing; open source (Rust).
**Gemini CLI:** Largest context window (1M tokens); most generous free tier; multimodal (images + PDFs); Google Search grounding; open source.
**GitHub Copilot CLI:** Only true multi-model CLI (Anthropic + OpenAI + Google); deepest GitHub integration; repository memory that learns across sessions; cloud agent delegation.