## Introduction
Clanker is an AI-powered command-line tool for querying and managing cloud infrastructure using natural language. Instead of memorizing CLI flags, API endpoints, and service-specific syntax, you describe what you want to know or do in plain English, and Clanker handles the rest.
## Who Is Clanker For?
- DevOps and platform engineers who manage infrastructure across multiple cloud providers and want a faster way to get answers about their systems.
- Developers who need to inspect cloud resources, check deployment status, or debug production issues without deep expertise in every cloud provider's CLI.
- Security and compliance teams who need to audit infrastructure, analyze IAM policies, and generate compliance reports.
- Teams adopting multi-cloud architectures who want a unified interface across AWS, GCP, Azure, Cloudflare, DigitalOcean, Hetzner, Verda, and Kubernetes.
## Supported Cloud Providers
Clanker supports the following infrastructure providers:
| Provider | Flag | Description |
|---|---|---|
| AWS | --aws | EC2, Lambda, S3, RDS, ECS, DynamoDB, CloudFormation, IAM, and dozens of other services |
| GCP | --gcp | Compute Engine, Cloud Run, GKE, Cloud Storage, BigQuery, and more |
| Azure | --azure | Virtual Machines, Container Apps, AKS, Blob Storage, and more |
| Cloudflare | --cloudflare | DNS, Workers, Pages, R2, D1, WAF, Zero Trust, Analytics |
| DigitalOcean | --digitalocean | Droplets, App Platform, Kubernetes, Managed Databases |
| Hetzner | --hetzner | Cloud Servers, Load Balancers, Volumes, Firewalls |
| Verda (formerly DataCrunch) | --verda | GPU/AI instances (H100/H200/A100/B200/L40S), Instant Clusters, Volumes, Serverless Containers + Jobs |
| Kubernetes | clanker k8s | EKS, GKE, kubeadm, verda-instant, and generic kubectl clusters |
## Supported AI Providers
Clanker supports multiple AI backends for its language processing. You can switch between them based on your preference, cost constraints, or organizational requirements:
| Provider | Config Key | Notes |
|---|---|---|
| OpenAI | openai | GPT-5 and other OpenAI models via API key |
| Anthropic | anthropic | Claude Opus 4.6 and other Anthropic models via API key |
| AWS Bedrock | bedrock | Claude and other models via your AWS profile (no separate API key required) |
| Google Gemini (API) | gemini-api | Gemini 2.5 Flash and other Gemini models via API key |
| Google Gemini (ADC) | gemini | Gemini models via Google Application Default Credentials |
| DeepSeek | deepseek | DeepSeek Chat via API key |
| Cohere | cohere | Command A and other Cohere models via API key |
| MiniMax | minimax | MiniMax M2.5 via API key |
| GitHub Models | github-models | Access models through GitHub's model marketplace |
## Architecture
### Three-Stage Query Pipeline
When you run `clanker ask`, your question goes through a three-stage pipeline:
Stage 1 -- Query Analysis. The AI analyzes your natural-language question and determines which cloud provider operations are needed to answer it. For example, "show me EC2 instances with high CPU" might translate to `aws ec2 describe-instances` and `aws cloudwatch get-metric-statistics` calls.

Stage 2 -- Execution. Clanker executes the identified operations in parallel against your cloud provider APIs using the appropriate CLI tools and credentials. Results are collected and aggregated.
Stage 3 -- Response Synthesis. The raw API output is combined with the original question and sent back to the AI, which produces a clear, human-readable summary.
This pipeline ensures that responses are grounded in real data from your infrastructure rather than fabricated from the model's training data.
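The three stages above can be sketched as plain functions composed in sequence. This is an illustrative sketch only: `analyze_query`, `execute_operations`, and `synthesize_response` are hypothetical names, not Clanker's actual internals, and the simple keyword check stands in for a real call to the configured AI backend.

```python
# Illustrative sketch of the three-stage ask pipeline.
# All function names and the keyword heuristic are hypothetical.

def analyze_query(question: str) -> list[str]:
    """Stage 1: map a natural-language question to provider operations."""
    # The real tool would ask the configured AI model to do this.
    if "cpu" in question.lower():
        return ["aws ec2 describe-instances",
                "aws cloudwatch get-metric-statistics"]
    return ["aws ec2 describe-instances"]

def execute_operations(operations: list[str]) -> dict[str, str]:
    """Stage 2: run each operation (in parallel in the real tool) and collect output."""
    return {op: f"<raw output of `{op}`>" for op in operations}

def synthesize_response(question: str, results: dict[str, str]) -> str:
    """Stage 3: hand raw output plus the original question back to the AI."""
    return f"Answer to {question!r} grounded in {len(results)} API result(s)."

question = "show me EC2 instances with high CPU"
ops = analyze_query(question)
answer = synthesize_response(question, execute_operations(ops))
```

Because Stage 3 only summarizes data gathered in Stage 2, the answer is anchored to real API output rather than to the model's training data.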
### Intelligent Query Routing
Clanker includes a smart routing layer that determines which cloud provider or agent should handle your query. If you use an explicit flag like `--aws` or `--cloudflare`, that provider is used directly. If no flag is provided, Clanker uses keyword-based inference and, for ambiguous queries, an LLM-based classifier to route the request to the correct provider.
Questions that explicitly mention Clanker Cloud are treated specially. If you ask about the running desktop app, its saved settings, or the Clanker Cloud MCP surface, Clanker can route those requests to the local app backend instead of answering only from generic infrastructure context.
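Under those rules, the fallback order could look something like the sketch below. The `KEYWORDS` table and `route` function are hypothetical illustrations of the documented behavior (explicit flag first, then keyword inference, then the LLM classifier), not Clanker's implementation:

```python
# Hypothetical routing sketch: flag > keyword inference > LLM classifier.
KEYWORDS = {
    "aws": ["ec2", "lambda", "s3", "dynamodb"],
    "cloudflare": ["workers", "r2", "dns", "waf"],
    "hetzner": ["hetzner"],
}

def route(query: str, explicit_flag: str | None = None) -> str:
    if explicit_flag:                 # e.g. --aws was passed: use it directly
        return explicit_flag
    words = query.lower().split()
    hits = {p for p, kws in KEYWORDS.items() if any(k in words for k in kws)}
    if len(hits) == 1:                # exactly one provider matched: unambiguous
        return hits.pop()
    return "llm-classifier"           # ambiguous or no match: defer to the LLM

provider = route("list R2 buckets")   # keyword "r2" points at Cloudflare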
### Agent Investigation System
For complex queries involving logs, errors, or multi-service analysis, Clanker can optionally use an agent-based investigation system. The agent coordinator orchestrates multi-step investigations using:
- Decision tree traversal to select the appropriate investigation strategy.
- Dependency-aware parallel execution via a scheduler that respects data dependencies between steps.
- A shared data bus that allows agents to communicate findings across investigation steps.
- Playbooks that define structured workflows for common investigation patterns.
You can enable detailed agent lifecycle logs with the `--agent-trace` flag.
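A minimal sketch of dependency-aware parallel execution, using Python's standard `graphlib` to compute which steps may run together in each wave. The step names, the wave structure, and the `data_bus` dict are all illustrative assumptions, not Clanker's actual scheduler or data bus:

```python
# Toy dependency-aware scheduler: steps whose dependencies are satisfied
# run together in a wave; findings accumulate on a shared data bus.
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on (illustrative names).
steps = {
    "fetch_logs": set(),
    "fetch_metrics": set(),
    "correlate": {"fetch_logs", "fetch_metrics"},
    "summarize": {"correlate"},
}

ts = TopologicalSorter(steps)
ts.prepare()
data_bus: dict[str, str] = {}          # findings shared across steps
waves: list[list[str]] = []
while ts.is_active():
    ready = list(ts.get_ready())       # everything runnable in parallel now
    waves.append(sorted(ready))
    for step in ready:
        data_bus[step] = f"finding from {step}"
        ts.done(step)
```

Here `fetch_logs` and `fetch_metrics` land in the same wave because neither depends on the other, while `correlate` must wait for both.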
### The Maker Pipeline
The maker pipeline handles infrastructure modifications. When you use `--maker`, Clanker generates a JSON execution plan that describes the exact CLI commands needed to create or modify infrastructure. The pipeline works in several phases:
- Plan generation -- The AI produces a structured JSON plan based on your request.
- Plan enrichment -- The plan is enriched with real data from your environment (VPC IDs, subnet IDs, security groups, and so on).
- Validation and repair -- Deterministic and LLM-based validators check the plan for correctness, and a repair agent fixes any issues.
- Execution -- When you apply the plan with `--apply`, Clanker executes each command in order, handling errors with retry logic, placeholder binding from command outputs, and AI-assisted remediation for unexpected failures.
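To make the placeholder-binding idea concrete, here is a toy sketch of a plan step whose command references output captured from an earlier step. The JSON field names (`steps`, `id`, `command`) and the `{step_id.Field}` placeholder syntax are assumptions for illustration; Clanker's real plan schema may differ:

```python
# Hypothetical maker plan: step 2 references a value produced by step 1.
import json
import re

plan = json.loads("""
{
  "steps": [
    {"id": "create_vpc",
     "command": "aws ec2 create-vpc --cidr-block 10.0.0.0/16"},
    {"id": "create_subnet",
     "command": "aws ec2 create-subnet --vpc-id {create_vpc.VpcId} --cidr-block 10.0.1.0/24"}
  ]
}
""")

# Output captured after executing step 1 (value is made up for the sketch).
outputs = {"create_vpc.VpcId": "vpc-0abc123"}

def bind(command: str) -> str:
    """Replace {step_id.Field} placeholders with captured step outputs."""
    return re.sub(r"\{([^}]+)\}", lambda m: outputs[m.group(1)], command)

bound = bind(plan["steps"][1]["command"])
```

Binding at execution time is what lets a single generated plan reference resource IDs that do not exist until earlier steps have run.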
### The Deploy Command
The `clanker deploy` command provides one-click deployment of GitHub repositories to any supported cloud provider. It clones the repository, analyzes the codebase (language, framework, Docker support, environment variables, ports), runs a multi-phase intelligence pipeline, generates a deployment plan, validates and repairs it, and optionally executes it. Supported deployment targets include EC2, ECS Fargate, App Runner, Lambda, S3 with CloudFront, GCP Cloud Run, Azure Container Apps, Cloudflare Workers/Pages, DigitalOcean App Platform, and more.
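The codebase-analysis step can be imagined as marker-file heuristics like the sketch below. The `MARKERS` table and `analyze` function are toy assumptions for illustration, not Clanker's actual detection logic:

```python
# Toy codebase analysis: infer language/framework and Docker support
# from well-known marker files in a cloned repository.
import pathlib
import tempfile

MARKERS = {
    "package.json": ("javascript", "node"),
    "requirements.txt": ("python", None),
    "go.mod": ("go", None),
}

def analyze(repo: pathlib.Path) -> dict:
    info = {"language": None, "framework": None, "docker": False}
    for name, (lang, fw) in MARKERS.items():
        if (repo / name).exists():
            info["language"], info["framework"] = lang, fw
            break
    info["docker"] = (repo / "Dockerfile").exists()
    return info

# Simulate a freshly cloned Node.js repo with a Dockerfile.
with tempfile.TemporaryDirectory() as d:
    repo = pathlib.Path(d)
    (repo / "package.json").write_text("{}")
    (repo / "Dockerfile").write_text("FROM node:20")
    result = analyze(repo)
```

A real analyzer would go much further (ports, environment variables, framework-specific config), but the output of this step is what the later plan-generation phase consumes.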
## Interactive Mode
Clanker supports an interactive conversation mode via the `clanker talk` command. This starts a multi-turn session with the Hermes agent, maintaining context across messages so follow-up questions work naturally.
## MCP Integration
Clanker can also expose its own Model Context Protocol surface through `clanker mcp`. This lets MCP clients inspect routing decisions, query the installed CLI version, and run real Clanker commands through a local MCP server.
Use HTTP transport when you want a local endpoint that tools can call directly, or stdio transport when your MCP client launches the process itself.
```shell
# Serve MCP over a local HTTP endpoint that tools can call directly
clanker mcp --transport http --listen 127.0.0.1:39393

# Or let the MCP client launch Clanker itself over stdio
clanker mcp --transport stdio
```

For complete examples, see mcp.