ask
Ask AI about your cloud infrastructure.
The ask command is the primary entry point for querying cloud infrastructure using natural language. It supports multiple cloud providers, AI providers, infrastructure modification planning, and agent-based investigation workflows.
Usage
clanker ask [question] [flags]
The question should be a natural language query about your infrastructure. Clanker uses intelligent routing to determine which cloud provider and services are relevant, or you can specify them explicitly with flags.
Flags
Cloud Provider Context
These flags force Clanker to include context from a specific cloud provider. When none are specified, Clanker infers the provider from the question content.
| Flag | Type | Default | Description |
|---|---|---|---|
| --aws | bool | false | Include AWS infrastructure context |
| --gcp | bool | false | Include GCP infrastructure context |
| --azure | bool | false | Include Azure infrastructure context |
| --cloudflare | bool | false | Include Cloudflare infrastructure context |
| --digitalocean | bool | false | Include DigitalOcean infrastructure context |
| --hetzner | bool | false | Include Hetzner Cloud infrastructure context |
| --vercel | bool | false | Include Vercel infrastructure context |
| --verda | bool | false | Include Verda Cloud (GPU/AI) infrastructure context |
| --github | bool | false | Include GitHub repository context |
| --terraform | bool | false | Include Terraform workspace context |
IAM / Security
Flags for IAM analysis, compliance reporting, and infrastructure discovery.
| Flag | Type | Default | Description |
|---|---|---|---|
| --iam | bool | false | Route query to the IAM agent for security analysis |
| --role-arn | string | "" | Scope IAM query to a specific role ARN |
| --policy-arn | string | "" | Scope IAM query to a specific policy ARN |
| --discovery | bool | false | Run comprehensive infrastructure discovery across all services |
| --compliance | bool | false | Generate a compliance report showing all services, ports, and protocols |
Profile Selection
Specify which cloud account, project, or workspace to target.
| Flag | Type | Default | Description |
|---|---|---|---|
| --profile | string | "" | AWS CLI profile to use for infrastructure queries |
| --gcp-project | string | "" | GCP project ID to use for infrastructure queries |
| --azure-subscription | string | "" | Azure subscription ID to use for infrastructure queries |
| --workspace | string | "" | Terraform workspace to use for infrastructure queries |
| --ai-profile | string | "" | AI provider profile to use (as defined in ~/.clanker.yaml) |
AI Provider Keys (Override Config)
Override the API keys set in your configuration file. These flags take precedence over values in ~/.clanker.yaml and environment variables.
| Flag | Type | Default | Description |
|---|---|---|---|
| --openai-key | string | "" | OpenAI API key (overrides config) |
| --anthropic-key | string | "" | Anthropic API key (overrides config) |
| --gemini-key | string | "" | Gemini API key (overrides config and env vars) |
| --deepseek-key | string | "" | DeepSeek API key (overrides config) |
| --cohere-key | string | "" | Cohere API key (overrides config) |
| --minimax-key | string | "" | MiniMax API key (overrides config) |
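The precedence above (flag first, then environment variables, then the config file) can be sketched as a simple resolution helper. This is an illustrative sketch, not Clanker's internal code, and the env-before-config ordering within the fallback is an assumption:

```python
import os

def resolve_key(flag_value, env_var, config_value):
    """Return the first non-empty API key: CLI flag, then environment
    variable, then config file value; None if nothing is set."""
    if flag_value:                      # e.g. --openai-key "sk-..."
        return flag_value
    env_value = os.environ.get(env_var)  # e.g. OPENAI_API_KEY
    if env_value:
        return env_value
    return config_value or None          # e.g. from ~/.clanker.yaml
```

For example, `resolve_key("sk-flag", "OPENAI_API_KEY", "sk-config")` returns `"sk-flag"` regardless of what the environment or config file contains.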
AI Model Override
Override the model used by a specific AI provider for this query. These take precedence over models configured in ~/.clanker.yaml.
| Flag | Type | Default | Description |
|---|---|---|---|
| --openai-model | string | "" | OpenAI model to use (overrides config) |
| --anthropic-model | string | "" | Anthropic model to use (overrides config) |
| --gemini-model | string | "" | Gemini model to use (overrides config) |
| --deepseek-model | string | "" | DeepSeek model to use (overrides config) |
| --cohere-model | string | "" | Cohere model to use (overrides config) |
| --minimax-model | string | "" | MiniMax model to use (overrides config) |
| --github-model | string | "" | GitHub Models model to use (overrides config) |
Infrastructure Changes (Maker)
Flags for generating and applying infrastructure change plans. The maker pipeline produces a JSON plan that can be reviewed before execution.
| Flag | Type | Default | Description |
|---|---|---|---|
| --maker | bool | false | Generate a CLI plan (JSON) for infrastructure changes. Supports AWS, GCP, Azure, Cloudflare, DigitalOcean, and Hetzner. |
| --destroyer | bool | false | Allow destructive operations when using --maker |
| --apply | bool | false | Apply an approved maker plan. Reads from stdin unless --plan-file is provided. |
| --plan-file | string | "" | Path to a maker plan JSON file for --apply |
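Because a maker plan is plain JSON, it can be inspected or validated before --apply. The plan below is a hypothetical example: the field names (provider, destructive, steps, command) are assumptions for illustration, not a documented schema, so check real --maker output before writing tooling against it:

```python
import json

# Hypothetical maker plan, as might be saved with:
#   clanker ask --maker "..." > plan.json
plan_text = """
{
  "provider": "aws",
  "destructive": false,
  "steps": [
    {"command": "aws s3api create-bucket --bucket my-app-data"},
    {"command": "aws s3api put-bucket-versioning --bucket my-app-data --versioning-configuration Status=Enabled"}
  ]
}
"""

plan = json.loads(plan_text)

# Review each proposed CLI call before applying the plan.
for step in plan["steps"]:
    print(step["command"])
```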
Agent / Routing
Control how Clanker routes your query to specialized agents.
| Flag | Type | Default | Description |
|---|---|---|---|
| --agent-trace | bool | false | Show detailed coordinator agent lifecycle logs (overrides config) |
| --route-only | bool | false | Return routing decision as JSON without executing (for backend integration) |
| --agent | string | "" | Use a specific agent to handle the query. Available agents: hermes, claude-code, copilot, codex, claude. See Agents for details. |
| --github-coding-agent-model | string | "" | Override the model used for GitHub coding agent delegation (copilot, codex, claude) |
Global Flags
These flags are available on all commands.
| Flag | Type | Default | Description |
|---|---|---|---|
| --debug | bool | false | Enable debug output (shows progress and internal diagnostics) |
| --config | string | "" | Config file path (default is $HOME/.clanker.yaml) |
| --local-mode | bool | true | Enable local mode with rate limiting to prevent system overload |
| --local-delay | int | 100 | Delay in milliseconds between calls in local mode |
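In effect, local mode inserts a fixed pause (--local-delay, default 100 ms) between successive cloud CLI calls. A minimal sketch of that behavior, with a hypothetical helper that is not Clanker's implementation:

```python
import time

def rate_limited(calls, delay_ms=100):
    """Run callables in order, sleeping delay_ms between consecutive
    calls -- roughly what --local-mode does for cloud CLI operations."""
    results = []
    for i, call in enumerate(calls):
        if i > 0:
            time.sleep(delay_ms / 1000.0)
        results.append(call())
    return results
```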
How It Works
The ask command follows a three-stage pipeline:
- Query Analysis -- The LLM analyzes your natural language question and determines which cloud operations are needed to answer it.
- Data Gathering -- Clanker executes the relevant cloud CLI operations in parallel (for example, `aws ec2 describe-instances`, `gcloud compute instances list`) and collects the results.
- Response Generation -- The results are combined with context and sent back to the LLM, which produces a final human-readable response.
When no provider flag is specified, Clanker uses intelligent keyword-based routing (with optional LLM classification for ambiguous queries) to determine which cloud provider and services to query.
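The keyword stage of that routing can be sketched as a lookup over provider-specific terms. The keyword table below is illustrative only, and the sketch omits the LLM-classification fallback Clanker uses for ambiguous queries:

```python
import re

# Illustrative keyword table -- not Clanker's actual routing data.
PROVIDER_KEYWORDS = {
    "aws": ["ec2", "s3", "lambda", "rds", "iam"],
    "gcp": ["cloud run", "gke", "bigquery"],
    "azure": ["azure", "aks"],
    "cloudflare": ["dns", "workers", "waf"],
}

def infer_providers(question):
    """Return providers whose keywords appear (as whole words) in the
    question; an empty result would fall through to LLM classification."""
    q = question.lower()
    return sorted(
        provider
        for provider, keywords in PROVIDER_KEYWORDS.items()
        if any(re.search(r"\b" + re.escape(kw) + r"\b", q) for kw in keywords)
    )
```

Word-boundary matching matters here: a naive substring check would route "DNS records" to AWS because "records" contains "rds".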
Examples
Basic AWS Queries
# List running EC2 instances
clanker ask "What EC2 instances are running?"
# Check RDS instance status
clanker ask "What's the current RDS instance status?"
# Show Lambda functions with high error rates
clanker ask "Show me lambda functions with high error rates"
# Query with a specific AWS profile
clanker ask --profile production "How many S3 buckets do I have?"
Multi-Cloud Queries
# Query GCP infrastructure
clanker ask --gcp --gcp-project my-project "List all Cloud Run services"
# Query Azure resources
clanker ask --azure --azure-subscription "sub-id-here" "Show me all VMs"
# Query Cloudflare
clanker ask --cloudflare "Show me DNS records for example.com"
# Query DigitalOcean
clanker ask --digitalocean "List my droplets"
# Query Hetzner Cloud
clanker ask --hetzner "What servers are running?"
# Query Verda Cloud (GPU/AI)
clanker ask --verda "what GPU instances are running?"
clanker ask --verda "how much am I spending this month?"
GitHub and Terraform
# Query GitHub repository
clanker ask --github "What pull requests are open?"
# Query Terraform state
clanker ask --terraform --workspace production "Show me the current resources"
IAM and Security Analysis
# Run a comprehensive IAM security analysis
clanker ask --iam "Analyze all IAM roles for overpermissive access"
# Scope analysis to a specific role
clanker ask --iam --role-arn "arn:aws:iam::123456789012:role/my-role" "What permissions does this role have?"
# Scope to a specific policy
clanker ask --iam --policy-arn "arn:aws:iam::123456789012:policy/my-policy" "Analyze this policy"
# Generate a compliance report
clanker ask --compliance
# Run full infrastructure discovery
clanker ask --discovery "Give me a complete inventory of all active services"
Infrastructure Changes with Maker
# Generate a plan for creating an S3 bucket
clanker ask --maker "Create an S3 bucket called my-app-data with versioning enabled"
# Generate a GCP plan
clanker ask --maker --gcp "Create a Cloud Run service for my API"
# Generate a Verda plan (REST-native, no CLI dependency)
clanker ask --maker --verda "spin up one H100 in FIN-01 with my default ssh key"
# Generate a plan that includes destructive operations
clanker ask --maker --destroyer "Delete the unused staging VPC"
# Save a plan to a file for review
clanker ask --maker "Create an RDS PostgreSQL instance" > plan.json
# Apply a saved plan
clanker ask --apply --plan-file plan.json
# Pipe a plan directly from generation to execution
clanker ask --maker "Create an S3 bucket called data-lake" | clanker ask --apply
Agent Delegation
# Use the Hermes conversational agent
clanker ask --agent hermes "Explain my infrastructure setup"
# Use Claude Code (Anthropic's CLI) for infrastructure analysis
clanker ask --agent claude-code "Analyze my security groups for issues"
# Use GitHub Copilot as the coding agent
clanker ask --agent copilot "Analyze my codebase for security issues"
# Use Claude as the GitHub coding agent (via copilot CLI)
clanker ask --agent claude "Review my Terraform modules"
# Get the routing decision without executing
clanker ask --route-only "Create an EKS cluster"
TIP
--agent claude-code uses Anthropic's standalone Claude Code CLI, while --agent claude uses Claude via the GitHub Copilot CLI. They are different tools. See Agents for details.
AI Provider Selection
# Use a specific AI profile
clanker ask --ai-profile anthropic "Summarize my infrastructure costs"
# Override the model for this query
clanker ask --gemini-model "gemini-2.5-pro" "Analyze my security groups"
# Pass an API key directly
clanker ask --openai-key "sk-..." "What EC2 instances are running?"
Debugging
# Enable debug output to see routing decisions and API calls
clanker ask --debug "What pods are running in my EKS cluster?"
# Show agent lifecycle trace
clanker ask --agent-trace "Investigate why my Lambda is timing out"
Routing Behavior
When no explicit provider flag is set, Clanker routes your query automatically:
- Keyword inference -- Clanker scans your question for cloud-specific terms (for example, "EC2", "Cloud Run", "Azure VM") and activates the appropriate provider.
- LLM classification -- For ambiguous queries where multiple providers could apply, Clanker uses a lightweight LLM call to classify the query.
- IAM routing -- Questions mentioning IAM roles, policies, trust policies, or security audits are routed to the specialized IAM agent.
- Kubernetes routing -- Questions about pods, deployments, clusters, and other K8s concepts are routed to the K8s agent.
- Clanker Cloud routing -- Questions that explicitly mention `clanker cloud`, `clanker cloud mcp`, or the running desktop app can be forwarded to the local Clanker Cloud backend when it is available.
- Cloudflare sub-routing -- Cloudflare queries are further routed to specialized sub-agents for DNS, WAF, Workers, Analytics, and Zero Trust.
Clanker Cloud Examples
Use --route-only when you want to inspect the router decision without executing the request:
# Inspect the routing decision for Clanker Cloud queries
clanker ask --route-only "use clanker cloud mcp to show my saved settings"
clanker ask --route-only "ask clanker cloud about the running app backend"
In interactive mode, the same explicit phrases help `clanker talk` route app-specific requests to the local Clanker Cloud backend before falling back to Hermes.
For direct MCP access to the CLI itself, see mcp.
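Since --route-only emits the routing decision as JSON for backend integration, a caller can parse it before deciding whether to execute. The field names below (agent, providers) are assumptions about the schema, not documented output; inspect real --route-only output before depending on specific fields:

```python
import json

def parse_route(raw):
    """Parse JSON emitted by `clanker ask --route-only ...`.
    The 'agent' and 'providers' fields are assumed, not documented."""
    decision = json.loads(raw)
    return {
        "agent": decision.get("agent"),
        "providers": decision.get("providers", []),
    }

# In a real backend, `raw` would come from something like:
#   subprocess.run(["clanker", "ask", "--route-only", question], ...)
```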
See Also
- Configuration -- Setting up provider profiles and API keys
- deploy -- Deploy repositories to the cloud
- k8s -- Kubernetes cluster management
- Config File Reference -- Full configuration file reference