# deploy
Analyze and deploy a GitHub repository to the cloud.
The deploy command clones a GitHub repository, analyzes its stack (language, framework, Docker configuration, environment variables), generates an infrastructure plan using AI, and optionally executes that plan to deploy the application. It supports multiple cloud providers and deployment targets.
## Usage

```shell
clanker deploy [repo-url] [flags]
```

The repo-url argument is required and should be a GitHub repository URL (for example, https://github.com/user/repo).
## Flags

### Target
Configure where and how the application is deployed.
| Flag | Type | Default | Description |
|---|---|---|---|
| --provider | string | "aws" | Cloud provider to deploy to. Options: aws, gcp, azure, cloudflare, digitalocean, hetzner |
| --target | string | "fargate" | Deployment target within the provider. Options: fargate, ec2, eks |
| --instance-type | string | "t3.small" | EC2 instance type (only used with --target ec2) |
| --new-vpc | bool | false | Create a new VPC instead of using the default VPC |
| --enforce-image-deploy | bool | false | Force the ECR image-based deploy path (avoids building Docker images on EC2 via user data) |
### Profile / Auth
Specify cloud provider credentials and authentication.
| Flag | Type | Default | Description |
|---|---|---|---|
| --profile | string | "" | AWS CLI profile to use |
| --ai-profile | string | "" | AI provider profile to use (as defined in ~/.clanker.yaml) |
| --gcp-project | string | "" | GCP project ID (required to apply with --provider gcp) |
| --azure-subscription | string | "" | Azure subscription ID (required to apply with --provider azure) |
| --do-token | string | "" | DigitalOcean access token (or set the DIGITALOCEAN_ACCESS_TOKEN env var) |
| --hetzner-token | string | "" | Hetzner Cloud API token (or set the HCLOUD_TOKEN env var) |
### AI Provider Keys (Override Config)
Override the API keys set in your configuration file.
| Flag | Type | Default | Description |
|---|---|---|---|
| --openai-key | string | "" | OpenAI API key (overrides config) |
| --anthropic-key | string | "" | Anthropic API key (overrides config) |
| --gemini-key | string | "" | Gemini API key (overrides config) |
| --deepseek-key | string | "" | DeepSeek API key (overrides config) |
| --cohere-key | string | "" | Cohere API key (overrides config) |
| --minimax-key | string | "" | MiniMax API key (overrides config) |
### AI Model Override
Override the model used by a specific AI provider.
| Flag | Type | Default | Description |
|---|---|---|---|
| --openai-model | string | "" | OpenAI model to use (overrides config) |
| --anthropic-model | string | "" | Anthropic model to use (overrides config) |
| --gemini-model | string | "" | Gemini model to use (overrides config) |
| --deepseek-model | string | "" | DeepSeek model to use (overrides config) |
| --cohere-model | string | "" | Cohere model to use (overrides config) |
| --minimax-model | string | "" | MiniMax model to use (overrides config) |
### Execution
| Flag | Type | Default | Description |
|---|---|---|---|
| --apply | bool | false | Apply the plan immediately after generation. Without this flag, only the plan JSON is output to stdout. |
## How It Works
The deploy command runs through several phases:

1. Clone and Analyze -- The repository is cloned and analyzed to detect the programming language, framework, Dockerfile presence, environment variables, listening ports, and external dependencies.
2. Intelligence Pipeline -- A multi-phase AI-driven analysis determines the best deployment strategy:
   - Exploration -- Deeper analysis of project structure and configuration.
   - Deep Analysis -- Identifies required environment variables, start commands, health endpoints, and application complexity.
   - Infrastructure Scan -- Examines existing cloud infrastructure (VPCs, subnets, security groups, AMIs) to inform the plan.
   - Architecture Decision -- Selects the deployment method (EC2, Fargate, App Runner, Cloud Run, etc.) based on the application profile.
3. Plan Generation -- The AI generates a step-by-step CLI command plan in JSON format. The plan is validated and repaired through multiple passes to ensure correctness.
4. Placeholder Resolution -- Infrastructure-specific values (VPC IDs, subnet IDs, AMI IDs, account IDs) are resolved from your existing cloud environment.
5. Execution (with --apply) -- The plan is executed in phases:
   - Phase 1: Create infrastructure (ECR repository, VPC, security groups, IAM roles)
   - Phase 2: Build and push Docker image (if applicable)
   - Phase 3: Launch application (EC2 instances, load balancers, services)
   - Phase 4: Verify deployment health
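Because the generated plan is plain JSON, it can be inspected on the command line before applying it. The snippet below uses a hand-written stand-in plan file, since the exact plan schema is not documented here -- the field names (`steps`, `phase`, `command`) are illustrative assumptions, not Clanker's actual output format; check a real plan with `jq keys` first.

```shell
# Create a stand-in plan file mimicking a phased command plan.
# NOTE: the "steps"/"phase"/"command" field names are assumptions,
# not the documented schema of clanker's plan output.
cat > deploy-plan.json <<'EOF'
{
  "steps": [
    {"phase": 1, "command": "aws ecr create-repository --repository-name my-api"},
    {"phase": 2, "command": "docker build -t my-api ."},
    {"phase": 3, "command": "aws ecs create-service --service-name my-api"}
  ]
}
EOF

# Print each step grouped by phase for review before applying
jq -r '.steps[] | "phase \(.phase): \(.command)"' deploy-plan.json
```

The same pattern works on a real plan saved via `clanker deploy ... > deploy-plan.json`, adjusted to whatever field names the actual JSON uses.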
## Examples

### AWS Deployments

```shell
# Generate a deployment plan (plan only, no execution)
clanker deploy https://github.com/user/my-api

# Deploy to AWS Fargate (default)
clanker deploy https://github.com/user/my-api --apply

# Deploy to EC2
clanker deploy https://github.com/user/my-api --target ec2 --apply

# Deploy to EC2 with a specific instance type
clanker deploy https://github.com/user/my-api --target ec2 --instance-type t3.medium --apply

# Deploy to EKS
clanker deploy https://github.com/user/my-api --target eks --apply

# Deploy using a specific AWS profile
clanker deploy https://github.com/user/my-api --profile production --apply

# Deploy into a new VPC (instead of the default VPC)
clanker deploy https://github.com/user/my-api --new-vpc --apply

# Force ECR image-based deployment
clanker deploy https://github.com/user/my-api --enforce-image-deploy --apply
```

### GCP Deployments
```shell
# Generate a GCP deployment plan
clanker deploy https://github.com/user/my-api --provider gcp

# Deploy to GCP Cloud Run
clanker deploy https://github.com/user/my-api --provider gcp --gcp-project my-project --apply
```

### Azure Deployments
```shell
# Generate an Azure deployment plan
clanker deploy https://github.com/user/my-api --provider azure

# Deploy to Azure Container Apps
clanker deploy https://github.com/user/my-api --provider azure --azure-subscription "sub-id" --apply
```

### Cloudflare Deployments
```shell
# Deploy to Cloudflare Workers or Pages
clanker deploy https://github.com/user/my-frontend --provider cloudflare --apply
```

### DigitalOcean Deployments
```shell
# Deploy to DigitalOcean
clanker deploy https://github.com/user/my-api --provider digitalocean --do-token "dop_v1_..." --apply

# Or use the environment variable
export DIGITALOCEAN_ACCESS_TOKEN="dop_v1_..."
clanker deploy https://github.com/user/my-api --provider digitalocean --apply
```

### Hetzner Deployments
```shell
# Deploy to Hetzner Cloud
clanker deploy https://github.com/user/my-api --provider hetzner --hetzner-token "your-token" --apply

# Or use the environment variable
export HCLOUD_TOKEN="your-token"
clanker deploy https://github.com/user/my-api --provider hetzner --apply
```

### AI Provider Selection
```shell
# Use Anthropic for plan generation
clanker deploy https://github.com/user/my-api --ai-profile anthropic --apply

# Override the model
clanker deploy https://github.com/user/my-api --gemini-model "gemini-2.5-pro" --apply
```

### Plan Review Workflow
A common pattern is to generate the plan first, review it, then apply it separately:
```shell
# Step 1: Generate and save the plan
clanker deploy https://github.com/user/my-api > deploy-plan.json

# Step 2: Review the plan
jq . deploy-plan.json

# Step 3: Apply the reviewed plan
clanker ask --apply --plan-file deploy-plan.json
```

## Environment Variables
When deploying, Clanker detects required environment variables from the repository (.env.example, docker-compose.yml, application code). In --apply mode, it prompts you interactively for values of required secrets such as API keys and tokens. These values are stored in AWS Secrets Manager (or the equivalent service for other providers) as part of the deployment.
For non-interactive contexts (such as CI/CD), you can set secret values as environment variables before running the deploy command. Clanker automatically detects environment variables that contain TOKEN, KEY, PASSWORD, or SECRET in their names.
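The name-based detection described above can be checked by hand before a CI run. The one-liner below approximates the documented heuristic (a variable name containing TOKEN, KEY, PASSWORD, or SECRET) against the current environment; it is a sketch of the stated rule, not Clanker's actual implementation, and the STRIPE_API_KEY variable is purely an example.

```shell
# Example secret, exported the way a CI job would provide it
export STRIPE_API_KEY="sk_test_example"

# List exported variable names matching the documented secret-name
# heuristic (contains TOKEN, KEY, PASSWORD, or SECRET)
env | cut -d= -f1 | grep -E 'TOKEN|KEY|PASSWORD|SECRET'
```

Any name printed here is one Clanker would likely treat as a secret when deploying non-interactively.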
## Deployment Targets by Provider
| Provider | Supported Targets |
|---|---|
| AWS | Fargate (default), EC2, EKS, App Runner, Lambda, Lightsail, S3+CloudFront |
| GCP | Cloud Run, Compute Engine, GKE |
| Azure | Container Apps, VMs, AKS |
| Cloudflare | Workers, Pages, Containers |
| DigitalOcean | Droplets, App Platform, Kubernetes |
| Hetzner | Cloud Servers |
The AI selects the most appropriate target based on the application profile. You can override the selection using the --target flag (AWS) or let the architecture decision engine choose automatically.
## See Also
- ask -- The primary query command (also used to apply plans)
- k8s -- Kubernetes cluster management
- Configuration -- Setting up provider profiles