You want the portability of containers without managing Kubernetes. Or you have Kubernetes but want scale-to-zero for bursty workloads. Serverless containers solve both.
# Deploy a container in one command
gcloud run deploy my-api \
  --image gcr.io/myproject/api:v1 \
  --region europe-west4 \
  --allow-unauthenticated \
  --min-instances 0 \
  --max-instances 100 \
  --memory 512Mi \
  --cpu 1 \
  --concurrency 80

Strengths: Best developer experience, fastest cold starts (~200ms), generous free tier, scale-to-zero
Weaknesses: GCP only (unless using Cloud Run on GKE), 60-min max request timeout
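Once deployed, the assigned service URL can be fetched and smoke-tested from the same CLI. A quick sketch, matching the service name and region above (the `/healthz` path is an assumption about the container, not something Cloud Run provides):

```shell
# Grab the HTTPS URL Cloud Run assigned to the service
URL=$(gcloud run services describe my-api \
  --region europe-west4 \
  --format 'value(status.url)')

# Smoke-test it (assumes the container exposes a /healthz endpoint)
curl -s "$URL/healthz"
```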
# ECS Fargate task definition
{
  "family": "my-api",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [{
    "name": "api",
    "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/api:v1",
    "portMappings": [{"containerPort": 8080}],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/my-api",
        "awslogs-region": "eu-west-1"
      }
    }
  }]
}

Strengths: Deep AWS integration, long-running tasks, ECS + EKS support, Fargate Spot for cost savings
Weaknesses: Slower cold starts (~1-3s), more configuration needed, no scale-to-zero without extra tooling
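The task definition alone runs nothing; it has to be registered and wired into an ECS service. A sketch of those two steps, where the cluster name, subnet, and security-group IDs are placeholders you would replace with your own:

```shell
# Register the task definition (saved locally as task-def.json)
aws ecs register-task-definition --cli-input-json file://task-def.json

# Run two always-on copies behind an ECS service on Fargate
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-api \
  --task-definition my-api \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}'
```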
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-api
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containers:
        - image: registry.company.com/api:v1
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
          ports:
            - containerPort: 8080

Strengths: Cloud-agnostic, runs on any Kubernetes, full control, no vendor lock-in
Weaknesses: You manage Kubernetes, operational overhead, slower iteration than managed services
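Applying the manifest assumes Knative Serving is already installed on the cluster. A sketch, assuming the manifest is saved locally as `service.yaml`:

```shell
# Apply the Knative Service and check it came up
kubectl apply -f service.yaml
kubectl get ksvc my-api    # shows the service URL and readiness
```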
|                | Cloud Run   | Fargate     | Knative          |
|----------------|-------------|-------------|------------------|
| Scale to zero  | Yes         | No (native) | Yes              |
| Cold start     | ~200ms      | ~1-3s       | ~500ms-2s        |
| Max timeout    | 60min       | Unlimited   | Configurable     |
| GPU support    | Yes         | No          | Yes (with K8s)   |
| Pricing model  | Per-request | Per-second  | Your K8s cost    |
| Vendor lock-in | GCP         | AWS         | None             |
| Min config     | 1 command   | ~50 lines   | K8s + Knative    |
| Observability  | Built-in    | CloudWatch  | BYO (Prometheus) |

"I want the simplest possible deployment"
→ Cloud Run
"I'm already on AWS and need long-running containers"
→ Fargate
"I need cloud portability and already run Kubernetes"
→ Knative
"I need scale-to-zero AND I'm on AWS"
→ Cloud Run (yes, consider multi-cloud) or Karpenter + KEDA
"I have bursty traffic with long idle periods"
→ Cloud Run (scale-to-zero saves money)

Assumptions: 1M requests/month, 200ms avg response, 512MB memory, 1 vCPU
Google Cloud Run:
  Compute:  1M × 0.2s × $0.00002400/vCPU-s         = $4.80
  Memory:   1M × 0.2s × 0.5GiB × $0.00000250/GiB-s = $0.25
  Requests: 1M × $0.40/million                     = $0.40
  Total: ~$5.45/month

AWS Fargate (always-on, 2 tasks):
  vCPU:   2 × $0.04048/hr × 730h        = $59.10
  Memory: 2 × 1GB × $0.004445/hr × 730h = $6.49
  Total: ~$65.59/month

Knative (on existing K8s):
  Incremental cost on existing cluster = ~$0 (idle)
  (But you're paying for the cluster anyway)

Cloud Run wins dramatically for bursty workloads. Fargate wins for steady-state. Knative is "free" if you already have Kubernetes capacity.
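The arithmetic above also answers a natural follow-up: at what traffic volume does always-on Fargate become cheaper than pay-per-use Cloud Run? A small sketch using the same list prices and workload assumptions (200ms responses, 1 vCPU, 512MB; the prices are the ones quoted above and may drift):

```python
# Break-even sketch: Cloud Run (pay-per-use, scale-to-zero) vs.
# Fargate (2 always-on tasks). Prices are the list prices quoted above.
CR_VCPU_S = 0.000024     # $/vCPU-second
CR_GIB_S  = 0.0000025    # $/GiB-second
CR_REQ_M  = 0.40         # $/million requests
FARGATE_MONTHLY = 65.59  # 2 always-on tasks, 730h/month

def cloud_run_monthly(requests, avg_secs=0.2, vcpu=1, mem_gib=0.5):
    """Cloud Run monthly cost under request-based billing."""
    busy = requests * avg_secs  # total billed instance-seconds
    return (busy * vcpu * CR_VCPU_S
            + busy * mem_gib * CR_GIB_S
            + requests / 1_000_000 * CR_REQ_M)

print(f"1M req/month:  ${cloud_run_monthly(1_000_000):.2f}")   # $5.45
print(f"10M req/month: ${cloud_run_monthly(10_000_000):.2f}")  # $54.50

# Walk up in 1M steps until Cloud Run stops being cheaper
reqs = 1_000_000
while cloud_run_monthly(reqs) < FARGATE_MONTHLY:
    reqs += 1_000_000
print(f"Break-even: ~{reqs // 1_000_000}M requests/month")     # ~13M
```

So under these assumptions the crossover sits around 13M requests/month; below that, scale-to-zero wins, above it, always-on Fargate pricing starts to look reasonable.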
For new projects without cloud commitment: start with Cloud Run. The developer experience is unmatched, and you can always migrate to Kubernetes later (the container is portable).
For existing Kubernetes users wanting serverless-like scaling: add Knative to your cluster. I deploy it with Ansible (patterns at Ansible Pilot) and detail the Kubernetes integration at Kubernetes Recipes.
For Terraform-managed infrastructure across clouds, including serverless container services, see Terraform Pilot.
The serverless container space is mature. The question isn't whether to use it, but which flavor fits your architecture.
AI & Cloud Advisor with 18+ years experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.