Measuring Developer Experience: What DORA Misses
How DevEx metrics complement DORA: measuring flow state, cognitive load, and feedback loops, and turning that data into platform improvements.
DORA metrics (deployment frequency, lead time, MTTR, change failure rate) measure delivery performance. They don’t measure developer experience. A team can have great DORA numbers while developers are miserable — fighting tooling, waiting for infrastructure, and context-switching between 12 different dashboards.
Developer experience (DevEx) metrics capture what DORA misses.
Research from DX (building on the SPACE framework) identifies three core dimensions: flow state, cognitive load, and feedback loops.
Can developers get into and stay in flow?
Metrics:
- Uninterrupted coding time (hours/day with no meetings/interrupts)
- Context switches per day (tool/task switches)
- Build/test wait time (seconds idle waiting for CI)
- PR review turnaround (hours from PR to first review)

How much mental overhead does the tooling impose?
Metrics:
- Time to first deploy (new dev → first production deploy)
- Number of tools to complete a task (deploy = how many CLIs?)
- Documentation findability score (can devs find what they need?)
- "How many browser tabs" test (tabs open to deploy = complexity)

How quickly do developers learn if their code works?
Metrics:
- Local build time (seconds from save to seeing result)
- CI pipeline duration (commit to green/red)
- Preview environment availability (PR → working URL)
- Error message clarity (can devs self-diagnose?)

To collect the quantitative side, pull the data from the GitLab/GitHub API:

```python
# Collect from GitLab/GitHub API
class DevExMetrics:
    async def collect_daily(self):
        # Each helper queries the Git hosting API; p50/p95 percentiles
        # separate the typical case from the long tail.
        return {
            'pr_review_time_p50': await self.pr_review_time(percentile=50),
            'pr_review_time_p95': await self.pr_review_time(percentile=95),
            'ci_duration_p50': await self.ci_pipeline_duration(percentile=50),
            'merge_to_deploy_time': await self.merge_to_deploy(),
            'build_failure_rate': await self.build_failures() / await self.total_builds(),
            'rollback_rate': await self.rollbacks() / await self.deploys(),
            'time_to_first_commit': await self.new_dev_first_commit(),
        }
```

The numbers only tell half the story, so pair them with a short developer survey. Rate 1-5:
1. "I can find the documentation I need quickly"
2. "Setting up a new project is straightforward"
3. "I spend most of my time on product work, not tooling"
4. "When something breaks, I can diagnose it myself"
5. "I feel productive with our current tools"
Open-ended:
- "What's the most frustrating part of your development workflow?"
- "If you could change one thing about our platform, what would it be?"

I build DevEx dashboards with the same Grafana stack I use for infrastructure monitoring at Kubernetes Recipes:
Row 1: Flow Metrics
- Avg uninterrupted coding time (target: >3h/day)
- PR review turnaround (target: <4h for p50)
- CI queue time (target: <2min)
Row 2: Cognitive Load
- Time to first deploy for new devs (target: <1 day)
- Self-service ratio (% of infra without tickets, target: >80%)
- Docs search success rate (target: >70%)
Row 3: Feedback Loops
- CI duration p50/p95 (target: <10min / <20min)
- Local dev loop time (target: <5s)
- Preview env availability (target: <5min after PR)
Row 4: Satisfaction
- Developer NPS (quarterly, target: >30)
- Platform adoption rate (target: >80%)
- Voluntary attrition (lagging indicator)

Metrics without action are vanity. Here’s how I use DevEx data:
Signal: PR review time p95 > 24 hours
Action: Implement automated code review bot + Slack notifications
Result: p95 dropped to 8 hours
Signal: Time to first deploy for new devs > 5 days
Action: Build onboarding golden path with Backstage template
Result: Dropped to 4 hours
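A golden path like this is typically a Backstage Software Template. A minimal sketch of the `template.yaml`; the org, template name, and skeleton path are placeholders:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: service-golden-path        # placeholder name
  title: New Service (Golden Path)
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Service details
      required: [name]
      properties:
        name:
          type: string
  steps:
    - id: fetch
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton            # repo skeleton with CI + deploy wired in
        values:
          name: ${{ parameters.name }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        repoUrl: github.com?owner=my-org&repo=${{ parameters.name }}
```

The skeleton carries the CI pipeline, Dockerfile, and deployment manifests, so a new developer's first deploy is "fill in a form," not "read five wikis."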
Signal: Developer satisfaction with CI < 3/5
Action: Parallel test execution, build caching
Result: CI duration cut 60%, satisfaction rose to 4.2/5

Many DevEx improvements involve automating repetitive setup. I use Ansible to standardize developer environments, IDE configurations, and local tooling. The infrastructure-as-code approach at Ansible Pilot applies to developer machines too — consistent environments eliminate “works on my machine” entirely.
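As a flavor of what that looks like, a minimal playbook sketch; the host group, package list, and file paths are placeholders:

```yaml
# Sketch of a developer-workstation playbook (names are placeholders).
- name: Standardize developer environments
  hosts: dev_workstations
  become: true
  tasks:
    - name: Install baseline developer tooling
      ansible.builtin.package:
        name:
          - git
          - make
          - podman
        state: present

    - name: Deploy shared shell configuration
      ansible.builtin.copy:
        src: files/profile.d/devex.sh
        dest: /etc/profile.d/devex.sh
        mode: "0644"
```

Because the playbook is idempotent, re-running it converges every machine to the same state instead of accumulating per-developer drift.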
Measure the system, not the people. DevEx metrics should identify platform problems, not rank developers.
Good DevEx is invisible. When developers don’t think about their tools, they’re thinking about their product. That’s the goal.
AI & Cloud Advisor with 18+ years experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.