Platform Engineering

Developer Experience Metrics That Actually Matter

Luca Berton 1 min read
#developer-experience#devex#metrics#dora#platform-engineering#productivity

Beyond DORA

DORA metrics (deployment frequency, lead time, MTTR, change failure rate) measure delivery performance. They don’t measure developer experience. A team can have great DORA numbers while developers are miserable — fighting tooling, waiting for infrastructure, and context-switching between 12 different dashboards.

Developer experience (DevEx) metrics capture what DORA misses.

The Three Dimensions of DevEx

Research from DX, building on the SPACE framework, identifies three core dimensions:

1. Flow State

Can developers get into and stay in flow?

Metrics:
- Uninterrupted coding time (hours/day with no meetings/interrupts)
- Context switches per day (tool/task switches)
- Build/test wait time (seconds idle waiting for CI)
- PR review turnaround (hours from PR to first review)
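PR review turnaround is easy to compute once you have the timestamps; a minimal sketch, assuming you have already pulled (opened_at, first_review_at) pairs from your Git host's API:

```python
from datetime import datetime, timedelta
from statistics import median

def review_turnaround_hours(prs):
    """Hours from PR open to first review for each (opened_at, reviewed_at) pair."""
    return [(reviewed - opened) / timedelta(hours=1) for opened, reviewed in prs]

# Two hypothetical PRs: one reviewed in 2h, one in 6h.
prs = [
    (datetime(2026, 1, 5, 9, 0), datetime(2026, 1, 5, 11, 0)),
    (datetime(2026, 1, 5, 10, 0), datetime(2026, 1, 5, 16, 0)),
]
print(median(review_turnaround_hours(prs)))  # 4.0
```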

2. Cognitive Load

How much mental overhead does the tooling impose?

Metrics:
- Time to first deploy (new dev → first production deploy)
- Number of tools to complete a task (deploy = how many CLIs?)
- Documentation findability score (can devs find what they need?)
- "How many browser tabs" test (tabs open to deploy = complexity)

3. Feedback Loops

How quickly do developers learn if their code works?

Metrics:
- Local build time (seconds from save to seeing result)
- CI pipeline duration (commit to green/red)
- Preview environment availability (PR → working URL)
- Error message clarity (can devs self-diagnose?)
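The local loop is the one feedback metric you can measure without any API at all. A rough sketch that times an arbitrary build or test command (the command is a placeholder for whatever your save-to-result step runs):

```python
import subprocess
import sys
import time

def timed_feedback_loop(cmd):
    """Run a local build/test command; return (passed, seconds of wait).

    The elapsed time is the raw material for a save-to-result metric.
    """
    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0, time.monotonic() - start

# Placeholder command: a no-op Python invocation stands in for a real build.
ok, secs = timed_feedback_loop([sys.executable, "-c", "pass"])
print(f"passed={ok} feedback={secs:.2f}s")
```

Run this on every local build for a week and you have a real p50 for the inner loop, not a guess.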

Measuring DevEx in Practice

Automated Metrics

# Collect daily snapshots from the GitLab/GitHub API; each helper below
# (pr_review_time, ci_pipeline_duration, ...) wraps a paginated API query.
class DevExMetrics:
    async def collect_daily(self):
        return {
            'pr_review_time_p50': await self.pr_review_time(percentile=50),
            'pr_review_time_p95': await self.pr_review_time(percentile=95),
            'ci_duration_p50': await self.ci_pipeline_duration(percentile=50),
            'merge_to_deploy_time': await self.merge_to_deploy(),
            'build_failure_rate': await self.build_failures() / await self.total_builds(),
            'rollback_rate': await self.rollbacks() / await self.deploys(),
            'time_to_first_commit': await self.new_dev_first_commit(),
        }
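The collector methods are left abstract above. As one concrete example, here is how a percentile collector might work over review latencies already fetched from the API; the `DevExCollectors` class and its `review_hours` field are illustrative, not part of the original code:

```python
import asyncio
from statistics import quantiles

class DevExCollectors:
    """Sketch of one collector; review_hours would come from the Git host's API."""

    def __init__(self, review_hours):
        self.review_hours = review_hours

    async def pr_review_time(self, percentile):
        # quantiles(n=100) returns the 1st..99th percentile cut points,
        # so index percentile-1 is the requested percentile.
        cuts = quantiles(self.review_hours, n=100)
        return cuts[percentile - 1]

c = DevExCollectors(review_hours=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(asyncio.run(c.pr_review_time(percentile=50)))  # 5.5
```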

Survey Metrics (Quarterly)

Rate 1-5:
1. "I can find the documentation I need quickly"
2. "Setting up a new project is straightforward"
3. "I spend most of my time on product work, not tooling"
4. "When something breaks, I can diagnose it myself"
5. "I feel productive with our current tools"
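Aggregating the ratings is simple. A sketch with hypothetical responses that flags any question averaging below 3/5 as a platform problem worth digging into:

```python
from statistics import mean

# Hypothetical quarterly results: question -> list of 1-5 ratings.
responses = {
    "docs_findability": [4, 5, 3, 4, 2],
    "project_setup":    [2, 3, 2, 1, 3],
    "time_on_product":  [4, 4, 5, 3, 4],
}

# Anything averaging under 3/5 gets flagged for follow-up.
flagged = {q: round(mean(r), 1) for q, r in responses.items() if mean(r) < 3}
print(flagged)  # {'project_setup': 2.2}
```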

Open-ended:
- "What's the most frustrating part of your development workflow?"
- "If you could change one thing about our platform, what would it be?"

The Metrics Dashboard

I build DevEx dashboards with the same Grafana stack I use for infrastructure monitoring at Kubernetes Recipes:

Row 1: Flow Metrics
  - Avg uninterrupted coding time (target: >3h/day)
  - PR review turnaround (target: <4h for p50)
  - CI queue time (target: <2min)

Row 2: Cognitive Load
  - Time to first deploy for new devs (target: <1 day)
  - Self-service ratio (% of infra without tickets, target: >80%)
  - Docs search success rate (target: >70%)

Row 3: Feedback Loops
  - CI duration p50/p95 (target: <10min / <20min)
  - Local dev loop time (target: <5s)
  - Preview env availability (target: <5min after PR)

Row 4: Satisfaction
  - Developer NPS (quarterly, target: >30)
  - Platform adoption rate (target: >80%)
  - Voluntary attrition (lagging indicator)
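Before wiring these into Grafana, the target checks themselves are trivial to express. A sketch with illustrative values (the metric names and numbers are placeholders, not real measurements):

```python
# Targets from the dashboard rows above; each entry is
# (current value, target, whether lower is better).
metrics = {
    "pr_review_p50_h":    (6.0,  4.0,  True),
    "ci_p50_min":         (8.0,  10.0, True),
    "self_service_ratio": (0.85, 0.80, False),
    "dev_nps":            (22,   30,   False),
}

def status(value, target, lower_is_better):
    """OK if the metric meets its target in the right direction, else MISS."""
    ok = value <= target if lower_is_better else value >= target
    return "OK" if ok else "MISS"

for name, (value, target, lower) in metrics.items():
    print(f"{name}: {value} (target {target}) -> {status(value, target, lower)}")
```

The same comparison logic can feed Grafana thresholds or alert rules directly.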

Turning Metrics Into Action

Metrics without action are vanity. Here’s how I use DevEx data:

Signal: PR review time p95 > 24 hours
Action: Implement automated code review bot + Slack notifications
Result: p95 dropped to 8 hours

Signal: Time to first deploy for new devs > 5 days
Action: Build onboarding golden path with Backstage template
Result: Dropped to 4 hours

Signal: Developer satisfaction with CI < 3/5
Action: Parallel test execution, build caching
Result: CI duration cut 60%, satisfaction rose to 4.2/5

The Ansible Automation Connection

Many DevEx improvements involve automating repetitive setup. I use Ansible to standardize developer environments, IDE configurations, and local tooling. The infrastructure-as-code approach at Ansible Pilot applies to developer machines too — consistent environments eliminate “works on my machine” entirely.
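A sketch of what that might look like as a playbook; the module names are real (`ansible.builtin.package`, `ansible.builtin.copy`), but the host group, package list, and file paths are illustrative:

```yaml
---
# Hypothetical playbook standardizing a developer workstation.
- name: Standardize developer environment
  hosts: dev_workstations
  become: true
  tasks:
    - name: Install core developer tooling
      ansible.builtin.package:
        name:
          - git
          - podman
        state: present

    - name: Distribute a shared editor configuration
      ansible.builtin.copy:
        src: files/editorconfig
        dest: /etc/skel/.editorconfig
        mode: "0644"
```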

What Not to Measure

  • Lines of code (meaningless)
  • Commits per day (incentivizes wrong behavior)
  • Hours worked (not relevant to outcomes)
  • Individual developer metrics (creates toxic competition)

Measure the system, not the people. DevEx metrics should identify platform problems, not rank developers.

Good DevEx is invisible. When developers don’t think about their tools, they’re thinking about their product. That’s the goal.


Luca Berton

AI & Cloud Advisor with 18+ years experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.
