DevOps

Multi-Cloud Terraform Patterns That Aren't a Nightmare

Luca Berton • 1 min read
#terraform #multi-cloud #iac #aws #gcp #azure

The Multi-Cloud Reality

Nobody chooses multi-cloud. It happens to you. An acquisition brings GCP. A client requires Azure. Your ML team needs AWS SageMaker. Suddenly you're managing infrastructure across three clouds.

The question isn't whether multi-cloud is a good idea; it's how to survive it.

Pattern 1: Provider-Agnostic Modules

Abstract cloud differences behind a consistent interface:

# modules/database/main.tf
# The selector is named "cloud" rather than "provider" to avoid
# confusion with Terraform's provider meta-argument.
variable "cloud" {
  type = string
  validation {
    condition     = contains(["aws", "gcp", "azure"], var.cloud)
    error_message = "Cloud must be aws, gcp, or azure."
  }
}

variable "config" {
  type = object({
    name           = string
    engine         = string
    engine_version = string
    instance_size  = string
    storage_gb     = number
    multi_az       = bool
  })
}

module "aws_rds" {
  source = "./aws"
  count  = var.cloud == "aws" ? 1 : 0
  config = var.config
}

module "gcp_cloudsql" {
  source = "./gcp"
  count  = var.cloud == "gcp" ? 1 : 0
  config = var.config
}

module "azure_flexible" {
  source = "./azure"
  count  = var.cloud == "azure" ? 1 : 0
  config = var.config
}

output "connection_string" {
  value = coalesce(
    try(module.aws_rds[0].connection_string, ""),
    try(module.gcp_cloudsql[0].connection_string, ""),
    try(module.azure_flexible[0].connection_string, "")
  )
}

Usage:

module "payments_db" {
  source = "./modules/database"
  cloud  = "aws"
  config = {
    name           = "payments"
    engine         = "postgres"
    engine_version = "16"
    instance_size  = "medium"
    storage_gb     = 100
    multi_az       = true
  }
}

Same interface, any cloud. I maintain a library of these at Terraform Pilot.
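Inside each provider-specific submodule, the generic config object maps onto native resources. A minimal sketch of what the `./aws` submodule might look like (the resource arguments are illustrative, not a complete production RDS setup; credentials and networking are omitted):

```hcl
# modules/database/aws/main.tf (illustrative sketch)
variable "config" {
  type = object({
    name           = string
    engine         = string
    engine_version = string
    instance_size  = string
    storage_gb     = number
    multi_az       = bool
  })
}

# Translate the generic size label into an RDS instance class
locals {
  instance_class = {
    small  = "db.t3.small"
    medium = "db.t3.medium"
    large  = "db.m6i.xlarge"
  }[var.config.instance_size]
}

resource "aws_db_instance" "this" {
  identifier        = var.config.name
  engine            = var.config.engine
  engine_version    = var.config.engine_version
  instance_class    = local.instance_class
  allocated_storage = var.config.storage_gb
  multi_az          = var.config.multi_az
  # credentials, subnet group, and backups omitted for brevity
}

output "connection_string" {
  value = aws_db_instance.this.endpoint
}
```

The GCP and Azure submodules expose the same `config` input and `connection_string` output, so the wrapper can treat them interchangeably.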

Pattern 2: Repository Structure

terraform/
├── modules/              # Reusable, provider-agnostic
│   ├── database/
│   ├── kubernetes/
│   ├── networking/
│   └── monitoring/
├── environments/
│   ├── production/
│   │   ├── aws-eu/       # AWS eu-west-1
│   │   │   ├── main.tf
│   │   │   ├── variables.tf
│   │   │   └── terraform.tfvars
│   │   ├── gcp-eu/       # GCP europe-west4
│   │   └── azure-eu/     # Azure westeurope
│   ├── staging/
│   └── dev/
└── shared/               # Cross-cloud resources
    ├── dns/              # Route53/Cloud DNS
    ├── monitoring/       # Datadog/Grafana Cloud
    └── iam/              # Cross-cloud identity

Pattern 3: State Management

# Per-cloud, per-environment state
# AWS environments → S3 backend
terraform {
  backend "s3" {
    bucket         = "terraform-state-prod"
    key            = "aws-eu/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

# GCP environments → GCS backend
terraform {
  backend "gcs" {
    bucket = "terraform-state-prod-gcp"
    prefix = "gcp-eu"
  }
}

Never share state across clouds. Each cloud environment gets its own state file. Cross-cloud references use data sources or remote state:

# In GCP config, reference AWS outputs
data "terraform_remote_state" "aws" {
  backend = "s3"
  config = {
    bucket = "terraform-state-prod"
    key    = "aws-eu/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Use AWS VPN endpoint in GCP
resource "google_compute_vpn_tunnel" "to_aws" {
  peer_ip = data.terraform_remote_state.aws.outputs.vpn_endpoint_ip
}
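For the remote-state lookup above to work, the AWS root module has to export the value. A sketch of the corresponding output, assuming the AWS side defines an `aws_vpn_connection` named `main` (adjust to however your VPN is actually built):

```hcl
# environments/production/aws-eu/outputs.tf (sketch)
output "vpn_endpoint_ip" {
  description = "Public IP of the AWS-side VPN tunnel, consumed by the GCP config"
  value       = aws_vpn_connection.main.tunnel1_address
}
```

Keeping these cross-cloud contract outputs in a dedicated outputs.tf makes it obvious which values other clouds depend on.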

Pattern 4: Size Mapping

Cloud instance sizes don't map 1:1. Abstract them:

locals {
  size_map = {
    small = {
      aws   = "t3.small"
      gcp   = "e2-small"
      azure = "Standard_B1ms"
    }
    medium = {
      aws   = "t3.medium"
      gcp   = "e2-medium"
      azure = "Standard_B2s"
    }
    large = {
      aws   = "m6i.xlarge"
      gcp   = "n2-standard-4"
      azure = "Standard_D4s_v5"
    }
  }

  actual_size = local.size_map[var.instance_size][var.cloud]
}
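The resolved size then feeds whichever compute resource the module creates. For example, on the AWS path (a sketch; `var.ami_id` is a hypothetical input, not from the original module):

```hcl
# local.actual_size resolves to e.g. "t3.medium" for size "medium" on AWS
resource "aws_instance" "app" {
  ami           = var.ami_id        # hypothetical AMI input variable
  instance_type = local.actual_size
}
```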

Anti-Patterns

  1. Lowest common denominator: don't avoid cloud-specific features. Use them, but abstract the interface.
  2. One mega-module: don't create a god module that handles every cloud. Keep modules focused.
  3. Copy-paste across clouds: use the module pattern above. Duplicated code drifts.
  4. Managing all clouds from one CI pipeline: separate pipelines per cloud. A GCP change shouldn't risk AWS.
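One way to keep pipelines separated (a GitHub Actions sketch; workflow names and paths are illustrative and assume the repository layout from Pattern 2) is to trigger each cloud's workflow only on changes under its own directory:

```yaml
# .github/workflows/terraform-aws.yml (illustrative sketch)
name: terraform-aws
on:
  push:
    paths:
      - "environments/production/aws-eu/**"
      - "modules/**"   # shared modules still affect AWS
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform -chdir=environments/production/aws-eu init -input=false
      - run: terraform -chdir=environments/production/aws-eu plan -input=false
```

A mirror-image workflow per cloud means a bad GCP change can never block or break an AWS deploy.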

Automating Multi-Cloud with Ansible

Terraform handles provisioning. Ansible handles configuration. Together they're powerful for multi-cloud:

# Playbook run against an inventory built from Terraform outputs
- name: Configure databases across clouds
  hosts: databases
  tasks:
    - name: Apply security hardening
      ansible.builtin.include_role:
        name: database-hardening
      # Same role works on RDS, Cloud SQL, and Azure Flexible Server
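One way to bridge the two (a sketch, not my exact setup): have Terraform render the Ansible inventory itself, using the `local_file` resource from the hashicorp/local provider. Here `var.database_hosts` is a hypothetical `list(string)` of database endpoints collected from module outputs:

```hcl
# Render an INI inventory for Ansible from Terraform outputs (sketch)
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content  = <<-EOT
    [databases]
    %{ for host in var.database_hosts ~}
    ${host}
    %{ endfor ~}
  EOT
}
```

After `terraform apply`, run `ansible-playbook -i inventory.ini site.yml` and the playbook targets whatever the clouds actually provisioned.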

I detail the Ansible + Terraform integration at Ansible Pilot and maintain ready-to-use Terraform modules at Terraform Pilot.

The Honest Truth

Multi-cloud doubles your operational complexity. Don’t do it unless you have a business reason. But when you must, these patterns keep it manageable. The key: standardize the interface, not the implementation.


Luca Berton

AI & Cloud Advisor with 18+ years experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.
