
AI Model Governance: Compliance Frameworks for Enterprise ML

Navigate EU AI Act compliance, model risk management, and audit requirements for enterprise ML. Practical governance frameworks for AI teams.

Luca Berton · 1 min read

AI Governance Is Now Mandatory

The EU AI Act entered into force in August 2024, with its obligations phasing in over the following years, so AI governance is no longer optional for any organization deploying AI in Europe. Even outside the EU, SOC 2, ISO 27001, and industry regulators increasingly expect AI governance frameworks.

EU AI Act: What Engineers Need to Know

Risk Classification

| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, real-time biometric surveillance | Prohibited |
| High | Hiring decisions, credit scoring, medical diagnosis | Full compliance |
| Limited | Chatbots, content generation | Transparency obligations |
| Minimal | Spam filters, game AI | No requirements |
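As a rough sketch (all names here are hypothetical, not taken from the Act itself), the classification table above could be encoded as a lookup that defaults to high risk, so unknown systems get reviewed rather than waved through:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # full compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no requirements

# Hypothetical mapping from internal use-case labels to EU AI Act risk levels.
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "hiring_decisions": RiskLevel.HIGH,
    "credit_scoring": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk level for a known use case; default to HIGH
    so unclassified systems trigger a human review."""
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)
```

Defaulting to high risk is the conservative choice: a misclassification then costs a review, not a compliance gap.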

High-Risk AI Requirements

If your system is classified as high-risk, you must implement:

  1. Risk Management System: Continuous risk assessment throughout the AI lifecycle
  2. Data Governance: Documentation of training data quality, relevance, and representativeness
  3. Technical Documentation: Full documentation of system design, development, and deployment
  4. Record Keeping: Automatic logging of all system events
  5. Human Oversight: Ability for humans to understand, monitor, and override AI decisions
  6. Accuracy & Robustness: Testing for accuracy, robustness, and cybersecurity
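Requirement 4, automatic record keeping, can be sketched as a decorator that appends one JSON line per prediction call. This is a minimal illustration; the field names and log destination are my assumptions, not a prescribed format:

```python
import functools
import json
import time
import uuid

def audit_logged(log_path="audit.jsonl"):
    """Decorator sketch: append a JSON line per call, capturing
    inputs, output, and latency for later audits."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            with open(log_path, "a") as f:
                f.write(json.dumps({
                    "event_id": str(uuid.uuid4()),
                    "function": fn.__name__,
                    "inputs": repr((args, kwargs)),
                    "output": repr(result),
                    "latency_s": round(time.time() - start, 4),
                    "timestamp": time.time(),
                }) + "\n")
            return result
        return inner
    return wrap

@audit_logged()
def predict(ticket_text):
    # Placeholder for the real model call.
    return "billing"
```

In production you would write to an append-only store rather than a local file, but the shape of the record is the point: every event is reconstructable after the fact.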

Practical Governance Framework

Model Registry

```python
import uuid
from datetime import datetime, timezone


class ModelRegistry:
    def __init__(self, store):
        self.store = store  # any backend with a save() method

    def register_model(self, model_info):
        record = {
            "model_id": str(uuid.uuid4()),
            "name": model_info["name"],
            "version": model_info["version"],
            "training_data": {
                "sources": model_info["data_sources"],
                "size": model_info["dataset_size"],
                "date_range": model_info["data_date_range"],
                "bias_assessment": model_info["bias_report"],
            },
            "evaluation": {
                "accuracy": model_info["accuracy"],
                "fairness_metrics": model_info["fairness"],
                "robustness_tests": model_info["robustness"],
            },
            "deployment": {
                "environment": model_info["target_env"],
                "risk_level": model_info["risk_classification"],
                "approved_by": None,  # requires human approval
                "approved_date": None,
            },
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
        self.store.save(record)
        return record["model_id"]
```
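The registry deliberately leaves `approved_by` empty at registration. A hypothetical companion function for the human-approval step (my sketch, not a standard API) might fill it in, refusing to approve a high-risk model without a named approver:

```python
from datetime import datetime, timezone

def approve_model(record: dict, approver: str) -> dict:
    """Hypothetical sketch of the human-approval step: fill in the
    approved_by/approved_date fields the registry left as None.
    Raises if a high-risk model lacks a named human approver."""
    if record["deployment"]["risk_level"] == "high" and not approver:
        raise ValueError("High-risk models require a named human approver")
    record["deployment"]["approved_by"] = approver
    record["deployment"]["approved_date"] = datetime.now(timezone.utc).isoformat()
    return record
```

Making approval a separate, explicit call keeps the audit trail honest: registration and deployment sign-off are distinct events with distinct owners.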

Automated Compliance Checks

```yaml
# CI/CD pipeline compliance gate
compliance_check:
  stage: validate
  script:
    - python check_bias.py --model $MODEL_PATH --threshold 0.05
    - python check_fairness.py --model $MODEL_PATH --protected-attributes gender,age,ethnicity
    - python check_robustness.py --model $MODEL_PATH --adversarial-tests ./tests/adversarial/
    - python generate_model_card.py --model $MODEL_PATH --output model-card.md
  artifacts:
    paths:
      - model-card.md
      - compliance-report.json
```
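The check scripts in the gate are placeholders. As one example, a minimal `check_bias.py` could compute a demographic-parity gap and compare it to the threshold; the metric choice and the sample data below are illustrative assumptions, not the only valid bias test:

```python
def demographic_parity_gap(predictions, groups):
    """Max difference in positive-prediction rate across groups.
    predictions: parallel list of 0/1 outcomes; groups: group labels."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    per_group = [pos / total for total, pos in rates.values()]
    return max(per_group) - min(per_group)

# Gate logic as the CI script might apply it (threshold from the pipeline).
THRESHOLD = 0.05
gap = demographic_parity_gap(
    [1, 0, 1, 1, 0, 1],            # model predictions (illustrative)
    ["a", "a", "a", "b", "b", "b"],  # protected-attribute groups (illustrative)
)
passed = gap <= THRESHOLD
```

The real script would exit non-zero when `passed` is false, failing the pipeline stage so a biased model never reaches the registry.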

Model Cards

Every deployed model needs a model card:

```markdown
# Model Card: Customer Support Classifier v2.1

## Model Details
- **Type**: Fine-tuned Granite 8B
- **Task**: Customer support ticket classification
- **Training Data**: 50K support tickets (2024-2025), PII removed

## Intended Use
- Classify incoming support tickets into 12 categories
- NOT intended for: automated response generation, priority escalation

## Performance
- Overall accuracy: 94.2%
- Accuracy by demographic group: [see fairness report]

## Limitations
- Trained on English text only
- May misclassify tickets with technical jargon not in training data
- Requires human review for tickets classified with <80% confidence

## Ethical Considerations
- Bias assessment completed: no significant disparities detected
- Human oversight: all classifications reviewable by support staff
```
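The card's 80% confidence limitation only matters if serving code enforces it. A minimal sketch (function and label names hypothetical):

```python
def route_ticket(label: str, confidence: float, threshold: float = 0.80):
    """Send low-confidence classifications to a human, per the model
    card's stated limitation; auto-apply the rest."""
    if confidence < threshold:
        return ("human_review", label)
    return ("auto", label)
```

Encoding the limitation as code turns a documentation promise into a testable behavior, which is exactly what auditors want to see.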

Key Takeaways

  1. Classify your AI systems: know your risk level under the EU AI Act
  2. Document everything: training data, model architecture, evaluation results
  3. Automate compliance: build checks into CI/CD, not after-the-fact audits
  4. Model cards are mandatory: every production model needs one
  5. Human oversight is non-negotiable: design for human review from the start

Need help with AI governance? I help organizations build compliant AI systems. Get in touch.
