# AI Governance Is Now Mandatory
The EU AI Act is now in force, with obligations phasing in over the coming years, and AI governance is no longer optional for any organization deploying AI in Europe. Even outside the EU, SOC 2 audits, ISO 27001 certification, and industry regulators increasingly expect a documented AI governance framework.
## EU AI Act: What Engineers Need to Know

### Risk Classification
| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, real-time remote biometric identification in public spaces | Prohibited |
| High | Hiring decisions, credit scoring, medical diagnosis | Full compliance |
| Limited | Chatbots, content generation | Transparency obligations |
| Minimal | Spam filters, game AI | No requirements |
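One way to apply this classification consistently is to encode it in tooling rather than rely on ad-hoc judgment. A minimal sketch (the use-case labels and the default tier are illustrative assumptions, not the Act's legal taxonomy):

```python
# Hypothetical mapping from internal use-case tags to EU AI Act risk tiers.
# Illustrative only, not legal advice.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "hiring": "high",
    "credit_scoring": "high",
    "medical_diagnosis": "high",
    "chatbot": "limited",
    "content_generation": "limited",
    "spam_filter": "minimal",
}


def classify_risk(use_case: str) -> str:
    """Return the risk tier for a known use case, defaulting to 'high'
    so unlisted systems get reviewed rather than silently waved through."""
    return RISK_TIERS.get(use_case, "high")
```

Defaulting unknown use cases to "high" is a deliberate fail-safe: it forces a human to classify anything the mapping has not seen.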
### High-Risk AI Requirements
If your system is classified as high-risk, you must implement:
- **Risk Management System**: continuous risk assessment throughout the AI lifecycle
- **Data Governance**: documentation of training data quality, relevance, and representativeness
- **Technical Documentation**: full documentation of system design, development, and deployment
- **Record Keeping**: automatic logging of all system events
- **Human Oversight**: the ability for humans to understand, monitor, and override AI decisions
- **Accuracy & Robustness**: testing for accuracy, robustness, and cybersecurity
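Record keeping in particular maps directly onto code. A minimal sketch of automatic event logging around an inference call (the decorator and field names are my own, not mandated by the Act):

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

logger = logging.getLogger("ai_audit")


def audit_logged(model_id: str):
    """Decorator that logs every prediction with a timestamp, the input's
    field names, and the output, so each inference leaves an audit record."""
    def decorator(predict_fn):
        @wraps(predict_fn)
        def wrapper(features: dict):
            result = predict_fn(features)
            logger.info(json.dumps({
                "model_id": model_id,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "input_keys": sorted(features.keys()),  # log shape, not raw PII
                "output": result,
            }))
            return result
        return wrapper
    return decorator
```

Logging only the input's field names rather than raw values is one way to keep audit records without copying personal data into your logs.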
## Practical Governance Framework

### Model Registry
```python
import uuid
from datetime import datetime, timezone


def generate_uuid() -> str:
    return str(uuid.uuid4())


class ModelRegistry:
    def __init__(self, store):
        self.store = store  # any backend exposing save(record)

    def register_model(self, model_info):
        record = {
            "model_id": generate_uuid(),
            "name": model_info["name"],
            "version": model_info["version"],
            "training_data": {
                "sources": model_info["data_sources"],
                "size": model_info["dataset_size"],
                "date_range": model_info["data_date_range"],
                "bias_assessment": model_info["bias_report"],
            },
            "evaluation": {
                "accuracy": model_info["accuracy"],
                "fairness_metrics": model_info["fairness"],
                "robustness_tests": model_info["robustness"],
            },
            "deployment": {
                "environment": model_info["target_env"],
                "risk_level": model_info["risk_classification"],
                "approved_by": None,  # requires human approval before release
                "approved_date": None,
            },
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
        self.store.save(record)
        return record["model_id"]
```
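The `approved_by: None` field only matters if something enforces it. One option (a hypothetical gate, not part of any standard API) is to refuse deployment for any registry record lacking human sign-off:

```python
# Hypothetical deployment gate: field names mirror the registry record above.
def assert_deployable(record: dict) -> None:
    """Raise PermissionError if the record has no human approval recorded."""
    if record["deployment"].get("approved_by") is None:
        raise PermissionError(
            f"model {record.get('name', '?')} lacks human approval; "
            "high-risk systems require sign-off before deployment"
        )
```

Calling this from the deployment pipeline turns the human-oversight requirement into a hard stop rather than a checklist item.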
### Automated Compliance Checks

```yaml
# CI/CD pipeline compliance gate
compliance_check:
  stage: validate
  script:
    - python check_bias.py --model $MODEL_PATH --threshold 0.05
    - python check_fairness.py --model $MODEL_PATH --protected-attributes gender,age,ethnicity
    - python check_robustness.py --model $MODEL_PATH --adversarial-tests ./tests/adversarial/
    - python generate_model_card.py --model $MODEL_PATH --output model-card.md
  artifacts:
    paths:
      - model-card.md
      - compliance-report.json
```
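The check scripts above are project-specific, not a standard library. As one sketch of what `check_bias.py` might compute, here is a demographic parity gap check (the metric choice is my assumption; it pairs naturally with the `--threshold 0.05` flag):

```python
import sys
from collections import defaultdict


def parity_gap(predictions, groups):
    """Max difference in positive-prediction rate across groups
    (demographic parity gap). predictions are 0/1, groups are labels."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for pred, group in zip(predictions, groups):
        total[group] += 1
        pos[group] += int(pred)
    rates = [pos[g] / total[g] for g in total]
    return max(rates) - min(rates)


def check(predictions, groups, threshold=0.05):
    """Exit non-zero (failing the CI gate) if the gap exceeds the threshold."""
    gap = parity_gap(predictions, groups)
    if gap > threshold:
        sys.exit(f"bias gate failed: parity gap {gap:.3f} > {threshold}")
    return gap
```

Exiting non-zero is what makes the CI stage fail, which is the whole point of the gate: a biased model never reaches the deploy stage.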
### Model Cards

Every deployed model needs a model card:
```markdown
# Model Card: Customer Support Classifier v2.1

## Model Details
- **Type**: Fine-tuned Granite 8B
- **Task**: Customer support ticket classification
- **Training Data**: 50K support tickets (2024-2025), PII removed

## Intended Use
- Classify incoming support tickets into 12 categories
- NOT intended for: automated response generation, priority escalation

## Performance
- Overall accuracy: 94.2%
- Accuracy by demographic group: [see fairness report]

## Limitations
- Trained on English text only
- May misclassify tickets with technical jargon not in training data
- Requires human review for tickets classified with <80% confidence

## Ethical Considerations
- Bias assessment completed: no significant disparities detected
- Human oversight: all classifications reviewable by support staff
```
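The 80% confidence rule in the card translates into a simple routing decision at serving time. A minimal sketch (the function name and return shape are assumptions for illustration):

```python
REVIEW_THRESHOLD = 0.80  # matches the model card's human-review rule


def route_ticket(category: str, confidence: float) -> dict:
    """Auto-apply confident classifications; queue the rest for a human."""
    needs_review = confidence < REVIEW_THRESHOLD
    return {
        "category": category,
        "confidence": confidence,
        "needs_human_review": needs_review,
        "status": "queued_for_review" if needs_review else "auto_classified",
    }
```

Encoding the threshold as a named constant keeps the serving code and the model card in sync: when the card changes, there is exactly one place to update.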
## Key Takeaways

- Classify your AI systems: know your risk level under the EU AI Act
- Document everything: training data, model architecture, evaluation results
- Automate compliance: build checks into CI/CD rather than after-the-fact audits
- Model cards are mandatory: every production model needs one
- Human oversight is non-negotiable: design for human review from the start
Need help with AI governance? I help organizations build compliant AI systems. Get in touch.
