Week 1: Model Versioning, Deployment Strategies & CI/CD
Deploy ML models reliably in production: versioning with MLflow, automated CI/CD, canary deployments, A/B testing, and drift detection pipelines.
- Implement blue-green and canary deployment strategies for ML models
- Build automated retraining pipelines triggered by drift alerts
- Design model serving infrastructure (batch, online, streaming)
- Set up comprehensive model monitoring with alerting
This first lecture establishes the foundational framework for MLOps & Model Deployment. By the end of this session, you will have the conceptual grounding and practical starting point needed for the rest of the course.
Key Concepts
The lecture introduces the four main pillars of this course:
- Deployment Strategies: Blue-Green, Canary, Shadow
- Model Serving: REST, gRPC, Streaming
- Automated Retraining Pipelines
- Drift Detection & Alerting

Each will be explored in depth over the 14-week curriculum, with hands-on projects reinforcing theory at every stage.
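To make the "Model Serving: REST" pillar concrete, here is a minimal online-serving sketch using only the Python standard library. The `/predict` route, the request schema, and the scoring rule are illustrative assumptions, not part of the course materials:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def score(features):
    # Placeholder "model": a fixed linear rule standing in for a trained
    # fraud model (assumption for illustration only).
    return 0.5 * features.get("amount", 0.0)


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept JSON features on a hypothetical /predict route.
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"score": score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve locally:
# HTTPServer(("127.0.0.1", 8080), PredictHandler).serve_forever()
```

In practice a framework such as FastAPI or a dedicated model server would replace this handler, but the request/response contract is the same idea.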
This Week's Focus
Focus on mastering two of the pillars: Deployment Strategies (Blue-Green, Canary, Shadow) and Model Serving (REST, gRPC, Streaming). These are the prerequisites for everything in Week 2. The concepts build on each other, so do not skip the practice exercises.
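The core mechanic behind a canary deployment is a traffic splitter: a small, fixed fraction of live requests goes to the candidate model while the rest stays on the stable one. A minimal sketch, assuming hypothetical `predict_stable`/`predict_canary` endpoints and an illustrative 5% split:

```python
import random

CANARY_FRACTION = 0.05  # illustrative: send 5% of traffic to the candidate


def predict_stable(features):
    # Stand-in for the currently deployed model (assumption).
    return {"model": "stable-v1", "score": 0.1}


def predict_canary(features):
    # Stand-in for the new candidate model (assumption).
    return {"model": "canary-v2", "score": 0.2}


def route(features, rng=random.random):
    """Route one request: a small fraction goes to the canary model."""
    if rng() < CANARY_FRACTION:
        return predict_canary(features)
    return predict_stable(features)
```

The `rng` parameter is injected so the split is testable; a production router would typically hash a stable request key instead, so the same user consistently hits the same model variant.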
AIE301 Project 1: Full MLOps Pipeline
Build a complete MLOps pipeline for a fraud detection model: automated training, canary deployment, A/B testing framework, drift monitoring, and auto-remediation.
- Canary deployment with traffic splitting
- Automated retraining trigger on data drift
- A/B test framework with statistical significance tracking
- Runbook for model incidents
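One way to sketch the "retraining trigger on data drift" deliverable is the Population Stability Index (PSI) over a feature, compared against the common 0.2 rule-of-thumb threshold. The bucket count, threshold, and `trigger` callback are illustrative assumptions, not the project's required design:

```python
import math


def psi(expected, actual, buckets=10):
    """PSI between two 1-D samples, using quantile buckets of `expected`."""
    exp_sorted = sorted(expected)
    # Bucket edges at quantiles of the reference distribution.
    edges = [exp_sorted[int(i * (len(exp_sorted) - 1) / buckets)]
             for i in range(1, buckets)]

    def fractions(sample):
        counts = [0] * buckets
        for x in sample:
            counts[sum(1 for e in edges if x > e)] += 1
        # Floor at a tiny value to avoid log(0) in empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


PSI_THRESHOLD = 0.2  # common rule of thumb: > 0.2 suggests significant shift


def maybe_trigger_retraining(reference, live, trigger):
    """Check live data against the training reference; fire on drift."""
    score = psi(reference, live)
    if score > PSI_THRESHOLD:
        trigger(score)  # e.g. enqueue a retraining job, page on-call
    return score
```

In a real pipeline the `trigger` callback would kick off the automated training job and the drift score would be emitted to the monitoring system for alerting.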
The following practice questions represent the style and difficulty of what you'll see on the midterm and final. Start thinking about them now.
- Explain the difference between blue-green deployment and canary deployment for ML models.
- What is shadow deployment? When would you use it instead of canary?
- Design a data drift detection system that triggers model retraining automatically.