Week 1: Algorithmic Bias, Fairness & Privacy
Examine algorithmic bias, data privacy regulations, accountability frameworks, and the ethical responsibilities of data scientists in society.
- Identify sources of bias in data collection and model training
- Apply fairness metrics (demographic parity, equalized odds)
- Understand key data privacy regulations, including GDPR and CCPA
- Construct an AI ethics impact assessment
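The two fairness metrics named above can be made concrete with a small amount of code. The sketch below is a minimal illustration (not a production audit tool): it computes the demographic parity gap (difference in positive-prediction rates between two groups) and the equalized odds gap (worst-case difference in true-positive and false-positive rates). The toy arrays and function names are invented for this example.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case gap across TPR (y_true == 1) and FPR (y_true == 0)."""
    gaps = []
    for label in (1, 0):
        mask = y_true == label
        rate_0 = y_pred[mask & (group == 0)].mean()
        rate_1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)

# Toy data: 8 individuals, a binary protected attribute in `group`
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))           # |0.25 - 0.75| = 0.5
print(equalized_odds_gap(y_true, y_pred, group))       # 0.5
```

Note the two metrics answer different questions: demographic parity compares raw selection rates, while equalized odds conditions on the true outcome, which is why (outside of degenerate cases) a model generally cannot satisfy both at once.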
This first lecture establishes the foundational framework for Ethics in Data Science. By the end of this session, you will have the conceptual grounding and practical starting point needed for the rest of the course.
Key Concepts
The lecture introduces the four main pillars of this course:
- Algorithmic Bias & Fairness Metrics
- Data Privacy & Regulation (GDPR/CCPA)
- Explainability & Model Transparency
- Responsible AI Frameworks
Each will be explored in depth over the 14-week curriculum, with hands-on projects reinforcing theory at every stage.
This Week's Focus
Focus on mastering: Algorithmic Bias & Fairness Metrics and Data Privacy & Regulation (GDPR/CCPA). These are the prerequisites for everything in Week 2. The concepts build on each other — do not skip the practice exercises.
DS306 Project 1: Bias Audit of a Predictive Model
Audit a publicly available predictive model (e.g., COMPAS recidivism, loan approval) for demographic bias. Apply fairness metrics and propose bias mitigation strategies.
- Bias audit report with statistical evidence
- Fairness metric calculations (3+ metrics)
- Mitigation strategy with before/after comparison
- Policy recommendation for responsible deployment
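One simple mitigation strategy you could try for the before/after comparison is post-processing with group-specific decision thresholds. The sketch below is hypothetical (the scores are synthetic, and the variable names are invented for illustration): it simulates a model that systematically scores one group lower, then picks a per-group threshold so both groups are selected at roughly the same rate, and reports the demographic parity gap before and after.

```python
import numpy as np

def positive_rate_gap(y_pred, group):
    """Demographic parity gap: difference in selection rates between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
# Synthetic scores: group 1 is systematically scored 0.15 lower (simulated bias)
scores = rng.uniform(size=n) - 0.15 * group

# Before mitigation: a single global threshold for everyone
before = (scores >= 0.5).astype(int)

# After mitigation: per-group thresholds chosen so each group's
# selection rate matches the overall pre-mitigation rate
target = before.mean()
thresholds = {g: np.quantile(scores[group == g], 1 - target) for g in (0, 1)}
after = np.array([scores[i] >= thresholds[group[i]] for i in range(n)]).astype(int)

print("gap before:", positive_rate_gap(before, group))
print("gap after: ", positive_rate_gap(after, group))
```

In your report, pair a comparison like this with a discussion of the trade-off: equalizing selection rates this way can change per-group error rates, which is exactly the tension your policy recommendation should address.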
Practice Questions
These represent the style and difficulty of questions you'll see on the midterm and final. Start thinking about them now.
Define demographic parity and equalized odds as fairness metrics. Can both be satisfied simultaneously?
What is the difference between explainability and interpretability in AI models?
Name three data collection practices that can introduce bias into a machine learning dataset.