🎓 University of Aliens — Course Portal
📊 Data Science · MATH101 · Week 1 of 14 · BSc Y1 S1 · ⏱ ~50 min

Week 1: Derivatives, the Chain Rule & Gradient Descent

Master derivatives, integrals, and multivariate calculus — the mathematical engine behind every optimization algorithm in machine learning.

🎬 Lecture video: MATH101, Lecture 1 (MIT OpenCourseWare, CC BY-NC-SA)
🎯 Learning Objectives
  • Compute derivatives using limit definition and rules
  • Apply the chain rule to composite functions
  • Understand gradient descent as iterative derivative application
  • Calculate partial derivatives for multivariate functions
Topics Covered in This Lecture
Limits & Continuity
Differentiation Rules
Chain Rule
Partial Derivatives & Gradients
📖 Lecture Overview

This first lecture establishes the foundational framework for Calculus for Data Scientists. By the end of this session, you will have the conceptual grounding and practical starting point needed for the rest of the course.

Why this matters: Derivatives and multivariate calculus are the mathematical engine behind every optimization algorithm in machine learning. This lecture sets up everything that follows, so make sure you understand the core concepts before proceeding to Week 2.

Key Concepts

The lecture introduces the four main pillars of this course: Limits & Continuity, Differentiation Rules, Chain Rule, Partial Derivatives & Gradients. Each will be explored in depth over the 14-week curriculum, with hands-on projects reinforcing theory at every stage.

```python
# Quick Start: verify your environment is ready for MATH101
import sys

print(f"Python {sys.version}")

# Check key libraries are installed
try:
    import numpy, pandas, matplotlib
    print("✅ Core libraries ready")
except ImportError as e:
    print(f"❌ Missing: {e} — run: pip install numpy pandas matplotlib")
```

This Week's Focus

Focus on mastering: Limits & Continuity and Differentiation Rules. These are the prerequisites for everything in Week 2. The concepts build on each other — do not skip the practice exercises.
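As you practice the differentiation rules, it helps to sanity-check answers numerically. A minimal sketch (the function f(x) = x² and the step size h are my own illustrative choices) comparing a central-difference estimate against the known derivative:

```python
# Finite-difference check of a derivative (illustrative sketch).
# f(x) = x**2 has exact derivative f'(x) = 2x; compare it against
# the central-difference approximation (f(x+h) - f(x-h)) / (2h).

def f(x):
    return x ** 2

def central_diff(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 3.0
approx = central_diff(f, x)
exact = 2 * x
print(approx, exact)  # the two values agree to high precision
assert abs(approx - exact) < 1e-6
```

The same check works for any rule you practice this week: differentiate by hand, then confirm with a small h.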

📋 Project 1 of 3 50% of Final Grade

MATH101 Project 1: Gradient Descent from Scratch

Implement gradient descent in pure Python/NumPy to minimize a quadratic cost function. Visualize the loss surface, convergence path, and compare different learning rates.

  • Pure Python gradient descent implementation
  • Loss surface visualization (3D + contour plot)
  • Convergence analysis for 3 learning rates
  • Written derivation of the gradient update rule
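To give a feel for the core loop, here is a minimal sketch of a gradient descent implementation. The cost function f(w) = (w − 3)² and the hyperparameters are hypothetical stand-ins, not the official Project 1 specification:

```python
# Gradient descent on a 1-D quadratic cost f(w) = (w - 3)**2.
# (Hypothetical example cost; Project 1 defines its own function.)

def grad(w):
    return 2 * (w - 3)    # df/dw

w = 0.0                   # initial guess
lr = 0.1                  # learning rate
history = [w]
for _ in range(50):
    w = w - lr * grad(w)  # update rule: w <- w - lr * df/dw
    history.append(w)

print(f"w after 50 steps: {w:.4f}")  # approaches the minimum at w = 3
```

For the actual project you would extend this to a NumPy vector of parameters, record `history` for the convergence plots, and repeat the run for each learning rate.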
  • 3 Projects: 50%
  • Midterm Exam: 20%
  • Final Exam: 30%
📝 Sample Exam Questions

These represent the style and difficulty of questions you'll see on the midterm and final. Start thinking about them now.

Conceptual Short Answer

Using the chain rule, find d/dx[sin(x²)].
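A useful study habit is verifying chain-rule answers numerically before the exam. The chain rule gives d/dx sin(x²) = 2x·cos(x²); a quick finite-difference check (the test point x = 1.3 is arbitrary):

```python
import math

# Chain rule: d/dx sin(x**2) = cos(x**2) * d/dx(x**2) = 2x * cos(x**2).
# Verify numerically with a central difference at an arbitrary point.

def f(x):
    return math.sin(x ** 2)

def analytic(x):
    return 2 * x * math.cos(x ** 2)

x = 1.3
h = 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(numeric, analytic(x))  # the two values agree closely
assert abs(numeric - analytic(x)) < 1e-6
```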

Analysis Short Answer

Explain why a learning rate that is too large causes gradient descent to diverge.
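You can observe the behaviour this question asks about directly. A sketch with an arbitrary quadratic (my choice, not from the course): for f(w) = w², the update w ← w − lr·2w multiplies w by (1 − 2·lr) each step, so when |1 − 2·lr| > 1 the iterates overshoot the minimum by a growing amount:

```python
# Effect of learning rate on gradient descent for f(w) = w**2
# (grad = 2w). Each update: w <- w - lr * 2w = (1 - 2*lr) * w.
# |1 - 2*lr| < 1 -> converges; |1 - 2*lr| > 1 -> diverges.

def run(lr, steps=20, w0=1.0):
    w = w0
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

small = run(0.1)  # factor 0.8 per step: shrinks toward the minimum
large = run(1.1)  # factor -1.2 per step: overshoots, magnitude grows
print(small, large)
assert abs(small) < 0.1
assert abs(large) > 1.0
```

The sign flip in the −1.2 factor is the characteristic zig-zag divergence you should describe in your answer.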

Applied Code / Proof

Write the gradient update rule for the linear regression loss L = (1/2n)Σᵢ(ŷᵢ − yᵢ)².
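For self-study, the standard form of this update, assuming the common convention ŷᵢ = w·xᵢ + b (the question itself leaves the model unstated):

```latex
% Loss: L = \frac{1}{2n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2,
% with \hat{y}_i = w x_i + b.
%
% Gradients (the 1/2 cancels the 2 from the square):
\frac{\partial L}{\partial w} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)\, x_i,
\qquad
\frac{\partial L}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)
%
% Update rule with learning rate \alpha:
w \leftarrow w - \alpha \,\frac{\partial L}{\partial w},
\qquad
b \leftarrow b - \alpha \,\frac{\partial L}{\partial b}
```

On the exam you would also be expected to show the chain-rule step that produces the (ŷᵢ − yᵢ)·xᵢ term.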