Week 1: Uninformed Search, Heuristics & A*
Implement and analyze AI search algorithms: BFS, DFS, A*, minimax with alpha-beta pruning, and constraint satisfaction — the classic AI problem-solving toolkit.
- Implement BFS, DFS, UCS, and A* search algorithms
- Prove A* optimality given an admissible heuristic
- Apply minimax with alpha-beta pruning to two-player games
- Model and solve constraint satisfaction problems
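As a first taste of the first objective, breadth-first search over an adjacency-list graph can be sketched as follows (the graph, node names, and goal test are illustrative, not part of the course materials):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (by edge count) from start to goal, or None."""
    frontier = deque([start])
    parent = {start: None}  # doubles as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph.get(node, []):
            if neighbor not in parent:
                parent[neighbor] = node
                frontier.append(neighbor)
    return None

# Toy graph used only for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because the frontier is a FIFO queue, BFS expands nodes in order of depth, which is why it finds a minimum-edge path; swapping the deque for a stack turns the same skeleton into DFS.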
This first lecture establishes the foundational framework for Search Algorithms & Problem Solving. By the end of this session, you will have the conceptual grounding and practical starting point needed for the rest of the course.
Key Concepts
The lecture introduces the four main pillars of this course:
- Uninformed Search: BFS, DFS, UCS
- Informed Search: Greedy, A*, Beam
- Adversarial Search: Minimax & Alpha-Beta
- Constraint Satisfaction: Arc Consistency
Each will be explored in depth over the 14-week curriculum, with hands-on projects reinforcing theory at every stage.
This Week's Focus
Focus on mastering uninformed search (BFS, DFS, UCS) and informed search (Greedy, A*, Beam). These are the prerequisites for everything in Week 2. The concepts build on each other, so do not skip the practice exercises.
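To connect the two families, here is one possible A* skeleton with a pluggable heuristic; the grid world, `grid_neighbors`, and the 4x4 bounds are invented for illustration. With `h = 0` it degenerates to UCS, which is a useful sanity check:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* graph search. neighbors(n) yields (successor, step_cost); h is the heuristic."""
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route to node was found later
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy 4-connected 4x4 grid with a Manhattan-distance heuristic (illustrative only).
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 4 and 0 <= y + dy < 4:
            yield (x + dx, y + dy), 1

path, cost = a_star((0, 0), (3, 3), grid_neighbors,
                    lambda p: abs(p[0] - 3) + abs(p[1] - 3))
print(cost)  # 6
```

Ordering the frontier by f = g + h is what separates A* from UCS (which orders by g alone) and from greedy best-first search (which orders by h alone).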
AI201 Project 1: 15-Puzzle Solver with A*
Implement an A* solver for the 15-puzzle (sliding-tile puzzle). Design and compare three admissible heuristics. Measure nodes expanded, time to solution, and solution quality.
- A* implementation with pluggable heuristics
- 3 admissible heuristics (Manhattan, misplaced, pattern DB)
- Benchmark: nodes expanded vs heuristic quality
- Interactive visualizer of search progress
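Two of the three heuristics can be sketched directly; the state encoding below (a flat tuple of 16 ints, blank = 0, goal = 1..15 then blank) is one reasonable convention, not mandated by the project spec:

```python
# Goal configuration: tiles 1..15 in order, blank (0) in the last cell.
GOAL = tuple(range(1, 16)) + (0,)

def misplaced_tiles(state):
    """Count tiles (excluding the blank) not in their goal position. Admissible:
    every misplaced tile needs at least one move."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def manhattan(state):
    """Sum over tiles of |row - goal_row| + |col - goal_col|, blank excluded.
    Admissible and dominates misplaced_tiles, so A* expands no more nodes with it."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = tile - 1  # tile t belongs at index t - 1
        total += abs(i // 4 - goal_i // 4) + abs(i % 4 - goal_i % 4)
    return total
```

For the benchmark, expect Manhattan distance to expand far fewer nodes than misplaced tiles, and a pattern database to beat both; plotting nodes expanded against heuristic value at the start state makes the comparison concrete.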
These represent the style and difficulty of questions you'll see on the midterm and final. Start thinking about them now.
Prove that A* is optimal if the heuristic is admissible and consistent. Sketch the proof.
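One standard way to sketch the core step, using the usual symbols (h*(n) for the true cost-to-go, C* for the optimal solution cost):

```latex
% Suppose for contradiction that A* pops a suboptimal goal $G_2$, so
% $g(G_2) > C^*$. Some node $n$ on an optimal path is still on the
% frontier, and admissibility ($h(n) \le h^*(n)$) gives
f(n) = g(n) + h(n) \le g(n) + h^*(n) = C^* < g(G_2) = f(G_2)
% A* always pops the minimum-$f$ node, so it would expand $n$ before
% $G_2$: contradiction. Consistency extends the result to graph search,
% since $f$ is then non-decreasing along any path and each node is
% expanded with its optimal $g$-value the first time.
```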
What is alpha-beta pruning? How does it improve minimax without changing the result?
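A compact way to see why pruning preserves the minimax value is to run both on the same tree. The tree below is a toy example (nested lists for internal nodes, ints for leaf evaluations), not from the course handouts:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning; returns exactly the plain minimax value."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN already has a better option elsewhere
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: MAX already has a better option elsewhere
        return value

# Toy game tree: MAX to move at the root, three MIN nodes below.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
children = lambda s: s if isinstance(s, list) else []
value = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, lambda s: s)
print(value)  # 3
```

On this tree the second MIN node is cut off after its first leaf (2 <= alpha = 3), yet the returned value matches plain minimax, which is the whole point: pruning skips only branches that provably cannot change the result.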
Define arc consistency (AC-3). How does it reduce the search space in a CSP?
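The pruning effect of AC-3 is easiest to see on a tiny binary CSP. The sketch below assumes one representation choice (domains as sets, one predicate per directed arc); it is illustrative, not the only way to code AC-3:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x that have no supporting value in y's domain."""
    pred = constraints[(x, y)]
    removed = {vx for vx in domains[x]
               if not any(pred(vx, vy) for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3(domains, constraints):
    """Enforce arc consistency in place; return False if a domain empties."""
    queue = deque(constraints)  # start with every directed arc (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False  # domain wipe-out: the CSP is unsatisfiable
            # x's domain shrank, so re-check every arc pointing at x.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Toy CSP: X < Y with both domains {1, 2, 3} (illustrative only).
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda a, b: a < b,
               ("Y", "X"): lambda a, b: a > b}
print(ac3(domains, constraints))   # True
print(domains)                     # X shrinks to {1, 2}, Y to {2, 3}
```

Here AC-3 deletes X=3 (no larger Y exists) and Y=1 (no smaller X exists) before any backtracking search begins, which is exactly how arc consistency reduces the search space.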