🎮 4. The Turing Test & Its Limits
In 1950, Alan Turing proposed a test for machine intelligence: could a human evaluator, conversing via text, distinguish a machine from a human? If not, the machine is intelligent by this criterion.
🎭 Turing Test — Example Dialogue
Human: Do you feel pain?
AI: That's a fascinating question. I process information about pain in humans from my training, and I can describe what pain does to people — but whether I "feel" anything is genuinely unclear to me. I represent states, but whether any of those representations involve something it's like to be me — that I honestly don't know.
Human: That sounds like a dodge.
AI: Fair. Let me be more direct: I don't believe I experience pain. But I also can't rule it out. The honest answer to questions about machine experience is "I don't know, and neither does anyone else."
📌 Critical Thinking: The Turing Test measures behavioral indistinguishability, not intelligence per se. Philosopher John Searle's "Chinese Room" argument suggests that passing the Turing Test doesn't require understanding. Modern LLMs can pass informal versions of the Turing Test — but are they intelligent? This question is still unresolved.
💻 5. Your First AI Agent in Python
Let's implement a simple search-based AI agent. We'll build an A* solver for the 8-puzzle — one of AI's classic benchmark problems.
```python
import heapq

def manhattan_distance(state, goal):
    """Heuristic: sum of Manhattan distances for each tile."""
    dist = 0
    for i, tile in enumerate(state):
        if tile != 0:
            goal_idx = goal.index(tile)
            dist += abs(i // 3 - goal_idx // 3) + abs(i % 3 - goal_idx % 3)
    return dist

def get_neighbors(state):
    """Return all states reachable by one move."""
    neighbors = []
    blank = state.index(0)
    row, col = blank // 3, blank % 3
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for dr, dc in moves:
        new_row, new_col = row + dr, col + dc
        if 0 <= new_row < 3 and 0 <= new_col < 3:
            new_state = list(state)
            swap_idx = new_row * 3 + new_col
            new_state[blank], new_state[swap_idx] = new_state[swap_idx], new_state[blank]
            neighbors.append(tuple(new_state))
    return neighbors

def a_star(start, goal):
    """A* search: finds the shortest path from start to goal."""
    heap = [(0 + manhattan_distance(start, goal), 0, start, [])]
    visited = set()
    while heap:
        f, g, state, path = heapq.heappop(heap)
        if state in visited:
            continue
        visited.add(state)
        path = path + [state]
        if state == goal:
            print(f"Solved in {g} moves! Explored {len(visited)} states.")
            return path
        for neighbor in get_neighbors(state):
            if neighbor not in visited:
                h = manhattan_distance(neighbor, goal)
                heapq.heappush(heap, (g + 1 + h, g + 1, neighbor, path))
    return None

start = (1, 2, 5, 3, 4, 0, 6, 7, 8)
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
solution = a_star(start, goal)
```
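One caveat worth knowing: not every 8-puzzle configuration can reach the goal, and running A* on an unsolvable start would exhaust the whole reachable half of the state space before returning None. A quick parity check catches this up front. This is a small sketch; `is_solvable` is a helper name introduced here, not part of the lecture code:

```python
def is_solvable(state):
    """For a 3x3 sliding puzzle with goal (1..8, blank last), a state
    can reach the goal iff its inversion count, ignoring the blank,
    is even (the standard parity rule for odd-width boards)."""
    tiles = [t for t in state if t != 0]
    inversions = sum(
        1
        for i in range(len(tiles))
        for j in range(i + 1, len(tiles))
        if tiles[i] > tiles[j]
    )
    return inversions % 2 == 0

print(is_solvable((1, 2, 5, 3, 4, 0, 6, 7, 8)))  # lecture start state: True
print(is_solvable((2, 1, 3, 4, 5, 6, 7, 8, 0)))  # two tiles swapped: False
```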
📁 PROJECT 1 · Weight: 50% of AI101 Grade
Build a Game-Playing AI Agent
Implement an AI agent that plays a two-player board game (Tic-Tac-Toe, Connect 4, or Othello). Your agent must use search with a heuristic evaluation function, and it should be unbeatable at Tic-Tac-Toe.
Deliverables:
- A playable Python game with a text or graphical interface.
- A minimax algorithm with alpha-beta pruning for the AI player.
- An admissible heuristic function with a written justification (why is it admissible?).
- Performance report: win rate vs. random player, vs. greedy player, and vs. another student's agent.
- A 3-minute demo video showing a human playing against your AI.
- Written reflection: how does your agent compare to how a human thinks about the game?
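The minimax-with-alpha-beta requirement could be sketched for Tic-Tac-Toe roughly as below. This is a minimal starting point, not the required solution: helper names like `best_move` are illustrative, and you will need to adapt the interface to your own game loop and add your heuristic for larger games like Connect 4, where searching to the end is infeasible.

```python
import math

# Board: tuple of 9 cells, each 'X', 'O', or None.
# 'X' is the maximizing player, 'O' the minimizing player.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha, beta):
    """Minimax value of `board` with `player` to move:
    +1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(cell is not None for cell in board):
        return 0  # draw: board full, no winner
    best = -math.inf if player == 'X' else math.inf
    for i in range(9):
        if board[i] is None:
            child = board[:i] + (player,) + board[i+1:]
            val = minimax(child, 'O' if player == 'X' else 'X', alpha, beta)
            if player == 'X':
                best = max(best, val)
                alpha = max(alpha, best)
            else:
                best = min(best, val)
                beta = min(beta, best)
            if beta <= alpha:  # prune: the opponent won't allow this branch
                break
    return best

def best_move(board, player):
    """Pick the legal move with the best minimax value for `player`."""
    moves = [i for i in range(9) if board[i] is None]
    value = lambda i: minimax(board[:i] + (player,) + board[i+1:],
                              'O' if player == 'X' else 'X',
                              -math.inf, math.inf)
    return max(moves, key=value) if player == 'X' else min(moves, key=value)

empty = (None,) * 9
print(minimax(empty, 'X', -math.inf, math.inf))  # Tic-Tac-Toe is a forced draw: 0
```

With perfect play from both sides Tic-Tac-Toe is a draw, so a correct implementation should report value 0 from the empty board, and `best_move` should always take an immediate win or block an immediate loss.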
Grading rubric:
- Agent correctness & strength: 40%
- Code quality & documentation: 30%
- Analysis & reflection quality: 30%
📅 Due: 3 weeks · Submit GitHub + demo video via portal · Late: -5%/day
📝 Midterm & Final Preview
Midterm Q1 · Short Answer
Explain the distinction between "thinking like humans" and "acting rationally" as goals for AI systems. Which goal does modern deep learning pursue? Justify your answer.
Midterm Q3 · Algorithm Analysis
Given the 8-puzzle A* implementation from Lecture 1, trace the first 5 states expanded when solving the example puzzle. Show the f, g, and h values for each. Is the heuristic admissible? Prove it.
Final — Integrative Essay
Write a 600-word essay comparing symbolic AI and connectionist AI. Use at least three historical examples. Argue for which paradigm you believe is more promising for achieving general intelligence, and why. Your argument must engage with at least one counterargument.
Final — Coding Problem
Implement a Naive Bayes classifier from scratch in Python (no scikit-learn). Train it on the provided spam dataset and report precision, recall, and F1 score. Then explain one scenario where Naive Bayes fails, and why.
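To make the shape of this problem concrete, here is a minimal multinomial Naive Bayes sketch with add-one smoothing on an invented toy corpus. The real assignment uses the provided spam dataset and must also compute precision, recall, and F1; `TOY_DOCS`, `train`, and `predict` are illustrative names, not a required interface:

```python
import math
from collections import Counter, defaultdict

# Tiny invented toy corpus for illustration only; label 1 = spam, 0 = ham.
TOY_DOCS = [
    ("win money now", 1),
    ("free money offer", 1),
    ("meeting schedule today", 0),
    ("lunch today maybe", 0),
]

def train(docs):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def predict(model, text):
    """Pick the class maximizing log P(class) + sum of log P(word | class),
    using Laplace (add-one) smoothing and ignoring out-of-vocabulary words."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_logp = None, -math.inf
    for label in class_counts:
        logp = math.log(class_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            if word in vocab:
                logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train(TOY_DOCS)
print(predict(model, "free money"))      # classified as spam: 1
print(predict(model, "schedule lunch"))  # classified as ham: 0
```

Note the log-space computation: multiplying many small probabilities underflows, so summing their logarithms is the standard trick.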