Human learning and reasoning are founded on multiple knowledge representations with different kinds of structures, such as trees, chains, dominance hierarchies, neighborhood graphs, and directed networks. This class uses probabilistic inference methods from machine learning and Bayesian statistics, operating over these structured representational systems, to explain how people's domain knowledge can support a wide range of learning and reasoning tasks, and how these knowledge structures may themselves be learned from experience. (Image by Prof. Joshua Tenenbaum.)
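The idea of learning knowledge structures from experience can be sketched with Bayes' rule: place a prior over candidate structures and score each by how well it predicts the observed data. The following is a minimal illustrative sketch, not course material; the "tree" and "chain" hypotheses and all probability values are hypothetical placeholders.

```python
# Minimal sketch of Bayesian inference over structural hypotheses:
# given some observed data, which of two toy candidate structures
# better explains it? All numbers here are illustrative.

def posterior(priors, likelihoods):
    """Bayes' rule: P(h | d) is proportional to P(d | h) * P(h)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())  # normalizing constant P(d)
    return {h: p / z for h, p in unnormalized.items()}

# Hypothetical structures with equal priors, and made-up
# likelihoods P(data | structure) for the observed data.
priors = {"tree": 0.5, "chain": 0.5}
likelihoods = {"tree": 0.02, "chain": 0.005}

post = posterior(priors, likelihoods)
print(post)  # the tree hypothesis receives posterior 0.8
```

The same scoring scheme extends, in principle, to richer hypothesis spaces such as the trees, hierarchies, and graphs mentioned above, with likelihoods derived from how each structure generates observations.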
This course is an introduction to computational theories of human cognition. Drawing on formal models from classic and contemporary artificial intelligence, students will explore fundamental issues in human knowledge representation, inductive learning, and reasoning. What forms does our knowledge of the world take? What are the inductive principles that allow us to acquire new knowledge from the interaction of prior knowledge with observed data? What kinds of data must be available to human learners, and what kinds of innate knowledge (if any) must they have?