Parvathy Sudhir Pillai A0095671W
The human thought process! It intertwines AI, cognitive science, and neuroscience: not just cognition, but cognition combined with perception and actuation. What do we need? Are humans rational beings? Modeling the roundworm nervous system took Sydney Brenner 20 years.
From infancy to adulthood (Piaget's stages of cognitive development): Sensorimotor, Pre-operational, Concrete operational, Formal operational.
Marr's three levels of analysis:
Computational: what does the system do? The functionality of the system.
Algorithmic: how does the system do what it does? The representations used to build the system.
Implementation: how is the system physically realized? Neural architecture / productions.
Nativist (born with innate intelligence) vs. Empiricist (knowledge is learnt)
Mechanistic (causal structure) vs. Purposive (problem to be solved)
Generative (joint probability) vs. Discriminative (conditional probability)
Frequentism (number of occurrences) vs. Subjectivism (evidential)
Connectionist (interconnection of simpler units) vs. Rule-based (if-then rules)
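The generative/discriminative contrast above can be made concrete with a toy example: a generative model represents the joint P(x, y), from which the discriminative conditional P(y | x) can be derived. This is a minimal sketch; the variables and probability values are invented for illustration.

```python
# A generative model stores the joint P(x, y); a discriminative model
# stores only P(y | x). All numbers here are illustrative assumptions.
joint = {                      # P(x, y) over x in {0, 1}, y in {'A', 'B'}
    (0, 'A'): 0.3, (0, 'B'): 0.1,
    (1, 'A'): 0.2, (1, 'B'): 0.4,
}

def p_y_given_x(y, x):
    """Derive the discriminative conditional from the generative joint."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return joint[(x, y)] / p_x

print(p_y_given_x('A', 0))   # P(y='A' | x=0) = 0.3 / 0.4 ≈ 0.75
```

The asymmetry matters: the joint lets you also compute P(x), P(x | y), or sample data, while a purely discriminative model answers only the conditional query.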
Frame conclusions based on experience and prior knowledge; hypotheses are easier to model as prior assumptions. Integration of structured, symbolic representation with statistical learning. Probabilistic models sit at the computational level; the algorithmic level corresponds to cognitive psychology, and the implementation level to neuroscience.
Bayes' rule: P(h|d) = P(d|h) P(h) / P(d)
Marginalization: P(d) = Σ_h P(d|h) P(h)
Comparing hypotheses, posterior odds: P(h1|d) / P(h2|d) = [P(d|h1) / P(d|h2)] × [P(h1) / P(h2)]
Maximum likelihood: p = N_h1 / (N_h1 + N_h2)
Maximum a posteriori (MAP): the value of p that maximizes the posterior distribution
Posterior mean: p̄ = ∫ p P(p|d) dp
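The three point estimates above can be computed in closed form for a coin-flip model with a Beta prior. This is a minimal sketch; the observed counts and the prior parameters are illustrative assumptions, not from the source.

```python
# Estimating a coin's heads probability p from N_h1 heads and N_h2
# tails under a Beta(a, b) prior. Counts and prior are assumptions.
N_h1, N_h2 = 7, 3          # observed heads / tails
a, b = 2, 2                # Beta prior pseudo-counts

# Maximum likelihood: p = N_h1 / (N_h1 + N_h2)
p_ml = N_h1 / (N_h1 + N_h2)

# MAP: mode of the posterior Beta(a + N_h1, b + N_h2)
p_map = (a + N_h1 - 1) / (a + b + N_h1 + N_h2 - 2)

# Posterior mean: ∫ p P(p|d) dp = mean of Beta(a + N_h1, b + N_h2)
p_mean = (a + N_h1) / (a + b + N_h1 + N_h2)

print(p_ml, p_map, p_mean)   # 0.7, 8/12 ≈ 0.667, 9/14 ≈ 0.643
```

Note how the prior pulls MAP and the posterior mean toward 0.5 relative to the maximum-likelihood estimate; with a flat Beta(1, 1) prior, MAP coincides with ML.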
Representations: integrate structure and probability by means of causal graphs.
Processing: networks of parallel processors, with the structure representing causal dependencies.
Learning: Is the available data enough for learning structures? Is it possible to search through possible structures, and what constraints make the search space feasible? Do structure-learning models fit human behavior?
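The idea of integrating structure and probability via causal graphs can be sketched with a tiny network queried by brute-force enumeration. The graph (Rain → WetGrass ← Sprinkler) and all conditional probability values are invented for illustration.

```python
from itertools import product

# Tiny causal graph: Rain -> WetGrass <- Sprinkler.
# The structure encodes causal dependencies; the CPTs (numbers below
# are illustrative assumptions) encode the probabilities.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(wet=True | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    """Joint probability factorizes along the graph structure."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

# Query the graph: P(rain | grass is wet), by enumeration
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)   # ≈ 0.645: wet grass raises belief in rain
```

Enumeration is exponential in the number of variables; the structure-learning questions in the slide are about recovering the graph itself from data, which is harder still.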
A theory about the representations and processes that produce human thought.
Rule-based: uses if-then rules, e.g., ACT-R, SOAR.
Connectionist: knowledge is encoded not as rules but in the connections between neuron-like processors, e.g., artificial neural networks.
Explains why cognition is adaptive by applying purposive explanation to higher cognitive processes. Six steps: Goals, Environment, Computational limitations, Optimization, Data, Iteration.
Anderson et al. (CMU): a hybrid production-system architecture. A neural-net-like activation mechanism controls the rules. The architecture is related to components from cognitive neuroscience, with heavy emphasis on modeling learning.
Goals: efficient memory retrieval.
Environment: need probability based on history of use.
Computational limitations: sequential search.
Optimization: stop the search when the expected gain is less than the cost.
Data: forget items as a power function of time.
Iteration: empirical findings conform to the actual distributions.
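The steps above can be sketched as a toy retrieval model: need probability decays as a power function of time since last use, and the sequential search stops once expected gain no longer exceeds its cost. The decay parameters, gain, and cost values are illustrative assumptions, not ACT-R's fitted parameters.

```python
# Rational analysis of memory, as a sketch: power-law need probability
# plus a stopping rule. All numeric parameters are assumptions.
def need_probability(t, a=0.5, b=0.5):
    """P(item is needed) as a power function of time t since last use."""
    return a * t ** -b

def retrieve(times_since_use, gain=10.0, cost=1.0):
    """Search items from most recently used (smallest t) onward;
    stop when expected gain (p * gain) no longer exceeds search cost."""
    recalled = []
    for t in sorted(times_since_use):
        p = need_probability(t)
        if p * gain <= cost:          # expected gain <= cost: stop
            break
        recalled.append(t)
    return recalled

print(retrieve([1, 4, 9, 100, 400]))   # old items fall below threshold
```

This reproduces the qualitative prediction on the slide: items unused for a long time are effectively forgotten, because searching for them is no longer worth the cost.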
Predicting word association: each document is a mixture of topics, and words are sampled from topic-word distributions.
P(w2|w1) = Σ_{j=1..T} P(w2|z=j) P(z=j|w1)
where P(w2|w1) is the conditional probability of word w2 given cue w1, P(w2|z=j) is the probability that topic j produces word w2, and P(z=j|w1) is the probability that the topic is j given w1 (obtained via Bayes' rule from P(w1|z=j) and P(z=j)).
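The sum over topics can be computed directly once the two distributions are in hand. This sketch uses a toy two-topic model with invented probabilities (not fitted to any corpus).

```python
# Word association in a topic model:
# P(w2|w1) = sum_j P(w2|z=j) * P(z=j|w1).
# All probability values below are illustrative assumptions.
T = 2
P_w2_given_z = [0.20, 0.01]   # P(w2 | z=j) for the candidate word w2
P_z_given_w1 = [0.7, 0.3]     # P(z=j | cue w1)

p = sum(P_w2_given_z[j] * P_z_given_w1[j] for j in range(T))
print(p)   # ≈ 0.2*0.7 + 0.01*0.3 = 0.143
```

The cue w1 matters only through the topic posterior P(z=j|w1): a cue that concentrates its mass on topic 0 makes topic-0 words strong associates.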
Goals: selecting the data with the highest information gain (information gain = initial uncertainty − expected uncertainty).
Environment: the rarity assumption.
Computational limitations: minimal cost of examining data.
Optimization: select the most informative data subject to cost constraints.
Data: takes into account the non-independence of selections.
Iteration: selection-task performance should change if the rarity assumption is violated.
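The quantity "initial uncertainty − expected uncertainty" can be computed with Shannon entropy over two hypotheses. The prior and the likelihoods below are illustrative assumptions chosen in the spirit of the rarity assumption (the named property is rare under both hypotheses).

```python
from math import log2

# Expected information gain of examining one datum, for two
# hypotheses h1, h2. Prior and likelihoods are assumptions.
def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

prior = [0.5, 0.5]          # P(h1), P(h2)
likelihood = [0.1, 0.01]    # P(rare datum observed | h1), ... | h2

def posterior(outcome_likelihoods):
    """Bayes' rule; returns posterior and outcome probability."""
    un = [l * p for l, p in zip(outcome_likelihoods, prior)]
    z = sum(un)
    return [u / z for u in un], z

post_yes, p_yes = posterior(likelihood)                # datum observed
post_no, p_no = posterior([1 - l for l in likelihood])  # datum absent

expected_uncertainty = p_yes * entropy(post_yes) + p_no * entropy(post_no)
info_gain = entropy(prior) - expected_uncertainty
print(info_gain)   # small but positive: rare data are informative
```

Under rarity, the "datum observed" outcome is unlikely but sharply discriminating, which is why selecting rare-property cards can be rational in the selection task.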
Degrees of plausibility are represented by real numbers (a higher degree of belief corresponds to a larger number). Bayesian probability is an extension of formal logic into intermediate states of knowledge. Bayesian inference gives a measure of our state of knowledge about nature, not a measure of nature itself.
Social robotics Geriatric care systems Intelligent tutoring systems Military Applications Modeling Animal Behavior
The learning mechanism is rational, inferential, and statistical, built on probabilistic models of cognition. This questions both classical characterizations: the Nativist (innate conceptual parameters) and the Empiricist (perceptual primitives and an associative learning mechanism).
Most human inferences are guided by background knowledge: learning about prior distributions, learning about feature variability. Inference from parameters runs top-down; learning from statistical inference runs bottom-up. Simultaneous learning at multiple levels is needed to capture human inference.
Property induction: extending a learnt property to other members of the domain; tree-structured representations.
Approximate inference: MCMC.
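Approximate inference by MCMC can be sketched with a minimal Metropolis sampler. The model here is not from the source: it targets the posterior over a coin's bias given 7 heads in 10 flips under a flat prior, and the proposal width and sample counts are arbitrary choices.

```python
import random

# Metropolis sampling from an unnormalized posterior over a coin's
# bias p. Data (7 heads, 3 tails) and tuning values are assumptions.
random.seed(0)

def unnorm_posterior(p, heads=7, tails=3):
    if not 0 < p < 1:
        return 0.0
    return p ** heads * (1 - p) ** tails   # likelihood x flat prior

samples, p = [], 0.5
for _ in range(20000):
    proposal = p + random.gauss(0, 0.1)    # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if random.random() < unnorm_posterior(proposal) / unnorm_posterior(p):
        p = proposal
    samples.append(p)

burned = samples[5000:]                    # discard burn-in
mean = sum(burned) / len(burned)
print(mean)   # ≈ 8/12, the exact posterior mean for Beta(8, 4)
```

The key point for cognitive modeling is that the sampler only ever evaluates the unnormalized posterior, so inference remains feasible where the normalizing integral (e.g., over large tree-structured hypothesis spaces) is intractable.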
Xu, F. (2011). Rational constructivism, statistical inference, and core cognition. Behavioral and Brain Sciences, 34(3), 151-152.
Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: where next? Trends in Cognitive Sciences, 10(7) (Special issue: Probabilistic models of cognition).
Chater, N., Tenenbaum, J. B., & Yuille, A. (2006). Probabilistic models of cognition: conceptual foundations. Trends in Cognitive Sciences, 10(7) (Special issue: Probabilistic models of cognition).
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357-364.
Chater, N., & Oaksford, M. (1999). Ten years of the rational analysis of cognition. Minnesota Symposia on Child Psychology.
Whitehill, J. Understanding ACT-R: an Outsider's Perspective.
Goldwater, S., & Lee, J. Nativism, Empiricism, Representation, and Domain-specificity. Topics in Cognitive Modelling, School of Informatics, University of Edinburgh.
Goldwater, S., & Lee, J. Bayesian modelling. Topics in Cognitive Modelling, School of Informatics, University of Edinburgh.
Griffiths, T. L., & Yuille, A. (2006). Technical introduction: a primer on probabilistic inference. Trends in Cognitive Sciences, 10(7) (Special issue: Probabilistic models of cognition).
Griffiths, T. L. (2006). Statistics and the Bayesian mind. Significance, 3, 130-133. doi:10.1111/j.1740-9713.2006.00185.x
Thagard, P. (2012). Cognitive architectures. In K. Frankish & W. Ramsay (Eds.), The Cambridge Handbook of Cognitive Science (pp. 50-70). Cambridge: Cambridge University Press.
Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. Bayesian models of cognition. In The Probabilistic Mind.
THANK YOU!