$ ./workshop.sh

  _        _     ___   ___ 
 | |      /_\   | _ \ |_ _|
 | |__   / _ \  |  _/  | | 
 |____| /_/ \_\ |_|   |___|
Learning As Program Induction {
 : Full-day Workshop (CogSci 2018)
 : Madison, Wisconsin }

> import lapi as lp

> print(lp.key_dates)

Day of workshop: July 25th, 2018
Morning-noon: Introduction and first set of talks
Afternoon: Second set of talks and discussion

> print(lp.schedule)

> print(lp.abstract)

The notion that the mind approximates rational (Bayesian) inference has had a strong influence on thinking in psychology since the 1950s. In constrained scenarios, typical of psychology experiments, people often behave in ways that approximate the dictates of probability theory. However, natural learning contexts are typically much more open-ended --- there are often no clear limits on what is possible, and initial proposals often prove inadequate. This means that coming up with the right hypotheses and theories in the first place is often much harder than deciding among them. How do people, and how can machines, expand their hypothesis spaces to generate wholly new ideas, plans and solutions?

Recent work has begun to shed light on this problem via the idea that many aspects of learning can be better understood through the mathematics of program induction [1].

People are demonstrably able to compose hypotheses from parts [2,3,4] and incrementally grow and adapt their models of the world [5]. A number of recent studies have formalized these abilities as program induction, using algorithms that mix stochastic recombination of primitives with memoization and compression to explain data [6,7], ask informative questions [8], and support one- and few-shot inferences [1]. Program induction is also proving to be an important notion for understanding development and learning through play [9] and the formation of geometric understanding about the physical world [10].
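To make the idea concrete, here is a minimal, illustrative sketch of learning as program induction: enumerate compositions of primitive functions, shortest programs first, and return the first composition that reproduces the observed examples. The primitives and function names below are hypothetical choices for this sketch, not drawn from any of the cited papers.

```python
# A minimal sketch of learning-as-program-induction (illustrative only):
# search over compositions of primitive functions, shortest first, and
# keep the first program consistent with the observed data.
import itertools

# Hypothetical primitive library; a "program" is a tuple of primitive names.
PRIMITIVES = {
    "double": lambda x: 2 * x,
    "succ":   lambda x: x + 1,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names to an input value."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def induce(examples, max_depth=3):
    """Return the shortest composition of primitives that reproduces all
    (input, output) examples, or None if no program of length <= max_depth
    fits. Enumerating shorter programs first acts as a simplicity prior."""
    for depth in range(1, max_depth + 1):
        for program in itertools.product(PRIMITIVES, repeat=depth):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

print(induce([(3, 7)]))            # ('double', 'succ'): 2*3 + 1 = 7
print(induce([(2, 25), (3, 49)]))  # ('double', 'succ', 'square')
```

Even a single example sharply constrains the space of short programs, which is the flavor of one-shot inference discussed above; real models replace this brute-force enumeration with stochastic search and compression of reusable sub-programs.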

The aim of this workshop is thus to bring together scientists who have a shared interest in how intelligent systems (humans or machines) can learn rich representations and action plans (expressible as programs) through observing and interacting with the world.

> print(lp.target_audience)

This workshop dovetails nicely with this year's focus on "change, learning, growth, and adaptation". These key elements of cognition are precisely those that have resisted Bayesian accounts, and those which learning-as-program-induction theories purport to explain. Our target audience is almost as broad as the conference as a whole --- we expect this workshop to be of interest to psychologists, linguists, philosophers and machine learning researchers alike. Moreover, we feel that the interdisciplinary nature of our workshop will facilitate interactions between the diverse strands of research presented at CogSci that might otherwise remain siloed in parallel sessions.

> print(lp.area_header)
> for area_of_interest in sorted(lp.areas): \
>     print("- %s" % area_of_interest)

Areas of interest for discussion include, but are not limited to:

- Assessing Empirical Progress
- Cognitive Models Of Program Induction
- Cognitive Primitives
- Compositional Ingredients
- Datasets, Tasks, Evaluation
- Formalizing Cognitive Theories
- Inductive Logic Programming
- Knowledge Representation
- Meta-learning
- Probabilistic Programming
- Program Synthesis
- Semantic Compositionality
- Unique Predictions

> for speaker in lp.speakers: \
>     print("∘ %s (%s)" % (speaker.name, speaker.affiliation))

> for organizer in lp.organizers: \
>     print("∘ %s (%s)" % (organizer.name, organizer.affiliation))

  ∘ Neil Bramley (NYU)
  ∘ Eric Schulz (Harvard)
  ∘ Fei Xu (Berkeley)
  ∘ Joshua Tenenbaum (MIT)

> for i, reference in enumerate(lp.references): \
>     print("[%d] %s" % (i + 1, reference))

    [1] Lake, B. M., Salakhutdinov, R. and Tenenbaum, J. B. (2015). "Human-level concept learning through probabilistic program induction." Science, 350(6266), 1332–1338.
    [2] Schulz, E., Tenenbaum, J. B., Duvenaud, D., Speekenbrink, M. and Gershman, S. J. (2017). "Compositional inductive biases in function learning." Cognitive Psychology, 99, 44–79.
    [3] Goodman, N. D., Tenenbaum, J. B., Feldman, J. and Griffiths, T. L. (2008). "A rational analysis of rule-based concept learning." Cognitive Science.
    [4] Piantadosi, S. T., Tenenbaum, J. B. and Goodman, N. D. (2016). "The logical primitives of thought: Empirical foundations for compositional cognitive models." Psychological Review.
    [5] Bramley, N. R., Dayan, P., Griffiths, T. L. and Lagnado, D. A. (2017). "Formalizing Neurath's ship: Approximate algorithms for online causal learning." Psychological Review.
    [6] Dechter, E., Malmaud, J., Adams, R. P. and Tenenbaum, J. B. (2013). "Bootstrap learning via modular concept discovery." International Joint Conference on Artificial Intelligence.
    [7] Ellis, K., Dechter, E. and Tenenbaum, J. B. (2015). "Dimensionality reduction via program induction." Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches.
    [8] Rothe, A., Lake, B. M. and Gureckis, T. M. (2017). "Question asking as program generation." Advances in Neural Information Processing Systems.
    [9] Sim, Z. L. and Xu, F. (2017). "Learning higher-order generalizations through free play: Evidence from 2- and 3-year-old children." Developmental Psychology.
    [10] Amalric, M., Wang, L., Pica, P., Figueira, S., Sigman, M. and Dehaene, S. (2017). "The language of geometry: Fast comprehension of geometrical primitives and rules in human adults and preschoolers." PLOS Computational Biology.