Learning Salon: Recap and Panel Discussion

Tags

  • rl
  • philosophy
  • research

The Learning Salon is a weekly discussion at the intersection of AI, RL, neuroscience, and other topics. This is a semi-chronological account of the Dec 18 Learning Salon session. Overarching themes include functionalism; new learning approaches (motivated by environments and other species); and sensory perception and pain. Where does neuroscience fit in?

Functionalism. Functionalists care about what’s inside the black box; behaviorists only care about the behavior. Working memory, attention, and similar psychological concepts don’t necessarily need to be articulated at the neuronal level; they are perfectly valid in and of themselves. “Some psych concepts are wrong, but that’s OK, we’ll fix them.” We don’t need a neural mechanism to explain every psychological concept. Mazviita: The psychological level is not the level of computation and shouldn’t be framed as such.

There is no “platonic solid of pain.” John: Don’t get lost in the differences. Pain is pain, and animals use it for the same purpose [I guess as a learning signal]. Pain differs within humans too (e.g. stepping on a Lego vs. losing a game). Multiple realizability: computation can be carried out regardless of substrate. Consider the similar body shapes of a marsupial mole and a golden mole: the evolutionary history is irrelevant. Stay at the functional level. The Navier–Stokes equations apply to different types of fluids, regardless of their internals. Paul: We don’t know how consciousness works, so this reasoning doesn’t apply to pain.

Melanie: The brain is modular; our AI systems, not really. [I think DeepMind is doing a lot of work in this domain though.]

Amy: Not convinced neuroscience will help AI. Cites previous experiments from McGill: which regions of the brain lit up during a task depended entirely on the monkeys’ previous experience. Cf. “The Contribution of Area MT to Visual Motion Perception Depends on Training.”

New learning framework: look at how different species behave, and at the interactions that shape learning in different environments. [I think this is where AI is going. Cf. Sapolsky’s work, e.g. Behave.]

John: Cf. reverse hierarchy theory by Merav Ahissar. If you have a task, you assign it to the system that can do the work. Psychology and neuroscience are two sides of the same coin. But something vast is still missing in AI.

Ida: Compare to the history of math. Once we moved from poetic math to formalisms, we gained the ability to imagine new things. Strive for an interdisciplinary attempt to unify ideas. [Fully agree.]

Causal inference: Miguel Hernán’s Causal Inference: What If. Neural nets communicating to create “social structure”: Natasha Jaques’s work on learning social learning.

How do we measure how well we’re doing in AI? Compare to biological intelligence? An AI olympics already exists, but it is an imperfect measure. Mice are general within the ecological niche they evolved in. [Are we back to discussing “judging a fish by its ability to climb a tree”?] Not currently used in AI: cell types.