Symbolic Behavior in AI

Tags

  • research

Symbolic behavior in AI by Santoro et al. is a cool position/AI-philosophy paper (is that a fair classification?). Great to see different streams of thought from within DeepMind (also cf. the recent Making sense of sensory input).

The paper argues for rethinking our semantic interpretation of symbols, positioning itself to some extent in contrast to GOFAI (which assumes symbols “just are” and insists on manipulating them via pure syntax).

According to Santoro et al., though, the meaning of symbols should be context-dependent (social, grounded, etc.). This line of thinking aligns well with World Scope 5 in Bisk et al.’s terms: intelligent agents should be free to construct their own symbols, given appropriate grounding.

I kinda agree on a high level: good models emerge from simultaneous concept construction and unification, and the former seems somewhat stifled in symbolic approaches.

On the other hand, it’s interesting to consider how this affects explainable AI. Are humans even equipped to “naturally” reason about high-dimensional continuous concept spaces in this way?