Levels of biological plausibility

Tags

  • neuro
  • research

Opinion piece by Bradley Love. Emergence, reduction, and explanation all involve different levels of analysis; scientists should be guided by pragmatic concerns.

For instance, practical limitations, such as the precision of measurement, characterization of initial conditions (e.g. butterfly effect), available computing resources, and the cleverness of researchers, will likely be the limiting factors on what can be reduced absent dubious ontological claims about emergence [“X exists on a fundamental level”].

Reduction pertains to the ability to deduce high-level phenomena from low-level ones. Emergence, to the inability to deduce a phenomenon from its individual components observed in isolation, e.g. the economy from unemployment or inflation alone. Since no measure is fundamental, understanding how measures relate can hasten scientific progress, which I guess is the goal of this paper. On emergence:

Are these claims that a phenomenon is sufficiently complex that for practical purposes it needs to be explained by appealing to higher level entities? […] Or, are those claiming emergence (as their language often suggests) stating there is something special about what they study that is not reducible to lower level entities in principle?

Marr’s tripartite hierarchy was inspired by computational abstraction. Moving to a lower layer introduces additional detail; supervenience: any change at a higher level entails some change at a lower level. Recent calls to reconsider reductionism in neuroscience.

Marr's levels vs abstraction layers
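A toy sketch of the analogy (my own illustration, not from the paper): the same quantity described at a high level and at a lower level that exposes more detail. Since both descriptions compute the same thing, any difference visible at the high level must correspond to a difference below (supervenience). The function names and the firing-rate example are invented for illustration.

```python
# Toy illustration (not from the paper): one computation, two levels of description.

# High level, roughly Marr's computational level: what is computed and why.
def mean_firing_rate(spike_counts, duration_s):
    """Average firing rate in Hz across units: total spikes / (units * duration)."""
    return sum(spike_counts) / (len(spike_counts) * duration_s)

# Lower level, roughly the algorithmic/implementational level: the same quantity
# with more detail exposed (an explicit accumulation loop standing in for mechanism).
def mean_firing_rate_low_level(spike_counts, duration_s):
    total = 0
    for count in spike_counts:  # detail the high-level description abstracts away
        total += count
    return total / (len(spike_counts) * duration_s)

# Both levels agree; a high-level difference would require a low-level difference.
assert mean_firing_rate([3, 5, 4], 1.0) == mean_firing_rate_low_level([3, 5, 4], 1.0)
```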

Discusses blood-oxygen-level-dependent (BOLD) imaging in terms of abductive reasoning: a measurement can make a higher-level concept more plausible once the link between them has been established.

Against claims of biological plausibility: depending on the level of abstraction, everything is biologically plausible (the brain has things that do things, just like a computer).

One might misconstrue biological plausibility as some top-down, theoretical judgement and mistakenly cast model selection as a bottom-up, data-driven approach. This dichotomy does not hold because model selection involves making important theory-guided choices, such as choosing the relevant datasets to explain, the relevant findings or constraints to follow (e.g. the spiking rate of artificial neurons should not eclipse the maximum rate observed in actual neurons) and the competing models to evaluate. Moreover, any claim of biological plausibility that had substance would itself need to be rooted in some finding, known constraint, or dataset, which if properly stated and evaluated would closely conform to model selection.
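A minimal sketch of that point (the model names, fit scores, and rate cap are all made up): a biological-plausibility claim only has substance once it is stated as an explicit constraint and applied alongside fit during model selection.

```python
# Hypothetical sketch: a "biological plausibility" claim cast as an explicit
# model-selection constraint. All names and numbers are invented for illustration.

candidate_models = {
    # model name -> (fit to the chosen dataset, peak unit firing rate in Hz)
    "model_a": (0.82, 150.0),
    "model_b": (0.85, 900.0),  # slightly better fit, but implausibly fast spiking
    "model_c": (0.78, 60.0),
}

MAX_OBSERVED_RATE_HZ = 500.0  # assumed cap from real neurons; choosing this
                              # constraint is itself a theory-guided decision

def select_model(models, max_rate_hz):
    """Discard models that violate the rate constraint, then pick the best-fitting one."""
    admissible = {name: fit for name, (fit, rate) in models.items() if rate <= max_rate_hz}
    return max(admissible, key=admissible.get) if admissible else None

print(select_model(candidate_models, MAX_OBSERVED_RATE_HZ))  # -> model_a
```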

Measurements are not levels of analysis; they are used to support explanations at a given level. Analogy to Newtonian vs. relativistic mechanics and their corresponding speed scales; temperature vs. the mean kinetic energy of particles.
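For concreteness, the textbook relations behind those analogies (standard physics, not taken from the paper): the relativistic and Newtonian descriptions agree at low speeds, and temperature corresponds to the mean kinetic energy of particles in a monatomic ideal gas.

```latex
% Standard textbook relations (not from the paper), spelled out for concreteness.
% Relativistic kinetic energy, which reduces to the Newtonian form when v << c:
E_k = (\gamma - 1)\, m c^2,
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\qquad E_k \approx \tfrac{1}{2} m v^2 \quad \text{for } v \ll c
% Temperature of a monatomic ideal gas vs. mean particle kinetic energy:
\left\langle \tfrac{1}{2} m v^2 \right\rangle = \tfrac{3}{2} k_B T
```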

A poor understanding of levels can lead to incoherent claims of biological plausibility and unsubstantiated beliefs that what one studies is somehow fundamental. These misconceptions can slow scientific progress by obscuring where the true fault lines and uncertainty lie.

To build mechanistic understanding, scientists need to determine how various explanations relate, such as whether they are competing, unrelated, or at different levels. Don’t conflate measures with levels.