Visual perception for autonomous driving inspired by convergence–divergence zones
Published at the 11th International Symposium on Image and Signal Processing and Analysis, 2019 (Dubrovnik, Croatia)
Visual perception is, by far, the main source of information used by humans when driving. It is therefore natural and appropriate to rely heavily on vision analysis for autonomous driving, as most projects do. However, there is a significant difference between the common approach to vision in autonomous driving and visual perception in humans when driving. Essentially, image analysis is often regarded as an isolated and autonomous module, whose high-level output drives the control modules of the vehicle. The direction presented here is different: we take inspiration from the brain architecture that makes humans so effective at learning a task as complex as driving. Two key theories about biological perception ground our development. The first is the view of thinking as a simulation of perception and action, as theorized by Hesslow. The second is the Convergence-Divergence Zones (CDZs) mechanism of mental simulation, which connects the process of extracting features from a visual scene to the inverse process of imagining scene content by decoding features stored in memory. We will show how our model, based on a semi-supervised variational autoencoder, is a rather faithful implementation of these two basic neurocognitive theories.
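As a rough illustration of how a variational autoencoder mirrors the CDZ idea, the sketch below pairs a "convergence" pass (encoding an input into a compact latent code) with a "divergence" pass (decoding that code back into an imagined input). This is a minimal NumPy sketch with random placeholder weights and made-up dimensions; it is not the paper's architecture or training procedure, only the encode/sample/decode structure common to VAEs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
x_dim, z_dim = 16, 2

# Encoder ("convergence zone"): maps an input to the parameters
# of a latent Gaussian q(z|x). Weights are random placeholders,
# standing in for a trained network.
W_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))

# Decoder ("divergence zone"): reconstructs ("imagines") an input
# from a latent code.
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))

def encode(x):
    """Convergence: input -> latent Gaussian parameters."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Divergence: latent code -> reconstructed input."""
    return W_dec @ z

x = rng.normal(size=x_dim)          # a stand-in "perceived scene"
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)                   # the "imagined" scene

# KL divergence of q(z|x) from the unit Gaussian prior p(z),
# the regularizer in the usual VAE objective.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

The same decoder can also be driven from a latent code sampled without any input, which is the "imagination" direction the CDZ theory emphasizes.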