Emergent Intentionality in Perception-Action Subsumption Hierarchies

13/11/2017

Published in Frontiers in Robotics and AI

Abstract

A cognitively autonomous artificial agent may be defined as one able to modify both its external world-model and the framework by which it represents the world, requiring two simultaneous optimization objectives. This presents deep epistemological issues centered on the question of how a framework for representation (as opposed to the entities it represents) may be objectively validated. In this article, formalizing previous work in this field, it is argued that subsumptive perception-action learning has the capacity to resolve these issues by (a) building the perceptual hierarchy from the bottom up so as to ground all proposed representations and (b) maintaining a bijective coupling between proposed percepts and projected action possibilities to ensure empirical falsifiability of these grounded representations. In doing so, we will show that such subsumptive perception-action learners intrinsically incorporate a model for how intentionality emerges from randomized exploratory activity in the form of “motor babbling.” Moreover, such a model of intentionality also naturally translates into a model for human–computer interfacing that makes minimal assumptions as to cognitive states.
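To make the core idea concrete, the following is a minimal sketch of one level of such a hierarchy, assuming a hypothetical one-dimensional toy environment. The names PerceptionActionLayer, motor_babble, intend, and line_world are illustrative, not the paper's implementation; the sketch only shows how a percept-action coupling learned from random babbling can be inverted to yield goal-directed (intentional) behavior.

    import random

    class PerceptionActionLayer:
        """Sketch of one subsumption level: each percept is kept in a
        one-to-one coupling with the action possibility that produced it,
        so every proposed percept remains empirically falsifiable."""

        def __init__(self, action_space):
            self.action_space = action_space
            self.percept_to_action = {}

        def motor_babble(self, environment, steps=500):
            """Randomized exploratory activity: sample actions at random
            and record which percept each gives rise to (later evidence
            overwrites earlier, keeping the coupling one-to-one)."""
            for _ in range(steps):
                action = random.choice(self.action_space)
                percept = environment(action)
                self.percept_to_action[percept] = action

        def intend(self, desired_percept):
            """Emergent intentionality: invert the learned coupling to
            retrieve the action projected to realize the desired percept."""
            return self.percept_to_action.get(desired_percept)

    # Hypothetical toy environment: actions are displacements on a line;
    # the percept is the coarse region reached.
    def line_world(action):
        return 'left' if action < 0 else 'right' if action > 0 else 'origin'

    if __name__ == '__main__':
        layer = PerceptionActionLayer(action_space=[-2, -1, 0, 1, 2])
        layer.motor_babble(line_world)
        print(layer.intend('right'))  # an action babbling found to yield 'right'

In this sketch, no goal is built in during babbling; goal-directedness arises afterward, purely by reading the learned coupling in reverse, which is the sense in which intentionality is emergent rather than pre-specified.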

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 731593.