Reasoning jointly about perception and action requires interpreting the scene in terms of the agent's own potential capabilities. We propose a Bayesian architecture for learning sensorimotor representations from the interaction between perception, action, and the salient changes generated by robot actions. This architecture connects these three elements in a common representation: affordances. In this paper, we work towards a richer representation and formalization of affordances. Our experimental analysis shows both qualitative and quantitative aspects of the learned affordances. In addition, our formalization motivates several experiments exploring hypothetical operations between learned affordances. In particular, we infer the affordances of composite objects based on prior knowledge of the affordances of their elementary objects.
International Symposium on Experimental Robotics (ISER 2016), Oct 2016, Tokyo, Japan. https://hal.archives-ouvertes.fr/hal-01391427 — http://www.iser2016.org/