Bringing it Together

To consolidate these concepts, see the figure below for a potential instantiation of the system in a concrete setting. In this example, the system is applied to sensing and recognizing objects and scenes in a 3D environment using several different sensors, in this case touch and vision.

While this hopefully makes the key concepts described above more concrete and tangible, keep in mind that this is just one way in which the architecture can be instantiated. By design, the Platform can be applied to any application that involves sensing and active interaction with an environment. Indeed, this might include more abstract examples such as browsing the web, or interacting with the instruments that control a scientific experiment.

High-level overview of the architecture with all the main conceptual components, applied to a concrete example. Green lines indicate the main flow of information up the hierarchy. Purple lines show top-down connections, biasing the lower-level learning modules. Light blue lines show lateral voting connections. Red lines show the communication of goal states, which eventually translate into motor commands in the motor system. Every LM has a direct motor output. Information communicated along solid lines follows the CMP (containing features and pose). Discontinuations in the diagram are marked with dots on line ends. Dashed lines are the interface of the system with the world and subcortical compute units and do not need to follow the CMP. Green dashed lines communicate raw sensory input from sensors. Red dashed lines communicate motor commands to the actuators. The dark red dashed lines send sensory information directly to the motor system and implement fast, model-free policies. The large, semi-transparent green arrow is an example of a connection carrying sensory outputs from a larger receptive field directly to the higher-level LM.
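To make the CMP constraint in the caption more concrete, the sketch below shows what a message carrying features and a pose might look like. This is a hypothetical illustration, not the actual implementation: the class and field names (`CMPMessage`, `Pose`, `confidence`) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Where something is and how it is oriented, in a common reference frame."""
    location: tuple      # (x, y, z) position
    orientation: tuple   # e.g. a unit quaternion (w, x, y, z)

@dataclass
class CMPMessage:
    """Hypothetical sketch of a CMP-compliant message: features plus a pose."""
    features: dict       # sensed or modeled features, e.g. color, curvature
    pose: Pose           # pose associated with those features
    confidence: float = 1.0  # illustrative extra field; not mandated by the text

# Any connection following the CMP (the solid lines in the figure) would pass
# messages of this general shape, whether from a sensor module to an LM or
# laterally and hierarchically between LMs.
msg = CMPMessage(
    features={"color": (0.2, 0.5, 0.1), "curvature": 0.8},
    pose=Pose(location=(0.1, 0.0, 0.3), orientation=(1.0, 0.0, 0.0, 0.0)),
)
```

Because every CMP message has this common shape, modules can be composed freely: an LM does not need to know whether its input came from a vision sensor, a touch sensor, or another LM.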