Information about objects around us is essential for planning our own actions and for predicting those of others. Here, we studied pre-supplementary motor area F6 neurons with a task in which monkeys viewed and grasped (or refrained from grasping) objects, and then observed a human performing the same task. We found "action-related neurons" selectively encoding the monkey's own action [self-type (ST)], another agent's action [other-type (OT)], or both [self- and other-type (SOT)]. Interestingly, we also found "object-related neurons" exhibiting the same type of selectivity before action onset: distinct sets of neurons discharged when visually presented objects were targeted by the monkey's own action (ST), another agent's action (OT), or both (SOT). Notably, object-related neurons appear to signal one's own and the other's intention to grasp, as well as the most likely grip type to be performed, whereas action-related neurons encode a general goal-attainment signal devoid of any specificity for the observed grip type. Time-resolved cross-modal population decoding revealed that F6 neurons first integrate information about object and context to generate an agent-shared signal specifying whether and how the object will be grasped, which progressively turns into a broader agent-based goal-attainment signal as the action unfolds. Importantly, the shared representation of objects critically depends on their location in the observer's peripersonal space, suggesting an "object-mirroring" mechanism through which observers could accurately predict others' impending actions by recruiting the same motor representation they would activate if they were to act upon the same object in the same context.