"One step beyond...": Computational principles in social interaction
In social settings, the consequences of our actions typically depend on the actions of other agents, so successful outcomes require agents to adapt their behaviour to each other. Planning under such mutual adaptation is a challenging computational problem. Circumventing this complexity, socially-ignorant reinforcement learning can, in principle, optimise behaviour in the long run. But this only works in the isolated case of repeated exposure to the same task with the same other agents. In reality, our exposure to such situations is limited: we are more likely to encounter different agents in the same task, or the same agent in different tasks. Leveraging prior experience then requires generalization, both from a familiar agent to new settings and from previously encountered agents to novel ones. Such generalization can rely on various inferences, such as others' depth of strategic reasoning (e.g. how far to carry recursions like "you think that I think that you think that I will do...") and their social preferences (e.g. "you want us both to be better off" vs "you want to make sure you come out ahead of me"). Here, I will discuss some of the challenges of such social inference, present evidence that these inferences are indeed made, and introduce a new framework, based on hidden Markov models, for planning in social interactions.
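To make the recursive "you think that I think..." idea concrete, here is a minimal sketch of level-k reasoning in a 2x2 matrix game: a level-0 agent acts at random, and a level-k agent best-responds to a level-(k-1) partner, i.e. reasons exactly one step beyond them. The payoff matrix and the uniform level-0 policy are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Row player's payoffs: MY_PAYOFF[my_action, their_action].
# Illustrative prisoner's-dilemma-like values (0 = cooperate, 1 = defect).
MY_PAYOFF = np.array([[3.0, 0.0],
                      [5.0, 1.0]])
# Symmetric game: the partner faces the same payoffs from their own side.
THEIR_PAYOFF = MY_PAYOFF.copy()

def best_response(payoff, opponent_policy):
    """One-hot policy maximising expected payoff against a mixed opponent."""
    expected = payoff @ opponent_policy   # expected payoff for each own action
    policy = np.zeros(len(expected))
    policy[np.argmax(expected)] = 1.0
    return policy

def level_k_policy(k, my_payoff, their_payoff):
    """Level-0 plays uniformly; level-k best-responds to a level-(k-1) partner."""
    if k == 0:
        return np.full(my_payoff.shape[0], 1.0 / my_payoff.shape[0])
    opponent = level_k_policy(k - 1, their_payoff, my_payoff)
    return best_response(my_payoff, opponent)

for k in range(4):
    print(f"level-{k} policy:", level_k_policy(k, MY_PAYOFF, THEIR_PAYOFF))
```

And as one possible reading of the hidden-Markov-model framing (the talk's actual framework is not spelled out here), the sketch below filters a belief over a partner's latent type from their observed choices. The two-type state space and all transition and emission probabilities are invented for illustration.

```python
import numpy as np

types = ["prosocial", "competitive"]
prior = np.array([0.5, 0.5])              # initial belief over latent types
transition = np.array([[0.95, 0.05],      # assumed: types change only rarely
                       [0.05, 0.95]])
# emission[type, action] = P(action | type); action 0 = cooperate, 1 = defect
emission = np.array([[0.8, 0.2],
                     [0.2, 0.8]])

def filter_step(belief, action):
    """One HMM forward-filter update: predict, then condition on the action."""
    predicted = transition.T @ belief
    posterior = predicted * emission[:, action]
    return posterior / posterior.sum()

belief = prior
for action in [1, 1, 0, 1]:               # a hypothetical observed action stream
    belief = filter_step(belief, action)
    print(dict(zip(types, np.round(belief, 3))))
```

Such a filtered belief over the partner's latent type (or strategic depth) could then feed into planning, e.g. by best-responding to the policy expected under the current posterior.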