Context switching in machine minds
Among the many remarkable things the mind does, neuroplasticity stands in a league of its own. Central to this quality is the ability to construct and apply different cognitive models for different tasks. Recent developments in machine learning have been fairly successful at optimizing for a single task (supervised learning with backpropagation). This, however, is not enough for general intelligence, where the agent is required to form abstractions (On the Measure of Intelligence, Chollet). Common to all tasks is the fact that each one can be modelled mathematically and geometrically in a state space (S[ɸ]) with its state variable set ɸ. A neural network (NN[task]) is a universal function approximator and can be thought of as mapping a set of state variables along a manifold (M[task]); i.e., given {(X1,Y1),…,(Xn,Yn)}, NN[task] builds f : X -> Y, learned via gradient descent. This approach introduces a new neural network (NN[meta]) which is trained to translate between all M[task] in the state space S[ɸ], learning a meta-manifold (M[meta]) along which to traverse tasks, revealing shared parameters and, eventually, the latent model (l : task_m{x,y} -> task_n{x,y}), where x and y take on different meanings depending on the task (context). Eventually, we are left only with the state variables that optimize either for the tasks themselves or for translation across tasks. In this way the agent performs tasks through learning and switches context through model translation. The geometric interpretation of such a model is an intuitive playground for all meta-learners.
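A minimal sketch of this setup, for concreteness only; every concrete choice below (toy regression tasks, MLP architectures, joint training of the meta-network on paired task points) is an assumption added for illustration, not part of the proposal. Two task networks NN[task] each fit their own manifold M[task] over a shared state variable set ɸ, while a third network NN[meta] approximates the latent translation l by mapping (x, y) points of task_m onto the corresponding points of task_n.

```python
# Hedged sketch: two per-task approximators and a meta-network that learns to
# translate between their manifolds. All names and dimensions are hypothetical.
import torch
import torch.nn as nn

STATE_DIM = 4  # size of the state variable set phi (arbitrary choice)

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# NN[task]: one universal function approximator per task, f : X -> Y.
task_m = mlp(STATE_DIM, 1)
task_n = mlp(STATE_DIM, 1)

# NN[meta]: maps an (x, y) point on M[task_m] to a point on M[task_n],
# i.e. an approximation of l : task_m{x,y} -> task_n{x,y}.
meta = mlp(STATE_DIM + 1, STATE_DIM + 1)

# Toy data: the two tasks are different functions of the same state variables.
x = torch.randn(256, STATE_DIM)
y_m = x.sum(dim=1, keepdim=True)          # task m: sum of state variables
y_n = (x ** 2).sum(dim=1, keepdim=True)   # task n: squared norm

opt = torch.optim.Adam(
    list(task_m.parameters()) + list(task_n.parameters()) + list(meta.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    # Per-task supervised losses: each NN[task] fits its own manifold M[task].
    loss_tasks = loss_fn(task_m(x), y_m) + loss_fn(task_n(x), y_n)
    # Meta loss: NN[meta] learns to traverse from (x, y_m) pairs to (x, y_n)
    # pairs, revealing the shared state variables that survive translation.
    pred = meta(torch.cat([x, y_m], dim=1))
    loss_meta = loss_fn(pred, torch.cat([x, y_n], dim=1))
    (loss_tasks + loss_meta).backward()
    opt.step()
```

In this toy version, "context switching" amounts to routing a point through NN[meta] rather than re-training a task network, which mirrors the claim that the agent performs tasks through learning and switches context through model translation.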