Dr. Yihe Lu
Memory reactivation can be observed during sleep or wakefulness in human and rodent brains, and is believed to be crucial for memory consolidation (Lewis and Bendor, 2019). A similar strategy, namely rehearsal or replay, has proven effective in mitigating, or even overcoming, the catastrophic forgetting problem in neural network (NN) modelling and applications (Robins, 1995; Kumaran and McClelland, 2012). Generative replay (GR) (van de Ven, Siegelmann and Tolias, 2020) and experience replay (ER) (Káli and Dayan, 2004) are the two common replay strategies. While GR produces replay samples from random activations in a generative NN, ER revisits exact copies of past training samples preserved in memory storage. Although ER (without memory limits) yields better results and is thus deployed more widely in applications (e.g., machine learning), GR is computationally more efficient and biologically more plausible. In this study we chose restricted Boltzmann machines (RBMs) as our primary NN model. In addition to ER and GR, we consider a new strategy, cued generative replay (cGR), which uses replay cues, i.e., partially correct activations, rather than the completely random activations of standard GR. We propose two indices, evenness and exactness, to measure the quality of replay samples. GR, in contrast to ER, yielded more balanced but less accurate replay (high evenness, low exactness), but the performance of both was largely dependent on the replay amount. We found that cGR could outperform both by improving replay quality.
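The contrast between standard GR and cGR can be sketched with a toy RBM: GR seeds Gibbs sampling with a completely random visible activation, whereas cGR seeds it with a replay cue, a partially correct activation derived from a stored pattern. Everything below (network sizes, random weights, the cue-corruption scheme) is an illustrative assumption, not the study's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM with small random weights; in the study a trained RBM is assumed.
n_visible, n_hidden = 8, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One round of block Gibbs sampling: visible -> hidden -> visible."""
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
    return v

def generate_replay(v0, n_steps=20):
    """Run the Markov chain from an initial visible activation v0."""
    v = v0.copy()
    for _ in range(n_steps):
        v = gibbs_step(v)
    return v

# Standard GR: the chain starts from a completely random activation.
v_random = (rng.random(n_visible) < 0.5).astype(float)
gr_sample = generate_replay(v_random)

# cGR: the chain starts from a replay cue -- a stored pattern with
# half of its units re-randomised (hypothetical corruption scheme).
pattern = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
cue = pattern.copy()
corrupted = rng.choice(n_visible, size=n_visible // 2, replace=False)
cue[corrupted] = (rng.random(n_visible // 2) < 0.5).astype(float)
cgr_sample = generate_replay(cue)
```

Because the cue retains part of a stored pattern, the chain starts closer to a learned mode, which is the intuition behind cGR improving exactness over purely random initialisation.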