DFG-funded research unit FOR 2812 "Constructing scenarios of the past: A new framework in episodic memory"
Despite the large number of experimental and conceptual studies suggesting that episodic memory is generative, computational models almost exclusively adopt the storage view. In this project, we propose to develop a generative model for the encoding and retrieval of personally experienced episodes. Our model will describe the interplay between hippocampus and neocortex. We hypothesize that the hippocampus stores and retrieves selected aspects of an episode, which are necessarily incomplete, and that the neocortex plausibly fills in the missing information based on general semantic knowledge.
Specifically, our model will consist of two interacting neural networks: the semantic network and a network that mediates the interaction between the semantic network and the hippocampus. The first network is a restricted Boltzmann machine, a multi-layer recurrent network that learns statistical relationships between input units and fills in missing information with the most probable hypotheses given the statistics it has learned before. The input into this network will be images. If one trains such a network with face images and later feeds it an image containing only half a face, it can produce a reasonable reconstruction of the other half. Higher layers of the network naturally learn more abstract features of the images and relations among them, which can be viewed as semantic information. An additional input layer is added at the top to represent evaluations of the input, thought to be provided through social interactions or self-reflection.
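The pattern-completion behavior described above can be illustrated with a minimal sketch: a small binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) on toy patterns standing in for images, then asked to fill in a missing half by Gibbs sampling with the known units clamped. All sizes, patterns, and hyperparameters here are illustrative choices, not those of the actual project model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def hid_given_vis(self, v):
        p = sigmoid(v @ self.W + self.b_hid)
        return p, (rng.random(p.shape) < p).astype(float)

    def vis_given_hid(self, h):
        p = sigmoid(h @ self.W.T + self.b_vis)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1(self, v0):
        # One step of contrastive divergence: positive phase on the data
        # minus a single reconstruction (negative) phase.
        ph0, h0 = self.hid_given_vis(v0)
        pv1, _ = self.vis_given_hid(h0)
        ph1, _ = self.hid_given_vis(pv1)
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        self.b_vis += self.lr * (v0 - pv1)
        self.b_hid += self.lr * (ph0 - ph1)

    def complete(self, v, known, steps=100):
        # Gibbs sampling with the known units clamped to their values;
        # the unknown units settle to a probable completion.
        v = v.copy()
        for _ in range(steps):
            _, h = self.hid_given_vis(v)
            pv, _ = self.vis_given_hid(h)
            v = np.where(known, v, pv)
        return v

# Two toy patterns standing in for "whole images".
patterns = [np.array([1., 1, 1, 1, 0, 0, 0, 0]),
            np.array([0., 0, 0, 0, 1, 1, 1, 1])]
rbm = RBM(n_vis=8, n_hid=6)
for _ in range(2000):
    for p in patterns:
        rbm.cd1(p)

# Present only the left half; the network fills in the right half.
known = np.array([True] * 4 + [False] * 4)
cue = np.array([1., 1, 1, 1, 0.5, 0.5, 0.5, 0.5])
completed = rbm.complete(cue, known)
```

After training, clamping the first four units to the left half of the first pattern drives the unknown units toward that pattern's right half (values near zero), which is the completion-from-partial-input mechanism the face example describes.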
The second network is an auto-encoder, which takes an input, compresses it, and then reconstructs the input from the compressed representation. The input in this case is the activity of the restricted Boltzmann machine, and the compressed representation is stored in a very simple hippocampal model. At recall, the compressed representation is retrieved, reconstructed by the auto-encoder, and sent back to the Boltzmann machine, which cleans up and complements the representation and can reconstruct an image resembling the original.
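The encode-store-recall loop can be sketched with a small linear autoencoder and a plain dictionary standing in for the hippocampal store. The training data, dimensions, and the `store`/`recall` helpers are hypothetical illustrations, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for semantic-network activity patterns (one row per episode).
X = rng.random((20, 16))

# Linear autoencoder: 16 units -> 4-dimensional code -> 16 units.
n_code = 4
W_enc = rng.normal(0.0, 0.1, (16, n_code))
W_dec = rng.normal(0.0, 0.1, (n_code, 16))
b_code = np.zeros(n_code)
b_out = np.zeros(16)

lr = 0.05
for _ in range(5000):
    code = X @ W_enc + b_code
    recon = code @ W_dec + b_out
    err = recon - X                          # gradient of squared error
    g_dec = code.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    g_bc = (err @ W_dec.T).mean(axis=0)
    g_bo = err.mean(axis=0)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    b_code -= lr * g_bc
    b_out -= lr * g_bo

# "Hippocampus": only the compressed code is stored per episode.
hippocampus = {}

def store(episode_id, x):
    hippocampus[episode_id] = x @ W_enc + b_code

def recall(episode_id):
    return hippocampus[episode_id] @ W_dec + b_out

store(0, X[0])
reconstruction = recall(0)
```

The point of the sketch is the division of labor: the hippocampal store holds only a low-dimensional code, and the decoder reconstructs an approximation of the original activity, which the Boltzmann machine would then clean up and complement.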
We will test our hypothesis that the semantic and episodic systems benefit from each other in terms of sample efficiency and storage capacity, respectively. Attention will be simulated by selecting those units in the semantic network that are particularly important; only these will be stored in episodic memory. We will also investigate the effects of social interaction, self-reflection, and stress, and compare the results with experiments from the other projects in the consortium.
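One simple way to realize such attentional selection, shown here purely as an illustrative assumption, is to keep only the k most active semantic units and zero out the rest before storage:

```python
import numpy as np

def attend_top_k(activity, k):
    """Keep the k most active units; zero the rest before episodic storage.

    A hypothetical selection rule: 'importance' is approximated here by
    activation magnitude, one of several conceivable criteria.
    """
    idx = np.argsort(activity)[-k:]      # indices of the k largest activations
    gist = np.zeros_like(activity)
    gist[idx] = activity[idx]
    return gist

a = np.array([0.1, 0.9, 0.3, 0.7, 0.05])
selected = attend_top_k(a, 2)            # keeps 0.9 and 0.7, zeros the rest
```

Under such a rule, varying k or the importance criterion (e.g. weighting by the evaluation layer) would let the model probe how selection at encoding shapes later recall.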
The project will provide a quantitative basis for investigating the interaction between the semantic system, episodic memory, some aspects of stress, and value-based biases.
For more information visit: https://for2812.rub.de/
Publications
- Hierarchical Transformer VQ-VAE: An investigation of attentional selection in a generative model of episodic memory