Slow feature analysis (SFA) seeks transformations of a time series x(t) that produce signals with a high degree of temporal coherence. Most approaches to SFA can be formulated, at some level, as a generalized eigenvalue problem: the matrices underlying this problem describe the covariance structure and temporal statistics of x(t), the eigenvectors describe the optimal SFA transformation, and the eigenvalues describe the coherence of each output signal (see above image, left).

Reinforcement learning (RL) is a branch of machine learning focused on training artificial systems to solve complex decision-making problems. The primary framework for reinforcement learning is the Markov decision process (MDP), a model in which an agent takes actions in some state space and, as a result of those actions, transitions to other states and observes various rewards (see above image, right). An important question in RL research is how best to represent the state space for a given task. One answer is the successor representation (SR), which represents each state based on expectations about which states are likely to follow it. For a state space with N states, this yields a vector of length N for each state, and an N×N matrix for the overall state space, sometimes referred to as the SR matrix.

This paper considers how SFA performs for long time series when the input x(t) is of the type typically used for finite MDPs. In this setting, it is shown that the generalized eigenvalue problems corresponding to multiple variants of SFA involve matrices related to the SR of the RL agent. This finding sheds light on various empirical results indicating that SFA and the SR are sensitive to similar types of information.