Thursday 07 February
11:00 - 12:00
Several recent studies have shown that neural activity in vivo tends to be confined to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity, and it has been suggested that low dimensionality is indicative of some other connectivity pattern in neuronal networks. Interestingly, the structure of the intrinsic manifold of the network activity puts constraints on learning: for instance, animals find it difficult to perform tasks that require activity to move out of the intrinsic manifold. A straightforward way to generate a low-dimensional activity manifold is to assume that the connectivity matrix has low rank, but this alone is not sufficient to explain why it is difficult to change the manifold.
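The low-rank mechanism mentioned above can be illustrated with a minimal sketch (not taken from the talk): a linear rate network whose connectivity is a rank-2 projection, driven by isotropic white noise. The gain `a`, the network size `N`, and the use of discrete-time linear dynamics are illustrative assumptions; recurrent amplification along the two connectivity modes concentrates variance there, which PCA reveals as a small participation ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, a = 200, 6000, 0.998  # assumed size, duration, and mode gain

# Rank-2 connectivity J = a * (u1 u1^T + u2 u2^T), with u1, u2 orthonormal
U, _ = np.linalg.qr(rng.standard_normal((N, 2)))
J = a * (U @ U.T)

# Discrete-time linear rate dynamics driven by isotropic white noise
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = J @ x + rng.standard_normal(N)
    X[t] = x
X = X[1000:]  # discard the initial transient

# PCA spectrum of the activity covariance, sorted in descending order
lam = np.linalg.eigvalsh(np.cov(X.T))[::-1]
frac2 = lam[:2].sum() / lam.sum()     # variance captured by the top 2 PCs
pr = lam.sum() ** 2 / (lam ** 2).sum()  # participation ratio (effective dim.)

print(f"top-2 PC variance fraction: {frac2:.2f}")
print(f"participation ratio: {pr:.1f} out of N = {N}")
```

With homogeneous (full-rank random) connectivity of the same spectral radius, the same measurement yields a participation ratio of order N; the rank-2 structure is what collapses it to a few dimensions.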
In my talk I will discuss different mechanisms that lead to low-dimensional activity in networks of spiking neurons. I will consider two possibilities for the emergence of low-dimensional activity: (1) the network exhibits oscillations and synchrony, which reduce the effective dimensionality; (2) the network has evolved to perform a low-dimensional task and therefore exhibits low-dimensional activity. I will give an overview of various approaches to designing functional networks with a desired dimensionality. Finally, using these network models, I will provide a biologically plausible explanation of why altering the intrinsic manifold of neuronal activity is difficult and, by analogy, why learning is easier when it does not require the neural activity to leave its intrinsic manifold.