Day 7 - Control Theory in biological and artificial networks, Elisa Donati, Matej Hoffmann, Jean-Jacques Slotine, Rodolphe Sepulchre, plus Allen connectome dataset
today's authors: Eleni Nisioti, Muhammad Aitsam, Tobi Delbruck
Giacomo has been collecting statistics on workshop participant interests.

After a weekend filled with a fancy dinner, a late-night venture to Alghero, and a boat trip along the Sardinian coastline, we return to the lecture room for the discussion that will probably contain the most equations.
Our hotel seen from the boat
We started off with a more theoretical/computational perspective and then switched to a more biological one.
The first speaker was Rodolphe Sepulchre from KU Leuven (sitting to the right of Emre Neftci in the photo above). He began by drawing a table with two columns, computation and adaptation, whose rows were examples of each approach. For example, the first row contained ChatGPT in one column and the smart grid in the other. Both are engineering systems that work at large scale, but they do so differently.
Rodolphe said that he basically had two reasons for making this distinction between computation and adaptation:
- for the former the main challenge is learning and for the latter it is regulation
- the way you interact with the system is different: in the case of computation it is symbolic, in the other case it is physical
This distinction sparked a discussion with the audience: is it a valid distinction? Given recent developments in computation (we no longer interact with computers only through symbols, and computation is not just the von Neumann architecture) and in adaptation (grids, for example, have become cyber-physical), maybe the distinction is no longer meaningful.
Rodolphe explained that this is just a metaphor and, like all metaphors, it is wrong but may be useful.
Rodolphe drew a series of spikes and said that spiking is a form of communication that mixes the analogue and the discrete, which makes it impossible to use the usual transfer-function formulation of linear control to design controllers for it.
He then drew a black box containing a neural circuit and said that we would talk about a very simple circuit: a single neuron. Control theory is concerned with learning how to model this neuron. It has not been successful at contributing to our understanding of intelligence, because it has been stuck on the question of what the right model is.
The neuron has two terminals: you can insert current and measure its voltage.
To model it under the control-theory paradigm, we can use a transfer function: a function that maps the neuron's input (u) to its output (y). It operates in the Laplace domain, hence the complex variable s.
In control theory you do not directly work with physical elements (like resistors and capacitors) but with mathematical functions. To draw a parallel to machine learning, it's like modelling a neuron as matrix multiplication.
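To make the parallel concrete, here is a minimal sketch (with invented parameters, not from the talk) of the simplest such model: a passive membrane as a first-order transfer function H(s) = 1/(Cs + g), simulated in the time domain with Euler steps.

```python
# Hypothetical passive-membrane parameters (illustrative only)
C, g = 1.0, 0.2          # capacitance, leak conductance
dt, T = 0.01, 400.0      # Euler step size, total simulated time
u = 1.0                  # constant input current

# Transfer function H(s) = Y(s)/U(s) = 1/(C s + g);
# in the time domain this is C dV/dt = u - g V
V = 0.0
for _ in range(int(T / dt)):
    V += dt * (u - g * V) / C

# By the final-value theorem, V settles at H(0) * u = u / g
print(round(V, 3))  # -> 5.0
```

The step response settles exponentially with time constant C/g, which is exactly the behaviour the transfer function predicts without ever naming a physical resistor or capacitor.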
What steps do we need to take to make this simple approach (meaning the transfer function) useful for biologists? Rodolphe discussed three different models of a neuron on this path:
- the conductance-based modelling framework. For this we just need a capacitor (where the voltage is the integral of the current) and a resistor. This is exemplified by the well-known Hodgkin-Huxley model.
- the second is the resistive framework (Hopfield)
- the third is Rodolphe's work on relaxation [todo: Tobi said he can explain this]
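The first framework can be caricatured in a few lines. Below is a minimal sketch, assuming illustrative parameters: a capacitor plus leak conductance, with a simple threshold-and-reset rule standing in for the full Hodgkin-Huxley gating machinery.

```python
# Minimal conductance-based caricature: leaky membrane + spike-and-reset.
# All numbers are illustrative, not from any talk.
C, g_leak = 1.0, 0.1               # capacitance, leak conductance
V_rest, V_thresh, V_reset = 0.0, 1.0, 0.0
dt = 0.1
I_in = 0.2                         # constant injected current

V, spikes = V_rest, 0
for _ in range(1000):
    V += dt * (I_in - g_leak * (V - V_rest)) / C  # C dV/dt = I - g(V - V_rest)
    if V >= V_thresh:                             # threshold crossing -> spike
        spikes += 1
        V = V_reset
print(spikes)
```

Because the asymptotic voltage I/g exceeds the threshold, the neuron spikes periodically; dropping the reset rule recovers the purely linear capacitor-resistor model above.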
The next speaker is Elisa Donati from INI. She talked to us about controlling a robot with an end-to-end neuromorphic design.
Elisa began by describing the task the robot needs to solve: a robotic arm has two joints whose angles it can control to reach a target in space. The target is known and the robot needs to infer the joint angles that will reach it. For this, it needs to solve the inverse kinematics, which Elisa does using spiking neural networks.
Elisa drew the two grids corresponding to the two spaces: the target space where x and y are Cartesian coordinates and the space of joints where the two axes are the joint angles.
To learn the mapping from one space to the other, Elisa uses a spiking neural network with two populations of neurons, one for each space. The goal of learning is a weight matrix representing the synaptic weights between the pre-synaptic (first population) and post-synaptic (second population) neurons.
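For reference, the inverse-kinematics problem that the network learns has a classical closed-form solution for a two-link planar arm. The link lengths and target below are illustrative, not from Elisa's setup.

```python
from math import atan2, acos, cos, sin

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar two-joint arm
    (elbow-down solution). Link lengths are illustrative."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = acos(max(-1.0, min(1.0, c2)))   # elbow angle
    theta1 = atan2(y, x) - atan2(l2 * sin(theta2), l1 + l2 * cos(theta2))
    return theta1, theta2

def fk_two_link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles -> Cartesian endpoint."""
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, y

t1, t2 = ik_two_link(1.2, 0.8)
print(fk_two_link(t1, t2))  # recovers (approximately) the target (1.2, 0.8)
```

The spiking network effectively learns this map from examples rather than computing it analytically, which is what makes the approach portable to arms whose geometry is not known in closed form.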
We closed this talk with a comment from the audience: Elisa's work exemplifies the importance of feedback control in neuromorphic hardware. The negative feedback loops help regulate the circuit despite the inherent fixed pattern noise of the FET mismatch.
The next speaker began with traditional robotics, where the typical example is an agent that navigates an environment commonly abstracted as a two-dimensional grid, moving around objects to reach some target. You do not need to care about the skeleton in order to solve the task.
By disregarding the mechanics, however, we may be giving up aspects that are useful for navigation.
He drew a robotic arm with one joint. Normally, you would have a servomotor that can fixate on a target using closed-loop control. Such a robot would be very limited though: we cannot expect Ameca to juggle balls.
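The closed-loop servo idea can be sketched in a few lines: measure the joint angle, compare with the target, and command a velocity proportional to the error. Gains and targets below are invented for illustration.

```python
# Hypothetical proportional servo loop for a one-joint arm.
target = 0.7        # desired joint angle (rad), illustrative
kp = 2.0            # proportional gain
dt = 0.01           # control-loop time step

theta = 0.0
for _ in range(1000):
    error = target - theta       # closed loop: measure, compare...
    theta += dt * kp * error     # ...then correct with a proportional command
print(round(theta, 4))  # -> 0.7
```

The joint converges exponentially to the target, which is exactly why such a servo is good at fixating on a set-point and bad at anything dynamic like juggling.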
How does biology do it?
He drew a muscle, which is a contractile element connected in parallel to a spring (to measure its stretch) and in series with another spring (to capture its typical length).
He then drew a figure of a hip/knee/ankle. In this case, two muscles surround the leg and their dynamics depend on each other: you can fire one and not the other, but if one moves then the other does too. By pulling with both the flexor and the extensor you can actively stiffen the joint. This is an example of how introducing muscles can help simplify the control of dynamic oscillations. It is called the agonist-antagonist actuator model.
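A toy version of the agonist-antagonist idea can be written down directly (this model and its parameters are invented for illustration, not from the talk): treat each muscle as a spring whose tension scales with its activation, so that the activation difference sets the joint's equilibrium angle while the activation sum sets its stiffness.

```python
# Toy agonist-antagonist joint (illustrative):
# flexor tension  a_f * (c - k*theta), extensor tension a_e * (c + k*theta)
# net joint torque tau(theta) = a_f*(c - k*theta) - a_e*(c + k*theta)
def equilibrium_and_stiffness(a_f, a_e, c=1.0, k=2.0):
    theta_eq = c * (a_f - a_e) / (k * (a_f + a_e))  # angle where tau = 0
    stiffness = k * (a_f + a_e)                     # -d(tau)/d(theta)
    return theta_eq, stiffness

# Same activation difference, but co-contracting stiffens the joint:
print(equilibrium_and_stiffness(0.6, 0.4))   # relaxed
print(equilibrium_and_stiffness(0.9, 0.7))   # co-contracted: higher stiffness
```

The point of the sketch is that posture and stiffness are controlled independently: a single servomotor has no analogue of the co-contraction term.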
How can this be relevant for applications? ... mentioned prosthetics: when a limb has to be amputated, the replacement you get is often hard to control. What you could do instead is put small motors in the ankle that leverage the agonist-antagonist model.
Could this be the right moment to start such a start-up?
In an interesting afternoon session, Nuno da Costa and Casey Schneider-Mizell from Allen Brain Inst. introduced one of the Allen Brain datasets.
This connectome dataset was collected from 1mm^3 of mouse visual cortex intersecting V1 and 3 neighboring areas.
They have >100k neurons and roughly 500M synapses, along with their dendrites and axons.
The reconstruction extends through all 6 layers.
It is based on 30k sections of 40nm each that are automatically placed on a tape and imaged by a group of EM machines. Post-processing by AI produces the connectome from the 150M images, which are stitched and aligned; the segmentation is done by CNNs.
30k of the neurons are checked by humans.
98% of the synapses are reconstructed.
A lot of effort went into proofreading the data so that people could use it with some confidence bounds.
Following up on the morning's discussion about astrocytes from Jean-Jacques Slotine, Nuno could look up the fact that in this cortex, astrocytes connect to 40k synapses.
Humans have made more than 1M edits to the dataset to correct it.
Axons can be really hard: they are thin and can be missed at a fold in the section, and chopping off an axon has big downstream consequences because of its fanout.
The number of synapses targeted by an axon ranges from 1.5k for excitatory neurons to 15k for inhibitory ones. And almost always there is only one synapse per cell pair, except for thalamic axons, which form two.
Casey described their observation that simply classifying neurons by the morphology around the cell body lets them split the neurons among the major classes: by hand-labeling only 1.3k cells, they could label 80k other cells without worrying about fan-in, fan-out, or dendrite/axon layering and shapes.
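This label-propagation workflow can be caricatured with synthetic stand-in data: hand-label a small set of cells by perisomatic features, then classify the rest with a nearest-neighbor vote. The features, numbers, and classes here are all invented for illustration; the Allen pipeline is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "classes" (think excitatory vs inhibitory) in a
# made-up 2D perisomatic feature space
labeled = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                     rng.normal(2.0, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

def classify(x, k=5):
    """k-nearest-neighbor vote over the hand-labeled cells."""
    d = np.linalg.norm(labeled - x, axis=1)   # distance to every labeled cell
    nearest = labels[np.argsort(d)[:k]]       # labels of the k closest
    return np.bincount(nearest).argmax()      # majority vote

print(classify(np.array([0.1, -0.1])), classify(np.array([1.9, 2.1])))
```

The appeal of the approach is the leverage: a small, carefully labeled seed set does the work for the whole volume, as long as the chosen features separate the classes well.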
Nuno and Casey also explained "Peters' rule", which came from Alan Peters' early work showing that thalamic input to cortex followed a rule of 20% inhibitory and 80% excitatory targets based on anatomical overlap; the name itself comes from a paper with Braitenberg called "Peters' rule and its exceptions".
There were many questions from the audience about this dataset. It is almost all public, except for a bit they are still not 100% confident about, related to an interareal portion.
Rodney congratulated the team for this amazing contribution and asked how the Allen team would like to collaborate. Casey responded that they designed their site for user contributions that improve the utility of the data or add metadata. They also have a model where users can apply for proofreading money, on the condition that the new data is immediately public.
That closed this very useful session.
*****
The hotel cooks were very intrigued by Ameca
Swimming in the open. Our boat did not go unnoticed by the seagulls