Day 9 - Brain development and self-construction technology - Matthew Cook, Stan Kerstjens, Rodney Douglas, Christoph von der Malsburg, Dániel Barabási, Roman Bauer, Anthony Zador, Nuno da Costa

 


 An afternoon discussion group

How do you build a brain?

Usually in this workshop we talk about how brains work, but in today's discussion we will talk about how they construct themselves.

The first speaker is Anthony Zador from Cold Spring Harbor Laboratory who has spent his career studying this question.

Tony said that he often finds molecular biology a bit cluttered in its terminology: because of its naming practices, papers read like alphanumeric soup. He is instead interested in identifying what makes brain development interesting.

He started by drawing a DNA strand on the left and, on the right, a weight matrix representing the connectome of the agent at birth.


This task is demanding from an information perspective: the DNA has about 3 billion nucleotides (each taking one of the four bases A, T, G, C), while the human brain has about 10^11 neurons. This means that the matrix requires far more bits to specify than the genome can hold. This gives rise to what Anthony calls the genomic bottleneck.
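These orders of magnitude can be checked with a quick back-of-the-envelope calculation. The synapse count of ~10^14 below is a standard textbook estimate, not a number given in the talk:

```python
import math

# Back-of-the-envelope information budget for the "genomic bottleneck".
# All numbers are rough order-of-magnitude estimates.
nucleotides = 3e9                 # length of the human genome
genome_bits = nucleotides * 2     # 4 possible bases -> 2 bits per nucleotide

neurons = 1e11                    # neurons in the human brain
synapses = 1e14                   # ~10^3 synapses per neuron (rough estimate)
bits_per_synapse = math.log2(neurons)  # bits needed to name one target neuron

connectome_bits = synapses * bits_per_synapse

print(f"genome capacity:  {genome_bits:.1e} bits")
print(f"connectome spec:  {connectome_bits:.1e} bits")
print(f"shortfall factor: {connectome_bits / genome_bits:.0f}x")
```

Even with generous assumptions, the connectome needs several orders of magnitude more bits than the genome can store, which is the whole point of the bottleneck argument.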

He then talked about how DNA becomes RNA and then proteins. The cell nucleus holds the DNA, and the protein-coding sequences are translated into amino acids. The human body has about 20,000 different proteins. The impressive thing is that, if you take human DNA and put it in the egg of some other species, you will get a human. This means that the DNA, which he calls the floppy disk, can run on diverse architectures. The reason we look at RNA instead of proteins is that it is easier to understand the DNA-to-RNA mapping than to get the whole picture.

How did this machinery appear? RNA is a strange molecule: it can both replicate itself and express proteins. DNA is a more stable storage medium, however, which is why it appeared later. You can make an organism that is just RNA, though, such as Spiegelman's monster.

The process of going from one cell to a full organism is called development. He then drew something that looked like fingers and said there are some interesting puzzles in development: how do your two hands grow to be so similar when they are so far apart from each other, given that growth operates only on local information?

Also, the instructions are encoded in our genome and every cell inherits the same genome, so how do the cells become so different?

We turn back to the problem of connectomics: the DNA can be thought of as a compressed version of the connectome, and the compression must be lossy. What would a simple developmental rule look like? You can put your neurons on a grid and connect each one to its four neighbors. But this architecture is not very interesting; it will not solve many problems. So for the past five years he has been working on how to encode more interesting structures.
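A rule like this is an extreme example of compression: the connectome it generates grows with the number of neurons, but the rule itself does not. A minimal sketch:

```python
# A connectome generated by a constant-size developmental rule:
# place n*n neurons on a grid and connect each to its four neighbours.
# The rule needs only a handful of bits to encode, no matter how large n is.
def grid_connectome(n):
    edges = set()
    for r in range(n):
        for c in range(n):
            i = r * n + c
            for dr, dc in ((1, 0), (0, 1)):  # down and right; the other two
                rr, cc = r + dr, c + dc      # directions are the same edges
                if rr < n and cc < n:
                    edges.add((i, rr * n + cc))
    return edges

# total undirected edges on an n*n grid: 2 * n * (n - 1)
print(len(grid_connectome(4)))   # 24
```

The interesting question is exactly the one Tony poses: what do rules look like that are still compact but produce connectomes that can solve hard problems?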

Anthony believes that most behaviour is innate: the connectome we are born with contains most of the knowledge we need to survive. When he was a kid he was bothered by the idea that people are innately afraid of snakes and mice are innately afraid of foxes. The size of the bottleneck differs across species.

How do cells decide which other cells to connect to? Cells carry molecular markers, and other cells connect to them only if the markers have a matching form.

He said that he has worked on two different models of development, without giving too many details about them. The first used one network to predict the weights of another network. The second was very similar to the one Wolfgang Maass described in his lecture, a Bayesian neural network.
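Tony did not give details, but one common way to set up "a network that predicts the weights of another network" is to give every neuron a short code vector and generate each weight from the pre- and post-synaptic codes. The sketch below illustrates that compression idea; it is not his actual model:

```python
import random

random.seed(0)

D, N = 8, 100   # D: size of each neuron's code (the bottleneck); N: neurons

# a short "developmental" code vector per neuron, plus a small shared rule G
embed = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
G = [[random.gauss(0, 1) for _ in range(D)] for _ in range(D)]

def weight(i, j):
    # w_ij = embed[i] . G . embed[j]: the weight depends only on the two codes
    return sum(embed[i][a] * G[a][b] * embed[j][b]
               for a in range(D) for b in range(D))

W = [[weight(i, j) for j in range(N)] for i in range(N)]

n_params = N * D + D * D
print(len(W) * len(W[0]), "weights generated from", n_params, "parameters")
```

The point of the construction is the ratio: N*N weights come from only O(N*D + D*D) underlying numbers, mimicking a genome that is far smaller than the connectome it specifies.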

Tony finished by saying that he expected to see interesting structure, such as modularity, emerge in these models, but perhaps you need to add more features to the model.

*****

The next speaker was Stan Kerstjens who works with Tony as a post-doc.

Stan started with a motivating application for studying development that he thought would appeal the most to the neuromorphic crowd: how do you optimize the placement of components on a chip? Andreas Andreou said that the analogy was not right as engineers have found solutions to this problem.

Similarly to engineers, nature has found a solution. In the self-construction paradigm, organisms hold all the information necessary to reproduce themselves.

He then drew a diagram of how a network unfolds over time. Biology does not have an external global reference frame for this unfolding; the frame arises within the body. How do cells know where to go and how to connect?

The neuroscientists said that we already have an intuition for this: we know that cells follow chemical gradients, and Turing patterns are an example of how this mechanism can give rise to complex patterns. But how do you connect over long distances?

Stan then briefly restated the standard theory: axons find their way around the body by taking gradient steps. But a large cost must be paid to regulate these gradients: the axon needs to know which molecules to follow along its entire path, and we don't have a theory that explains how this information could be encoded in the genome.
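Gradient-based guidance itself is easy to sketch: the growth cone only ever compares concentrations at neighbouring positions, yet ends up at the source. A toy version (the grid, the field, and the source location are all illustrative):

```python
# A growth cone following a chemical gradient using only local information:
# at each step it senses the concentration at neighbouring positions and
# moves uphill, stopping at a local maximum (the source).
def follow_gradient(field, start, steps=100):
    """field: dict (x, y) -> concentration; start: (x, y) position."""
    pos = start
    for _ in range(steps):
        nbrs = [(pos[0] + dx, pos[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        nbrs = [p for p in nbrs if p in field]
        best = max(nbrs, key=field.get, default=pos)
        if field.get(best, float("-inf")) <= field[pos]:
            break                         # no uphill neighbour: at the source
        pos = best
    return pos

# concentration falls off with (Manhattan) distance from a source at (5, 5)
source = (5, 5)
field = {(x, y): -abs(x - source[0]) - abs(y - source[1])
         for x in range(10) for y in range(10)}
print(follow_gradient(field, (0, 0)))  # reaches (5, 5)
```

The hidden cost Stan points at is not in this loop, but in the table the axon would need: which field to follow on which segment of a long, winding path.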

Stan proposes an alternative: a simple fact of development is the differentiation process, which can look like this:

[todo: insert figure with tree]

We start with a cell that, at each developmental step, divides into two cells. Every time it divides, its children inherit its state plus a delta for the upper branch and minus the delta for the lower branch. The deltas accumulate. As divisions keep happening, the tree spreads in both physical and genomic space. Siblings are closer to each other than cousins, so the genomic space that arises from differentiation is a good description of the physical space. This means that, in order to navigate in physical space, the cells do not have to know their location in space, just their encoding in genomic space. Thus, Stan's main point is that differentiation naturally solves the problem cells need to solve in order to wire up: navigation.
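The tree picture can be sketched in a few lines. To keep the example deterministic, the delta at level k is taken to be the k-th basis vector; the real proposal would use whatever expression changes accompany differentiation:

```python
# One progenitor divides repeatedly; at each division the two children
# inherit the parent's expression state plus or minus a level-specific delta.
def divide(state, level, depth):
    if level == depth:
        return [state]
    delta = [1.0 if i == level else 0.0 for i in range(depth)]
    upper = [s + d for s, d in zip(state, delta)]
    lower = [s - d for s, d in zip(state, delta)]
    return divide(upper, level + 1, depth) + divide(lower, level + 1, depth)

depth = 4
leaves = divide([0.0] * depth, 0, depth)   # 16 terminal cells

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# siblings differ only in the last delta; distant cousins differ in all of them
print(dist(leaves[0], leaves[1]))    # 2.0
print(dist(leaves[0], leaves[-1]))   # 4.0
```

Distance in this expression space mirrors lineage distance, which is exactly the property that lets a cell use its expression state as an implicit coordinate.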

He then entered a discussion with the audience, especially the neuroscientists, on the assumptions this model makes and how it differs from existing theories.

*****

The next speaker was Dániel Barabási from Harvard University who started by stating three principles he believes govern brain development:





  • Principle 1:  the genome needs to encode a lot of information
  • Principle 2: development happens under only local rules
  • Principle 3: there is a combinatorial space of options

 

How does the genome encode all this information? He said there are three important ideas for answering this question:

  • the first idea is growth, by which he means the process of differentiation and division of cells
  • the second is path-finding: to connect to a specific cell and fulfill its role, a cell needs to navigate to a specific point in space, possibly far away, by extending its axon. This defines the cell's morphology
  • the third is synapse formation: deciding whom to connect to. This happens after the axon has reached the area of interest and must differentiate among the small number of cells around it


He then went into more details for these three ideas.

For growth, you need to specify a weight matrix that describes the connectivity of all neurons. You assume that all neurons start with the same genome but express different genes. The interesting question here is how these gene-expression profiles, which we call cell types, are chosen, a problem on which he has worked with Stan.

The problem of axon growth can then be modelled with the function [R, L] = f(x, E), meaning that at each timestep a cell makes a navigation decision in space (for example, go Left or Right?) based on its current gene expression x and the conditions E in its environment. The input from the environment is usually some low-dimensional function of your neighbors, and the process of exchanging this information is called signaling.
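One hypothetical reading of [R, L] = f(x, E) as code, where both the readout of x and the meaning of E are invented for illustration:

```python
# Toy navigation rule [R, L] = f(x, E): a growth cone turns right or left
# depending on its gene-expression state x and a local environmental signal E.
def f(x, E):
    # x: gene-expression vector; E: sensed signal difference (right minus left)
    sensitivity = sum(x)     # toy readout: net attraction (+) or repulsion (-)
    drive = sensitivity * E
    return "R" if drive > 0 else "L"

print(f([0.2, 0.5], E=+1.0))  # R: attracted cone, signal stronger on the right
print(f([0.2, 0.5], E=-1.0))  # L: same cone, signal stronger on the left
print(f([-0.7], E=+1.0))      # L: negative sensitivity means repulsion
```

The essential feature is that the same environment E produces different decisions for different expression states x, which is how one signaling landscape can route many axon types.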

He then modeled synapse formation as a matrix multiplication: the weight between neurons i and j comes from multiplying the type vector of neuron i with a rule matrix and the type vector of neuron j.
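This bilinear rule is easy to write down: with one-hot type vectors, the product reduces to looking up a single entry of the rule matrix. A sketch (the sizes and the random rule are illustrative):

```python
import random

random.seed(1)

n_types, n_neurons = 3, 10
# genome-encoded rule matrix O: how strongly type a connects to type b
O = [[random.gauss(0, 1) for _ in range(n_types)] for _ in range(n_types)]
types = [random.randrange(n_types) for _ in range(n_neurons)]

# With one-hot type vectors t_i, the bilinear form t_i . O . t_j
# collapses to a lookup: W[i][j] = O[type of i][type of j].
W = [[O[types[i]][types[j]] for j in range(n_neurons)]
     for i in range(n_neurons)]

print(len(W), "x", len(W[0]), "weights from a", n_types, "x", n_types, "rule")
```

The compression again comes from sharing: any number of neurons of the same pair of types reuse the same rule-matrix entry, so the genome only has to store O, not W.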

This model can be quite boring because the system will always stay at the same fixed point. You can add perturbations, which will give it some dynamics, but it will still be a stable system.

He said that an important question for growth is how you get a diversity of cell neighborhoods even with such a simple model. He said that cell migration may play a role in this.

He said that this is something that people doing transcriptomics often overlook. Although we agree that the process of growth is not well understood, since growth destabilizes the environment and changes the landscape on which cells navigate, they usually approach the problem in an oversimplified way: they collect data about the diversity of cell types across development, knock out genes, and check whether they still get the same type. The controversial point he is trying to raise is that people also think you can infer morphology and connectivity from the genes alone, when actually a complex dynamical process is involved.

His final question was: once you have a morphology, how do you make a connection? There are proteins on the surfaces of cells, for example a blue one and a red one, and a cell has an operator that reads them and detects whether they are compatible. He mentioned a past work of his on this question.

*****

The next speaker, Roman Bauer from the University of Surrey, started by saying that he is interested in understanding the biological rules that describe how cells interact using chemical, mechanical and electrical information. He uses agent-based modeling: a technique where you design an agent (for example, a neuron) and then study how local interactions in a group of agents give rise to certain dynamics.

He divided the biological events that govern growth into seven categories:

  • morphology-change
  • division/extension
  • movement
  • secretion
  • detection
  • death
  • synapse creation
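A generic agent-based loop over a subset of these event categories might look like the sketch below. The classes, thresholds, and probabilities are invented for illustration and are not any real simulator's API:

```python
import random

random.seed(0)

# Each agent is a cell on a 1-D line that, every timestep, fires one of the
# biological events above: secretion, detection, death, division, or movement.
class Cell:
    def __init__(self, pos):
        self.pos = pos
        self.alive = True

    def step(self, world):
        world.secrete(self.pos)                 # secretion
        signal = world.detect(self.pos)         # detection
        if signal > 2.0:
            self.alive = False                  # death by overcrowding
        elif random.random() < 0.3:
            world.spawn(Cell(self.pos + 1))     # division/extension
        else:
            self.pos += random.choice([-1, 1])  # movement

class World:
    def __init__(self):
        self.cells, self.newborn, self.chem = [], [], {}

    def secrete(self, pos):
        self.chem[pos] = self.chem.get(pos, 0.0) + 1.0

    def detect(self, pos):
        return self.chem.get(pos, 0.0)

    def spawn(self, cell):
        self.newborn.append(cell)

    def step(self):
        for c in self.cells:
            c.step(self)
        self.cells = [c for c in self.cells if c.alive] + self.newborn
        self.newborn = []

world = World()
world.cells = [Cell(0)]
for _ in range(10):
    world.step()
print(len(world.cells), "cells after 10 steps")
```

Even this toy version shows the agent-based-modeling point: the population dynamics are not written anywhere, they emerge from purely local per-cell rules.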

He then recommended BioDynaMo, a software library for simulating these interactions.

*****

The next speaker, Christoph von der Malsburg, talked about his work on understanding how the retina and tectum connect in the ... brain: what is amazing about this connection is that the fibers growing from the retina to the optic tectum retain the geometric relations they had in the retina. Christoph built a computer simulation to understand how this happens.

Christoph said that the previous theories were wrong in suggesting that cells from the retina have some target location in mind and that axons follow chemical gradients that lead them directly to their target.

His theory is instead this: first, markers arise in the retina that express the neighborhood relationships between cells (in the form of rings). Then the cells induce the same spatial relationships when they arrive at the tectum, based on these markers. Thus neurons that are closer to each other in the tectum will be more similar to each other. The arriving axon from the retina induces, through signaling, the type of the cell in the tectum. This is the reverse of the classical picture, where the arriving axon detects a pre-existing marker.

Christoph's theory emphasizes collaboration between neurons and that order arises out of these collaborative dynamics without requiring precise chemical gradients for axon guidance.

You can read more about this model in Christoph's papers from the 80s and a later review.

*****

The last speaker was Florian Engert, who talked about how knowledge and competency can be encoded in the genome.

He reminded us of the marine iguana video he showed us the other day that exemplifies how an animal can possess innate knowledge, like running away from predators and going to safety, within a few minutes of birth. His proposal is that this information is in the genome.

How did this ability of the marine iguana arise in evolution? Florian's narrative goes like this: originally the iguana's brain is wired for mating, the most significant problem it needs to solve, which means the brain is wired to make it approach other agents. Then snakes appear in its niche and all the lizards but one go extinct: that one has a random mutation that makes it run away from others, including snakes. A switch in axon pathfinding thus turns liking snakes into fearing them.

We ended the discussion with a question on whether electrical activity is important while the brain develops. Should we consider spikes in the models we discussed today? Our current impression is that growth happens mostly under chemical control, but there may be critical periods in which electrical activity plays a role.

