My site currently has two purposes. You can find an overview of my work on the Home / Info page and a collection of material not fit for anything else on the Neuroscience log. These pages are in English. I also keep a personal lifelog in Dutch for family and friends.
Regarding all files available for download (including the design of the site itself), unless otherwise mentioned, please note:
Wider than the Sky: The Phenomenal Gift of Consciousness
Gerald Edelman's short book on how the brain produces consciousness may convince a lot of people that consciousness is not a mystery, but a scientifically approachable phenomenon. There are three issues I have with Gerald Edelman's point of view.
First, he keeps insisting that the brain is not a computer. If he means that the brain is not a 21st-century PC, he's right. If he means that all comparisons are misguided, however, he's wrong. Each neuron is a cell in which only genetically preprogrammed processes are going on: it reads input and produces output. The brain, then, is a self-organizing network of some ten billion simple computers, which can, in theory, be simulated by a single computer. Moreover, the brain's 'task' is to make sense of multiple streams of complex sensory input and to produce competent behavior as output. If we refuse to see the brain as a biological information-processing device bound by the laws of physics, there is no way we can make sense of what it does. In this light, reviewing the similarities and differences between computers and brains is necessary.
Second, Edelman repeatedly states that consciousness has no causal effect on what we do and think (while puzzlingly adding that consciousness is not an epiphenomenon). Yet he also states that consciousness is the only way to access certain information, suggesting that we need it for some purpose. But if that information is actually used, then consciousness does have causal effects.
Third, Edelman says that qualia are high-order decisions. Qualia, however, are not arbitrary. Colour perception is tightly linked to light frequencies and to characteristics of the environment we evolved in, and is pre-processed in brain centres (retina, LGN, V1) in mostly genotypically determined ways. There is nothing to decide there. Indeed, there are striking similarities between people's colour experiences, which makes qualia accessible to scientific inquiry.
That said, Edelman's hypotheses about which anatomical structures and interactions produce consciousness are at least thought-provoking. The book is very short, so I'm hoping that, for the sake of brevity, Gerald Edelman simply did not do his own theories justice.
| 22-11-2007 17:57

Kauffman network
Click 'read more' to see my little boolean network in an inline frame with an option to comment on it. Or click here to open it in a new window.
Order for Free
This is a simple simulation of a boolean network inspired by the networks described by Stuart Kauffman (At Home in the Universe, chapter 4) to illustrate his ideas on the emergence of order from chaos.
The behavior of the nodes is determined by random 'truthtables'. When the network is generated (press F5 for a new network), the order of the input nodes is randomized and the output state for each possible input state is randomized as well. To save load time, only truthtables for up to 8 connections are generated, but this is more than enough. Here is an example truthtable for a node with 2 input nodes:

input 1 | input 2 | output
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   0
The table presented above corresponds to a logical XOR gate: the node is only activated when exactly one of its input nodes is active, not when both are. All combinations of states of the input nodes are coupled to randomly determined output states using such tables, with one exception: the first row of each truthtable always sets the output to 0. This means that if there is no input, there can never be output.
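The site's actual script is not shown here, but the truthtable mechanism can be sketched in a few lines of Python (the function names are mine, not the script's):

```python
import random

def make_truthtable(k, rng=random):
    """Random truthtable for a node with k inputs: one output bit per
    possible input combination. The all-zeros row is forced to 0, so a
    node whose inputs are all inactive never produces output."""
    table = [rng.randint(0, 1) for _ in range(2 ** k)]
    table[0] = 0  # first row: no input -> no output
    return table

def evaluate(table, inputs):
    """Look up the node's next state for a tuple of input bits."""
    index = 0
    for bit in inputs:
        index = (index << 1) | bit
    return table[index]

# The XOR table from the text: output 1 iff exactly one input is active.
xor_table = [0, 1, 1, 0]
assert evaluate(xor_table, (0, 1)) == 1
assert evaluate(xor_table, (1, 1)) == 0
```

A randomly generated table simply fills the output column with coin flips instead of the XOR pattern, apart from the forced first row.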
Stuart Kauffman uses his simulations to argue that life could emerge from the chaos of the universe, and that it would emerge in a certain way. The core of his reasoning is that autocatalytic chemical systems are very likely to exist. Nodes in the simulation above can be seen as chemicals and connections as possible reactions; the output is a new chemical substance that is then available for further reactions. Cycles or circles of such autocatalytic reactions can form the basis of life, and they are the stuff evolution can act upon.
In this simulation you can see that a given network has attractor states: with the right number of connections per node, any random activation pattern will result in roughly the same cyclic behavior, given enough time. This is only possible because the network itself never changes. The same holds for the rules of physics and chemistry, so that within a given set of chemical rules the same kinds of autocatalytic reactions are always possible.
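This settling into a cycle can be demonstrated with a minimal sketch in Python. Note the assumptions: the names are mine, and the wiring here is random, whereas the simulation on this page wires each node to its neighbors.

```python
import random

def random_network(n, k, rng):
    """n nodes, each reading k randomly chosen nodes through a random
    truthtable whose all-zeros row is forced to 0."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = []
    for _ in range(n):
        t = [rng.randint(0, 1) for _ in range(2 ** k)]
        t[0] = 0
        tables.append(t)
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current states."""
    nxt = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for node in ins:
            idx = (idx << 1) | state[node]
        nxt.append(table[idx])
    return tuple(nxt)

def find_attractor(state, inputs, tables):
    """Run until a state repeats; return (transient length, cycle length)."""
    seen = {state: 0}
    t = 0
    while True:
        state = step(state, inputs, tables)
        t += 1
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t

rng = random.Random(42)
inputs, tables = random_network(16, 2, rng)
start = tuple(rng.randint(0, 1) for _ in range(16))
transient, cycle = find_attractor(start, inputs, tables)
```

Because the state space is finite and the update rule is fixed, every starting pattern must eventually revisit a state and loop, which is exactly the attractor behavior described above.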
Any complex network may also be seen as a metaphor for the brain. The brain, however, is not constructed as a random system, so the initial configuration of the network may give us a tendency to behave in certain ways. Of course, the brain is also vastly larger than this small simulation, and many more aspects determine the activity of a neuron besides simply the excitation level of its neighbors. Nevertheless, these kinds of simulations can be an inspiration (perhaps even a tool) in thinking about brains and what they do. Or, for that matter, about what they do not do: in this simulation you can see that as you increase the connectivity, the networks eventually display less order. Similarly, a brain will not function any better if you just add some wires to it. Adding random neurons to your brain does not make you smarter.
The Edge of Chaos
Networks with more connectivity have different kinds of attractor states than networks with less connectivity. Fractions of connectivity can be entered in the form, so you can toy with this. The precise fraction is not used but approximated by the script: if, for example, 1.7 input nodes are specified, the algorithm tries to give 70 percent of the nodes two input nodes and the remaining 30 percent one input node. On average, the networks generated will then indeed use 1.7 input nodes per node. Varying the number of connections like this allows you to look for the number of connections that produces a network with stability without rigidity. In this simulation the maximum is 8 and the minimum is 0.
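The approximation scheme just described can be sketched as follows (a guess at the idea, not the script's actual code; the function name is mine):

```python
def input_counts(n_nodes, fraction):
    """Approximate a fractional mean connectivity: for 1.7, about 70%
    of the nodes get 2 input nodes and the remaining 30% get 1."""
    base = int(fraction)
    n_high = round((fraction - base) * n_nodes)
    return [base + 1] * n_high + [base] * (n_nodes - n_high)

counts = input_counts(100, 1.7)
# mean connectivity comes out at (or very near) the requested fraction
assert abs(sum(counts) / len(counts) - 1.7) < 0.01
```

Integer fractions reduce to the ordinary case: every node simply gets the same number of input nodes.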
Stuart Kauffman has done a lot of math with these kinds of networks, and so have others. B. Derrida and Y. Pomeau, cited below, have written a short article on some of this math; the rest I have taken directly from At Home in the Universe and will not bother proving here.
Networks with 1 input node or less per node exhibit behavior that is stable to the point of boredom. Networks with many connections per node have very many attractor states, all with incredibly long cycles; the behavior of such networks is unpredictable in practice, and that can never be the basis of homeostasis or self-organization. Random boolean networks appear to reach an optimum of stability without plunging into chaos when each node has about 2 input nodes. At this value it doesn't matter much what the initial state is: the network will usually settle into the same cyclic behavior, one that still involves a lot of nodes and has a cycle length roughly equal to the square root of the number of nodes.
For fully random boolean networks the number of network states that make up a cycle is the square root of 2 raised to the power of the number of nodes, i.e. 2^(N/2) for N nodes. However, the networks generated here are not completely random: a node only uses its eight neighbors for input. This means that reciprocal relationships between nodes are far more likely than in truly random boolean networks, and a cycle in this network should be shorter. What interests me is the question whether such laws apply to other kinds of networks as well. The small and simple simulation on this page can have more states than I could write down in a lifetime, and yet it displays stunning order. Any human brain is incomprehensibly complex compared with the network above, but it still manages to organize itself. By what laws does the brain do this?
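The two scaling laws above can be put side by side in a quick sketch (the 64-node network is a hypothetical example of mine, not the size of the simulation on this page):

```python
import math

def k2_cycle_length(n_nodes):
    """Kauffman's result for K = 2: median attractor cycle ~ sqrt(N)."""
    return math.sqrt(n_nodes)

def random_net_cycle_length(n_nodes):
    """Fully random boolean networks: cycle length ~ sqrt(2^N) = 2^(N/2)."""
    return 2 ** (n_nodes / 2)

# For a hypothetical 64-node network: a K = 2 network settles into a
# cycle of about 8 states, while a fully random one wanders through
# about 4.3 billion states before repeating.
print(k2_cycle_length(64), random_net_cycle_length(64))
```

The gap between the two estimates is what makes the K = 2 regime so striking: ordered, repeatable behavior in a state space that is astronomically large.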
Here you can see some example runs using the same network with increasing connectivity. What strikes me most is that roughly the same nodes seem to be at the 'center' of activity at all levels of connectivity, even though the truthtables are probably very different.
If you don't want to toy around, here is a YouTube movie of an earlier version, demonstrating the effect. (It was allowed to generate output with no input in that version.)