Do computers think? Is the brain a computer? We use computers as metaphors for the brain and vice versa. Is the comparison apt? Brains and computers can imitate each other in limited ways. Deep down, how much similarity is there?
Gerald Edelman says in his book Wider Than The Sky that the human brain is the most physically complex object in the known universe. This claim might sound egocentric, but it’s well-motivated. Organic things are generally a lot more complex than inorganic things. Nerve cells are about as complex as organic things get. They come in all kinds of wildly eccentric shapes, and their arrangements and interconnections in the brain are fabulously intricate. No organism has a more intricately structured brain than humans. Our brains aren’t the biggest in the animal kingdom, but they are the most complicated.
Some fun facts: We’re born with about a hundred billion neurons, though many of those are redundant and are weeded out selectively as we grow up. The surviving cells make around a hundred trillion connections with each other. A typical pea-sized region of your brain contains several billion connections. Like I said: complicated. Here’s a picture of a neuron I got from Wikipedia, with all the text labels removed because I find it more aesthetically pleasing:
The main cell body on the left includes the same basic innards as any other cell in your body, with a nucleus and assorted organelles. The long skinny extension going off to the right is the axon, and the tree-like appendages are dendrites. This drawing isn’t to scale; the axon can be very long relative to its thickness, enabling a single neuron to reach all the way from one side of your brain to the other. The blue things wrapped around the axon (the myelin sheath) are all cells in their own right, acting as insulators, like the plastic wrapping on a copper wire.
Neurons communicate signals with each other by shooting waves of voltage from the main cell body down the axon to its branching terminals at the far end. Those terminals meet the dendrites of other neurons at junctures called synapses. In reflex systems and the heart, synapses are direct electrical connections, but in the “thinkier” parts of the brain, synapses leave little gaps between one neuron and the next. The electrical pulse makes the axon terminals release neurotransmitter chemicals into the synaptic gap. The neurotransmitters bind with receptors on the receiving neuron, making pores open and close in its cell membrane. The watery medium of the brain is awash in electrically charged salt ions, and as they move through the pores, the voltage across the neuron’s membrane changes. If the voltage gets high enough, it sets off a chain reaction of electrical activity, sending another pulse running down the axon, and so the cycle continues.
Not all neuron firings make the voltage higher on the recipients’ membranes. There are also inhibitory synapses with the opposite effect: they release neurotransmitters that lower the recipient’s membrane voltage, making it less likely to fire. Usually several excitatory synapses need to fire together at nearly the same time to trigger a pulse, and they have to overcome whatever inhibitory synapses are active at that moment.
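The firing behavior described above can be sketched as a toy “integrate-and-fire” model. The numbers here (threshold, leak rate) are made up for illustration, not biological values:

```python
# A toy integrate-and-fire neuron, loosely following the description above.
# Threshold and leak rate are illustrative, not biological values.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Each element of `inputs` is the net synaptic input at one time step:
    positive values are excitatory, negative values inhibitory."""
    voltage = 0.0
    spikes = []
    for net_input in inputs:
        voltage = voltage * leak + net_input   # charge decays, input accumulates
        if voltage >= threshold:               # threshold crossed: fire a pulse
            spikes.append(1)
            voltage = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# Several excitatory inputs arriving close together push the neuron over
# threshold; a single inhibitory input (negative) holds it back.
print(simulate_neuron([0.4, 0.4, 0.4]))        # [0, 0, 1]
print(simulate_neuron([0.4, -0.4, 0.4, 0.4]))  # [0, 0, 0, 0]
```

Notice that the same excitatory inputs that cause a firing in the first run fail to in the second, because one inhibitory input drained the accumulated voltage.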
There’s a digital quality to a neuron’s firing behavior. It’s either firing or it isn’t; there’s no in-between state. This is just like the way computers read voltages as representing one or zero, on or off, ideally ignoring every voltage in between. Neurons can produce signals of different intensities by firing more or less often, but each pulse is a discrete, digital event, easily represented by ones and zeros. Based only on that information, you might think that making a computer simulation of the brain would be straightforward. There are aspects of brain behavior that are indeed amenable to computer modeling. The neural network model of programming is used for everything from spam filtering to face recognition to preventing credit card fraud. The brain-computer analogy works in reverse, to an extent: Roger Penrose has an entertaining section of his book The Emperor’s New Mind where he shows how you could easily wire neurons together into logic gates.
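Penrose’s point can be sketched in a few lines. Treat a neuron as a threshold unit (the classic McCulloch-Pitts model, not anything from his book verbatim) and ordinary logic gates fall right out:

```python
# Neurons as logic gates: a McCulloch-Pitts threshold-unit sketch.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return neuron([a, b], [1, 1], threshold=2)   # both inputs must fire

def OR(a, b):
    return neuron([a, b], [1, 1], threshold=1)   # either input suffices

def NOT(a):
    return neuron([a], [-1], threshold=0)        # an inhibitory connection

def NAND(a, b):                                  # from which all gates can be built
    return NOT(AND(a, b))

print([AND(1, 1), OR(0, 1), NOT(1), NAND(1, 1)])  # [1, 1, 0, 0]
```

The NOT gate is worth a second look: it works exactly like the inhibitory synapses described earlier, with a negative weight pulling the unit below its firing threshold.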
The brain-computer analogy only extends so far, though, because neurons aren’t totally digital the way transistor logic is. The only factor controlling a transistor’s on-off state is the voltage on its gate wire. You can make logic gates that depend on the gate voltages of three or four input wires, and that’s about as complicated as computer logic gets. A neuron’s on-off state, on the other hand, is determined by the on-off states of many inputs. A single neuron might be connected to tens of thousands of others, variously exciting or inhibiting it. Your synapses aren’t permanently hard-wired the way a computer’s logic gates mostly are. While most of your neurons are in place at birth, the strength of their connections is highly variable. The little bulbs that house the synapses can grow and signal more strongly, or they can wither and shrink from disuse. The bumper-sticker version: neurons that fire together, wire together. The changes in connectivity happen fast, on the order of a few seconds. Not only that, but the rate of change of connectivity is itself variable, adding a layer of metaplasticity to the brain’s basic plasticity.
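The bumper sticker can be turned into a few lines of code. This is a minimal Hebbian learning sketch, with a learning rate and decay rate I made up for illustration:

```python
# "Neurons that fire together, wire together": a minimal Hebbian sketch.
# The learning rate and decay rate are illustrative, not biological values.

def hebbian_update(weight, pre_fired, post_fired, rate=0.1, decay=0.01):
    """Strengthen a synapse when both sides fire together; otherwise
    let it slowly wither from disuse."""
    if pre_fired and post_fired:
        weight += rate            # coincident firing strengthens the connection
    else:
        weight -= decay * weight  # unused connections weaken over time
    return weight

w = 0.5
for _ in range(10):               # ten coincident firings
    w = hebbian_update(w, True, True)
print(round(w, 2))                # 1.5 -- the synapse has strengthened

for _ in range(10):               # ten steps of disuse
    w = hebbian_update(w, False, False)
print(round(w, 2))                # 1.36 -- slowly withering
```

Nothing like this update rule runs inside a transistor: a logic gate’s behavior is fixed at fabrication time, while a synapse’s “weight” is being rewritten constantly by its own activity.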
Another layer of complexity comes from the way the brain’s chemical bath affects the mix of neurotransmitters at each synapse. The brain’s activities are modulated by a swath of hormones released by various glands, and science is only beginning to grasp the outlines of how it all works. Very slight changes in your brain chemistry can have dramatic effects on your state of mind. I have extensive first-hand experience of how low serotonin levels can cause spectacular misery, and how adding tiny amounts of certain chemicals can turn my mood right back around.
Computers have a unity of organization. Every part works in synchrony with every other part, in time with the clock signal. The system is linear in its operations: a processor core acts on one instruction at a time. The illusion of computer multitasking is achieved by switching back and forth between different linear tasks very quickly.
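The time-slicing illusion can be sketched with Python generators standing in for tasks. A real operating system scheduler is preemptive and far more involved; this just shows the rapid back-and-forth switching:

```python
# The illusion of multitasking: round-robin time-slicing, with generators
# as stand-ins for tasks. (Real OS schedulers are preemptive and complex.)

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"    # yield = hand control back to the scheduler

def round_robin(tasks):
    """Run each task one step at a time, switching after every step."""
    log = []
    while tasks:
        current = tasks.pop(0)
        try:
            log.append(next(current))
            tasks.append(current)   # not finished: back of the queue
        except StopIteration:
            pass                    # task finished: drop it
    return log

print(round_robin([task("A", 2), task("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Each task runs strictly one step at a time, yet the interleaved log looks like both are running “at once”, which is the whole trick.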
No one knows what the brain’s overall organizational scheme is. There might not even be one. However the brain might be organized, we do know that there isn’t any simple top-down hierarchical structure. Gerald Edelman’s hypothesis, which I find convincing, is that the different brain modules aren’t coordinated at all. Instead, they’re competing with each other. Since neurons that fire a lot get more bodily resources, there’s a Darwinian competition among different regions and modules in the brain. Edelman thinks that connectivity between neurons follows the same evolutionary dynamics as the population of organisms in an ecosystem. The brain isn’t organized from the top down any more than the evolution of a species is.
My shrink compares the human psyche to the British Parliament: many different voices shouting simultaneously. The internal chaos and conflict make it hard for the brain to do logic, but they have the advantage of being compatible with paradoxical and self-contradictory ideas. Paradoxes and contradictions make computers fall into the infinite loops that we experience as hangs and crashes. The brain is mostly able to take them in stride.
It’s a drag to try to force the swirling thunderstorm of competing brain systems to focus on unambiguous logic, but evolution has its reasons. We move through the world in a perpetual state of uncertainty, having to make decisions based on partial information gleaned from our limited senses. Computers are helpless unless you give them totally unambiguous data and totally unambiguous instructions on what to do with it. Computers are better for math, but otherwise, I’ll take the brain.