The brain, computers and neuroscience: can brain science change the world of human cognition?

The human discovery of the mind and brain is as old as the human understanding and conquest of nature. The mammalian nervous system is perhaps the most powerful organ in nature: the human brain has about 10¹¹ neurons and 10¹⁵ synapses, yet consumes only about 20 watts. To some extent, humanity's mania for science and machines is an attempt to replicate its own mind. As scientific research enters the 21st century, scientists are trying to use the "conceit of reason" to truly solve the mystery of the brain and mind: can the brain's encoding and storage of information, its neuronal transmission, and its control of emotion and cognition be linked to computer science? According to the classical philosophers, we are born as a blank sheet of paper. So can humans doodle on this blank sheet as if they were operating a machine? Science and ethics have always been twins, and every time science takes a step forward, humanity's confusion about morality and the self deepens as well.

The new book "Biography of the Brain" by Matthew Cobb, a neuroscientist and professor of zoology at the University of Manchester in the UK, is one such work: it shows both humanity's ongoing journey to decipher the brain and the nerves behind it, and the enduring tension between science and the humanities. As intellectual conceit has led humanity into the uncharted territory of brain science, confusion follows: are we qualified to transform and control humanity? And what future should we expect if we really do dispel the fog surrounding our own minds and knowledge?

The following article is excerpted from Chapter 12 of "The Biography of the Brain" with the publisher's permission. In this chapter, the author describes how scientists have applied brain science to computer science and artificial intelligence since the 1950s, and what still puzzles them about the human mind: if the human brain is a computer, why do people hold values of one kind or another, and why have these values led us to shape the world as it is today?


The Biography of the Brain, by Matthew Cobb, translated by Zhang Jin, Xin Si Culture - CITIC Press, March 2022.

At the beginning of the computer age, scientists were struck by the similarities between these new machines and the brain. Inspired by this, different scientists took three different approaches to using computers. Some ignored biology and concentrated on making computers as smart as possible, a field that came to be known as "artificial intelligence" (a term coined by John McCarthy in 1956) and that has contributed positively to modern life in a variety of ways (at least so far). But the most productive approaches to understanding how the brain works have come not from attempts to create super-intelligent machines, but from efforts to build models of brain function and to explore the rules governing the interconnections between the neurons in those models. If you like, you can call this field "neuronal algebra".

Early attempts to simulate the nervous system appeared in 1956, when researchers at IBM (International Business Machines Corporation) tested Hebb's conjecture that assemblies of neurons are the brain's basic functional units. They used IBM's first commercial computer, the IBM 701, a vacuum-tube machine made up of 11 large components that nearly filled a room (only 19 units were ever sold). The team simulated a network of 512 neurons. Although the units were initially unconnected, they soon formed assemblies, as Hebb had suggested, and spontaneously synchronized their activity in waves. However crude this model was, it suggested that certain features of neural circuits arise from a few very basic rules.
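Hebb's rule can be sketched in a few lines of code. The toy simulation below is far simpler than the IBM 701 experiment (the network size, learning rate and activity patterns are all invented for illustration), but it shows the core mechanism: neurons that repeatedly fire together strengthen their mutual connections and drift into Hebbian assemblies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                      # toy network, far smaller than IBM's 512 units
w = np.zeros((n, n))       # synaptic weights, initially unconnected
eta = 0.1                  # learning rate

# Two groups of neurons that tend to fire together (candidate "cell assemblies")
patterns = [np.array([1, 1, 1, 1, 0, 0, 0, 0]),
            np.array([0, 0, 0, 0, 1, 1, 1, 1])]

for _ in range(100):
    x = patterns[rng.integers(2)].astype(float)
    w += eta * np.outer(x, x)        # Hebb's rule: strengthen co-active pairs
    np.fill_diagonal(w, 0.0)         # no self-connections

# Weights within a group grow strong; between groups they stay at zero
print(w[0, 1] > 0 and w[0, 4] == 0)   # → True
```

Neurons 0-3 and 4-7 end up densely interconnected within their groups while the cross-group weights remain zero: a minimal version of the spontaneous assemblies the IBM team observed.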

Are computers another kind of human brain?

One of the first people to use computer models to explain how the brain works was the mathematician Oliver Selfridge. In 1958, Selfridge presented a hierarchical processing system he called Pandemonium, based on his work on machine pattern recognition. Selfridge began by creating simple units, "data demons", which recognize elements of the environment by comparing a feature (such as a line) with a predetermined internal template. These data demons report what they detect to higher-level demons, the "computational demons". Here is how Selfridge explained what happens next:

At the next level, the computational demons or sub-demons perform more or less complicated computations on the data and pass the results up to the next level, the "cognitive demons", which weigh the evidence. Each cognitive demon computes a shriek, and from all the shrieks the highest-level demon of all, the "decision demon", selects the loudest.

The end result of this process is that a complex feature (such as a letter) is recognized by the decision demon. At first glance, this seems to be merely an electronic version of the hierarchical view of sensory processing, dating back to Alfred Smith. But Pandemonium had a unique feature: it could learn as it went. The program continually tracked the accuracy of its own classifications (in the early stages, this information was supplied by humans). By running the program over and over, and through what Selfridge called "natural selection" among the demons (those that classified correctly were retained), the system grew more and more accurate over time. It could even recognize things it had not been designed to recognize. According to the cognitive scientist Margaret Boden, the impact of Pandemonium was immense: it showed that a computer program could simulate quite complex sensory processes, and that a program's performance could change over time if it received the right feedback about its success.
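Selfridge's demon hierarchy can be caricatured in a few lines. The feature templates and the test "image" below are invented for illustration, and the real Pandemonium learned its weights, whereas this sketch fixes them; only the layered shriek-and-select structure is taken from the text.

```python
# Feature demons report evidence; cognitive demons "shriek" in proportion to it;
# the decision demon simply picks the loudest shriek.
FEATURES = {"T": {"horizontal_bar": 1, "vertical_bar": 1, "oblique": 0},
            "A": {"horizontal_bar": 1, "vertical_bar": 0, "oblique": 2}}

def feature_demons(image):
    # stand-in for template matching: the "image" is already a dict of feature counts
    return image

def cognitive_demons(evidence):
    # each letter demon weighs the evidence and produces a shriek volume
    return {letter: sum(min(evidence.get(f, 0), n) for f, n in template.items())
            for letter, template in FEATURES.items()}

def decision_demon(shrieks):
    return max(shrieks, key=shrieks.get)   # the loudest shriek wins

image = {"horizontal_bar": 1, "oblique": 2}   # looks like an "A"
print(decision_demon(cognitive_demons(feature_demons(image))))  # → A
```

Selfridge's learning step would then adjust or "select" the demons according to whether the final answer was right, which this sketch omits.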

Meanwhile, another American scientist, Frank Rosenblatt, proposed a slightly different model: the perceptron. The perceptron was also concerned with pattern recognition, and it too used the idea of flexible hierarchical connections, an approach that came to be known as "connectionism". According to Rosenblatt, the brain and the computer share two functions, decision-making and control, and in both cases these functions operate according to logical rules. But the brain also performs two deeper and more interactive functions: interpreting and predicting the environment. All of these functions were represented in Rosenblatt's model, which led him to call the perceptron "the first machine capable of generating an original idea".


American scientist Frank Rosenblatt with the perceptron (photo from Cornell University's official website).

In fact, like Pandemonium before it, the perceptron only ever learned to recognize letters, and in the perceptron's case the letters had to be about half a meter high to be recognized. But the key difference between the perceptron and Pandemonium was that the perceptron needed no predetermined template; it achieved recognition through parallel processing (performing different computations simultaneously, just as a brain does). This difference was no accident, for Rosenblatt was interested not only in developing a technology that seemed astonishing at the time, but also in producing a theory of how the brain works.
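The perceptron's learning rule itself is simple to state: whenever an example is misclassified, nudge the weights toward it. Here is a minimal sketch of that rule (the toy data are invented; Rosenblatt's Mark I machine used photocells and motorized potentiometers rather than arrays):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Rosenblatt's rule: move the weights toward each misclassified example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # labels yi are -1 or +1
            if yi * (xi @ w + b) <= 0:    # misclassified (or on the boundary)
                w += yi * xi
                b += yi
    return w, b

# Linearly separable toy data: class +1 sits above the line x0 + x1 = 1
X = np.array([[0., 0.], [0.2, 0.3], [1., 1.], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print((np.sign(X @ w + b) == y).all())   # → True
```

The convergence theorem guarantees this loop finds a separating line whenever one exists, which is exactly the limitation Minsky and Papert later seized on: for patterns that are not linearly separable, no amount of training helps a single-layer perceptron.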

The media love to hype such things. When Rosenblatt's funder, the U.S. Navy, announced his research in 1958, the New York Times cheered: "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." These words came not from some overexcited journalist but from Rosenblatt himself. One scientist later recalled of Rosenblatt: "He was the kind of person journalists dream of writing about, a real showman. To hear him tell it, the perceptron was capable of all sorts of remarkable things. Maybe it was, but his work didn't prove it."

Despite his media showmanship, Rosenblatt remained relatively sober about the true significance of the perceptron. In his 1961 book Principles of Neurodynamics, he wrote:

Perceptrons are not close replicas of any actual nervous system. They are simplified networks that help us study the lawful relationships between the organization of a neural network, the organization of its environment, and the "psychological" performance of the network. Perceptrons may actually correspond to parts of more extended networks in biological systems… More likely, they are extreme simplifications of the central nervous system, in which some features are exaggerated and others suppressed.

By the mid-1960s, experts had begun to acknowledge that even perceptrons were not as good as they had been touted. In 1969, the artificial intelligence pioneer Marvin Minsky and his colleague Seymour Papert published a book that gave a very negative assessment of the perceptron model. Minsky and Papert provided a mathematical analysis of the perceptron's capabilities and argued that the approach was a dead end, both for artificial intelligence and for understanding the brain, because the perceptron was constructed in such a way that it could not internally represent what it was learning. Partly because of such criticisms and partly because progress with these models had slowed, funding for connectionist approaches in the United States dried up and the field shrank. Rosenblatt then turned to the study of learning transfer. On July 11, 1971, his 43rd birthday, he was killed in a boating accident.

Although Pandemonium and the perceptron failed to provide insights that could be applied to biological pattern-recognition systems, the two programs changed how researchers viewed the brain: they showed that any valid description of perception, human or machine, must include a key element of plasticity. In this they differed sharply from older models based on mechanical or hydraulic metaphors. Moreover, there is a tantalizing similarity between the structure of these connectionist programs and the hierarchy of simple feature detectors discovered by Hubel and Wiesel, and Barlow's 1972 idea of "cardinal cells" was clearly influenced by this similarity. For some, this meant that the new models did not merely use metaphors to explain how the brain works; they actually revealed the real mechanism.

Genes, Rationality, and the Mechanisms of the Human Brain

As academic interest in Pandemonium and the perceptron waned, David Marr was developing a different computational model of brain function. Marr had by then made a name for himself at Cambridge University, where he published a series of papers claiming to have discovered how the brain works. But he soon dismissed these mathematical models as "a combination of simple tricks", because he had realized that researchers needed a completely different approach. In 1973, Marr moved to MIT to work alongside Minsky. His goal was to build a machine that could see, and thereby to understand how human vision works. Four years later, Marr developed leukemia, and he quickly began writing a book called Vision to summarize his insights. In the book's introduction he wrote: "Certain events forced me to begin writing this book several years earlier than I had planned." Marr died in 1980 at the age of 35; Vision was published in 1982.

Perhaps because he sensed that death was near, Marr's book takes a perspective far larger than the details of a model of vision. He places his ideas about the mechanisms of brain function in a broader ethical context, discussing how we evolved and where, under the influence of natural selection, our deepest attitudes originate.

It is true, but misleading, to say that the brain is a computer. It is indeed a highly specialized information-processing device, or rather, a collection of many such devices. To regard our brain as an information-processing device is not to devalue or deny human values. On the contrary, such a view better reflects human values, and may ultimately help us understand, from an information-processing perspective, what human values really are, why people hold the values they do, and how these values are woven into the social practices and social organization with which our genetic endowment equips us.


The brain-computer interface in the movie "Attack the Block".

Marr uses so much mathematics in this work that it has been said that more people cite his book than understand it. The witticism suggests that Marr's greatest contribution lay not in the precise details of his computational model of vision but in his way of thinking. Even Marr's most ardent supporters admit that, seen from today, the main value of his book is historical.

Unlike Barlow, Marr believed that the activity of individual neurons is not sufficient to explain how circuits perform their functions or how perception works. He defended his new approach in a slightly sardonic tone:

Trying to understand perception by studying neurons alone is like trying to understand bird flight by studying feathers alone: it is simply not possible. To study how birds fly, we must first understand aerodynamics, and only then does the structure of feathers and the different shapes of birds' wings become meaningful.

To understand how a particular function is performed in the brain (or in a computer), Marr proposed a three-step approach. First, the problem to be solved must be stated in logical terms; this theoretical formulation constrains how the problem can be explored or modeled experimentally. Second, the representation of the system's inputs and outputs must be specified, together with a description of the algorithm that carries the system from one state to the other. Finally, it must be explained how that second level is physically implemented (in the case of brain activity, in the nervous system). Marr's view was that the constraints involved in building a network that can see, whether a machine or a brain, are essentially the same in every case, so similar algorithms should be usable, even if they operate very differently in a living organism than in a computer. By solving the problem of machine vision, he argued, we could better understand vision in our own brains.

On the question of how the brain recognizes simple features such as an edge, Marr's thinking built on the findings of Hubel and Wiesel. But unlike Pandemonium and the perceptron, his approach introduced a computational scheme richer than a mere hierarchy that stacks up the individual points of a line segment and compares them with a template. As Marr put it at a meeting at Cold Spring Harbor in 1976, "the contour is not detected, it is constructed." This view, which goes back to Helmholtz, emphasizes that the brain is not just a passive observer receiving sensory information; perception also involves combining and interpreting those stimuli. This point is essential to any model of vision, because if a machine (or a retina) merely registers the luminance value at every point of an image, nothing more happens. That is what cameras do, and cameras cannot see.
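Marr's own model of early vision made this "construction" concrete: edges are not read directly from luminance values but located where a smoothed second derivative of the image crosses zero. A one-dimensional toy version, with an invented step-edge signal and a crude three-point blur standing in for the Gaussian smoothing of the full model:

```python
import numpy as np

# 1-D luminance profile with a step edge between indices 5 and 6
signal = np.array([10., 10, 10, 10, 10, 10, 50, 50, 50, 50, 50])

# Smooth with a simple 3-point blur (a stand-in for Gaussian smoothing)...
kernel = np.array([1., 2, 1]) / 4
smooth = np.convolve(signal, kernel, mode="same")

# ...then take the second derivative; the edge is "constructed" at its zero-crossing
second = np.diff(smooth, n=2)
crossings = np.where(np.sign(second[:-1]) * np.sign(second[1:]) < 0)[0]
print(crossings)   # → [4], a sign change right at the luminance step
```

The raw luminance values are what a camera records; the edge only exists after the differentiation step interprets them, which is the sense in which the contour is constructed rather than detected.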

Despite these insights, Marr's approach has not transformed our understanding of machine vision or of how the brain sees. As far as our current understanding of the visual cortex goes, the same algorithms have not been found in living organisms and in computers. Equally troubling, the methods Marr used to understand vision have not proved extendable to other areas of brain function.

Vision and Perception

Although we have made tremendous advances in computerized face recognition and other artificial methods of scene analysis, machine vision still lags far behind the vision in our heads. Likewise, we still know very little about what actually happens when we "see" something. Everyone agrees that there must be some kind of symbolic representation of the scene in our heads, but no one is quite sure how it arises. On the 30th anniversary of the publication of Vision, Marr's student Kent Stevens reviewed Marr's contributions and concluded that while the importance of symbolic representation in vision is unquestionable, "we still do not fully understand the place of symbolic systems in biological vision".

On this question, studies of face-recognition cells in the monkey brain may already provide some insight. In 2017, two Caltech researchers, Le Chang and Doris Tsao, showed a series of faces to macaques and recorded single-cell responses from a set of cells in the monkey brain. Taken together, these cells encoded 50 dimensions of facial information (eye spacing, hairline and so on), but each face cell was interested in only one of these dimensions. To show how this information can be combined into an accurate description of a whole face, Chang and Tsao recorded the responses of 200 such cells to a series of photographs and then used a computer to reconstruct the original images accurately from the electrical activity of these neurons alone. Interestingly, they found no evidence of "Jennifer Aniston cells" in the macaque brain, or, in their words, "no detector cells responsible for identifying specific individuals". However, a study by another team showed that a region in the monkey's temporal lobe does appear to be involved in recognizing the face of a "familiar" monkey.
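The reconstruction works because, on this account, each cell's firing rate is approximately a linear projection of the 50-dimensional face vector onto that cell's preferred axis, so the face can be recovered from the population response by linear algebra. A simulated sketch (the tuning axes, face vector and noise level here are all invented, not the published data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_dims = 200, 50        # as in the macaque study: 200 cells, 50 face dimensions

# Assume each cell's firing rate is a noisy linear projection of the face vector
axes = rng.normal(size=(n_cells, n_dims))      # each cell's preferred "face axis"
face = rng.normal(size=n_dims)                 # the 50-d description of one face
rates = axes @ face + 0.01 * rng.normal(size=n_cells)

# Decoding: recover the face vector from the population response by least squares
face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)
print(np.allclose(face_hat, face, atol=0.05))   # → True
```

With 200 cells reporting on 50 dimensions, the system is overdetermined, which is why the recorded population suffices to redraw the face even though no single cell "knows" whose face it is.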

Tsao's Twitter profile is short: "cortical geometer". Tsao speculates that the feature extraction she has revealed in face detection may be a generic process throughout the visual cortex: "We believe that the entire inferior temporal cortex may use the same organizational principle, connecting individual regions into networks, and the same code for all types of object recognition." The problem she is currently trying to solve is the neural basis of visual illusions such as the famous vase/face illusion. As she points out, ten years ago no one knew where to begin with this problem. Now we do.

As for how humans recognize faces, including our grandmothers' faces, it seems likely that we have a decentralized face-recognition network in our brains, as macaques do. The algorithm in your brain differs from the face-recognition algorithms in cell phones, or those that security systems use to screen photographs of criminal suspects, which are tailored to recognize particular features and rely on biometric measures such as eye spacing and face shape. Face recognition in biological vision is far more complex and abstract, ultimately built on the elements found by Hubel and Wiesel (lines, spots and so on) rather than on the anatomy of each facial detail and its relation to the others. These elements are somehow organized into a complex hierarchical system (as Marr imagined), one that applies equally to other features of the environment, not just faces.

In a recent Harvard study with disturbing but stunning results, researchers combined computational and electrophysiological methods in monkeys to reveal what stimuli these layers of cells may really be interested in. The scientists projected images onto a screen and recorded the activity of individual cells in the inferior temporal cortex of awake monkeys. Nothing unusual there. But the images were not static photographs; they were synthetic, in constant flux. The images were "evolved" by an algorithm called XDREAM, which kept adjusting the stimulus to maximize the cells' response. The method was not entirely original (the neuroscientist Charles Connor and colleagues had used a similar approach ten years earlier), but the new study yielded chilling results. After more than a hundred iterations, the image "evolved" from a grayish flat sheet into a dreamlike, surrealistic picture: parts of a monkey's face distorted and blended together, with recognizable eyes here and a disembodied, blurred body part there, different parts facing different directions.
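The logic of this kind of experiment is a simple evolutionary loop: generate candidate images, score them by the recorded cell's firing, keep the best, mutate them, and repeat. The real XDREAM system synthesized images with a deep generative network and a live neuron; the sketch below replaces both with invented stand-ins (a toy response function with a hidden "preferred image"), keeping only the selection-and-mutation loop.

```python
import numpy as np

rng = np.random.default_rng(2)

def neuron_response(img, preferred):
    """Toy stand-in for a recorded cell: fires most for its (hidden) preferred image."""
    return -np.sum((img - preferred) ** 2)

preferred = rng.normal(size=(8, 8))               # the cell's hidden "dream image"
pop = [rng.normal(size=(8, 8)) for _ in range(20)]
init_best = max(neuron_response(img, preferred) for img in pop)

for generation in range(200):
    scores = [neuron_response(img, preferred) for img in pop]
    elite = [pop[i] for i in np.argsort(scores)[-5:]]   # selection: keep the 5 best
    # variation: each elite image spawns three mutated offspring
    pop = elite + [e + 0.1 * rng.normal(size=(8, 8))
                   for e in elite for _ in range(3)]

final_best = max(neuron_response(img, preferred) for img in pop)
print(final_best > init_best)   # → True
```

Because the experimenters never specify what the image should look like, the procedure can drift toward whatever the cell actually prefers, which is how the Harvard team ended up with images no one would have thought to show the monkey.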


Neuroscientist Charles Connor (image from Johns Hopkins University website)

This suggests that in the monkey brain, these cells are really interested in such strange images rather than in portraits. If something similar occurs in the brains of people with "Jennifer Aniston cells", it means those cells are not actually tuned to any of the images in the photographs; the cells respond simply because the photographs resemble the images to which they are actually tuned. Meanwhile, researchers at the Massachusetts Institute of Technology have published similar results, though theirs are not as bizarre as the Harvard findings. They ran the same kind of experiment on cells in an area of monkey visual cortex unrelated to face recognition and found that these cells seemed to be activated only by certain strange geometric images with vaguely biological characteristics, resembling the kinds of visions people report during severe migraines.

These findings tempt us to imagine that such strange hybrid shapes are what one monkey actually sees when it looks at another. But remember: millions of cells are involved in the perception of a face, and, most importantly, there is no microscopic little monkey in the brain examining the output of these individual cells. It is the system as a whole that somehow produces perception, not a single cell or even a small group of cells.

Recent studies in mice have provided a powerful avenue for understanding the neural basis of visual perception. In the summer of 2019, using sophisticated optogenetic techniques, the research groups of Rafael Yuste at Columbia University and Karl Deisseroth at Stanford University published papers a few weeks apart demonstrating that it is possible to reproduce in mice the patterns of brain activity that occur during visual perception. In both studies, mice were first trained to lick for water when they saw a striped pattern. The researchers found that if the corresponding activity patterns were evoked optogenetically, the mice would lick even in the absence of any visual stimulus. The two groups used slightly different techniques: Deisseroth's group precisely stimulated a dozen or so neurons to produce the relevant activity pattern, while Yuste's group focused on two tightly connected neurons that in turn activated a group of neurons in the brain's visual system. Impressive as these studies are, we still cannot conclude that these activity patterns are the basis of visual perception in mice, nor that they are a necessary prerequisite for it; perception might also arise through the activity of other neuronal assemblies. Despite decades of work by computational scientists and neurobiologists, our understanding of what actually happens when we see something remains vague.

Author|Matthew Cobb