The brain is like a computer: bad at math, but good at everything else. Why do scientists call the human brain a biological computer?


The central idea running through the works of the famous futurist Ray Kurzweil is artificial intelligence, which, he believes, will eventually permeate every sphere of human life. In his new book, The Evolution of the Mind, Kurzweil explores the boundless possibilities of reverse engineering the human brain.

In the same paper, Turing reported another unexpected discovery: the existence of unsolvable problems. Unsolvable problems are those that are well defined and have a unique solution (which can be shown to exist), yet (as can also be shown) cannot be solved by any Turing machine, that is, by any machine at all. The existence of such problems fundamentally contradicts the dogma, formed at the beginning of the 20th century, that any problem that can be formulated can be solved. Turing showed that there are at least as many unsolvable problems as solvable ones. Kurt Gödel had reached a similar conclusion in 1931 when he formulated his incompleteness theorem. We are thus left in a strange situation: we can formulate a problem and prove that it has a unique solution, while knowing that we will never be able to find that solution.

Turing showed that computing machines operate on the basis of a very simple mechanism. Because a Turing machine (and therefore any computer) can determine its next action from its previous results, it is capable of making decisions and of modeling hierarchical information structures of any complexity.
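To make that mechanism concrete, here is a minimal sketch of a Turing machine in Python. The rule table is a toy example invented for illustration (a unary incrementer), not anything from the text: the machine's entire behavior reduces to looking up (state, symbol) and obtaining (write, move, next state).

```python
# A minimal Turing machine sketch with a hypothetical rule table.
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))          # sparse, effectively unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")       # "_" marks a blank cell
        # The next action depends only on (state, current symbol):
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rules: scan right over 1s, append a 1 at the first blank, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))    # -> "1111" (unary 3 becomes 4)
```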

In 1939 Turing designed the electromechanical Bombe, which helped decipher messages encrypted by the Germans on the Enigma coding machine. By 1943 a team of engineers, with Turing's participation, had completed the Colossus machine, sometimes called the first computer in history; it allowed the Allies to decipher messages produced by a more sophisticated version of Enigma. The Bombe and Colossus were each designed for a single task and could not be reprogrammed, but they performed their function brilliantly. It is believed that partly thanks to them the Allies were able to anticipate German tactics throughout the war, and that the Royal Air Force was able to defeat Luftwaffe forces three times its size in the Battle of Britain.

It was on this basis that John von Neumann created the architecture of the modern computer, which reflects the third of the four key ideas of information theory. In the nearly seventy years since, the basic core of this machine, known as the von Neumann machine, has remained essentially unchanged, whether in the microcontroller of your washing machine or in the largest supercomputer. In a paper published on June 30, 1945, under the title "First Draft of a Report on the EDVAC," von Neumann set out the basic ideas that have guided the development of computer science ever since. A von Neumann machine contains a central processing unit, where arithmetic and logical operations are performed; a memory module, where programs and data are stored; mass storage; a program counter; and input/output channels. Although the paper was intended for internal use within the project, it became a Bible for computer builders. Thus can a routine progress report sometimes change the world.

The Turing machine was not intended for practical use. Turing's theorems were concerned not with the efficiency of solving problems but with the range of problems that can, in principle, be solved by computation. Von Neumann's goal, by contrast, was to create the concept of a practical computer. His model replaced Turing's one-bit processing with a multi-bit word (usually a multiple of eight bits). A Turing machine has a serial memory tape, so programs spend a great deal of time moving the tape back and forth to store and retrieve intermediate results; in a von Neumann system the memory is random-access, so any needed data can be retrieved immediately.

One of von Neumann's key ideas is the stored-program concept, which he developed ten years before the creation of the computer. Its essence is that the program is stored in the same random-access memory module as the data (and often even in the same block of memory). This makes it possible to reprogram the computer for different tasks and to write self-modifying code (when the program store is writable), which enables a powerful form of recursion. Until that time almost all computers, including Colossus, had been built to solve specific problems. The stored-program concept allowed the computer to become a truly universal machine, realizing Turing's idea of the universality of machine computation.

Another important property of the von Neumann machine is that each instruction contains an operation code specifying the arithmetic or logical operation to be performed, together with the address of an operand in the computer's memory.
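As a rough illustration of both points (the stored program and opcode-plus-address instructions), here is a minimal sketch with a toy instruction set invented for this example; it is not any historical machine's architecture. Program and data live in one memory, and a program counter drives the fetch-decode-execute loop:

```python
# A toy von Neumann machine: program and data share one memory.
memory = [
    ("LOAD",  9),    # 0: acc = memory[9]
    ("ADD",  10),    # 1: acc += memory[10]
    ("STORE", 11),   # 2: memory[11] = acc
    ("HALT",  0),    # 3: stop
    0, 0, 0, 0, 0,   # 4..8: unused cells
    2, 3, 0,         # 9..11: data words
]

acc, pc = 0, 0                      # accumulator and program counter
while True:
    opcode, addr = memory[pc]       # fetch: the instruction comes from
    pc += 1                         # the same memory that holds the data
    if opcode == "LOAD":
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "STORE":
        memory[addr] = acc          # a STORE into cells 0..3 would even
    elif opcode == "HALT":          # modify the program itself
        break
print(memory[11])                   # -> 5
```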

Von Neumann's concept of computer architecture was embodied in the EDVAC project, on which he worked with J. Presper Eckert and John Mauchly. EDVAC itself did not become operational until 1951, by which time other stored-program computers already existed: the Manchester Small-Scale Experimental Machine, ENIAC, EDSAC and BINAC, all created under the influence of von Neumann's paper and with the participation of Eckert and Mauchly. Von Neumann was also involved in the development of some of these machines, including a later version of ENIAC that used the stored-program principle.

The von Neumann architecture had several predecessors, but none of them, with one unexpected exception, can be called a true von Neumann machine. In 1944 Howard Aiken released the Mark I, which was reprogrammable to an extent but did not use a stored program: the machine read instructions from punched tape and executed them immediately. It also had no conditional branching.

In 1941 the German scientist Konrad Zuse (1910-1995) created the Z-3 computer. It likewise read its program from tape (in this case, punched film) and likewise lacked conditional branching. Interestingly, Zuse received financial support from the German Aeronautical Research Institute, which used the machine to study the flutter of aircraft wings. His proposal to fund the replacement of relays with vacuum tubes, however, was rejected by the Nazi government, which considered the development of computing "not of military importance." That decision, it seems to me, influenced the outcome of the war to a certain extent.

In fact, von Neumann had one brilliant predecessor, who lived a hundred years earlier. The English mathematician and inventor Charles Babbage (1791-1871) described his Analytical Engine in 1837. It was based on the same principles as von Neumann's computer and used a stored program, punched on cards of the kind used by jacquard looms. Its random-access memory was to hold 1,000 words of 50 decimal digits each (equivalent to roughly 21 kilobytes). Each instruction contained an opcode and an operand number, just as in modern computer languages, and the system provided conditional branching and looping, so it was a true von Neumann machine. Entirely mechanical, the Analytical Engine apparently exceeded Babbage's own design and organizational abilities: he built parts of it but never got it running.

It is not known for certain whether 20th-century computer pioneers, including von Neumann, were aware of Babbage's work.

However, the creation of Babbage's machine marked the beginning of programming. The English writer Ada Byron (1815-1852), Countess of Lovelace and the only legitimate child of the poet Lord Byron, became the world's first computer programmer. She wrote programs for Babbage's Analytical Engine and debugged them in her head (since the computer itself never worked), a practice programmers today call desk checking. She translated an article on the Analytical Engine by the Italian mathematician Luigi Menabrea, adding substantial comments of her own and noting that "the Analytical Engine weaves algebraic patterns just as the jacquard loom weaves flowers and leaves." She may have been the first to mention the possibility of artificial intelligence, though she concluded that the Analytical Engine "is not capable of coming up with anything on its own."

Babbage's ideas seem amazing given the era in which he lived and worked. By the middle of the 20th century, however, they had been practically forgotten (and were only rediscovered later). It was von Neumann who invented and formulated the key principles of the computer in its modern form, and it is no accident that the von Neumann machine is still considered the basic model of a computer. But let us not forget that the von Neumann machine constantly exchanges data between its modules and within them, so it could not have been created without Shannon's theorems and the methods he proposed for reliably transmitting and storing digital information.

All this brings us to the fourth important idea, one that overcomes Ada Byron's conclusion about computers' inability to think creatively: finding the key algorithms used by the brain and then using them to turn a computer into a brain. Alan Turing formulated this problem in his 1950 paper "Computing Machinery and Intelligence," which also described the now famous Turing test for determining whether an AI has reached human levels of intelligence.

In 1956 von Neumann began preparing a series of talks for the prestigious Silliman Lectures at Yale University. The scientist was already ill with cancer and was unable to deliver the lectures, or even to finish the manuscript on which they were based. Nevertheless, that unfinished work is a brilliant prediction of what I personally regard as the most difficult and important project in the history of mankind. After the scientist's death, in 1958, the manuscript was published as The Computer and the Brain. It so happened that the final work of one of the most brilliant mathematicians of the last century, one of the founders of computer technology, was devoted to the analysis of thinking. This was the first serious study of the human brain from the standpoint of a mathematician and computer scientist; before von Neumann, computer technology and neuroscience were two separate islands with no bridge between them.

Von Neumann begins by describing the similarities and differences between the computer and the human brain. Given the era in which it was written, the account is remarkably accurate. He notes that the output of a neuron is digital: the axon either fires or remains at rest. At the time this was far from obvious, since the output could in principle have been analog. The processing in the dendrites leading into a neuron and in the neuron's body, however, is analog, and von Neumann described it as a weighted sum of the input signals compared against a threshold.
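Von Neumann's neuron model is easy to state in code. Here is a minimal sketch with made-up numbers: the weighted sum is the analog part, and the threshold comparison produces the digital, all-or-nothing output.

```python
# Von Neumann's neuron model: analog weighted sum, digital output.
def neuron_fires(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))  # analog part
    return activation >= threshold                            # digital part

# Hypothetical values, for illustration only:
print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.3], threshold=0.8))  # -> True
```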

This model of neuron function led to the development of connectionism, the use of this principle in both hardware designs and computer programs. (As I described in the previous chapter, the first such system, a program running on the IBM 704, was created by Frank Rosenblatt of Cornell University in 1957, just after the manuscript of von Neumann's lectures became available.) We now have more sophisticated models of how neurons combine their inputs, but the general idea of analog signal processing through changing concentrations of neurotransmitters still holds.

Based on the universality of computation, von Neumann concluded that even though the architecture and building blocks of the brain and the computer seem radically different, a von Neumann machine can nevertheless simulate the processes occurring in a brain. The converse does not hold, because the brain is not a von Neumann machine and has no stored program (though we can simulate the operation of a very simple Turing machine in our heads). The brain's algorithms, its methods of functioning, are determined by its structure. Von Neumann rightly concluded that neurons can learn appropriate patterns from their input signals. What was not known in his time is that learning also occurs through the creation and destruction of connections between neurons.

Von Neumann also pointed out that each neuron processes information very slowly, on the order of hundreds of calculations per second, but the brain compensates by processing information in many neurons simultaneously. This is another obvious but very important insight. He argued that all 10¹⁰ neurons of the brain (an estimate that is also quite accurate: by today's reckoning the brain contains 10¹⁰ to 10¹¹ neurons) process signals at the same time, and moreover that all their connections (on average 10³ to 10⁴ per neuron) are computed simultaneously.

Considering the primitive state of neuroscience at the time, von Neumann's estimates and descriptions of neuronal function are remarkably accurate. There is one aspect of his work, however, with which I cannot agree: his idea of the brain's memory capacity. He assumed the brain remembers every input for life. Von Neumann took the average human lifespan to be 60 years, or approximately 2×10⁹ seconds. If each neuron receives roughly 14 signals per second (actually about three orders of magnitude below the true value), and the brain contains 10¹⁰ neurons, the brain's memory capacity comes out to about 10²⁰ bits. As I wrote above, however, we remember only a small fraction of our thoughts and experiences, and even those memories are stored not as low-level, bit-by-bit information (like video) but as sequences of higher-order images.
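The arithmetic behind von Neumann's estimate can be reproduced directly (these are the figures quoted above, not a claim about real memory capacity):

```python
# Von Neumann's back-of-the-envelope estimate of brain memory.
seconds_per_lifetime = 2e9     # ~60 years
signals_per_second = 14        # per neuron (understated, as noted)
neurons = 1e10

bits = seconds_per_lifetime * signals_per_second * neurons
print(f"{bits:.1e} bits")      # -> 2.8e+20, on the order of 10^20
```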

In describing each mechanism of brain function, von Neumann also shows how a modern computer could perform the same function, despite the apparent differences between brain and computer. The brain's analog mechanisms can be modeled digitally, since digital computation can simulate analog values to any desired accuracy (and the accuracy of analog information in the brain is quite low). The massive parallelism of the brain can also be simulated, given the significant superiority of computers in serial computation speed (a superiority that has grown even greater since von Neumann's day). In addition, computers can process signals in parallel by combining von Neumann machines; this is how modern supercomputers operate.

Given how quickly humans can make decisions at such low neural speeds, von Neumann concluded that the brain's functions cannot involve long sequential algorithms. When a third baseman fields the ball and decides to throw to first rather than second base, he makes the decision in a fraction of a second, in which time each neuron barely completes a few cycles of firing. Von Neumann comes to the logical conclusion that the brain's remarkable ability stems from all 100 billion neurons processing information at once. As I noted above, the visual cortex makes complex inferences in just three or four cycles of neuronal firing.

It is the brain's great plasticity that allows us to learn. But the computer is far more plastic still: its methods can be completely changed by changing its software. Thus a computer can imitate the brain, but the converse is not true.

When von Neumann compared the massively parallel capabilities of the brain with the few computers of his time, it seemed clear that the brain had far greater memory and speed. Today the first supercomputers have already been built that, by the most conservative estimates, satisfy the functional requirements for simulating the human brain (about 10¹⁶ operations per second). (In my opinion, computers of this power will cost around $1,000 in the early 2020s.) In terms of memory capacity we have moved even further. Von Neumann's work appeared at the very beginning of the computer era, yet the scientist was confident that at some point we would be able to create computers and programs capable of imitating the human brain; that is why he prepared his lectures.

Von Neumann was deeply convinced that progress was accelerating and would profoundly change people's lives. A year after his death, in 1957, his colleague the mathematician Stan Ulam quoted him as saying in the early 1950s that "the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the first known use of the word "singularity" to describe human technological progress.

Von Neumann's most important insight was the deep similarity between the computer and the brain. Note that part of human intelligence is emotional intelligence. If von Neumann's guess is correct, and if we accept my claim that a non-biological system that satisfactorily reproduces the intelligence (emotional and otherwise) of a living person has consciousness (see the next chapter), then we must conclude that a computer (with the right software) and conscious thinking are fundamentally alike. So, was von Neumann right?

Most modern computers are entirely digital machines, whereas the human brain combines digital and analog techniques. But analog methods can easily be reproduced digitally to any degree of accuracy. The American computer scientist Carver Mead (b. 1934) showed that the brain's analog techniques can be directly reproduced in silicon, and implemented this in the form of so-called neuromorphic chips. Mead demonstrated that this approach can be thousands of times more efficient than simulating analog methods digitally. When it comes to implementing the neocortex's highly redundant algorithms, Mead's idea may well make sense. An IBM research team led by Dharmendra Modha is using chips that mimic neurons and their connections, including their ability to form new connections. One of the chips, called SyNAPSE, directly models 256 neurons and roughly a quarter of a million synaptic connections. The goal of the project is to simulate a neocortex of 10 billion neurons and 100 trillion connections (the equivalent of a human brain) using only one kilowatt of energy.

More than fifty years ago von Neumann noticed that processes in the brain run extremely slowly but with massive parallelism. Modern digital circuits operate at least 10 million times faster than the brain's electrochemical switches, while all 300 million recognition modules of the cerebral cortex act at once and a quadrillion connections between neurons can be active at the same time. To build computers that adequately imitate the human brain, therefore, we need adequate memory and computational performance. There is no need to copy the brain's architecture directly; that would be a very inefficient and inflexible approach.

What should such computers look like? Many research projects are aimed at modeling the hierarchical learning and pattern recognition that occur in the neocortex. I myself am doing similar research using hierarchical hidden Markov models. I estimate that modeling one recognition cycle in one recognition module of the biological neocortex requires about 3,000 calculations; most simulations get by with far fewer. If we assume that the brain performs about 10² (100) recognition cycles per second, we get 3×10⁵ (300 thousand) calculations per second for one recognition module. Multiplying by the total number of recognition modules, 3×10⁸ (300 million, by my estimate), gives 10¹⁴ (100 trillion) calculations per second. I give roughly the same value in The Singularity Is Near, where I predict that functional brain simulation requires speeds of 10¹⁴ to 10¹⁶ calculations per second. Hans Moravec's estimate, extrapolated from the initial visual processing across the whole brain, is 10¹⁴ calculations per second, which agrees with mine.
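The arithmetic of this estimate is easy to check directly (the numbers are the ones quoted in the paragraph above):

```python
# Compute estimate for functional brain simulation, per the text.
calcs_per_cycle = 3_000       # calculations per recognition cycle
cycles_per_second = 100       # ~10^2 recognition cycles per second
modules = 3e8                 # ~300 million recognition modules

total = calcs_per_cycle * cycles_per_second * modules
print(f"{total:.0e} calc/s")  # -> 9e+13, on the order of 10^14
```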

Standard modern machines can reach 10¹⁰ calculations per second, and cloud resources can raise their performance substantially. The fastest supercomputer, Japan's K computer, has already reached 10¹⁶ calculations per second. Given the massive redundancy of neocortical algorithms, good results can also be achieved with neuromorphic chips, as in the SyNAPSE technology.

As for memory requirements, we need about 30 bits (about 4 bytes) for each connection, enough to address one of the 300 million recognition modules. If an average of eight inputs feed each recognition module, that gives 32 bytes per module. Adding one byte for the weight of each input brings it to 40 bytes; adding 32 bytes for downstream connections gives 72 bytes. Note that ascending and descending branching makes the real number of connections much larger than eight, even allowing for the fact that many recognition modules share a common, highly branched tree of connections. Recognizing the letter "p", for example, may involve hundreds of recognition modules, and thousands of next-level modules take part in recognizing words and phrases containing "p". Yet each module responsible for recognizing "p" does not duplicate the tree of connections that feeds all the levels of word and phrase recognition; all these modules share one tree of connections.

The same holds for downstream signals: the module responsible for recognizing the word "apple" tells all the thousands of downstream modules responsible for recognizing "e" that an "e" is expected once "a", "p", "p" and "l" have been recognized. This tree of connections is not duplicated for every word or phrase module that wants to inform lower-level modules an "e" is expected; it is shared. For this reason the average estimate of eight upstream and eight downstream signals per recognition module is quite reasonable, and even raising it would not change the final result much.

So, allowing for 3×10⁸ (300 million) recognition modules at 72 bytes each, we find that the total memory requirement is about 2×10¹⁰ (20 billion) bytes, a very modest figure that ordinary modern computers already provide.
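The memory estimate works out the same way (again simply reproducing the figures above):

```python
# Memory estimate for the recognition modules described above.
bytes_per_module = 72     # 32 upstream + 8 weights + 32 downstream
modules = 3e8             # ~300 million recognition modules

total_bytes = bytes_per_module * modules
print(f"{total_bytes:.1e} bytes")   # -> 2.2e+10, about 20 GB
```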

We performed all these calculations only to estimate the parameters roughly. Given that digital circuits are about 10 million times faster than the biological cortex's networks of neurons, we do not need to reproduce the brain's massive parallelism; quite moderate parallel processing (compared with the trillion-fold parallelism of the brain) will suffice. Thus the necessary computational parameters are entirely achievable. The ability of brain neurons to rewire themselves (remember that dendrites are constantly creating new synapses) can also be simulated in software, since computer programs are far more plastic than biological systems, which, as we have seen, are impressive but have limits.

The redundancy the brain needs to obtain invariant results can certainly be reproduced in a computer implementation. The mathematical principles for optimizing such self-organizing hierarchical learning systems are quite clear. The organization of the brain is far from optimal; but it does not have to be optimal, only good enough to enable us to build tools that compensate for its own limitations.

Another limitation of the neocortex is that it has no mechanism for eliminating, or even evaluating, conflicting data, which partly explains how common illogical human reasoning is. To address this we have a rather weak ability called critical thinking, and people use it far less often than they should. A computer neocortex could include a process that flags conflicting data for subsequent revision.

It is important to note that modeling an entire brain region is easier than modeling a single neuron. As already noted, higher levels of a model hierarchy are often simpler than lower ones (there is a computer analogy here). Understanding how a transistor works requires a detailed grasp of semiconductor physics, and the behavior of a single real transistor is described by complex equations. Yet a digital circuit that multiplies two numbers, though it contains hundreds of transistors, can be modeled by one or two formulas. An entire computer, consisting of billions of transistors, can be modeled by its instruction set and register description on a few pages of text with a few formulas. Operating systems, language compilers and assemblers are complex programs, but modeling a particular program (say, a speech recognition program based on hierarchical hidden Markov models) likewise comes down to a few pages of formulas, and nowhere in such a model will you find the detailed physical properties of semiconductors, or even the computer's architecture.
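The same abstraction gap can be shown in a few lines of code (an illustration of the general point, not of any particular circuit): the behavioral model of a multiplier is one line, while even a simplified gate-level view must spell out the shift-and-add structure the hardware uses.

```python
# Two models of the same multiplier, at different abstraction levels.
def multiply_behavioral(a, b):
    return a * b                      # the "one formula" model

def multiply_gate_level(a, b, bits=8):
    # Shift-and-add, roughly the way the hardware does it:
    result = 0
    for i in range(bits):
        if (b >> i) & 1:              # each AND gate / partial product
            result += a << i          # each adder stage
    return result

print(multiply_behavioral(12, 11), multiply_gate_level(12, 11))  # 132 132
```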

The same principle holds for brain modeling. A particular recognition module of the neocortex, one that detects certain invariant visual images (faces, for example), filters audio frequencies (limiting the input signal to a certain band), or evaluates the temporal proximity of two events, can be described with far fewer specific details than the actual physical and chemical interactions that control the functions of the neurotransmitters, ion channels and other elements of the neurons involved in transmitting nerve impulses. Although all these details must be carefully considered before moving up a level of complexity, much can be simplified when modeling the brain's principles of operation.


The last century marked a major leap in human development. Yet, having traveled the long road from the primer to the Internet, humanity has still not solved the mystery that has tormented great minds for centuries: how does the human brain work, and what is it capable of?

To this day the brain remains the most poorly studied organ, yet it is the organ that made man what he is now: the highest stage of evolution. Continuing to keep its secrets and mysteries, the brain determines a person's activity and consciousness at every stage of life. No scientist has yet managed to unravel all the possibilities of which it is capable. That is why a large number of myths and unsubstantiated hypotheses have gathered around one of the most important organs of our body. This can only mean that the hidden potential of the human brain has yet to be explored, and for now its abilities go beyond established ideas about how it works.



Brain structure

This organ contains a huge number of connections that create stable interactions between cells and their processes. Scientists suggest that if these connections were laid out in a straight line, their length would be eight times the distance to the Moon.

The organ's share of total body mass is no more than 2%, and its weight varies between 1,019 and 1,960 grams. It works continuously from the moment of a person's birth to their last breath, and therefore needs to absorb 21% of all the oxygen entering the body. Scientists have sketched an approximate picture of how the brain assimilates information: its memory can hold from 3 to 100 terabytes, while the memory of a modern computer is currently being pushed toward 20 terabytes.

The most common myths about the human biological computer

Brain tissue dies over the body's lifetime, and new tissue is not formed. This is a fallacy whose absurdity Elizabeth Gould has demonstrated: nervous tissue and neurons are constantly renewed, and new connections replace the dead ones. Research has confirmed that in areas where cells were destroyed by a stroke, the body is able to "grow" new material.

Humans use only 5-10% of the brain; all its other capacities lie idle. Some scientists explained this by saying that nature, having created such a complex and developed mechanism, devised a protective system to shield the organ from overload. This is wrong. It is reliably known that the brain is 100% involved in any human activity; it is simply that, as actions are performed, its individual parts respond in turn.

Superpowers. What can surprise the human mind?

Some people who show no outward signs of anything unusual may possess truly incredible abilities. These do not appear in everyone, but scientists say that regular, intensive brain training can develop superpowers, although the secret of what "selects" the people entitled to be called geniuses has not yet been revealed. Some people can deftly extricate themselves from difficult situations; others sense approaching danger at a subconscious level. But the following superpowers are the more interesting from a scientific point of view:

  • The ability to perform mathematical operations of any complexity without the help of a calculator or calculations on paper;
  • The ability to create brilliant creations;
  • Photographic memory;
  • Speed reading;
  • Psychic abilities.

Amazing cases of revealing the unique abilities of the human brain

Over the history of human existence, many stories have accumulated confirming that the human brain can harbor hidden abilities, adapt to changing circumstances, and shift particular functions from a damaged area to a healthy one.

Sonar vision. This ability usually develops after loss of sight. Daniel Kish managed to master the echolocation technique of bats: the sounds he makes, such as clicks of his tongue or fingers, let him walk without a cane.

Mnemonics: a unique technique that allows one to perceive and remember almost any amount of information, regardless of its nature. Many people master it in adulthood, but the American Kim Peek had it as an innate gift.

The gift of foresight. Some people claim that they can see the future. At the moment, this fact has not been fully proven, but history knows many people whom such an ability has made famous throughout the world.

Phenomena of which the human brain is capable

Carlos Rodriguez, at the age of 14, lost more than 59% of his brain after an accident, but still lives a completely normal life.

Yakov Tsiperovich, after clinical death and a week-long stay in a comatose state, stopped sleeping, eats little and does not age. Three decades have passed since that moment, and he is still young.

Phineas Gage suffered a terrible injury in the mid-19th century: a thick iron rod passed through his head, destroying a good part of his brain. Medicine in those years was not far advanced, and doctors predicted his imminent death. Yet the man not only survived but retained his memory and clarity of mind.

The human brain, like the body, needs constant training: specially designed exercise programs, or simply reading books and solving puzzles and logic problems. Nor should we forget to supply the organ with nutrients. Still, only constant training allows the brain to keep developing and expanding its capabilities.

What is a biological system?

A biological system is a living structure that exists in its characteristic habitat and is capable of exchanging matter and energy, and of protecting, exchanging and copying the information that determines its functions and its ability to improve its ways of interacting with the environment so as to preserve and transmit information about itself.

Structure of the biological system "cell":

1. Information block: the information code recorded in the form of DNA and RNA molecules. By analogy with a computer program, it is the "incarnate Word" that determines the functions and parameters of the system. Its authorship belongs to the Creator, the Source of life, the Maker of everything visible and invisible: God.
2. Energy block: the programmed capabilities for receiving, converting and consuming energy (energy circulation). Energy is the force needed to maintain the vital activity of the system's structural elements and to activate their functions; alternatively, energy is a quantitative measure of the interaction of all forms of matter and information that causes a change in their state or structure.
3. MPT block (matter, flesh, body): the external manifestation of the information code. Its functions are the protection, preservation and exchange of information; it is the matrix for storing and copying information. It includes membranes, enzymes, membrane receptors, membrane transport channels and biologically active substances (BAS).

The main tasks of the biological system "cell" are the preservation, exchange and copying of the information it contains.

To perform its tasks, above all copying, the system must enter and remain in an environment that supplies it with matter and energy adequate to its needs.
The processes that ensure the preservation, exchange and copying of information are regulated by the receptor-mediator principle.

Receptor-mediator principle

A receptor (from the Latin recipere, to receive) is any information-energy-material system or structure (IEM system or structure) that perceives information and changes its state or structure in a definite way as a result of the action of a mediator.

A mediator (intermediary, transmitter) is any IEM system or structure intended to transmit particular information to a receptor.

We know of different levels of organization of IEM systems and structures: atom, molecule, complex molecule, substance, virus, cell, tissue, organ, organism, collective, people, state, planet Earth, solar system, galaxy, universe.
Different levels of organization of IEM systems and structures have their own mechanisms of receptor-mediator interaction; the same applies to cross-level interaction.
Studying these mechanisms, finding the mediators for given receptors, and describing the responses (changes of state or structure) of IEM systems and structures are among the tasks facing scientists.

Types of interaction between receptor and mediator

1. A particular mediator acts on a particular receptor of a biological system, producing a particular response.

2. A particular mediator acts on several receptors, producing different responses of the biological system.

3. Several mediators act on a particular receptor of a biological system, producing a particular response.

4. Several mediators act on a particular receptor, producing different responses of the biological system (an interaction characteristic of complex biological systems).

The result of the interaction between the mediator and the receptor is a change in the state or structure of the system.

The state of physiological rest is the state in which a biological system resides in its habitat and performs its tasks without deviating from the statistical averages of its functional activity.

Basic mechanisms for regulating the state of a biological system

1. Changing the quantity of a mediator or receptor (increase or decrease).
2. Changing the quality of a mediator or receptor by changing its structure (strengthening, weakening, destruction) and, as a consequence, changing its binding and its transmission of information.

In a biological system, any IEM structure can be both a receptor for some IEM structures and a mediator for others. Control over the regulation of a certain state of the system can be achieved when we know the methods of influence that change the quantity and quality of the mediator and receptor responsible for this state.

Possibility of changing the state of the cell

The only way to change the state or structure of the biological system "cell" is to change the mediating action of its environment, its habitat.
A change in the environment that supplies matter, energy and information (water or other liquids, air or gases, soil or organic and inorganic chemical elements, temperature, physical fields, radiation, pressure) leads to a change in the state or structure of the cell.

Cell structures that change as a result of environmental changes.

1. DNA and RNA molecules (the source of information about the cell and the template for copying it).
2. Cell membranes and organelles (protection of the cell and its internal environment).
3. Enzymes (regulators of the rates of metabolism, energy and information in the cell).
4. Membrane receptors (receive information for the cell).
5. Membrane transport channels (the gates through which substances, energy and information enter and leave).
6. Biologically active substances (mediators: cell products intended to transmit information to the external and internal environment).

A change in the quality or quantity of any of these structures in the desired direction is produced by a specific change in the supply of liquid, gas, or organic and inorganic chemical elements, or by changes in temperature, physical fields, radiation or pressure.


- How did you, a former military doctor and an organizer with extensive experience, come to the theoretical problem of the structure of living things?

Each of us has turned this subject over in his thoughts more than once, often doubting the soundness of the hypothesis of the spontaneous emergence of life and of the theory of evolution. I have kept a feeling of amazement at the "mind" of the computer ever since first becoming acquainted with its structure and operation. A storm of thoughts was set off by the study of the human genome and the genomes of other organisms, with their unrealized expectations, forecasts and paradoxes. These impressions, merging together, prompted me to read biology again, then computer science, and to search all available sources for everything concerning genetics, genomics and genes. I soon realized that the cell and the computer operate on common informational rules.

- But this must be proven!

Certainly. At first, using comparisons and analogies, I became convinced that the cell has a structure typical of computers. The membrane, like a computer case, protects the internal contents of the cell from external influences and serves as the mounting place for input-output devices, whose role is played by receptors. The function of the motherboard is carried by the cytoplasm, which holds the cell's organelles in position and connects them to one another. And here is the "heart" of the cell: the nucleus, with its chromosomes and genes (a strand of DNA in prokaryotes), which performs the main information-processing function and provides long-term and working memory, like the hard drive of a technical computer. Analogous to portable storage media, such as hard and floppy disks, mobile carriers work intensively in the cell: RNA, proteins and prions. A distinctive feature of any information machine is the presence of a clock and an energy source; in the cell, telomeres count divisions and time, and mitochondria supply energy in the form of ATP. Molecular electronics has outpaced the biological branches of science, confirming the predicted miniaturization of computers and the possibility of using many organic molecules, DNA included, as transistors, triggers and logic elements, and of building information machines on their basis. Laboratory versions of organic computers exist, and software is obligatory for them as well.

- What other facts point to the informational component of cells?

It seems to me that the strongest argument is the genomic paradox, whose manifestations still cannot be explained by traditional methods. It turns out that the structure of genes does not always determine their properties: the propositions "gene - trait", "gene - function" and "gene - disease" have not been confirmed. The same gene can perform different functions at different stages of the organism's development, and within a gene network a gene's function may differ from the function it shows in isolation. Many genes are "silent", their properties unknown. Genes with common structures can control the development of different cell variants. A gene shared by humans and Drosophila produces the same signal, a protein ligand for mesoderm cells, controlling the formation of the fly's wings and of paired human limbs. The initial stages of myogenesis are carried out by a set of genes common to Drosophila, lower and higher animals, and mammals, including humans. The number and chromosomal organization of HOX genes are the same in almost all mammals. The same gene can encode several proteins, and the same protein variant can correspond to several genes. And what of DNA duplications: what role do they play, and why do the chimpanzee and human genomes differ so much in this respect? Your review ("MG", No. 77, 10/5/2005, p. 14) noted that in humans and chimpanzees the same genes show different activity in different organs, working through different programs that determine significant differences between the biological species. Now consider the paradoxical numbers of genes and of "extra DNA" in different species. The nematode (about 1 mm long) has 19,903 genes; the fugu fish (about 10 cm), 33,609; the rat, approximately 25,000; the human, 30,000. The corresponding shares of non-coding DNA ("extra", "selfish", "junk") are 25%, 16%, 75% and 97%. The higher the organization of the organism, the fewer the genes in its genome and the greater the non-coding portion of its nucleotides; the more complex the processes, the fewer genes are required to sustain life. And, of course, the genomes show no evolutionary series in the development of organisms.

The "junk" part of DNA contains many identical repeating nucleotide sequences. Is there any informational meaning here?

An assumption drawn from the development of information technology is appropriate here. It has been shown that when microprocessors, information-storage sites and other elements of computer design are stamped onto a single integrated circuit, performance rises significantly as size shrinks: there is no need to "walk" far for information and waste extra energy. The huge information space of DNA likewise requires that processors for working with information, places for its storage, and operational and long-term memory be concentrated around the genes, ensuring both sequential and parallel work on analyzing incoming information and developing response decisions and commands. This yields performance, plus duplication in case of "emergency" situations. It is possible that nucleotide repeats and DNA duplications are somehow specialized for information functions.

- What are the significant differences between biological computers and technical ones?

High reliability, owing to the stability of organic compounds and the presence of multi-level systems protecting the carriers from damage and the cell's own information from distortion. DNA is the molecule most resistant to decay, and apoptosis is the most effective protection mechanism. There is enormous performance, measured in trillions of operations per second: organic molecules can change their state instantly under the influence of lasers, the visible part of the light spectrum, sound or radio waves. It is probably no coincidence that the twenty amino acids involved in building proteins are all "left-handed" in living things; when the position of the amino group in the carbon chain changes, they could take on the function of a binary number system. Some molecules can generate laser beams and perform the functions of chromatophores, LEDs and signal converters. Genomes glow, emit sounds and generate radio waves in certain ranges, all of which instruments record. This reasoning allowed me to give the single-celled organism and the cell an informational definition: they are organic, closed information machines, working on the basis of complex software that determines their structural and functional organization, species identity, target mechanisms of homeostasis and the reproduction of their own kind, with an autonomous energy supply and a time counter. I avoid the term electronic computer, because the cell does not use a flow of electrons when processing information, and it is not a computing machine but a logical one.

- But I came across the term "biocomputer" long before your publication.

Yes, but in very loose interpretations. Anything that does not fit the definition above is not a biocomputer, viruses included. At the dawn of the computer age, highly organized organisms were called biocomputers. Then representatives of certain professions considered the brain a computer; with the development of genetics and genomics attention switched to the genome, and people even spoke of DNA computers. Today specialists researching the information properties of water call it a "living biocomputer". Water, though indispensable, is only a component of the biological computer: in cells where information processes predominate, neurons in particular, water makes up to 90%, while in hair and nails it is only 8-10%.

- But what about whole organisms, or the brain?

Multicellular organisms consist of biocomputers arranged and united on the principles of an information network.

- But how do the biological computers that make up an organism interact?

Here a product of the information age created by man comes to the rescue again: the global information network, the Internet. The main condition for the functioning of the network is the compatibility of all its computers in technical parameters and software. In any organism the cells are identical in structure and carry exactly the same software; the exception is red blood cells, which have no nucleus and are deprived of information functions. A network also requires a mechanism to maintain order and organization, which the Internet provides through a series of technologies and protocols. To name only one, the Transmission Control Protocol (TCP): you cannot log in without registering with a provider. In living things, the protocols and programs of the single information web must be far more numerous, considering the complexity and multifunctionality of the processes and the number of biological computers in the network. A human being is some 14 trillion biocomputers, one and a half times more than the stars of two galaxies combined, the Milky Way and the Andromeda Nebula. The main feature of the Internet is its servers, placed in different areas of the network. These are the same computers, only intended for servicing other computers; with their own programs, they resemble neurons with their amazing functionality. There are 20 billion of them in a human being. The more highly organized the body, the greater the functional capabilities of its neurons: in a nematode there are 5 somatic cells per neuron, in a human 5,000. A modem with the corresponding program allows one to log in to the network, establish remote connections, upload files from computer to network and back, and handle registration, protocol changes and other functions; this, no doubt, is the analogue of the synapses that provide contacts between cells. The human information system is, for today, the pinnacle of technology. Beside it the Internet, at about 40 years of age, is in an embryonic state. The main difference lies in the huge disparity in the number and power of the component computers, and in the complexity, multi-layeredness and diversity of the programs. It is believed that there are only two limits to the development of information networks: computer speed and the throughput of the channels connecting them. So the Internet's development prospects are enormous. But today neither any computer nor any information system created by man can repeat the work of a biological computer or of the simplest multicellular organism.

- What are the main conclusions from your reasoning?

It is impossible to understand the living without studying its information component, just as it is futile to look for life and vital activity outside the cell. The information component of the living is unchangeable: the genomes of organisms are stable and have multiple layers of protection. Variation of genomes and programs would threaten death not only to individuals but to entire biological species. Evolution as classical biology interprets it could not have happened: mutations are not inherited but are "treated" by the living information system. Organisms do not adapt; they resist environmental factors and are capable of learning from their own experience. Organisms and their reproductive capacities were programmed, created, arose simultaneously; this is one of the many predictive, targeted cyclical processes inherent in the living. The eternal problem of the "chicken" and the "egg" simply does not exist. The pace of development of information technology, especially molecular electronics, is astonishing: in 60 years, from room-sized computing machines to the molecular computer. Scientists are equally surprised by the short periods, by evolutionary standards, in which biological species became more complex, and by inexplicable mutations. In creating information devices, humanity may be repeating a path someone has already traveled. The information component as the basis of every living organism exists! Yet today there is no branch of knowledge whose methodology, goals and research methods could find the key to the information part and the information processes of the living. It is time to treat a very common chronic disease of civilization: the "flux" of narrow specialists' one-sidedness. We need information biology, a new integrative science that would absorb modern informational, technical, biological and medical knowledge and the achievements of physics and chemistry, and would set itself the task of understanding the informational essence of the living. Here lies the most secret of secrets and the most mysterious of the mysteries of the structure of our world!


The brain is the organ that coordinates and regulates all the body's vital functions and controls behavior. All our thoughts, feelings, sensations, desires and movements are associated with its work; if it ceases to function, a person falls into a vegetative state, losing the ability to act, to feel, or to react to external influences.

Computer model of the brain

The University of Manchester has begun building the first of a new type of computer, the design of which imitates the structure of the human brain, BBC reports. The cost of the model will be 1 million pounds.

A computer built on biological principles, says Professor Steve Furber, should demonstrate significant stability in operation. “Our brain continues to function despite the constant failure of the neurons that make up our nervous tissue,” says Furber. “This property is of great interest to designers who are interested in making computers more reliable.”

Brain Interfaces

In order to lift a glass several feet using mental energy alone, wizards had to train for several hours a day.
Otherwise, the lever principle could easily squeeze the brain out through the ears.

Terry Pratchett, "The Color of Magic"

Obviously, the crowning glory of the human-machine interface should be the ability to control a machine by thought alone, and feeding data directly into the brain is the pinnacle of what virtual reality could achieve. The idea is not new and has featured in science fiction for many years: practically every cyberpunk novel has direct connections to cyberdecks and biosoftware, there is control of machinery through a standard brain connector (Samuel Delany's novel Nova, for example), and much else of interest. But science fiction is one thing; what is being done in the real world?

It turns out that work on brain interfaces (BCI and BMI: brain-computer interface and brain-machine interface) is in full swing, though few people know about it. The successes are, of course, still very far from what science fiction novels describe, but they are quite noticeable nonetheless. At present, work on brain and nerve interfaces is mainly conducted as part of creating prosthetics and devices to ease the lives of partially or completely paralyzed people. All the projects can be divided into input interfaces (restoring or replacing damaged sense organs) and output interfaces (controlling prostheses and other devices).

In all cases of direct data input, an operation is required to implant electrodes into the brain or nerves. For output one can get by with external sensors that record an electroencephalogram (EEG). The EEG, however, is a rather unreliable tool, since the skull strongly attenuates the brain's currents and only very generalized information can be picked up. Where electrodes are implanted, data can be read directly from the relevant brain centers (the motor centers, for example). But such an operation is a serious matter, so for now these experiments are conducted only on animals.

In fact, humanity has long had such a "single" computer. According to Wired co-founder Kevin Kelly, the millions of Internet-connected PCs, mobile phones, PDAs and other digital devices can be regarded as the components of one computer. Its central processor is all the processors of all the connected devices; its hard drive is the hard disks and flash drives of the whole world; its RAM is the combined memory of all the computers. Every second this computer processes a volume of data equal to all the information in the Library of Congress, and its operating system is the World Wide Web.

Instead of the synapses of nerve cells, it has functionally similar hyperlinks. Both are responsible for creating associations between nodes. Each unit of thought, such as an idea, grows as more and more connections are made with other thoughts; likewise on the network, a larger number of links to a given resource (node) means greater significance for the Computer as a whole. Moreover, the number of hyperlinks in the World Wide Web is approaching the number of synapses in the human brain. Kelly estimates that by 2040 the planetary computer will have computing power commensurate with the collective brainpower of all 7 billion people who will inhabit the Earth by then.

But what about the human brain itself? A long-outdated biological mechanism, on this view. Our gray matter works at the speed of the very first Pentium processor of 1993; in other words, the brain runs at about 70 MHz. Moreover, the brain operates on an analog principle, so no direct comparison with digital data processing is possible. This is the main difference between synapses and hyperlinks: synapses, reacting to their environment and to incoming information, skillfully change the organism, which never has two identical states; a hyperlink, by contrast, is always the same, otherwise problems begin.

Yet it must be admitted that our brain is significantly more efficient than any artificial system people have created. In some entirely mysterious way, all the brain's gigantic computing abilities fit inside our skull, weigh just over a kilogram, and require only 20 watts of energy to function. Compare that with the 377 billion watts consumed, by rough estimate, by the Single Computer today; that is as much as 5% of world electricity production.

Such monstrous energy consumption alone will never let the Single Computer even approach the efficiency of the human brain. Even in 2040, when the computing power of computers becomes sky-high, their energy consumption will keep growing.

No matter how hard they try, neuroscientists and cognitive psychologists will never find a copy of Beethoven's Fifth Symphony in the brain, nor copies of words, images, grammatical rules, or any other external stimuli. The human brain is, of course, not literally empty. But it does not contain most of the things people think it should - it does not even contain simple objects like "memories".

Our misconceptions about the brain have deep historical roots, but the invention of the computer in the 1940s has particularly confused us. For more than half a century, psychologists, linguists, neurophysiologists and other researchers of human behavior have been saying: the human brain works like a computer.

To understand the superficiality of this idea, consider the brain of a newborn baby. Thanks to evolution, human newborns, like the newborns of all other mammal species, enter the world ready to interact with it effectively. A baby's vision is blurry, but it pays special attention to faces and quickly learns to pick out its mother's face among others. It prefers the sound of voices to other sounds, and it can distinguish one basic speech sound from another. We are, without doubt, built with social interaction in mind.

A healthy newborn also has more than a dozen reflexes - ready-made reactions to certain stimuli that are needed for survival. It turns its head in the direction of whatever brushes its cheek and sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly that it can nearly support its own weight.

Perhaps most importantly, newborns come into the world equipped with powerful learning mechanisms that allow them to change rapidly so that they can interact with the world ever more effectively, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms - this is what we start with, and truth be told, it is quite a lot when you think about it. If we lacked any of these capabilities at birth, we would probably find it much harder to survive.

But here is what we are not born with: information, data, rules, software, knowledge, vocabularies, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols and buffers - the design elements that allow digital computers to behave in ways that are somewhat reminiscent of intelligence. Not only are we not born with such things, we never develop them. Ever.

We do not store words, or the rules that tell us how to use them. We do not create visual projections of stimuli, store them in a short-term memory buffer, and then transfer them into long-term memory storage. We do not retrieve information, images or words from memory registers. Computers do all of these things; organisms do not.

Computers literally process information - numbers, letters, words, formulas, images. The information must first be encoded into a format computers can use, which means patterns of ones and zeros ("bits") organized into small chunks ("bytes"). On my computer, each byte contains 8 bits; a certain pattern of bits stands for the letter "C", another for the letter "A", and another for the letter "T". Side by side, those three bytes form the word "CAT". A single image - say, the photograph of my cat Henry on my desktop - is represented by a very specific pattern of a million such bytes ("one megabyte"), surrounded by special characters that tell the computer to expect an image, not a word.
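
This encoding is easy to verify for yourself; here is a minimal Python sketch (the numeric values are standard ASCII):

```python
# Encode the word "CAT" into bytes and show the underlying bit patterns.
word = "CAT"
data = word.encode("ascii")   # b'CAT': three bytes, one per letter

for letter, byte in zip(word, data):
    print(letter, byte, format(byte, "08b"))
# C 67 01000011
# A 65 01000001
# T 84 01010100
```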

Computers literally move these patterns from place to place within different physical storage areas etched into their electronic components. Sometimes they copy the patterns, and sometimes they transform them in various ways - say, when we correct an error in a document or retouch a photograph.

The rules the computer follows for moving, copying or operating on these patterns of data are also stored inside the computer. Together, sets of rules are called "programs" or "algorithms". A group of algorithms that work together to help us do something (like buying a stock or searching for data online) is called an "application".
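
As a purely illustrative sketch of that hierarchy (every name here is mine, invented for the example, not any real application's):

```python
# A single stored rule for manipulating a pattern of data...
def retouch(pixels, old, new):
    return [new if p == old else p for p in pixels]

# ...rules collected to accomplish one task form a program (algorithm)...
def remove_red_eye(photo):
    return retouch(photo, old="red", new="black")

# ...and algorithms cooperating on a user's behalf form an application.
photo = ["red", "blue", "red"]
print(remove_red_eye(photo))   # -> ['black', 'blue', 'black']
```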

Please forgive me for this introduction to computing, but I need to be very clear: computers really do operate on symbolic representations of the world. They really do store and retrieve. They really do process. They really do have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not do any of this - they never have and never will. Given that, I would like to ask: why do so many scientists talk about our mental life as if we were computers?

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past two millennia to explain human intelligence.

In the earliest one, the biblical metaphor, humans were formed from clay and mud, which an intelligent god then infused with its spirit - that spirit "explaining" our intelligence, grammatically at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence: the idea that the various fluids of our body - the so-called "humours" - account for both our physical and mental functioning. The metaphor persisted for more than 16 centuries and was used in medical practice all that time.

By the 16th century, automata powered by springs and gears had been devised, eventually inspiring leading thinkers of the time, such as René Descartes, to hypothesize that humans are complex machines.

In the 17th century, the British philosopher Thomas Hobbes proposed that thinking arises from mechanical motions in the brain. By the 1700s, discoveries in electricity and chemistry had led to new theories of human intelligence - and these, again, were metaphorical in nature. In the mid-1800s, the German physicist Hermann von Helmholtz, inspired by advances in communications, compared the brain to a telegraph.

If this metaphor is so silly, why does it still govern our thinking? What keeps us from brushing it aside as unnecessary, just as we would brush aside a branch blocking our path? Is there a way to understand human intelligence without leaning on flimsy intellectual crutches? And what price have we paid for leaning on this particular crutch for so long? The metaphor, after all, has guided an enormous amount of writing and thinking by researchers in a variety of scientific fields for decades. At what cost?

In an exercise I have conducted in class many times over the years, I begin by recruiting a volunteer and asking him to draw a one-dollar bill on the board. "In as much detail as possible," I say. When he finishes, I cover the drawing with a sheet of paper, take a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she finishes, I uncover the first drawing, and the class comments on the differences.

You may never have seen a demonstration like this, or you may have trouble imagining the results, so I asked Jinny Hyun, one of the interns at the institute where I conduct my research, to make the two drawings. Here is her drawing "from memory" (note the metaphor):

And here is the drawing she subsequently made with a dollar bill present:

Jinny was as surprised by the outcome as you probably are, but it is not unusual. As you can see, the drawing made without the bill is terrible compared with the one copied from a sample, even though Jinny has seen a dollar bill thousands of times.

What is going on? Don't we have a "representation" of what a dollar bill looks like "downloaded" into a "memory register" in our brain? Can't we simply "retrieve" it from there and use it to make our drawing?

Of course not: a thousand years of neuroscience will never locate a representation of a dollar bill stored in the human brain, for the simple reason that it is not there to be found.

A significant body of brain research shows that, in fact, numerous and sometimes extensive areas of the brain are often involved in seemingly trivial memory tasks.

When a person experiences strong emotions, millions of neurons in the brain can become more active. In 2016, the University of Toronto neuroscientist Brian Levine and colleagues conducted a study of passengers who had survived an airliner emergency; it concluded that memories of the incident heightened neural activity in "the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex" of the passengers.

The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assumption just pushes the problem of memory to an even more challenging level: how and where, ultimately, is the memory stored in the cell?

So what is happening when Jinny draws the dollar bill with no sample present? If Jinny had never seen a bill before, her first drawing would probably not resemble her second one at all. The fact that she had seen dollar bills before had somehow changed her. Specifically, her brain was changed in a way that allowed her to visualize the bill - that is, to re-experience, at least in part, the sensation of seeing the bill, even though it was not actually in front of her eyes.

The difference between the two sketches reminds us that visualizing something (that is, re-experiencing the sight of something that is no longer in front of our eyes) is far less accurate than actually seeing it. This is why we are much better at recognizing than at recalling.

When we reproduce something from memory (from the Latin re, "again", and producere, "to bring forth"), we have to try to relive an encounter with an object or event; but when we recognize something, we merely have to be aware that we have had the subjective experience of perceiving it before.

Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she had never made a deliberate effort to "memorize" the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the bill has in any sense been "stored" in Jinny's brain. She has simply become better prepared to draw it in detail, just as, through practice, a pianist becomes more skilled at playing a concerto without somehow taking in a copy of the sheet music.

From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behavior - one in which the brain is not entirely empty, but is at least free of the baggage of the information-processing (IP) metaphor.

As we move through life, we are shaped by a multitude of experiences. Three types are particularly noteworthy: 1) we observe what is happening around us (how other people behave, the sounds of music, instructions addressed to us, words on pages, images on screens); 2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); 3) we are punished or rewarded for behaving in certain ways.

We become more effective in our lives if we change as a result of these experiences - if we can now recite a poem or sing a song, if we are able to follow the directions we are given, if we respond to unimportant stimuli as we do to important ones, if we refrain from behaving in ways that get us punished, and if we behave more often in ways that get us rewarded.

Despite the misleading headlines, no one has the slightest idea what changes occur in the brain after we learn to sing a song or recite a poem. But neither the song nor the poem has been "downloaded" into the brain. The brain has simply changed in an orderly way so that we can now sing the song or recite the poem under certain conditions.

When called on to perform, neither the song nor the poem is "retrieved" from any storage place in the brain - any more than the movements of my fingers are "retrieved" when I drum on the table. We simply sing or recite; no retrieval needed.

A few years ago, I asked Eric Kandel - the Columbia University neuroscientist who won a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of Aplysia (a marine snail) after it learns something - how long he thought it would take us to understand the mechanism of human memory. He quickly replied: "A hundred years." I didn't think to ask whether he believed the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable: that the metaphor is not indispensable.

A number of cognitive scientists - notably Anthony Chemero of the University of Cincinnati, author of the 2009 book Radical Embodied Cognitive Science - now completely reject the notion that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe a different way of understanding thought: as a direct interaction between organisms and their world.

My favorite example of the enormous difference between the IP approach and what some call the "anti-representational" view of human functioning involves two different explanations of how a baseball player manages to catch a fly ball - one given by Michael McBeath, now at Arizona State University, and his colleagues in a paper published in Science in 1995.

According to the IP approach, the player must formulate a rough estimate of the initial conditions of the ball's flight - the force of the impact, the angle of the trajectory, and so on - then create and analyze an internal model of the path the ball is likely to follow, and then use that model to continuously guide and adjust, in time, the movements needed to intercept the ball.

This would be all well and good if we functioned like computers, but McBeath and his colleagues gave a simpler explanation: to catch the ball, the player simply has to keep moving in a way that holds the ball in a constant visual relationship with home plate and the surrounding scenery (technically, on a "linear optical trajectory"). This might sound complicated, but it is actually extremely simple, and it involves no computations, representations or algorithms.
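
To make the contrast concrete, here is a toy simulation of the heuristic side - my own simplified sketch (closer to the one-dimensional "optical acceleration cancellation" cousin of the strategy than to McBeath's full two-dimensional analysis), with every number invented for the example:

```python
# A fielder who never predicts the landing point. He only watches
# tan(elevation angle) to the ball and runs so as to cancel its optical
# acceleration: image accelerating upward -> ball lands behind, back up;
# image decelerating -> ball lands in front, run in.
dt, g = 0.01, 9.81
bx, by, bvx, bvy = 0.0, 0.01, 18.0, 22.0   # ball state (m, m/s)
fx, top_speed = 55.0, 8.0                  # fielder position, top speed
tans = []                                  # samples of tan(gaze angle)

while by > 0.0:
    bx += bvx * dt                         # plain projectile motion
    by += bvy * dt
    bvy -= g * dt

    gap = fx - bx
    if abs(gap) < 0.3:                     # ball practically overhead:
        continue                           # hold position
    tans.append(by / gap)
    if len(tans) >= 3:
        # discrete 2nd derivative of tan(angle) = optical acceleration
        accel = (tans[-1] - 2.0 * tans[-2] + tans[-3]) / dt ** 2
        speed = max(-top_speed, min(top_speed, 50.0 * accel))
        fx += speed * dt                   # no trajectory model anywhere

print(f"ball lands at x = {bx:.1f} m; fielder ends at x = {fx:.1f} m")
```

Whether this toy fielder ends up exactly under the ball depends on the made-up gains and speeds, but the point survives: the control loop contains no model of the trajectory at all.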

Two determined psychology professors at Leeds Beckett University in the UK, Andrew Wilson and Sabrina Golonka, count the baseball example among the many that can be understood outside the IP framework. For years they have blogged about what they call "a more coherent, naturalised approach to the scientific study of human behaviour... at odds with the dominant cognitive neuroscience approach".

These efforts, however, are still far from forming a mainstream movement; most cognitive scientists reject such criticism outright and cling to the IP metaphor, and some of the world's most influential thinkers have made grand predictions about humanity's future that depend on the metaphor's validity.

One of those predictions - made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others - holds that since human consciousness supposedly works like computer software, it will soon be possible to upload the human mind into a machine, where we will gain vastly powerful intellects and, quite possibly, immortality. This idea underlies the dystopian movie Transcendence (2014), starring Johnny Depp as a Kurzweil-like scientist whose mind is uploaded to the internet - with disastrous consequences for humanity.

Fortunately, since the IP metaphor is in no sense valid, we will never have to worry about a human mind running amok in cyberspace, and we will never achieve immortality by uploading ourselves anywhere. The reason is not just the absence of conscious software in the brain; the problem is deeper - call it the uniqueness problem - and it is at once inspiring and depressing.

Since the brain contains neither "memory banks" nor "representations" of stimuli, and since all that is required for us to function in the world is that the brain change in an orderly way as a result of our experience, there is no reason to believe that the same experience changes any two of us in the same way. If you and I attend the same concert, the changes that occur in my brain on hearing Beethoven's Symphony No. 5 will almost certainly be different from those that occur in yours. Those changes, whatever they are, are built on the unique neural structure that already exists, a structure that has itself developed over a lifetime of unique experiences.

As Sir Frederick Bartlett showed in his book Remembering (1932), this is why no two people will ever repeat a story they have heard in the same way, and over time their stories will become more and more different from each other.

No "copy" of the story is ever made; rather, each individual, upon hearing the story, is changed to some degree - enough that when asked about it later (in some cases days, months or even years after Bartlett first read them the story), they can relive, to some extent, the moments of listening to it, though not very accurately (see the first drawing of the dollar bill above).

I suppose this is inspiring, because it means each of us is truly unique - not just in our genetic code, but even in the way our brain changes over time. It is also depressing, because it makes the task of neuroscience almost unimaginably daunting. For any given experience, the orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every single brain.

Worse still, even if we had the ability to take a snapshot of all 86 billion neurons in a brain and then to simulate their state in a computer, that vast pattern would mean nothing outside the brain that produced it.

This is perhaps the most grotesque way the IP metaphor has distorted our understanding of human functioning. Whereas computers store exact copies of data - copies that can persist unchanged for long periods, even with the power switched off - the brain maintains our intellect only as long as it remains alive. We have no on/off buttons.

Either the brain continues its activity, or we are gone. Moreover, as the neuroscientist Steven Rose noted in his 2005 book The Future of the Brain, a snapshot of the brain's current state might also be meaningless unless we knew the entire life history of that brain's owner - perhaps even the details of the social environment in which he or she was raised.

Think about how hard this problem is. To understand even the basics of how the brain supports human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, and not just the varying strengths with which they are connected, but also how the brain's moment-to-moment activity contributes to the integrity of the system.
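
Even the bookkeeping alone is staggering. A deliberately naive, hedged calculation (assuming a single 4-byte number per connection, which is itself a gross oversimplification of a synapse):

```python
# Rough storage cost of one static number per synaptic connection.
connections = 100e12      # interconnections, figure from the text above
bytes_per_connection = 4  # one 32-bit value each (an assumed simplification)

total_bytes = connections * bytes_per_connection
print(f"{total_bytes / 1e12:.0f} TB for static connection strengths alone")
# -> 400 TB, and such a snapshot would still say nothing about the
#    moment-to-moment dynamics described above.
```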

Add to this the uniqueness of every brain, created in part by the uniqueness of every person's life history, and Kandel's prediction begins to look overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested that merely working out basic neuronal connectivity will take "centuries".)

Meanwhile, vast sums of money are being poured into brain research based on flawed ideas and unkept promises. The most egregious instance of neuroscience gone awry was documented in a recently released Scientific American report on the $1.3 billion allocated to the Human Brain Project, launched by the European Union in 2013.

Convinced by the charismatic Henry Markram that he could create a supercomputer simulation of the human brain by 2023, and that such a model would yield breakthroughs in the treatment of Alzheimer's disease and other disorders, EU authorities funded the project with virtually no restrictions. Less than two years in, the project turned into a "brain wreck", and Markram was asked to step down.

We are living organisms, not computers. Get over it. Let's keep trying to understand ourselves, but without carrying useless intellectual baggage. The IP metaphor has had a half-century run, producing precious few insights along the way. It is time to hit the DELETE key.

Translation: Vlada Olshanskaya and Denis Pronin.






