As far as modern medical science is concerned, you are a carefully structured bag of pressurized volatile chemicals. There is no reason to believe there is anything special about your body compared to a corpse or a Buick, except for one thing: your body contains the brain of a living person.
There is no evidence that any new laws of physics or supernatural phenomena are involved in human cognition. Vitalism and dualism are well and truly dead. This is, at least at first sight, disturbing, because based on human experience we certainly do not feel like a mere collection of chemicals. Being human, on a deep and intuitive level, seems like something that should not be possible for a complex molecular machine.
This raises questions: if we knew how consciousness works, could we build machines, or write software, that has it too? Can efforts to create intelligent software teach us lessons about the nature of the human mind? Can we finally understand subjective experience well enough to decide, once and for all, what moral weight we should give to the experiences of less sophisticated minds, such as those of animals, dolphins, or fetuses?
These are difficult questions that philosophers have grappled with for centuries. However, since philosophy as a discipline is largely incapable of actually settling problems, very little progress has been made. Here is a traditional philosopher talking about consciousness on the TED stage.
This is a good introduction to the topic, though Chalmers's philosophy trends toward the supernatural, positing a metaphysical phenomenon that produces consciousness without interacting with the physical world at all. This is what I would call an easy way out.
If you use magic to explain something, you are not really explaining it; you have simply given up, with style. Perhaps Star Trek handled the same problem better.
Recently, science has begun to make significant progress on this problem, as artificial intelligence and neuroscience have started to push at its boundaries. This has inspired new areas of fact-based philosophical thinking. The insights gained are extremely interesting, and they help to sketch the outline of a holistic theory of consciousness and point the way toward machines that can have experiences the way a mind does.
Neuroscience and Consciousness
In neuroscience, it is largely defects of the brain that teach us how it functions. When the brain breaks, we get to look behind the curtain of the mind. Taken together, these insights begin to outline what the structure of consciousness must look like, and the result is breathtaking.
For example, Cotard's delusion (better known as "walking corpse syndrome") is a condition, often caused by severe schizophrenia or physical damage to the parietal lobe, that produces various symptoms, the most striking of which is the delusion that the sufferer does not exist.
Those suffering from Cotard's delusion lack self-awareness. They sincerely believe that they do not exist, which often leads them to conclude that they are dead. Descartes famously declared that the one fundamental truth on which everything else can be built is "I think, therefore I am." People with Cotard's delusion disagree. In other words, the component of consciousness responsible for self-awareness can be selectively switched off by damage to a particular area of the brain, leaving the rest of the human intellect relatively untouched.
A related condition is "blindsight," which affects some people whose blindness is due to damage to the visual centers of the brain. Blindsight patients can instinctively catch objects thrown at them, and if you place objects in front of them and ask them to guess what they are, they perform far better than chance. Yet they do not believe they can see: subjectively, they are blind.
Blindsight patients are unique in that they have a working sense (sight) but no awareness of it. The brain damage did not destroy their ability to process visual information, only their ability to be consciously aware of that processing.
Blindsight occurs when one particular circuit carrying information out of the visual cortex (the V1 pathway) is damaged while the other two remain intact, leaving neuroscientists in the unique position of knowing exactly which neural circuit is required for visual information to enter conscious experience, but not why.
Interestingly, the reverse of blindsight is also possible: victims of Anton-Babinski syndrome lose their sight but retain a conscious perception of seeing, insisting that they can see normally and inventing elaborate explanations for their inability to perform basic visual tasks.
There have also been experiments in selectively switching off consciousness. For example, there is a small region near the center of the brain called the claustrum which, when stimulated with an electrode, at least in some patients, completely shuts off consciousness and higher cognition, which return a few seconds after the current stops.
Interestingly, during stimulation the patient is not asleep: his eyes are open and he remains sitting upright. If the patient is asked to keep repeating a task while the current is on, he simply trails off and stops what he is doing. The claustrum is thought to coordinate communication between different areas of the brain, including the hippocampus, amygdala, caudate nucleus, and possibly others.
Some neuroscientists believe that because the claustrum serves to coordinate communication between different brain modules, stimulating it disrupts that coordination and causes the brain to fall apart into separate components, each of which is largely useless in isolation and incapable of producing subjective experience.
This idea is consistent with what we know about the function of anesthetics, which we used long before we understood how they work.
General anesthetics are now believed to interfere with communication among the various high-level components of the brain, preventing them from forming whatever neurological coalition is needed to create coherent conscious experience. This has a certain intuitive appeal: if the visual cortex cannot send information to your working memory, you have no way to have a conscious visual experience that you could talk about later.
The same goes for hearing, memory, emotion, internal monologue, planning, and so on. All of these systems are modules which, if disconnected from working memory, would remove an important part of conscious experience.
In fact, it may be more accurate to speak of consciousness not as a single, unified entity, but as an interplay of many different kinds of awareness, linked together by their inclusion in the narrative stream of memory. In other words, instead of "consciousness," you can have visual awareness, auditory awareness, memory awareness, and so on. It is an open question whether anything remains when you take all of these pieces away, or whether that fully dissolves the question of consciousness.
Theories of Consciousness
Daniel Dennett, also known as the "Cranky Old Man" of consciousness research, believes that this is indeed the case: that consciousness is simply not as special as most people assume. His model of consciousness, which some have accused of being overly reductionist, is called the "multiple drafts" theory, and it works like this:
The brain functions as a set of semi-independent, interconnected modules that continuously broadcast information onto the network, semi-autonomously, often in response to signals they receive from other modules. Signals that trigger responses from other modules (a smell evoking a visual memory, say) cascade between modules and escalate. A memory can evoke an emotion, an executive process can produce a response to that emotion, and the language center can shape that response into part of an internal monologue.
This process increases the likelihood that the entire cascade of related signals will be picked up by the brain's memory-encoding machinery and become part of the short-term memory record: the "story" of consciousness, parts of which will make it into long-term memory and become part of the permanent record.
Consciousness, according to Dennett, is nothing more than the sequential narrative made up of these kinds of cascades, which together form the system's entire record of the world it exists in and its path through it. Since modules do not have introspective access to their own workings, when we are asked to describe the inner nature of one module's behavior, we come up empty. As a result, we intuitively feel that our subjective experience is indefinable and inexpressible.
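The cascade-and-record picture above can be caricatured in a few lines of Python. This is a toy illustration of the structure being described, not a cognitive model: the module names, the reaction table, and the encoding threshold are all invented for the sake of the sketch.

```python
# Toy sketch of the "multiple drafts" picture: modules respond to signals
# from other modules, cascades propagate, and only sufficiently large
# cascades get encoded into the narrative record. All names and numbers
# here are invented for illustration.

# Which modules respond to signals from which other module.
REACTIONS = {
    "smell":         ["visual_memory"],
    "visual_memory": ["emotion"],
    "emotion":       ["executive"],
    "executive":     ["language"],
    "language":      [],
}

def cascade(initial_signal):
    """Propagate a signal through the modules, recording each hop."""
    draft = [initial_signal]        # one "draft" of the narrative
    frontier = [initial_signal]
    while frontier:
        module = frontier.pop()
        for responder in REACTIONS.get(module, []):
            draft.append(responder)
            frontier.append(responder)
    return draft

def encode_to_memory(draft, threshold=3):
    """Only cascades that grow large enough enter the memory record."""
    return draft if len(draft) >= threshold else None

narrative = []
for stimulus in ["smell", "language", "smell"]:
    record = encode_to_memory(cascade(stimulus))
    if record is not None:
        narrative.append(record)

print(narrative)  # the lone "language" blip never makes the record
```

Note that the isolated "language" signal fires but leaves no trace in the narrative, which is the point of the model: what we call consciousness is only the record of cascades that got encoded.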
For example, you can ask someone to describe what red looks like. The question seems absurd, not because of any fact inherent in the universe, but because the structure of the brain does not allow us to know how the color red is implemented in our own hardware. As far as our conscious experience is concerned, it's just... red.
Philosophers call such experiences "qualia" and often assign them an almost mystical significance. Dennett suggests they are more like the neurological equivalent of an error page, thrown up by the brain when asked what goes on behind the scenes in a region that is inaccessible to conscious narration. Dennett himself puts it this way:
There is no single, definitive "stream of consciousness," because there is no central headquarters, no Cartesian Theater where "it all comes together" for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity, but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.
There are, of course, other schools of thought. One model currently popular among some philosophers is called integrated information theory, which holds that the consciousness of a system corresponds to its internal network density: the complexity of its overall structure relative to the complexity of its components.
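To make the "network density" intuition concrete, here is a deliberately crude toy. Real integrated information theory defines a measure called phi over a system's causal dynamics, which is far more involved; this sketch only captures the flavor, scoring a graph of modules by its weakest bipartition, i.e. how few links must be cut to split the system in two. The graphs are invented examples.

```python
from itertools import combinations

def integration(nodes, edges):
    """Minimum number of edges crossing any bipartition of the nodes.

    A crude stand-in for "integration": a loosely coupled system can be
    split cheaply; a densely interconnected one cannot.
    """
    nodes = list(nodes)
    best = None
    # Checking parts up to half the system size covers every bipartition,
    # since a part and its complement define the same split.
    for size in range(1, len(nodes) // 2 + 1):
        for part in combinations(nodes, size):
            part = set(part)
            crossing = sum(1 for a, b in edges
                           if (a in part) != (b in part))
            if best is None or crossing < best:
                best = crossing
    return best

chain = [("a", "b"), ("b", "c"), ("c", "d")]
dense = [(a, b) for a in "abcd" for b in "abcd" if a != b]

print(integration("abcd", chain))  # → 1: snip one link and the chain falls apart
print(integration("abcd", dense))  # → 6: every split severs many links
```

Toys like this also make the standard criticism easy to see: nothing in such a measure distinguishes a brain from any other densely wired network, which is exactly the objection raised below.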
However, this model has been criticized for implying that certain intuitively unconscious, simply structured information systems would be more conscious than humans. Scott Aaronson, a computer scientist and vocal critic of integrated information theory, put it this way a few months ago:
"In my opinion, the fact that Integrated Information Theory is wrong, demonstrably wrong, for reasons that go to its core, puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Mostly, the competing theories of consciousness have been so vague, fluffy, and malleable that they can only aspire to wrongness."
Another proposed model posits that consciousness is the result of minds modeling themselves, an idea that may be compatible with Dennett's model but which suffers from the possibly fatal flaw of implying that a Windows computer running inside a virtual machine is, in some sense, conscious. The list of models of consciousness is about as long as the list of everyone who has ever felt inclined to take a crack at such a difficult problem.
They run the gamut from the overtly mystical to the raw, cynical pragmatism of Dennett. For my money, Dennett's multiple drafts theory seems, if not a complete account of why people talk about consciousness, at least a solid start down the road.
Artificial intelligence and consciousness
Suppose that a few years from now, progress in neuroscience yields a Grand Unified Theory of Consciousness. How could we know whether it is correct? What if the theory misses something important, and how would we know if it did? The history of science has taught us to beware of appealing ideas we cannot test. So how could we test a model of consciousness?
Well, we could try to build one.
Our ability to create intelligent machines has recently undergone a renaissance. Watson, the intelligent software developed by IBM that won the game show Jeopardy!, is also capable of a surprisingly wide range of intelligent tasks, having been adapted as a talented chef and a superhuman diagnostician.
Although IBM calls Watson a cognitive computer, the truth is that Watson is a triumph of traditional artificial intelligence: intelligent software that makes no attempt to implement specific findings from neuroscience and brain research. Watson runs a large number of very different machine learning algorithms, some of which evaluate the outputs of the other algorithms to judge their usefulness, along with many hand-tuned algorithms wired together into a productive whole.
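The architecture just described, in which some algorithms evaluate the outputs of others, can be illustrated with a toy ensemble. Everything here (the experts, their rules, the weighting scheme, and the sample question) is an invented stand-in for illustration, not IBM's actual pipeline.

```python
# Toy ensemble: several "expert" algorithms each propose an answer with a
# confidence score, and a meta-evaluator reweights the experts according
# to how often they have been right before.

class Expert:
    def __init__(self, name, rule):
        self.name = name
        self.rule = rule          # maps a question to (answer, confidence)
        self.track_record = 1.0   # running weight, updated by feedback

    def propose(self, question):
        return self.rule(question)

def ensemble_answer(experts, question):
    """Score each proposed answer by confidence times track record."""
    scores = {}
    for expert in experts:
        answer, confidence = expert.propose(question)
        scores[answer] = scores.get(answer, 0.0) + confidence * expert.track_record
    return max(scores, key=scores.get)

def feedback(experts, question, correct):
    """Meta-evaluation: boost experts that answered correctly."""
    for expert in experts:
        answer, _ = expert.propose(question)
        expert.track_record *= 1.5 if answer == correct else 0.5

experts = [
    Expert("keyword",  lambda q: ("Toronto", 0.9) if "city" in q else ("unknown", 0.1)),
    Expert("geo",      lambda q: ("Chicago", 0.6)),
    Expert("fallback", lambda q: ("Toronto", 0.3)),
]

question = "Which US city has two airports named for a WWII hero?"
print(ensemble_answer(experts, question))   # → Toronto (before feedback)
feedback(experts, question, correct="Chicago")
print(ensemble_answer(experts, question))   # → Chicago (after reweighting)
```

The design point is that no single expert needs to be reliable; the meta-layer learns which experts to trust for which kinds of questions, which is the essence of the evaluation-of-evaluations structure described above.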
As Watson improves and its reasoning becomes deeper and more useful, it is easy to imagine Watson's technology, along with technologies not yet developed, being used to create systems that mimic the functions of particular known brain systems, and to integrate those systems in a way that produces conscious experience.