Top 5 Philosophical issues of Artificial Intelligence (AI)

NuAIg.ai
8 min read · Oct 29, 2020

Artificial Intelligence (AI) is a highly active scientific field. Born in the 1950s, it remains vital today. Over the course of its development, competing research programs have stimulated one another, and new problems and new ideas have continued to emerge. On the one hand, theoretical progress has met with considerable resistance; on the other, the technology has achieved brilliant practical results. Such a combination is rare in the history of science.

The goal of AI, and of AI technology solutions, is to reproduce human intelligence by mechanical means. Its research object therefore spans both the material and the mental realms, which makes it exceptionally complex. The nature of intelligence accounts for the tortuous path of AI's development, and many of the problems AI encounters are directly related to philosophy. It is not difficult to see why many AI experts take a strong interest in philosophy; likewise, AI research results have attracted much attention from the philosophical community.

As basic research for contemporary artificial intelligence, cognitive research aims to understand clearly the structure and processes of conscious activity in the human brain, and to give a logical account of how intelligence, emotion, and intention combine in human consciousness, so that AI experts can express these conscious processes formally. To simulate human consciousness, artificial intelligence must first study the structure and activity of consciousness. How is consciousness possible? Searle said: "The best way to explain how something is possible is to reveal how it actually exists." This is why cognitive science matters so much for advancing artificial intelligence, and it is the main reason the cognitive turn occurred. Because of the synergy between philosophy and cognitive psychology, cognitive neuroscience, brain science, artificial intelligence, and related disciplines, no matter how computer science and technology develop, from physical symbol systems, expert systems, and knowledge engineering to biological and quantum computers, the field remains inseparable from philosophy's knowledge and understanding of the whole process of human conscious activity and its many contributing factors.

Whether one belongs to the strong AI school or the weak AI school, from an epistemological point of view artificial intelligence relies on physical symbol systems to simulate some functions of human thinking. Its true simulation of human consciousness, however, depends not only on technological innovation in the machine itself but also on philosophy's understanding of the process of conscious activity and the factors that influence it. From today's vantage point, the philosophical problem of artificial intelligence is no longer what its essence is, but how to solve more specific problems of intelligent simulation.

1. On the question of intentionality — Can a machine have a mind or consciousness? If so, can it intentionally harm humans?

The debate over whether computers have intentionality can be summarized in three questions: 1) What is intentionality? When a robot performs a specific behavior according to instructions, is that intentionality? 2) Human beings know what they are doing before they act; they have self-awareness and know what results their actions will produce, and this is an important feature of human consciousness. How, then, should we understand a robot performing a behavior according to instructions? 3) Can intentionality be programmed?

Searle believes that "the way the brain functions to produce the mind cannot be a matter of simply running a computer program." On the contrary, one must ask: if intentionality is a comprehensible aspect of mind, and it can be understood, why can it not be programmed? Searle holds that computers have syntax but not semantics. Yet in essence syntax and semantics are two sides of one issue and are never separated; if a program can carry syntax and semantics together, do we still need to distinguish them? Searle's point is that even if a computer replicates intentionality, the replica is not the original. In fact, once we have a clear understanding of human cognition and its relation to behavior, we should be able to program the relationship between the mental processes and behaviors of the human brain and feed the computer all the information we possess, making it "know everything." At that point, can we still say, as Searle does, that artificial intelligence is not intelligence, that it has no intentionality and no mental processes, because it lacks human proteins and nerve cells? Is a copy of intentionality "intentionality"? Is a copy of understanding "understanding"? Is a copy of ideas "ideas"? Is a copy of thinking "thinking"? Our answer: the foundations differ, but the function is the same.

Relying on a different foundation to realize the same function, artificial intelligence is simply a special way of realizing human intelligence. Searle uses intentionality to deny the depth of artificial intelligence, and although there is some basis for this, once artificial intelligence can simulate human-like thought, even people who hold that artificial intelligence and human intelligence differ in essence will feel that the distinction no longer matters much. Searle's viewpoint only mystifies the human mind once again.

2. On the question of Intelligence — Can machines solve problems using intelligence the same way humans do, or is there a limit to the intelligence a machine can bring to a complex problem?

Humans can unconsciously draw on so-called tacit abilities; in Polanyi's words, "humans know more than they can tell." This covers skills such as riding a bicycle or kneading dough, as well as higher-level practical know-how. If we cannot articulate the rules, we cannot teach them to a computer. This is Polanyi's paradox. To get around it, computer scientists did not try to reverse-engineer human intelligence; instead they developed a new way of thinking for artificial intelligence: thinking with data.
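The contrast can be made concrete with a toy sketch in plain Python (an invented example, not any system named in the article): rather than writing down a rule we cannot articulate, the program generalizes from labeled examples alone.

```python
# Toy illustration of rule-free, data-driven learning: a 1-nearest-neighbor
# classifier picks up a pattern from examples without anyone stating the rule.

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Labeled examples: small coordinates -> "low", large coordinates -> "high".
# That rule is never written anywhere; it is implicit in the data.
examples = [((1, 1), "low"), ((2, 1), "low"),
            ((8, 9), "high"), ((9, 8), "high")]

print(nearest_neighbor(examples, (1, 2)))  # a point near the "low" cluster
print(nearest_neighbor(examples, (9, 9)))  # a point near the "high" cluster
```

The classifier "knows" the pattern only in the way the training data embodies it, which is precisely the inversion of the rule-writing approach that Polanyi's paradox blocks.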

Rich Caruana, a senior researcher at Microsoft Research, put it this way: "You may think that the principle of artificial intelligence is that we first understand humans and then build artificial intelligence in the same way, but that is not the case." He gave airplanes as an example: airplanes were built long before we understood how birds fly, and on different aerodynamic principles, yet today our airplanes fly higher and faster than any animal.

Today many people assume that smart computers will take away our jobs: before you finish your breakfast, a machine has already completed your week's workload, and it takes no breaks, no coffee, no pension, and no sleep. In fact, although many tasks will be automated in the future, at least in the short term this new kind of intelligent machine is more likely to work alongside us.

The problem with artificial intelligence is a modern version of Polanyi's paradox. We do not fully understand the learning mechanism of the human brain, so we let artificial intelligence think like a statistician. The irony is that we now understand very little about how artificial intelligence thinks either, so we have two unknown systems. This is often called the "black box problem": you know the input data and the results, but you do not know how the box in between reached its conclusion. As Caruana says, "We now have two different types of intelligence, but we cannot fully understand either of them."

Artificial neural networks have no language ability, so they cannot explain what they do or why, and, like all artificial intelligence, they lack common sense. People are increasingly worried that some AI systems may quietly harbor unconscious biases such as sexism or racism. For example, one piece of software recently used to assess the likelihood that offenders would commit crimes again was twice as harsh on Black defendants. If the data such systems receive were impeccable, their decisions might well be correct, but most of the time human prejudices are baked into that data.
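How prejudice rides in on the data can be shown with a deliberately simple hypothetical sketch (plain Python, invented numbers, not the recidivism software mentioned above): a model that merely learns flag rates per group faithfully reproduces whatever skew its training records contain.

```python
from collections import Counter, defaultdict

def fit_flag_rates(records):
    """Learn P(flagged | group) by counting. The model has no malice;
    it simply mirrors its training data."""
    counts = defaultdict(Counter)
    for group, flagged in records:
        counts[group][flagged] += 1
    return {g: c[True] / sum(c.values()) for g, c in counts.items()}

# Invented history in which group "B" was flagged twice as often as
# group "A" for otherwise identical case mixes.
history = ([("A", True)] * 10 + [("A", False)] * 30 +
           [("B", True)] * 20 + [("B", False)] * 20)

rates = fit_flag_rates(history)
print(rates)  # the learned model inherits the 2x disparity
```

Nothing in the code discriminates; the disparity lives entirely in the records, which is why impeccable-looking pipelines can still emit prejudiced decisions.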

3. On the question of Ethics — Can machines be dangerous to humans? How will scientists make sure that machines behave ethically and do not become a threat?

Scientists disagree sharply over whether machines can have emotions such as love or hate, but they also note that there is no reason to expect AI to tend consciously toward good or evil. When considering how AI could become a risk, experts believe two scenarios are most likely:

AI is designed to perform devastating tasks: autonomous weapons are artificial intelligence systems built to kill. In the wrong hands, these weapons could easily cause mass casualties. An AI arms race could also unintentionally trigger an AI war with heavy losses of life. To avoid being disabled by hostile forces, the "off switches" of these weapons will be designed to be extremely hard to reach, so people could well lose control in such situations. This risk already exists with dedicated (narrow) AI, but it grows as AI's intelligence and autonomy increase.

AI is developed to perform a useful task, but the way it performs it is destructive: this can happen whenever the goals of humans and of artificial intelligence are not fully aligned, and solving this alignment problem is no easy task. Imagine summoning a smart car to take you to the airport as fast as possible: it may follow your instruction all too literally, in ways you never wanted, leaving you chased by police helicopters or carsick from speeding. If the mission of a superintelligent system is an ambitious geoengineering project, the side effect may be the destruction of an ecosystem, with human attempts to stop it treated as threats to be eliminated.

4. On the question of Conceptuality — Conceptual framework issues in artificial intelligence

Any science is built on what is already known; even the capacity for scientific observation depends on known things. We can only understand the unknown by relying on known knowledge. The known and the unknown are always a pair of contradictions that coexist and depend on each other: without the known, we cannot grasp the unknown; without the unknown, scientific knowledge cannot develop and evolve. There is ample evidence that when people observe objects, the experience the observer gains is not determined solely by the light entering the eye, nor solely by the image on the retina; even two people watching the same object will have different visual experiences. As Hanson said, when the observer looks at an object, he sees much more than what meets the eyeball. Observation is very important to science, but "observation statements must be constructed in the language of some theory," and "observation statements are public entities, elaborated in public language; they involve theories of varying degrees of universality and complexity." This shows that observation presupposes theory, that science needs theory as its forerunner, and that scientific understanding is not built on the unknown. Businesses often lack an understanding of the best AI use cases for their operations; AI consulting services can help them steer the business with AI.



NuAIg assists you across the Artificial Intelligence value chain: data management, data curation, integration and labelling, and knowledge graph generation.