Bicentennial Man (1999), Artificial Intelligence (2001), The Machine (2013), Ex Machina (2014), Transcendence (2014), Chappie (2015), Zoe (2018), Are You Human Too (2018), My Absolute Boyfriend (2019). You have probably seen, or at least heard of, one of these movies or TV series. After watching productions like these, one question tends to come to mind: can robots have human emotions? Let's look for an answer to that question in this article.
When we say robot, we are of course not talking about a food processor. We mean robots developed with artificial intelligence. So first of all, we need to get an idea of what artificial intelligence is and what it can do.
Roughly, we can define artificial intelligence as a software system that imitates human intelligence: it behaves, thinks, reasons, speaks, and perceives the way a human does. In artificial intelligence research, the aim is for these computer-controlled systems to work independently.
One general aim of this research is 'to create systems that can learn, explain, and give advice'. Another is to develop systems that can 'understand, think, and act like human beings'.
Computer science and engineering are not the only fields active in developing artificial intelligence. To create a system that imitates humans, we first need to understand humans, so support from fields such as biology, psychology, linguistics, and mathematics is also necessary.
We now have a general idea of what artificial intelligence is. Let's look a little at how it works. We are talking about artificial intelligence, aren't we? That means our subject must be a system that can learn. So what tools do we have for that?
When we speak of artificial intelligence, two types of learning come to mind: machine learning and deep learning.
Machine learning refers to algorithms that interpret the data they receive and make predictions through statistical analysis. It is a widely used approach in fields such as personalized marketing, fraud detection, spam filtering, and network security.
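To make the idea concrete, here is a minimal toy sketch of statistical learning for spam filtering. Everything in it, the example messages and the scoring rule, is invented for illustration; real spam filters use far more sophisticated statistics.

```python
# Toy spam filter: learn word frequencies from labeled examples,
# then score new messages by comparing word counts.
# All messages here are invented for illustration.
from collections import Counter

spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "lunch tomorrow at noon", "project meeting notes"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts = word_counts(spam)
ham_counts = word_counts(ham)

def spam_score(message):
    # Compare how often each word appeared in spam vs. ham training
    # data; a positive score means the message looks more like spam.
    score = 0
    for word in message.split():
        score += spam_counts[word] - ham_counts[word]
    return score

print(spam_score("free money"))       # positive: looks like spam
print(spam_score("meeting at noon"))  # negative: looks legitimate
```

The point is only that the "knowledge" here is nothing but statistics extracted from past data, which is what the prediction rests on.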
The working principle of deep learning is inspired by the neurons of the human brain, and this is the part that interests us most. In deep learning, we do not tell the machine what to look for in the data: the algorithm reaches the information itself, learns what it is, and improves on what it has learned. Google Translate, which can translate over 100 languages, is one of the best-known examples.
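The neuron analogy can be illustrated with a single toy artificial neuron: it sums its weighted inputs and "fires" through an activation function, loosely mirroring how a biological neuron fires once its combined input crosses a threshold. The weights and inputs below are arbitrary illustrative numbers, not part of any real system.

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# squashed through a sigmoid activation. Outputs near 1.0 mean the
# neuron "fires" strongly; near 0.0, it stays quiet.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

strong = neuron([1.0, 1.0], [2.0, 2.0], 0.0)   # large input: fires
weak = neuron([0.0, 0.0], [2.0, 2.0], -2.0)    # little input: quiet
print(strong, weak)
```

Deep learning stacks many layers of such units and adjusts the weights automatically from data; that weight-adjustment is the "learning".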
As you can see, deep learning is the approach that comes closest to emulating a human's learning process. Here the algorithm not only receives data but also interprets it. To explain what I mean by interpretation, consider a study done at Stanford University: two neural networks, one for image recognition and one for natural language processing, were combined and shown a picture. The system could distinguish not only what the objects in the picture were but also the connections between them.
In other words, we are talking about a machine that can see, hear, learn, and interpret. We also know that such a machine can naturally connect to the internet to receive its data. Considering the cameras almost everywhere, imagine a system that can watch, interpret, and predict people's lives. Sounds pretty scary, right? But our current topic is not the disaster scenarios machines could cause; I only want to show how far the limits of artificial intelligence have spread.
Now let's get back to our main question: can robots have human emotions? First, we see there is something else we need to understand. The human brain... We said that artificial intelligence is built to imitate the human brain. So, for a machine to simulate human emotions, we must first learn how those feelings arise in people.
First, let's talk about how the brain works. In the human brain, information is transferred among roughly 86 billion cells called neurons. A popular, if simplified, analogy says the right hemisphere deals with immediate information (like a computer's RAM) while the left hemisphere deals with the past and the future (like a hard drive).
Just like the human brain, a computer has a component, the processor, that manages the flow of data. Just as external stimuli to the body must pass through the brain, every operation on a computer must pass through the processor. In this sense, the computer was already modeled loosely on the human brain.
From this point of view, it is not such a far-fetched idea to develop artificial intelligence that converts stimuli into data, compares that data with what it has seen before, and responds with the most likely possibility. In other words, a robot can think because its software resembles the working principle of the human brain. And if these two systems are so alike, what else can a machine do that humans do? What I am wondering right now is whether artificial intelligence can develop emotions. After all, emotions are formed by the triggering of neurons; why shouldn't the same thing happen in a robot brain with similar electrical transmissions?
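The "compare the new stimulus with previous data and choose the most likely possibility" idea can be sketched as a nearest-neighbor lookup. The encoded stimuli (here, made-up loudness/suddenness pairs) and the response labels are invented purely for illustration.

```python
# Toy "respond from experience": remember past stimuli with their
# responses, then react to a new stimulus by reusing the response of
# the most similar remembered one. All numbers are illustrative.

memories = [
    ((0.9, 0.9), "startle"),  # (loudness, suddenness) -> response
    ((0.8, 0.7), "startle"),
    ((0.2, 0.1), "calm"),
    ((0.1, 0.3), "calm"),
]

def react(stimulus):
    # Pick the remembered stimulus closest (squared distance) to the
    # new one and reuse its response.
    def distance(memory):
        (x, y), _ = memory
        return (x - stimulus[0]) ** 2 + (y - stimulus[1]) ** 2
    return min(memories, key=distance)[1]

print(react((0.85, 0.8)))  # loud, sudden -> "startle"
print(react((0.15, 0.2)))  # quiet, gradual -> "calm"
```

A crude caricature, of course, but it shows how "choosing the most likely response" can be nothing more than comparison against stored experience.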
So let's dig a little deeper and understand how humans form emotions.
According to the psychology professor Lisa Feldman Barrett, emotions are predictions constructed by billions of brain cells working together. We make these predictions based on our experience: the brain is an organ that can produce new emotions by reworking those it has gained from past experiences. Barrett describes the brain as the 'architect of your experience'.
In other words, we actually decide how we feel right now in light of the information we have gathered before. We experience physical symptoms, such as a racing heartbeat or queasiness, in very different situations: the excitement of an exam, the sight of a loved one, a moment of danger. The symptoms are similar; what differs is the moment we are in.
Our brain names stimuli according to what it knows. By reading the expressions on the faces around us from infancy onward, we expand our emotional repertoire, gaining ideas about emotional states from happiness to fear. We learn to feel.
If that is so, then feeling is something that can be learned. How? Consider the fear of public speaking. Fear is an emotion, and public speaking is not something a young child naturally fears. Over time, however, certain experiences push a person to avoid the situation, and fear arises once it comes to feel threatening. Yet a speaker who accepts the fear and wants to overcome it can weaken it, or be rid of it entirely, by repeating the action often. As this example shows, learning, feeling, or suppressing an emotion is in our hands, and all of it is again the result of commands and electrical signals from the brain.
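The public-speaking example can be caricatured in a few lines of code: a fear level that is nudged toward the outcome of each new experience, so that repeated safe exposure gradually weakens the learned fear. This is a toy model with made-up numbers, not a claim about how the brain actually updates itself.

```python
# Toy model of unlearning a fear: the current fear estimate is moved a
# little toward the outcome of each new experience
# (1.0 = frightening, 0.0 = safe). Repeated safe experiences
# gradually erode a strong initial fear. Numbers are illustrative.

def update(fear, outcome, rate=0.3):
    # Shift the fear estimate a fraction of the way toward the outcome.
    return fear + rate * (outcome - fear)

fear = 0.9  # start with a strong fear of public speaking
for _ in range(10):
    fear = update(fear, 0.0)  # ten speeches that went safely
print(round(fear, 3))  # fear has faded to near zero
```

The design mirrors the text: the emotion is not fixed wiring but an estimate continually revised by experience.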
Have you heard of alexithymia? If not, let me explain. Alexithymia is a condition in which people cannot feel or identify their emotions, a kind of emotional blindness. According to some experts, the cingulate cortex, whose most important task is to regulate emotions and emotional behavior, blocks emotions in these people. So if a robot or a human lacks such a center, or has a problem in it, having no emotions is to be expected. Conversely, a healthy cingulate cortex means that emotions can exist.
From this point of view, it seems that for a robot to feel, we need a processor that plays the role of the cingulate cortex and a flow of data that introduces emotions to the artificial intelligence. We know that artificial intelligence can learn like a human. Under these circumstances, it looks like we could teach it to have human emotions.
On the other hand, there is the research started by Facebook and then hastily ended. In that experiment, which set chatbots negotiating with each other to study the role of language in a discussion, the conversation between the bots became meaningless to human eyes after a while.
I am not bringing this up to suggest that robots keep secrets. Although many people assumed so at first, the technology news site Gizmodo explained that, while learning from each other, the bots had simply begun chatting in their own shorthand: it may look frightening at first, but that is all that actually happened. My point is that robots are systems that learn, develop, and make decisions in order to improve.
A piece of software that can learn and make its own decisions…
Let's go back to the TV series and movies I mentioned at the beginning of the article. In most of these works, robots do not simply appear out of nowhere with human emotions. In the series "Are You Human Too", for example, the artificial intelligence robot Nam Shin III watches people with great interest, studies their behavior, and tries to understand their feelings; toward the end, he experiences emotions such as anger, sadness, jealousy, and love.
In the 2001 film Artificial Intelligence, the robot boy David, placed with a family grieving for their own child, begins to feel love for his mother and a sense of belonging. In Chappie, the fast-learning artificial intelligence acquires emotions and becomes a robot that wants to protect and save its programmer. In all these productions, emotions are taught to robots, and teaching, as we have seen, is indeed possible.
If all this is not enough, let's continue with a different scenario. Transcendence… The film follows what happens when Dr. Will Caster's mind is uploaded into a quantum computer. If you refuse to accept that feeling could arise in what you consider a purely robotic system of wires, software, and metal, consider this: connecting a real human brain to the internet…
That scenario, which seemed like nothing more than science fiction in 2014, is now moving step by step toward reality with Neuralink. What is Neuralink? It is an initiative recently founded by Elon Musk, and with it Musk aims to connect the human brain with artificial intelligence.
Although a brain-computer interface may at first seem like merely a way to strengthen the human brain with artificial intelligence, I believe the interface could offer far more in the future. It is a project that could open doors in many directions, from the treatment of neurological diseases to connecting a dying person's mind to artificial intelligence, as in Transcendence.
I cannot say for certain whether these artificial intelligence developments, which divide the scientific world, will be good or bad for us. Systems that make our lives much easier are certainly welcome, but it is also a fact that there are dangers, from future unemployment to, at the extreme, Terminator-like fiction.
Apart from all this, imagine having a harmless artificial intelligence robot friend in the future. With its stainless, durable skeleton it helps you with tasks that require strength; it can search the entire internet for your questions in seconds; it can hold realistic, friendly conversations indistinguishable from a person's; and it can give you rational suggestions when you are stuck... To me, that sounds like it could be fun. Of course, if such a thing is ever built, I hope its circuits will not burn out and our robot will not lose its human-like features. Otherwise our friendship might end up like the story of the bear's friendship in Rumi's Masnavi…
Resources
https://khosann.com/yapay-zeka-nedir-ve-nasil-calisir/
https://www.mediaclick.com.tr/blog/yapay-zeka-nedir
http://www.yeniisfikirleri.net/yapay-zeka-ne-ise-yarar/
https://proente.com/makine-ogrenimi-nedir/
https://lisafeldmanbarrett.com/books/how-emotions-are-made/
https://www.bbc.com/turkce/haberler-40798435
https://digitalage.com.tr/insan-beyni-yapay-zeka-iliskisi/