How to avoid psychopathic artificial intelligence


The world’s first psychopathic AI is a reminder that while newer, better, and increasingly complex algorithms continue to be developed, the need for good, unbiased data will always be paramount.

The Rorschach test, named after its creator Hermann Rorschach, is a psychological test that uses a subject’s interpretations of a series of ambiguous inkblots to assess their personality, emotional tendencies, and mental biases. Based on a person’s perception of these seemingly abstract patterns, psychologists gain insight into their thinking process. Although different people view the ten images that make up the test in different ways, people with mental disorders, such as schizophrenia and psychopathy, offer extraordinarily skewed and, at times, morbid interpretations of the inkblots. For example, for an image that would normally evoke responses such as “a moth” or “a bat”, a psychopathic response would be something like “a pregnant woman falls at a construction site”. Or an image that is normally interpreted as “two people standing next to each other”, a psychopath might interpret as, say, “a man jumps out of a window”. Surprised? Well, those disturbing answers were exactly what Norman, the world’s first AI psychopath, gave when it was put through the Rorschach test.

What is psychopathic artificial intelligence?

Norman is an image-captioning AI, similar to those used by Facebook to identify people and objects in images posted to the platform by users. It is designed to process images and make sense of their content by describing each image in words. But, as you just read, what Norman sees is not what normal image-recognition AI programs see. Where a standard captioning AI sees a black-and-white image of a baseball glove, Norman sees “a man murdered with a machine gun in broad daylight”; where standard AI sees a group of birds sitting on top of a tree branch, Norman sees a man electrocuted to death. If you gave such deviant answers on a real Rorschach test, you would almost certainly be labeled a psychopath and committed to an asylum. Now, before you get too worried or give in to paranoia about AI and robots spontaneously going rogue, let me clarify that Norman, the AI psychopath, was intentionally created by a group of scientists at MIT to respond exactly the way it does. But how was Norman made this way?
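To make the idea of an image-captioning AI concrete, here is a minimal sketch of how an off-the-shelf captioning model can be invoked in Python with the Hugging Face transformers library. The model name and the file path are illustrative assumptions; MIT has not published Norman itself, so this is a generic stand-in, not Norman’s actual pipeline.

```python
# A minimal sketch of invoking an off-the-shelf image-captioning model.
# The model below is a public example, NOT the model used for Norman.
from transformers import pipeline

# "image-to-text" pipelines wrap an encoder-decoder model that maps
# pixels to a short natural-language description.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Any local path or URL to an image works here (path is hypothetical).
result = captioner("inkblot.png")
print(result[0]["generated_text"])
```

Whatever caption comes out is entirely a product of the image-caption pairs the model was trained on, which is exactly the lever the Norman experiment pulled.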

The making of artificial intelligence

As you may know, creating an AI involves two main facets: 1. designing the algorithm and 2. training that algorithm with data. The algorithm determines how the data is processed and how outputs are derived from inputs. Algorithms are usually created by trying to replicate what little we know about human cognition and thought processes; the more an AI algorithm can mimic human mental processes, the better. AI researchers constantly draw on brain research and studies by neuroscientists, discovering how our brains perform specific functions in order to create algorithms and neural networks. Take, for example, how scientists at Google’s DeepMind lab developed vector navigation capability in AI by studying how humans and other animals navigate using grid cells, which fire in a hexagonal, grid-like pattern. The functioning of AI systems can therefore be considered analogous to the functioning of a human brain.

Just as the brain develops its decision-making skills through learning and experience, AI algorithms are trained using large datasets containing data similar to what the system will process once fully deployed. Thus, an AI expected to understand written text is programmed with natural language processing (NLP) capability and is trained on large sets of textual data, which allows the program to learn on its own how to process language better. Likewise, an image-captioning AI becomes more effective when it is trained on the captions of a large number of pictures.

However, just as bad experiences shape a person’s mind differently from good experiences, the good or bad data an AI ‘experiences’ during training determines how it reacts when used in real-world applications. It is therefore no surprise that Norman reacts in a “psychopathic” manner: it was trained exclusively on data from a Reddit page where users posted gruesome and morbid images. Since the AI was trained on captions relating to violence and death, it learned these as the natural response to even the most serene images. It is as if you taught a child, from birth, to call an apple an orange; the child would naturally grow up calling apples oranges, and would have a hard time learning the right names.
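The point that the same algorithm learns whatever its training data teaches can be shown with a deliberately tiny sketch. The nearest-neighbour “captioner” and hand-made feature vectors below are illustrative stand-ins, nothing like the deep network behind Norman, but they make the mechanism visible: identical code, different labels, different behaviour.

```python
# A toy illustration (not a real captioner) of how the SAME algorithm
# trained on different caption datasets produces different outputs.
# Feature vectors are hand-made stand-ins for what a vision encoder extracts.
import math

def nearest_caption(features, training_set):
    """Return the caption of the training example closest to `features`."""
    return min(training_set, key=lambda ex: math.dist(ex[0], features))[1]

# Same "images" (feature vectors), two very different sets of captions.
neutral_data = [
    ((0.9, 0.1), "a bat with open wings"),
    ((0.1, 0.8), "two people standing together"),
]
skewed_data = [
    ((0.9, 0.1), "a man jumps out of a window"),
    ((0.1, 0.8), "a pregnant woman falls at a construction site"),
]

query = (0.85, 0.15)  # an ambiguous new image
print(nearest_caption(query, neutral_data))  # -> "a bat with open wings"
print(nearest_caption(query, skewed_data))   # -> "a man jumps out of a window"
```

The algorithm never changed; only the captions it was exposed to did, which is the whole story of Norman in miniature.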

Norman, the psychopathic AI, serves as an example of what happens when an AI is fed highly biased data. It points to a truth that experts in data science and AI research have consistently emphasized.

The devil is in the data

No matter how good an AI algorithm is, the data it processes must be clean and unbiased for the program to perform as intended. To train AI algorithms, researchers collect data online from different sources, most often public repositories of different types of data. Depending on many factors, this data can include biased information. The sheer volume of the data can make these biases difficult to recognize, even despite carefully executed data cleansing and organization. Understanding how different types of data affect the behavior of AI algorithms allows researchers to further refine the data and improve the algorithms to eliminate bias. The most effective way to minimize AI bias, and the tendency toward extreme responses that comes with it, is to ensure that the dataset the AI is trained on is both large and varied. Sufficient representation of data from different sources must be ensured from the early stages of AI development to keep the AI robust. It is like ensuring that a child acquires knowledge and experience in different areas, giving them a broader perspective and a full, balanced development. Collecting data from a larger, possibly shared, data source can give AI researchers both the volume and the variety of data needed to make future AIs less biased.
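One simple, practical step in that direction is auditing how evenly a training corpus draws on its sources before any model sees it. The sketch below assumes a dataset stored as (source, example) pairs and a 50% dominance threshold; both are illustrative choices, not a standard from any particular framework.

```python
# A hedged sketch: before training, audit how evenly a dataset draws from
# its sources, so an over-represented source can be flagged and down-sampled.
from collections import Counter

def source_balance(records):
    """records: iterable of (source_name, example) pairs.
    Returns each source's share of the total corpus."""
    counts = Counter(source for source, _ in records)
    total = sum(counts.values())
    return {source: n / total for source, n in counts.items()}

# Hypothetical corpus: one forum dominates the captions.
dataset = [
    ("forum_a", "caption 1"), ("forum_a", "caption 2"),
    ("forum_a", "caption 3"), ("news_site", "caption 4"),
]

shares = source_balance(dataset)
print(shares)  # {'forum_a': 0.75, 'news_site': 0.25}

# Flag any source that dominates the corpus (threshold is illustrative).
for source, share in shares.items():
    if share > 0.5:
        print(f"warning: {source} supplies {share:.0%} of the data")
```

A Norman-style corpus, drawn from a single morbid subreddit, would fail this check immediately: one source supplying 100% of the data.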

The challenges of creating impartial artificial intelligence


In addition to the vastness of data, which requires highly sophisticated programs and systems to sift through and cleanse, creating unbiased AI faces a few other challenges. For example, using a shared data source to represent a wide variety of sources may be hampered by privacy concerns: not all groups and organizations will be willing to openly share their data with external parties, which may include competitors. Another barrier to training impartial AI is the bias of the humans who work with the data. Even when this bias is unconscious, it can seep into the way data is collected, cleaned, and organized, and distort the resulting AI. Correcting these biases is a top priority for AI developers, because broadly applicable AIs will be impossible to create if such biases persist in the processing pipeline.

While the degree of bias demonstrated by Norman, the AI psychopath, seems impossible to replicate under natural conditions, it still underscores the importance of data in determining the success of AI applications. With AI penetrating ever deeper into our daily lives, the need to ensure that this technology operates impartially will only grow in importance. While there is currently no definitive, practical solution to this bias, it is only a matter of time before one is discovered.

