There has been no shortage of articles hailing the promise of AI and others forecasting disaster from it. I understand the good uses to which AI could be put, but I am also well aware of the ways in which AI is dangerous or will degrade our lives as thinking human beings.
First, the good uses. There is no question that AI has the capability of out-thinking human beings, regardless of how famous and knowledgeable, because of the quantity of information it can process in a short amount of time. The most powerful accounts I’ve read have been in the area of medical research: doctors have fed facts into AI, asking for a diagnosis or a possible remedy, and AI has come up with remarkable answers that were beyond the human mind’s capability.
Clearly, AI, in the hands of knowledgeable professionals, will be able to assist them in doing their work and improving our lives.
That, however, is where the good news ends. And it is all dependent on the phrase, “in the hands of knowledgeable professionals.” AI in the hands of everyday people, as it is already today, is a danger to themselves and our society. Let me give you some examples.
Perhaps the worst are the many reported cases where individuals are using AI as a companion or advisor. People, mostly teenagers it appears, are looking to chatbots for emotional support, including advice on suicide, because they don’t have someone to confide in. A chatbot may sound like a person, and be able to respond to what someone says or asks, but it isn’t trained to respond to the complexities that make up a person’s emotional state. Further, chatbots are currently designed mostly to be reassuring to people with doubts; if someone is in emotional trouble, reassuring that person that he’s doing the right thing is probably exactly the wrong thing to do.
Another class of cases are people—I assume again very lonely people—who look to chatbots as a love or sex object. There was a report of one teenage boy who committed suicide in order to “join” his chatbot love. People are confusing illusion and reality.
It has also been reported that many people who are having problems with the medical profession or who just don’t have access to a doctor are using chatbots to self-diagnose. The reader may well ask: if professionals can use AI for this purpose, why can’t the average layman? The answer is that AI is dependent on the quality of information it is given about the problem it is being asked to solve. The old expression, “Crap in, crap out,” clearly applies here. Medical problems are so complex that it is unlikely that the patient is able to identify all the factors that AI needs to properly answer the question.
A second class of harm comes from the use of AI by those individuals intent on creating misinformation, whether on the right or the left, in order to influence the response of people to political events.
We have seen the impact that misinformation on social media has had. Now we have the added impact of AI. As was just reported by The New York Times, a “torrent of fake videos and images” has been generated by people using AI to create reaction to the Iran war online. The impact of these images is strong because people tend to believe what they see; AI has been perfected to a level where, even in the hands of amateurs, one cannot tell that an AI-generated video or image is fake.
An entirely different nature of harm comes from the people—whether students or adults—who are using AI to generate a variety of work-product—papers, applications, articles. That this practice is wrong goes without saying. When someone submits work-product, it should be their own, meaning it results from the use of their own mind. Using AI to generate such things is just another way of cheating.
But the problem is not just that these people are being dishonest about what they have submitted. It’s that they haven’t used their minds. Remember the famous words of Descartes: “I think, therefore I am.” The development and use of one’s mind is what makes people grow, increase their ability to process information, and perform tasks. When AI does the thinking instead, there is no growth.
The list goes on and on. But the general point is the one I started out with: In the hands of professionals, AI is already very useful in assisting in the analysis of a difficult or rare situation, and it will likely become more so. However, in the hands of the average person, it is either a way to meet some emotional need which is not being satisfied in the real world, an invitation to be lazy or to cheat, or a way to distribute misinformation to achieve a goal.
While I sympathize deeply with people who are lonely and look to a chatbot to mitigate that loneliness, it is a bad and dangerous answer. For those using AI because of the inadequacy of their human providers, their problem is very real, but using AI is again a bad and dangerous answer.
People in both these situations are suffering from a failure of society—of humans—to provide an environment where people are nourished and heard. Whether it’s within the family, in the workplace, or in one’s relationship with a healing or other provider, this is a serious societal problem. But chatbots are not the answer.
The answer, as I see it, is twofold. First, the law needs to regulate the use of AI. Its use should be restricted to assisting professionals in the analysis of problems. AI products (e.g., chatbots) should not be available to the average person. AI should be treated like a controlled substance: only people licensed for its use should be able to obtain it.
Let the tech giants howl at this limitation on profiting from their AI investments. Even with restricted use, I have no doubt that they can figure out how to make a good profit.
Second, society and families are failing people in numerous ways. Parents need to change the way they raise their children (see my book, Raising a Happy Child). Doctors need to communicate better with their patients. And society needs to stop sending messages of inadequacy to people. The latter will, in all likelihood, never come about. And so children will continue to be damaged by what they learn from the media and their interactions with others.
If we cannot change society, then we have to provide children (or adults) with the means to see themselves differently so they are not damaged by these interactions. (See my book, Discover Your Power.) Turning to chatbots to resolve the problem is not the answer.
Finally, AI should not enable people to influence an already chaotic political landscape by distributing misinformation. This dangerous tool must be kept out of the hands of all but professionals who are working in areas where the benefits of their using AI are clear.