Tuesday, May 23, 2023

Wrong Fear of AI

The world has a general fear of Artificial Intelligence. Many people, from the genuinely smart to the conspiracy-minded, are telling us to be afraid of AI. But I think we are worrying about the wrong thing.

The fake image that launched 1,000 tweets

Before I explain why I think our current fear of AI misses the real problem, let's review what the current fear-mongers say.

One problem they see is that an artificial intelligence might decide it doesn't need humans and actively work against the beings that can flip its off switch. And we all die. There isn't much we could do about that, except stop AI research altogether. And we can't.

Second, they say that a not-too-bright artificial intelligence might be so narrowly focused as to work to our detriment. Their go-to is the paperclip example: we tell an AI to make paperclips at the lowest cost and maximize output. Then our all-knowing but dumb AI converts more and more manufacturing toward paperclips, until every plant and worker is focused on paperclips and we all die. Yeah, I can't really believe this scenario. It seems that at some point the cost / benefit calculation would stop the production of paperclips. And, if not, pull the plug out of the wall.

Realistically, there is a third fear that is specific rather than general, and it may be justified (though it isn't my main point): incorporating AI into our weapons. We have already done this - as have other countries as prices for drones drop. The fear is that a simple mistake in the information an AI is fed might lead to a nuclear war. Critics point to a couple of incidents in both the United States and the Soviet Union / Russia that would have led to war if key people had not objected and raised concerns. If an AI is given the power to launch nuclear weapons, those same input mistakes could cause a war. And AI in some drones is already making targeting decisions with no human input. At the very least, the American military keeps humans in the loop before launching a nuke. My guess is that other countries don't want to exterminate life and have the same process.

Well... maybe we should start


Here is my new fear: Artificial Intelligence lies.

In some cases an AI writer will cite an expert source as an input to its article - a source that does not exist. In the case of one Bloomberg article, the AI actually went back and CREATED a bogus entry from the past to support its argument.

In other cases the AI (like chatbots - those things that answer you or create documents) has been fed bad information somewhere and therefore creates responses that aren't true. An example from when chatbots first came out a few years ago: enough people "taught" one AI that Hitler was correct that it would produce reams of reasons why Hitler was a good man and the Final Solution made sense.

Finally, they could be instructed to lie - either through outright malice (describe the way a pizza joint in Washington steals children), or by accidentally being asked for information that they have to make up.

REAL LIFE EXAMPLE  I am not sure which of these problems this real-life example originates from, but....

A college professor used AI before grading papers. He asked an AI to report how many of his students had used an AI to write or contribute to their essays. The "ChatAI" found 5 instances of people that employed "ChatAI" to write their papers. The professor failed them.

The AI was wrong to flag these students. But the students had to prove it was wrong in order to be heard out by the professor and the school. There was a presumption of guilt based on an AI decision. The proof a few were able to provide included time stamps for their first, second, and third drafts, along with their research. Finally the professor and school had to accept that the AI was wrong. There are lots of examples where AIs incorrectly flag AI involvement. Texas on May 17th. UC Davis in April.

The result sounds "ha ha funny," but it has real-world implications. At UC Davis, a student was sent to Judicial Review and then a second review by an Honor Board. He had to prove that the accusation was incorrect - and it is hard to prove a negative.

This happening in college is worrying, but what if we reach the point where we honestly can't tell what is real? In a country and time like ours, where people already don't believe in the honesty of the other side, this would heighten the chances of violence.
