Can We Trust AI?

By Charles Miller

The artificial intelligence engine ChatGPT continues to be all the rage, appearing in the news daily. It is clear that large language model AI is a new phenomenon that will soon enter mainstream use. It is astonishing to see the degree to which the software can imitate human intelligence, but it is important to recognize that the software is not itself in any way intelligent. One podcaster instead describes ChatGPT as a “clever regurgitator” of intelligence.

That is actually a very accurate way to describe what ChatGPT does. It was trained on a vast swath of the public internet, and it has proven itself extremely clever at quickly selecting from that massive body of text what is seemingly the right thing to say . . . most of the time.
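For readers who like to see an idea in miniature, here is a deliberately tiny Python sketch of a “regurgitator” of my own invention. It learns nothing but which word tends to follow which in its training text, then parrots those patterns back. This toy is nothing like ChatGPT’s actual architecture, which is vastly more sophisticated, but it illustrates the key point:

    import random
    from collections import defaultdict

    # Toy training text. A real system learns from billions of words
    # scraped from the internet; this is just enough to make the point.
    training_text = (
        "the professor wrote an article "
        "the professor taught a class "
        "the article made a false claim"
    )

    # Record, for each word, every word seen to follow it.
    followers = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word].append(next_word)

    def regurgitate(start_word, length=8):
        """Produce text by repeatedly choosing a word that followed the
        previous word in the training text. The result can sound fluent,
        but nothing here checks whether it is true."""
        output = [start_word]
        for _ in range(length):
            options = followers.get(output[-1])
            if not options:
                break
            output.append(random.choice(options))
        return " ".join(output)

    print(regurgitate("the"))
    # One possible output: "the professor wrote an article made a false claim"
    # Fluent-sounding fragments stitched together, with no fact-check anywhere.

The point of the toy is simply this: pattern-matching can produce confident-sounding sentences even though there is no fact-checking anywhere in the loop.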

Jonathan Turley, a professor at George Washington University Law School, is an attorney, writer, commentator, and news-media legal analyst. He recently received an email from a friend who shared what ChatGPT had said about him. When asked for information about Professor Turley, ChatGPT responded by referencing a Washington Post article reporting that the Georgetown University professor was accused of sexual harassment by a former student, who claimed Turley made inappropriate advances during a school-sponsored trip to Alaska.

Professor Turley took issue with ChatGPT and responded in a USA Today opinion piece titled “ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI?” He stated: “There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”

So how did ChatGPT get things so wrong? The software certainly did not fabricate those false accusations on its own; it is not intelligent enough to do that (yet). It is easy to suspect political bias on the part of the programmers, but I personally see this incident as an example of how Artificial Intelligence turns into Abortive Intelligence when it comes to analyzing anything controversial, and Turley can be a bit controversial. He shares that he has voted for Democratic presidential candidates for three decades, yet he also vocally defends Republicans, including President Trump. It is not hard to see how this could upset some on both the extreme left and the extreme right.

It is very likely that somewhere on the internet someone made up falsehoods about Professor Turley and posted them online, including references to a non-existent Washington Post article. ChatGPT then found that false information and regurgitated it. So the problem in this case is that the artificial intelligence software was not intelligent enough to recognize patently false information, which raises the very important question posed by Professor Turley: “Can we really trust AI?”

Charles Miller is a freelance computer consultant, a frequent visitor to San Miguel since 1981, and now practically a full-time resident. He may be contacted at 415-101-8528 or by email at FAQ8@SMAguru.com.