Here’s What Happens When You Explain Artificial Intelligence to U.S. Adults

“Before reading Barna’s explanation of AI, over half of survey respondents (51%) describe AI as ‘concerning.’ After the explanation, this number reduces to 42 percent. Similarly, we see decreases in the number of people who say AI is scary, untrustworthy and disconnecting people” -Barna

Discussion

If you scroll to the bottom of the link for the definition, two things are missing from Barna's explanation. One is the reality of "hallucinations," where the AI follows its regressions and such to a spectacularly wrong conclusion (really a parallel to statistical studies that come to wrong conclusions). The other is that AI engines like Google Gemini have actually been programmed to put a strong bias on the data.

I'm guessing that when THAT part of AI is mentioned, the trust level would drop from 58% to about 5%.

My overall take on AI is that when a finger is not heavily on the scale, AI can be spectacular at finding correlations that human effort would never be able to find. However, there have been enough fingers on the scale, historically speaking, that you're not going to get me to eliminate my paper copies of books and other things that at least preserve "this is what people thought was real in 1983" and such.

(My favorite example of an AI hallucination was when Google Gemini turned George Washington into a lovely African-American woman with a low-cut bodice....)

Aspiring to be a stick in the mud.

There have been a lot of updates since those infamous ‘hallucinations.’ I’m sure it still happens some. (Google at least acknowledged problems in that area, which I found a bit surprising, since admitting mistakes tends not to top their list of priorities on the whole.)

But the current state of AI as “an information source” reminds me a lot of Wikipedia. People have gradually come to mostly understand how Wikipedia works, so they know to check sources, attach disclaimers, assign appropriate uncertainty, etc.

I never trust an AI to give me the facts on something. But for creating stuff, it’s been pretty decent. The AI chunk in Google’s current search results pages is usually, in my experience, pretty good at “accurate but only a little bit related to my search.” But that’s often true of its non-AI results, too. (Like, if I search for specs of a particular model of MIDI keyboard, it gives me a great summary of what MIDI is, or some such. And the result list offers lots of specs for other keyboards. Thanks, Google, not what I was looking for.)

So, what is “trust”? When I get questions like that on surveys or whatever, I’m always asking “trust to do what?” I’m not going to give it my social security number, if it asks. 😀 (Though apparently, that’s already for sale.) As an info source, I fully trust AI to be about half right about half the time. So, is that “trust”? Maybe the surveys need to define trust as well as defining the topic they’re asking about.

Views expressed are always my own and not my employer's, my church's, my family's, my neighbors', or my pets'. The house plants have authorized me to speak for them, however, and they always agree with me.

To paint a picture of the trust issue: when the defects of Gemini became apparent and were widely mocked, Google basically fixed the problem in a day.

In other words, the problems with Gemini were not buried deep down in thousands of lines of code. They were something that, with executive approval, could be changed basically at the flip of a switch. That was how blatant the bias was. People who look at Google searches, Twitter and Facebook bans, and the like have been noticing similar things for a while.

Real books are more important than ever, it seems.

Aspiring to be a stick in the mud.

>>I’m guessing that when THAT part of AI is mentioned, the trust level would drop from 58% to about 5%.<<

Yup. AI (the large language model variety) is simply a technology, not really different from a gun or any other tool. It’s all in how it’s used and programmed by humans. Humans I generally don’t trust as a rule, though I have no issues with the technologies themselves.

I’ve played around with a couple of AI sites, and I have gotten some good information from them. However, I have mostly tried NOT to ask about things that are going to be matters of opinion rather than fact, or that are politically shaded. I’m sure Google’s AI is fine for things like programming questions. However, I’m not going to ask it about, e.g., the actual truth behind something either Trump or Harris supposedly said, as I’m almost 100% sure the answer will be slanted by the AI, no matter what the actual facts are.

Since the people who continue to promulgate the lie that Trump called white supremacists “very fine people” are on the same side politically as the programmers of the AI, I’m certain enough for my own comfort that in such cases it will give spectacularly bad results, even if it no longer portrays George Washington as a black woman or produces pictures of black SS soldiers. Those examples are so comically bad that anyone can see the stupidity. However, when it comes to putting nuance on what our political figures said or didn’t say, did or didn’t do, I’ll trust my own thinking and not even ask an AI for help. The answers given might look like truth but not be, and that’s far more dangerous.

Dave Barnhart