Calm down folks, ChatGPT isn't actually an artificial intelligence
“Even before ChatGPT, chatbots were well on their way to passing the Turing test as commonly understood…. But Turing would not have seen a chatbot as an artificial mind equal to that of a human.” - Tech Radar
Helpful article for getting a better understanding of this new tech, without dumbing it down too much or overwhelming you with jargon.
One of the foundational techniques behind the current wave of AIs producing imagery, text, music, and more is what’s known as a Generative Adversarial Network (GAN). (Strictly speaking, ChatGPT is built on a transformer-based language model and Stable Diffusion on a diffusion model, but GANs helped kick off the generative boom.) I won’t get too in the weeds here, but essentially a GAN is two software systems working against each other. One, the generator, produces an output; the other, the discriminator (sometimes called the classifier), judges whether that output looks real or fake.
The generator and discriminator in a GAN are trained in a back-and-forth contest: the generator keeps trying to produce outputs the discriminator will accept as real, and the discriminator keeps getting better at spotting fakes. Round by round, this tug-of-war pushes the generator toward outputs that very closely replicate what a human can do, creatively.
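The adversarial back-and-forth can be sketched in a few lines of numpy. This is a toy illustration only, not anything resembling a production system: the "real data" is just numbers drawn around 4.0, the generator is a one-parameter-pair affine function, and the discriminator is a logistic classifier, with gradients worked out by hand. The point is the alternation: the discriminator is nudged to tell real from fake, then the generator is nudged to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN. "Real" data ~ N(4.0, 0.5).
# Generator:     G(z) = a*z + b        (starts producing numbers near 0)
# Discriminator: D(x) = sigmoid(w*x + c)  (probability that x is "real")
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    # --- discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    ds_real = sigmoid(w * real + c) - 1.0   # grad of -log D(real) w.r.t. score
    ds_fake = sigmoid(w * fake + c)         # grad of -log(1 - D(fake)) w.r.t. score
    w -= lr * np.mean(ds_real * real + ds_fake * fake)
    c -= lr * np.mean(ds_real + ds_fake)

    # --- generator update: push D(fake) toward 1, i.e. try to fool D ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    ds = sigmoid(w * fake + c) - 1.0        # grad of -log D(fake) w.r.t. score
    a -= lr * np.mean(ds * w * z)
    b -= lr * np.mean(ds * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generator output mean after training: {samples.mean():.2f} (real data mean: 4.0)")
```

After enough rounds the generator's outputs drift toward the real data's neighborhood, even though it never sees the real data directly; all it ever gets is the discriminator's verdicts. That indirection is the "adversarial" part.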
I also appreciated that they point out there is nothing authoritative about Turing’s notions of what makes AI ‘real.’
What I find interesting: the “AI” tools, so far, don’t come up with the creative ideas. Humans ask the AI to make something and give it parameters. Sometimes the human actor gives intentionally vague instructions, sort of increasing the AI’s ‘creative’ participation. The results are often bizarre.
Also interesting, to me: The entrance of “AI” into everyday language has followed a familiar pattern: the term already doesn’t mean what it meant a decade ago. Sloppiness has resulted in marketers using “AI” for pretty much everything that is a bit more sophisticated than a search query.
You know what comes after that: new terms have to be invented to replace what the original ones used to mean. So now we have “AGI” (artificial general intelligence) and other terms for what we used to just refer to (mainly in Sci Fi) as AI. Maybe those terms have been around for a long time, but they didn’t matter before.
I’ve been playing ‘what if’ in my head a bit about all this. What if AI eventually does become indistinguishable from a conscious mind, from our point of view? What if it can simulate volition so well, nobody knows or cares if it’s a ‘person’ or not? What impact would that have on our theology? (Is it even possible? I think I read in Plantinga somewhere the observation that humans seem to intuitively recognize other minds. What if we can always tell it’s not a person, if the interaction goes beyond basic exchange of info? Maybe that requires a real-time face? What if we fake that, too?)
Views expressed are always my own and not my employer's, my church's, my family's, my neighbors', or my pets'. The house plants have authorized me to speak for them, however, and they always agree with me.