2 Comments

Hi Derek,

At least you call it an "it," even though it, ChatGPT, probably gives you back text that appears to present it as some kind of being with a self, which I strongly object to, but which most other people I've talked to about what they do with ChatGPT seem quite happy with, and some even like it this way: it makes them "feel like they are being treated as a real person," which takes us all the way back to Joseph Weizenbaum's ELIZA program (1964-1967), and to Weizenbaum's worries about how some people responded to using ELIZA; worries I think we should be having again, only a lot stronger. There's too much about ChatGPT, and other Generative AI systems, that is deliberate deception on the part of the builders and providers of these systems.

Nevertheless, I still think you misdescribe what is really going on when you use ChatGPT; good use, useful use, effective use, for you, you say, and I believe you. But you explain, for example, that ...

"The better you know the material, the better AI can become

a conversational partner in explore [sic] various

interpretations and ideas connected to your subject. This

is not about facts and details but understandings and

analysis.

ChatGPT does no understanding of anything; it does no analysis of anything. It's a statistical machine, albeit an enormous one. ChatGPT doesn't read and understand your prompt text; it converts it into mostly meaningless tokens, and from these, to very large numerical vectors. There is only statistical number crunching going on inside ChatGPT, tons and tons of it.
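
To make that point concrete, here is a minimal sketch of the very first step, the only point at which your words exist at all. It uses OpenAI's tiktoken library, which exposes the token encodings GPT-class models use; the encoding name and the example sentence are my own illustrative choices, not anything from your post.

```python
# Minimal sketch: what a prompt becomes before any "statistics" run.
# Assumes the tiktoken library (pip install tiktoken); "cl100k_base"
# is the encoding used by GPT-4-era models. Illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "ChatGPT doesn't read and understand your prompt text."
token_ids = enc.encode(prompt)

print(token_ids)
# A list of plain integers; the model never sees your words, only
# these IDs, which are then mapped to large numerical vectors.
print([enc.decode([t]) for t in token_ids])
# The fragments each ID stands for: word pieces, not meanings.
```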

ChatGPT doesn't write words to say things to you with; it generates text. Writing is what you do, and when you do it you have some notion of what you want to say, or you use your writing to work out what you want to say, from what might be said, and from how it might be said. Writing is about designing combinations of words that say what you want to say to people, as you know. ChatGPT does no designing of any kind; it does no writing. It doesn't need to be able to write in order to generate text; it just needs to run, in a generative mode, the statistical model of text it's built to have, with some 'sugar coatings' added on before the text comes back to you. Any and all meaning you see in the text you get from ChatGPT, and any signs of understanding and analysis you see in this text, are put there by you, not by ChatGPT. ChatGPT doesn't do knowing and understanding and reasoning. It doesn't need to do any of these things to generate text that looks to you like text a person might have written. ChatGPT generates look-alike writing, not real writing.
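
What "generating without writing" means can be shown with a toy. The sketch below is my own caricature: a tiny bigram model that counts which word follows which in a made-up corpus, then samples "text" from those counts. ChatGPT is unimaginably larger and more sophisticated, but the mode of operation, pick the next token from the statistics and nothing more, is the same in kind.

```python
# Toy bigram generator: fluent-looking output from raw co-occurrence
# counts, with no notion of what is being "said". Illustrative only.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Count which word follows which word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# "Generate": repeatedly sample the next word from the counts.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
```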

A better way of describing your useful use of, and productive experience with, ChatGPT would, I think, be to describe it as a kind of mirror: a mirror that reflects back to you the text you present to it, but with distortions and artefacts added; distortions and artefacts you find useful when you are doing some writing, because of the meanings you put on the text reflected back by ChatGPT. There is no conversation going on here, and I think it's dishonest to suggest, albeit mildly, that there is. There is plenty of interpretation being done by you, yes, but none by ChatGPT. We should not, I think, attribute to ChatGPT capacities it doesn't have, and doesn't need to have, nor attribute to it doings it cannot do. To think you have engaged in a conversation with some other intelligence is a hallucination on your part, and to suggest to others that this is what you have done is a delusion, and a dishonesty. You have engaged in a monologue with yourself via a strange and sometimes beguiling mirror of text.

-- Tim


Hi, Tim. Thanks for taking the time to write. I think it’s fair to say that language developed for people to talk to each other. Sometimes we talk to God, or dogs, or radiators, but they don’t talk back. So to be “in dialogue” with AI — or anything that resembles it — constitutes something new: for description, for grammar, and for rather deeper concerns if that system resembles a person. As you said, people acted with ELIZA (that long ago, when it wasn’t fooling anyone, really) as though it were a person. So we’re in a new domain.

Consequently, our vocabulary — in my view — trails practice. To say GPT has “analyzed” something is about the only way to say it in casual conversation. And that drift, I agree, is significant and will only grow more so over time, in the way that we have long since allowed “processing” to slip into “meaning.”

As it happens, though, it does “analyze” in the sense of coughing up material that allows me to see things I couldn’t see before.

To wit: the Jihadist ideologue Sayyid Qutb wrote Milestones in 1964. It was extremely influential and partly explains Hamas’s actions on Oct 7 (ideologically, not practically). It provides a moral framework, a set of goals, a set of criteria for success, and I could go on.

Qutb was very concerned with the term “Jahili.” It means “pre-Islamic” but also savage, barbaric, and anti-Islamic. It is because the Buddhist statues were considered “Jahili” that they were destroyed.

Judaism, however, also looked back at the world before the 10 Commandments. I wanted to understand why Judaism didn’t treat the non-Jewish religions, ethical systems, artistic products (etc.) the same way. Why were they not threatening, and slated for destruction, as Qutb thought they should be for the sake of Jihad?

GPT was able to provide some “compare and contrast” observations (obviously plucked), but those allowed me to focus my questions; the questioning continued, and this became a virtuous cycle until I reached a point where I wanted to land on a distinction in theology. I then dropped GPT, went back to Google, and finally to a book (!) to flesh out the details of the ideas.

This is why I used the phrase “conversational partner.” A conversational tool would have been more accurate, but note how stilted the language is and how we’re always inclined back towards anthropomorphism because … only people have talked back since we invented language!
