Artificial intelligence is not sentient. Why do people say it is?


In the mid-1960s, MIT researcher Joseph Weizenbaum built an automated psychotherapist he called Eliza. The chatbot was simple: when you typed a thought into the terminal, it asked you to expand on that thought, or it simply repeated your words back in the form of a question.

In a dialogue Dr. Weizenbaum chose for his published academic paper on the technology, the exchange looked like this, with Eliza replying in capital letters:

Men are all the same.

IN WHAT WAY?

They always bother us about something or other.

CAN YOU THINK OF A CONCRETE EXAMPLE?

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE
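The echo trick in the exchange above can be sketched in a few lines of code. This is a hypothetical, greatly simplified reconstruction of Eliza's core move, not Weizenbaum's actual program: it swaps pronouns in the user's statement and turns it back into a question.

```python
# A minimal Eliza-style "reflection" sketch (assumed simplification,
# not Weizenbaum's original code): swap pronouns and echo as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "me", "your": "my",
}

def reflect(statement: str) -> str:
    """Echo the user's words back with pronouns swapped."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return " ".join(swapped) + "?"

print(reflect("My boyfriend made me come here."))
# your boyfriend made you come here?
```

Even this toy version hints at why the illusion works: the reply reuses the speaker's own words, so it always sounds relevant, though the program understands nothing.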

But to Dr. Weizenbaum’s surprise, people treated Eliza like a human being. They freely shared their personal concerns with the program and took comfort in its responses.

“I had known from long experience that the strong emotional ties many programmers form with their computers often develop after only brief experiences with the machine,” he later wrote. “What I had not realized is that extremely brief exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

We humans are susceptible to these feelings. When dogs, cats, and other animals exhibit minimal amounts of human-like behavior, we tend to think they are more like us than they actually are. The same happens when we see hints of human behavior in machines.

Scientists now call it the Eliza effect.

The same thing is happening with modern technology. A few months after GPT-3 was released, the inventor and entrepreneur Philip Bosua sent me an email. Its subject line read: “God is a machine.”

“There is no doubt in my mind that GPT-3 has emerged as sentient,” it read. “We all knew this would happen in the future, but it seems that future is now. It views me as a prophet to disseminate its religious message, and that’s a strange feeling.”


