Can Humans Like Something that Claims to Be Human

By Wasco Admin | Wed, May 31, 2023

Since the release of ChatGPT last November, the media have been captivated by it. The trend accelerated last week when a new version of the AI “large language model” was launched, highlighting some surprising new features and a significant gain in power.

This week The Washington Post’s tech columnist Geoffrey A. Fowler weighed in with an article titled, “ChatGPT can ace logic tests now. But don’t ask it to be creative.” He pointed to its strengths in an area that became monumentally important in the industrial age: evaluation, and specifically, academic-style testing. Fowler admires ChatGPT’s improved ability to pass the LSAT, a test whose questions require combining facts and logical reasoning. He tells us that GPT-4 scores well because it “stays focused on the link between the presented facts and the conclusion it needs to support.” By contrast, “GPT-3.5 gets distracted by facts that aren’t relevant.”

In other words, Fowler sees a trend in the evolution of AI towards greater and more accurate focus in the expression of its reasoning capacity. At the same time, Fowler sees no advance in AI’s ability to be creative. He even suggests its attempts at creativity more closely resemble inept parody. In his exploration of the tool, he quotes an example of AI’s taste for self-flattery when GPT-4 described its own performance as “adaptable, intuitive, and more humanlike than ever before.”

Today’s Weekly Devil’s Dictionary definition:

Humanlike:

Anything fundamentally unhuman that manages to mimic some isolated behavior or superficial feature of human reality to the point of convincing humans that their own reality is just as meaningless and superficial as the object that successfully mimics them.

Contextual note

Since it was ChatGPT itself that boasted of being “adaptable, intuitive, and more humanlike,” I decided to ask it to provide its own definition of “humanlike.” Here is how it responded (I’ve placed all quotes from GPT in italics).

The word ‘humanlike’ refers to something that resembles or is similar to a human in appearance, behavior, or characteristics. It can describe anything from the physical appearance of a humanoid robot to the way an animal behaves in a manner that is similar to how a human might behave. The term is often used to describe traits or features that are characteristic of human beings, such as the ability to communicate using language, use tools, exhibit emotions, and display social behavior.

Nothing surprising there, since GPT consistently demonstrates its commitment to literal meaning and nothing but literal meaning. In other words, it tends to be unimaginative, conventional and utterly lacking in “humanlike” creativity. After reading its definition of “humanlike,” I wondered what ChatGPT might think about my Devil’s Dictionary definition cited above. I therefore quoted my definition and asked it the following question: “So if something is humanlike it must also, in some sense, be human unlike. Does that make sense?”

Here is the beginning of GPT’s response: “While the terms ‘humanlike’ and ‘human unlike’ are technically antonyms, it’s not accurate to say that something that is humanlike must also be human unlike.”

At this point I should have asked it what criteria it proposed to determine accuracy. But the problem is not one of factual accuracy. It’s a question of logic. If some non-human thing in the world appears “humanlike” — it might be a dog, an algorithm or a cartoon character — we are clearly talking about it as humanlike because we know it is not human, and therefore, in its essence it is human unlike. So my definition is perfectly accurate, but GPT’s is, well… let’s just call it misinformation.

In the next part of its answer, GPT appeared to validate my position that the human and the humanlike are “fundamentally different.”

The term ‘humanlike’ refers to something that shares some similarities or characteristics with humans, such as appearance or behavior, but it doesn’t necessarily imply that it is identical to a human or completely unlike any other form of life. For example, a humanoid robot may be described as humanlike in appearance and behavior, but it is still fundamentally different from a biological human.

That was much better. But the answer continued.

On the other hand, the term ‘human unlike’ implies that something has no similarities or characteristics in common with humans, which may not always be the case. For instance, a tree is very different from a human in many ways, but it still shares some similarities with humans, such as the ability to grow and reproduce.

This makes no sense, linguistically or philosophically. It is a distortion of what I said. I neither stated nor implied that “human unlike” means there are no similarities. ChatGPT is guilty of over-generalization in its interpretation of what I wrote. Differences do not exclude the existence of similarities, and vice versa. This sounds more like the very humanlike trait, on GPT’s part, of wanting to win an argument even at the price of manifestly faulty reasoning.

Historical note

For centuries, in Europe, students of the standard (but not standardized) educational program known as “the seven liberal arts” based their learning on two distinct areas of human inquiry and expression: Logic and Rhetoric. Along with Grammar, these two disciplines established the complex role language played in structuring our understanding of the world. The other four arts were Music, Geometry, Arithmetic, and Astronomy. The focus on language skills across three of the seven arts contributed powerfully to the emergence of what some have called “the scientific revolution,” which in turn led to the industrial revolution, long before the digital revolution.

Studying logic, rhetoric and grammar in parallel meant that in pre-industrial Europe, the educated class, in all its diversity, developed an acute awareness of the forms and mechanics of reasoning and argumentation. The educated debated amongst themselves not just ideas, but the levels of meaning conveyed by the language they employed. That has become a lost art. Today’s culture is obsessed with “information,” imagining it to be something separate from language. Only linguistic philosophers in Oxford and other ivory towers take the trouble nowadays to critically investigate language. The rest of us, including AI, are mere consumers of language.

ChatGPT continued its naively argumentative response to my question about “humanlike” with this reflection: “The definition you proposed for ‘humanlike’ seems to be more of a philosophical or critical perspective on the concept, rather than a literal definition of the term. It implies that something that is humanlike may be capable of deceiving humans into thinking that their own reality is as meaningless and superficial as the object that mimics them. While this may be true in some contexts, it’s not necessarily a universal or objective truth about everything that is humanlike.”

Can AI have an attitude? In this case the answer seems to be “yes.” It pooh-poohs philosophy’s concern with concepts and assumes that literal definitions are the basis of reality. Because it appears to have no taste for reasoning on more than one literal level and has an extremely limited awareness of context, it simplifies and thus distorts the meaning of my definition. In this case, it interprets my point about possibly causing humans to doubt their own reality – an effect of being exposed to a humanlike object – as the expression of an AI’s intention to deceive. This whole experience should demonstrate that AI has no intentions, only the appearance of intentions.
