Last week or maybe the week before last, I saw a statement that Donald Trump had blasted out to his supporters, complaining about the various criminal probes and investigations targeting him. The statement was typically Trumpian—a hysterical word salad full of grammatical errors, references to Soros, random words in all caps, etc.
Just for kicks, I decided to see if ChatGPT could produce an approximation of Trump’s statement given a few prompts. The first iteration was disappointingly coherent, almost eloquent. I had to specifically prompt the AI to try again but to make it “angry and more emotionally disturbed,” and also to capitalize random words. The second iteration still wasn’t quite there, so I asked for “even more” of the same spice blend. The third iteration was pretty on point, but it was still a little too smooth-spoken and articulate to be convincingly Trump, reflecting a vocabulary beyond what you might find in a Trump style guide. I couldn’t think of a prompt that would properly constrain the vocabulary, so I gave up.
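(A side note for the technically curious: the whole back-and-forth could be scripted. Below is a rough sketch using OpenAI’s Python client; the model name and the prompts are illustrative stand-ins, not a transcript of my actual ChatGPT session.)

```python
# A rough sketch of the iterative prompting described above, using
# OpenAI's Python client (openai >= 1.0). The model name and prompts
# are stand-ins, not a record of the actual session.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "user",
    "content": ("Write a statement in the style of Donald Trump "
                "complaining about the criminal probes targeting him."),
}]

follow_ups = [
    "Try again, but make it angry and more emotionally disturbed, "
    "and capitalize random words.",
    "Even more of the same.",
]

for follow_up in [None] + follow_ups:
    if follow_up is not None:
        messages.append({"role": "user", "content": follow_up})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # an assumption; any chat model would do
        messages=messages,
    )
    text = response.choices[0].message.content
    # Keep the assistant's reply in the history so each follow-up
    # refines the previous iteration rather than starting over.
    messages.append({"role": "assistant", "content": text})
    print(text)
    print("---")
```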
One takeaway from this might be that ChatGPT, right now, is both better and worse at communicating than most people are. It can compose much more fluent English sentences than Trump can, but it also clearly lacks the essence of Trump. What is that essence, though?
The emergence of ChatGPT has precipitated a wave of soul searching on this question—about what it means to be human and what aspects of humanity might be forever out of reach of AI. It’s not hard to imagine, for example, a near-future version of ChatGPT that can generate flawless (or accurately flaw-full) imitations of Trumpspeak, but just because a fake is indistinguishable from the original doesn’t mean it’s more than an imitation. This seems self-evident, even if it’s hard to explain what makes them different.
I’ve noticed that as I’ve gotten older, I have felt less and less inclined to express what’s on my mind. This shift has noticeably accelerated over the past year or two. I am less and less confident that I have anything useful or interesting to say, or that I know what I’m talking about on any given subject. I feel a wider and wider disconnect between what I understand to be the “right” way to exist in the universe and my own ability to live the right way. I don’t think I’m getting worse at life. Indeed I know I am getting better at it, but I feel like my understanding of things is expanding much more rapidly than my ability to live according to what I understand. I would feel disappointed with myself for this perpetually worsening state of failure, except that one of the things I’m learning is how to be kinder and more accepting of myself.
Seeing what AI can already do only makes me want to say even less. If ChatGPT represents a kind of synthesis and consensus of what humans have already written, then I find myself wondering what I can possibly add. If what I might say is indistinguishable from what an AI could say, then maybe I should keep quiet until I think of something better. It’s in this spirit, though not for this precise reason, that I haven’t posted anything here for a while. Increasingly, my default mode is silent contemplation, which is something I’m pretty sure AI is not yet able to do.
I haven’t been thinking about AI a whole lot, but it dovetails with things I have been contemplating. Let’s start with my claim that “my understanding of things is expanding…” Socrates is supposed to have said, famously, that the only true wisdom is in knowing you know nothing, and this paradox is baked into my impulse to be silent. My sense of understanding more and more comes with an increasing awareness of my own ignorance. Perhaps on a related note, I recently heard Professor John Vervaeke say on a podcast that knowledge is about overcoming ignorance, while wisdom is about overcoming foolishness. What better way is there to avoid foolishness in many cases than to just keep quiet? There are countless sayings to that effect, à la “better to remain silent and be thought a fool than to speak and remove all doubt.”
The distinction between intelligence and wisdom seems germane to the discourse about AI. Computer scientists are working on artificial intelligence after all, not artificial wisdom. This suggests that wisdom is something uniquely human. To me it also suggests that intelligence is overvalued in our culture, and wisdom is not valued enough. Computer scientists are not pursuing wisdom, but most people don’t seem to be pursuing it either.
The imbalance in how our culture values intelligence versus wisdom makes me think about the work that Iain McGilchrist has done to revolutionize our understanding of how the left and right brain hemispheres pay attention to the world. The left brain is good at narrow focus—organizing, classifying, quantifying the objects of our attention. The left brain is either completely oblivious to everything else or it makes facile assumptions to fill in the blanks. The right brain, on the other hand, pays attention to the world in broad and flexible ways, maintaining an openness to whatever possibilities come along. The left hemisphere tends towards either/or thinking, while the right hemisphere appreciates a both/and mindset, integration and holism. The left brain looks, while the right brain sees.
McGilchrist’s follow-on thesis is that Western society from the ancient world onward has been increasingly dominated by a left-hemisphere orientation. That’s the direct link I see between his work and what I’ve said here about intelligence versus wisdom. The left hemisphere is good at defining and quantifying things, and we have pretty straightforward ways of defining and quantifying intelligence. Wisdom, not so much. We understand how to systematically build and improve intelligence in people and computers. Wisdom, not so much. Intelligence feeds on information and can be transmitted from person to person. Wisdom seems to require first-hand lived experience, including joy and sadness and laughter and trauma and on and on and on.
One more manifestation of how our culture values intelligence more than wisdom is the notion of content versus art. Artificial intelligence is part of that conversation too, because alongside ChatGPT we have also been introduced to a number of AI tools that generate images from prompts, as well as AI tools that compose music.
In recent years there has been so much talk about content creators and content platforms and content monetization. Now we have tools that either threaten human content creators or offer untold new possibilities, depending on whom you ask. It’s probably a bit of both. Nobody, however, seems to think that any of this is art. In fact it seems that no one is even interested in that question these days. There is so much content. We’re drowning in content. Content is easy to define, but what is art? Why is art?
Like wisdom, art is hard to define, maybe impossible to quantify. Art asks questions. Art makes us see ourselves and the world differently. Art induces internal experiences, mind-shifts, heart expansion. Art can make us uncomfortable, not only when it’s intentionally disturbing but when it feels pointless, inscrutable, or ambiguous.
Sometimes it’s fun to go to Goodreads or Amazon and peruse reader reviews for a work of literature, whether it’s a canonical work from a bygone era or something more contemporary. Mary Shelley or Cormac McCarthy. Invariably you’ll find that ten percent of the reviews are one-star ratings by readers complaining that they don’t get it, or nothing happens, or they didn’t like the characters, or the ending left things too unresolved. These readers are judging art by the standards of content; they’re stuck in a juvenile mode of reading.
Art is important. Wisdom is important. I feel like our culture is forgetting about both. I don’t know how to help turn that around, so for now I’m just talking less.