SYNDICATED COLUMNIST
“If you could master any language in the world, what would it be?” “C++.”
It’s a classic programming joke. The humor is ironic: to an engineer, language skills matter less than technical ones. Humor, I’m told, doesn’t flourish in tech. Computers can’t understand it.
And, some would argue, neither can engineers.
But the computer bit isn’t quite accurate. Chatbots based on large language models, like ChatGPT, don’t understand things the way we do.
Still, with enough data, they can communicate like us. They can even repeat jokes when prompted, just like some seniors I know.
So maybe they can tell jokes, but only if they’ve heard them before. After all, chatbots predict the next word in a phrase based on billions of similar inputs.
In some sense, we do this too. Practice makes… perfect. Knowledge is… power. When life gives you lemons… make a whiskey sour.
When life gives a language model some lemons, it probably won’t make a whiskey sour. For humans, A isn’t always A. For computers, it has to be.
Unless it’s been told otherwise a hundred times, a computer will always make lemonade out of lemons.
So they don’t work with words quite the way you do. Unless, that is, you compare every phrase you hear with a hundred others before replying (and toss in something random every now and then).
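If you’re curious, the whole trick fits in a few lines of Python. Here’s a toy sketch (a word-pair counter I made up for illustration; real models are vastly more complicated, but the flavor is right):

    import random
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny made-up "corpus."
    corpus = (
        "when life gives you lemons make lemonade "
        "when life gives you lemons make lemonade "
        "when life gives you lemons make a whiskey sour"
    )
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    # Pick the next word in proportion to how often it appeared.
    def next_word(prev):
        counts = follows[prev]
        return random.choices(list(counts), weights=counts.values())[0]

    print(next_word("make"))  # usually "lemonade," once in a while "a"

Run it a few times and it makes lemonade about two tries out of three; the other try, it starts on a whiskey sour. Scale the counting up by a few billion and you have the general idea.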
You can argue that we humans also make meaning from what we’ve heard before, with some variation. But we can’t reduce language to data processing, especially when that data consists only of the written word.
There’s more to jest than the wording of the joke. Language is more than writing. It flows, stops short, and is punctuated with gestures and sounds that aren’t words.
Some sounds are funnier to humans than others. “Klaxon” is funny. “Stolid” is not. “Diphthong” is funny. “Melody” is not. “Whack” is funny. “Antidisestablishmentarianism” is long.
Why are they funny? Maybe short words with a hard k and a stressed first syllable are inherently amusing to humans. Maybe that’s only true for English speakers. Maybe none of the above are good explanations.
In any case, computers don’t interpret “klaxon” as anything more or less than a sequence of characters, while we associate it with an image, a sound (awooga!), and much more.
When people are funny, they consider how a word is spelled, how it sounds, its denotation and connotation, where it fits best in a phrase, whether they should include some Latin to sound more impressive, et cetera.
A computer doesn’t consider these things. It just grabs tons of phrases that someone marked “humorous” somewhere and combines them with certain probabilities.
The result is supposed to be funny. Sometimes it is. Other times it’s complete gibberish. Humans don’t tend to have the latter problem.
And they don’t tend to throw words together randomly, either. Not unless they’re on their first date.
But despite what I’ve been saying, I concede it’s hard to say where human-like communication ends and computer-like communication begins.
Especially when we’re learning a language, we use words and phrases the way we’ve seen them used.
Language models work similarly. They just synthesize information much faster than we do. Maybe one day, quantity will beat quality.
While humans are still better with language and humor, computers might catch up sooner than we think. Or not. I haven’t synthesized enough data to guess.
Language, jokes, and the meanings we do or don’t exchange move us toward that ancient goal: know thyself. A computer can’t know itself. And it can’t tell jokes… yet?
Copyright 2024 Alexandra Paskhaver, distributed exclusively by Cagle Cartoons newspaper syndicate.
Alexandra Paskhaver is a software engineer and writer. Both jobs require knowing where to stick semicolons, but she’s never quite; figured; it; out. For more information, check out her website at https://apaskhaver.github.io.