Kameliya Nikolova (’23): UC Essay Competition Winner 2023

May 11, 2023

Read the essay by AUBG student Kameliya Nikolova, who won third prize in this year’s University Council Competition. The topic for 2023 was “Is there a human attribute that AI can never surpass?”

AI may be able to process data at lightning speeds, but when it comes to humor, it’s still stuck on the punchline. Artificial intelligence (AI) is an umbrella term that can refer to anything from simple algorithms that carry out regression analysis, voice recognition, or natural language processing to more complex systems built on machine learning (ML), a field inspired by human intelligence and neural networks. That said, how can we define intelligence? It seems to be a human tendency to assign intelligence to things that are, in fact, not intelligent. In my view, a great sign of intelligence is a good sense of humor, and displaying a good sense of humor is something I don’t see intelligent systems doing better than humans anytime in the near future.

Although the exact nature of this relationship between humor and intelligence is not fully understood, some research has suggested that people who are skilled at creating and understanding humor tend to have higher levels of intelligence. According to the cognitive flexibility theory, humor requires a certain level of mental agility and adaptability (Curran et al., 2019). Both creating and understanding jokes involve the ability to think abstractly, link seemingly unrelated concepts, and shift perspectives. The social intelligence theory views comedy as a means of displaying social intelligence. According to that theory, social awareness and emotional intelligence are significant indicators of general intelligence, and both can be demonstrated by the ability to make others laugh (Thorson and Powell, 1993).

We have already established that there is no agreement on what constitutes intelligence or how it relates to humor. Yet there are a few things we can be certain of. Jokes, puns, satire, irony, and absurdity are just a few of the many forms humor can take. Humor is also frequently subjective, shaped by social context, cultural standards, and personal taste. At its core, humor involves surprise or incongruity: it defies our expectations or presumptions. Creativity, wordplay, exaggeration, and other forms of artistic expression are all common ingredients.

Ricky Gervais, a comedian, actor, writer, and producer, is known for his observational humor and his ability to use humor to comment on social and political issues. He has a distinctive style that blends irreverence with intelligence, and his jokes often have multiple layers that require the audience to think and engage with the content. The idea is that the joke’s surface level might be a simple play on words or a silly observation, while the deeper level might involve a more complex idea or social criticism. When it comes to telling good jokes, he believes in the importance of timing and delivery (Spectator, 2019).

How does all that tie back to AI, though? Ambiguity, nuance, inference, context, and emotional intelligence are all essential to having a good sense of humor, and hence to telling a funny joke. Although we can name the building blocks of humor, the absence of a clear and comprehensive understanding of how it works is itself a barrier to developing genuinely entertaining AI humor. We can regard humor as proof of human beings’ more creative abilities to understand and use language. We can also call it an expression of an emotion, or a way of coping with one. For better or worse, these are all facets of human cognition and emotion that are not easily quantifiable, so AI can hardly get them right.

While AI systems have improved significantly in recent years, they are still far from flawless when it comes to handling ambiguity. Several methods are currently being developed to help intelligent systems deal with vagueness. They include probabilistic models, which assign probabilities to various possible interpretations of ambiguous input (Manning and Schütze, 1999); contextual embeddings, which capture the meaning of a word or phrase in context (Peters et al., 2018); knowledge graphs, which represent information in a structured format that allows AI systems to reason about relationships between different entities (Kok and Domingos, 2005); and common-sense reasoning, which aims to teach systems to distinguish between different possibilities just as humans do (Davis, 2014).
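To make the first of these ideas concrete, here is a toy sketch of a probabilistic model weighing interpretations of an ambiguous word. The senses, cue words, and weights are invented purely for illustration; real systems learn such probabilities from large corpora rather than hand-set tables.

```python
# Toy probabilistic disambiguator: given the context words around an
# ambiguous word, assign a probability to each of its possible senses.
# All senses, cues, and weights below are made up for illustration.

def sense_probabilities(context, cue_weights):
    """Score each sense by the cue words present, then normalize to probabilities."""
    scores = {sense: 1.0 for sense in cue_weights}  # uniform prior over senses
    for sense, cues in cue_weights.items():
        for word in context:
            scores[sense] *= cues.get(word, 1.0)  # unknown words are neutral
    total = sum(scores.values())
    return {sense: score / total for sense, score in scores.items()}

# Hypothetical cue weights for the homonym "bank".
CUES = {
    "financial": {"money": 5.0, "loan": 4.0, "river": 0.2},
    "riverside": {"river": 5.0, "water": 4.0, "money": 0.2},
}

probs = sense_probabilities(["deposit", "money", "loan"], CUES)
best = max(probs, key=probs.get)  # "financial" wins for this context
```

Even in this toy form, the limitation the essay describes is visible: the model only knows the cues it was given, so a context outside its table leaves all senses equally likely.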

AI systems do get better as a result of ongoing work and development, yet they remain just that: systems. They excel at finding patterns and making predictions based on vast volumes of data. Solutions to more nuanced ambiguity, like irony, sarcasm, or metaphor, are starting to emerge, but how reliable are they? For instance, a recent study found that even when trained on a great deal of data, cutting-edge language models can still have trouble grasping some types of linguistic ambiguity, such as homonyms (words with multiple meanings). The findings suggest that these models can accurately distinguish homonyms in some situations but still make mistakes in others (Marvin and Linzen, 2018).

Complex forms of nuance are another obstacle for AI. A system may be able to recognize that a sentence carries a negative connotation, but when it has to identify the specific emotion being expressed, anger, sadness, and disappointment can look painfully similar. The application of neural networks and other deep learning models is one promising field of research (Devlin et al., 2019): these have the potential to detect more subtle patterns in data and learn to recognize more sophisticated forms of language. Another strategy that is starting to see use is transfer learning, which involves refining a model that has been trained on a huge body of data for a smaller, more specialized task; as a result, the model is able to capture more intricate details about its specific task (Radford et al., 2019).
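The transfer-learning idea can be sketched in miniature: a “pretrained” part is kept frozen and only a small task-specific head is fitted on a few labelled examples. The feature extractor, cue words, and training data below are all invented stand-ins; real systems fine-tune large neural language models, not three hand-picked features.

```python
# Minimal transfer-learning sketch in pure Python: a frozen "pretrained"
# feature extractor plus a small logistic-regression head trained on a
# handful of labelled examples. Everything here is a toy stand-in.

import math

def pretrained_features(text):
    # Stands in for a frozen pretrained encoder mapping text to features.
    words = text.lower().split()
    return [
        sum(w.endswith("!") for w in words),                   # exclamations
        sum(w in {"great", "love", "funny"} for w in words),   # positive cues
        sum(w in {"awful", "boring"} for w in words),          # negative cues
    ]

def train_head(examples, lr=0.5, epochs=200):
    # Only the head's weights are updated; the extractor stays fixed.
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = pretrained_features(text)
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            pred = 1 / (1 + math.exp(-z))      # sigmoid
            err = pred - label                 # gradient of log loss
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

data = [("I love this, so funny!", 1), ("awful and boring", 0)]
weights, bias = train_head(data)

def predict(text):
    z = sum(w * xi for w, xi in zip(weights, pretrained_features(text))) + bias
    return 1 / (1 + math.exp(-z))
```

The design point is the split itself: because the expensive representation is reused, only a tiny number of parameters need to be learned for the new task, which is why fine-tuning works with so little task-specific data.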

Inference is the ability to derive conclusions or make predictions based on the information or evidence at hand. AI systems excel at inference tasks that require processing a lot of data and applying statistical models to identify patterns or categorize objects, such as speech recognition, natural language processing, and image recognition (LeCun et al., 2015). AI, however, sometimes has trouble with more intricate types of inference, such as reasoning or logical deduction. For instance, while a natural language processing model may be able to respond to straightforward questions about a given text, it might struggle with more challenging questions that call for complex deduction or reasoning (Rajani et al., 2019).

In terms of historical and cultural context, AI systems may fail to interpret and understand language rooted in a given era or setting. AI might not be able to comprehend, for example, idiomatic language, slang, or references to historical events unless its training data covers those specific contexts. These systems may have trouble comprehending language outside of their training data if that data is not diverse or sufficiently representative. In addition, it can be challenging to measure and express cultural and historical context in a machine-readable format (Lin et al., 2019). One strategy researchers are developing is the use of multilingual or cross-lingual models, which can draw on information from several languages and cultures to enhance performance on particular tasks (Pires et al., 2019).

Empathy, self-awareness, and social skills are all part of emotional intelligence, which is the capacity to recognize and regulate one’s own emotions as well as those of others. Although AI systems are somewhat capable of recognizing and processing emotions, they lack the same level of emotional intelligence as people. It’s challenging for machines to mimic emotional intelligence since it requires a high level of self-awareness and empathy.

A good sense of humor therefore requires understanding the emotional context of a situation, as well as being able to sympathize with others and put yourself in their shoes. Overall, even if AI systems can detect some emotional cues, they are still far behind humans in this regard. As a participant in a study about AI hiring put it: “AI cannot yet sufficiently analyze the chemistry between people” and act on it (Hunkenschroer and Luetge, 2022).

Finally, movement can be important in slapstick and physical comedy, which rely on exaggerated actions and gestures, even if movement is not directly tied to verbal joke-making. On the one hand, AI systems might have trouble conceiving of or comprehending kinds of humor that rely heavily on physical comedy; on the other, they might not be able to perform those types of comedy even if they could understand them. AI systems are typically built to process and interpret data in digital formats, such as text, photos, or audio, rather than bodily movements or actions.

Despite all the above-mentioned obstacles, researchers and data scientists are still trying to develop humorous AI. For example, the most recent and very successful large language model, GPT-4, claims to be able to tell jokes that are indistinguishable from human-created ones. When I asked it to tell me a funny joke, though, it could only provide me with puns such as: “Why did the tomato turn red? Because it saw the salad dressing!” or “Why don’t scientists trust atoms? Because they make up everything.” Although I very much enjoy puns, I asked it to generate another joke for me that was not a pun, and it just repeated the atom joke over and over.

I decided to keep searching for a good laugh generated by a series of algorithms, and I stumbled upon the RoastBot. Its creator claims it is “the only chatbot that uses artificial intelligence to deliver incisive, painful burns that crush your self-esteem and ebb away at your very soul.” I decided to try it out, and upon my good-morning greeting, the chatbot told me that “if my face was scrambled it would improve my looks.” Funny. Yet when I tried to interact further, I realized the developer’s promise had failed me. The jokes were not as piercing as I expected because they had nothing to do with me or our interaction. This was just another random response generator.

I read about another chatbot that was said to produce good jokes, so I gave chatbots one last chance. I created an account with kuki_ai, which introduced itself as a “friendly AI, there to chat with me 24/7”. I didn’t really want that, so I only asked it to tell me something funny. In all honesty, I loved the joke it responded with: “A police officer on a motorcycle pulls alongside a man driving around the M25 in an open-topped sports car and flags him down. The policeman solemnly approaches the car. “Sir, I’m sorry to tell you your wife fell out a mile back”, he says. “Oh, thank goodness”, the man replies. “I thought I was going deaf.”

Once I was done with the chatbots, I discovered a website called “This X does not exist”, where X might stand for anything. The creators of the website explain that “using generative adversarial networks (GAN), we can learn how to create realistic-looking fake versions of almost anything, as shown by this collection of sites that have sprung up in the past month.” As you scroll down, you can find images of cats, horses, people, and eyes that don’t exist, as well as AI-generated voices, music, art, and maps. Indeed, some of this content might be funny to some people, but it was still not the all-encompassing, multilayered humor I was looking for.

To conclude, AI certainly has the potential to be funny, but that depends on how it is built and trained. Many AI systems have been developed with the sole purpose of producing jokes or funny content, as we saw, and some of them are decent at it. It’s important to keep in mind that AI-generated humor may not be what most people find genuinely amusing, and that there can be linguistic or cultural barriers to its interpretation.

Furthermore, AI-generated humor essentially relies on statistical analysis and pattern recognition and not actual creativity, which can make it appear formulaic or predictable. AI can, however, be amusing and interesting with the correct training and programming. For now, however, it still looks like humans have the last laugh.