AI cannot take our jobs, because it hasn’t been invented yet

People and governments are increasingly worried about AI destroying jobs. The IMF says it could affect 40% of jobs globally and 60% in advanced economies. Labor unions are pressuring state governments to pass laws protecting their members’ jobs from AI.

In California, the Senate has passed a bill requiring a human driver to be present in autonomous trucks. In New York, lawmakers are considering a ban on tax breaks for any movie or TV show that uses AI to replace workers. In Tennessee, Governor Bill Lee recently introduced the ELVIS Act, which would prohibit the use of AI to replicate a musician’s voice without permission.

This fear of mass unemployment is not due to any genuine threat AI poses to workers, but rather a poor understanding of how it works and what it is capable of. Much of the blame for this lies with companies, which vastly oversell the capabilities of AI through a combination of marketing hype, chicanery and outright lies.

The lie that is AI

Consider Tesla, which has been lying about its self-driving capabilities for years. Last year, one of Tesla’s own engineers testified that the company staged a video promoting those capabilities. The “Autopilot” feature has been responsible for a number of crashes (many resulting in deaths), prompting the Department of Justice to launch a criminal investigation in 2021 as well as numerous lawsuits.

Last year, Google admitted to staging a demo for its AI product Gemini. The company showed voice interactions which never happened, and altered the video to make Gemini look faster than it really was.

For many years, Amazon advertised its “Just Walk Out” technology as AI. Cameras would detect what customers put in their shopping carts and bill them automatically, without cashiers being involved. Except there was no AI: Amazon was employing 1,000 workers in India to manually review the footage.

In 2018, The Guardian published a report with many more examples of companies using fake “AI” that just turned out to be low paid workers overseas.

What corporations peddle as “artificial intelligence” or “machine learning” is actually neither of those things. Even when there is no outright fraud involved, the capabilities of such software are greatly exaggerated. Many authors, scientists and researchers have referred to ChatGPT as “auto-complete on steroids”. Author Ted Chiang even says that instead of artificial intelligence, a more appropriate term would be “applied statistics”.

Tools like ChatGPT simply use vast amounts of training data to statistically predict what the next word should be. Similarly, DALL-E, Midjourney, Sora and other tools use vast amounts of data to correlate words with images. There is no real intelligence involved in any of this.
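To make the “applied statistics” point concrete, here is a toy sketch of next-word prediction. Real systems like ChatGPT use neural networks trained over enormous token datasets rather than simple word counts, so treat this only as an illustration of the statistical idea, not as how any production model is built.

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in a training corpus,
# then predict the next word by sampling from those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample a likely next word based purely on observed frequencies."""
    counts = following[word]
    if not counts:
        return None
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # e.g. "cat", the most frequent follower of "the"
```

The model has no idea what a cat or a mat is; it only knows which words tend to appear next to which. Scaling the corpus and the model up does not change that basic character.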

Creativity Needs Experience

This statistical trickery might enable AI to imitate a person’s style, but not their substance. Creative work requires more than massive amounts of data. One needs to interact with the world and experience it in ways computers cannot.

For example, Ian Fleming was an intelligence officer during World War II, and drew on that experience while creating the popular character of James Bond. Many characters in Nineteen Eighty-Four are said to be based on people George Orwell knew in real life, while the story itself was shaped by his experiences during the Spanish Civil War.

The great science fiction author Isaac Asimov was a professor of biochemistry, while his contemporary Robert Heinlein was an aeronautical engineer.

The same logic applies to non-fiction as well. Bryan Caplan could not have written The Case Against Education without experiencing the shortcomings of the education system first hand, both as a student and a teacher. ChatGPT cannot generate its own insights on such issues. Rather, it would simply regurgitate what people have already written.

Even visual art requires human input to be good. Consider comics, one of the most popular art forms and the one I am most familiar with. The wildly popular Brown Paperbag comic, for instance, makes fun of mundane situations in Indian families and society. But its artist, Sailesh Gopalan, could not have created it without first growing up in India. Similarly, the artists behind Adventures of God, another popular webcomic, credit their childhood Sunday school lessons as their source of inspiration.

Movies, TV shows and music follow the same pattern. Taylor Swift is popular because her songs resonate with young women worldwide. While a computer could replicate her voice, it cannot write the songs she does, for those are drawn from her own experiences.

AI could similarly generate an actor’s likeness, but not their personality, which is essential. For instance, the character of Ron Swanson from the TV show Parks and Recreation is so good because his personality is heavily based on that of Nick Offerman, the actor who portrays him.

There are many more examples of actors becoming synonymous with the characters they play, to the point where people describe them as “born for that role”. Actors often go to great lengths to get it right. Heath Ledger, for example, famously locked himself in a hotel room to get a feel for how the Joker would have lived. He even kept a diary from the Joker’s perspective.

A good movie is a complex combination of writing, acting, directing, music, visual effects, sets, costumes and props. The people behind each of these infuse their unique experiences and personalities into their work. The idea of AI making Hollywood blockbusters out of thin air is ludicrous.

Without the ability to experience the world, AI cannot do any meaningful creative work. Even its purveyors know this. That is why Sam Altman brags about ChatGPT generating 100 billion words per day, but will not point us to a single specific article or essay that proves its prowess. While it can generate large quantities of text quickly, it is all low quality garbage.

The Failures of AI

Several media outlets have tried to replace human writers with AI, and failed spectacularly. Consider Sports Illustrated, which was caught last November publishing AI-generated articles under fake bylines. A reporter at Futurism describes these articles as alien-sounding. She gives the example of an article claiming that volleyball “can be a little tricky to get into, especially without an actual ball to practice with.” An actual writer would simply write about their own experience playing volleyball.

This is just one of many examples. BuzzFeed, The Street, USA Today and many other publications have been caught doing the same. The articles are poorly written, riddled with errors and often straight-up plagiarized. Outlets that were once highly regarded have become basket cases because of AI. CNET, a popular technology news site founded in 1992, recently had its reliability rating downgraded by Wikipedia’s editors due to its use of AI-generated content.

Then there’s the recent saga of Gemini generating images of a woman Pope, a black Founding Father, and other historically inaccurate depictions, such as people of color in Nazi uniforms. Google had to take the image generator offline while it worked on a fix. Racial bias is a real problem, and there should be equitable representation, but this episode shows why AI is incapable of handling such nuance correctly.

Humans are very good at picking up contextual cues, and would therefore not make such obvious mistakes. Because AI is incapable of learning and using contextual cues, it needs explicit programming. Since developers cannot anticipate every possible situation, such failures will keep happening.

Writing and art are rather complex jobs. Could AI replace humans in simpler jobs such as customer support? So far, it has failed in extremely costly ways. One user, for example, tricked a dealership’s chatbot into agreeing to sell him a car for just $1. While he was only joking, the dealership took the chatbot offline.

Air Canada, however, was not so lucky. Its AI chatbot made up a generous refund policy that contradicted the airline’s actual policy, and a court made the airline honor the chatbot’s promise. The airline disabled the chatbot soon after. A parcel delivery firm in the UK likewise had to disable its chatbot after an irate customer coaxed it into badmouthing the company.

Another good example is Khan Academy, which recently tried to use ChatGPT to create a virtual math tutor. The only problem, as the Wall Street Journal reports, is that it struggled with the most basic arithmetic. If computers are supposed to be good at one thing, it is math; that is the original purpose for which they were invented. Rather than making them better at other things, AI has managed to make them bad at math: a truly astounding feat.

AI Programming

But perhaps AI will outperform humans in programming, since that is a technical task without the subjectivity of art. Andrej Karpathy, co-founder of OpenAI, famously tweeted that “The hottest new programming language is English”. Nvidia CEO Jensen Huang made a similar statement in February, saying people no longer need to learn to code because AI will write it for them.

While this sounds good in theory, scientist and AI researcher Gary Marcus argues that it won’t work, because English does not allow for the kind of precision that formal programming languages do.
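To illustrate the precision gap Marcus is pointing at (my own toy example, not his), even a one-line English request hides decisions that a programming language forces you to state explicitly:

```python
# English request: "remove the duplicates from the list"
# The sentence leaves at least two decisions unstated that code must make explicit.
items = [3, 1, 3, 2, 1]

# Interpretation 1: drop repeats but preserve the order of first occurrences.
deduped_ordered = list(dict.fromkeys(items))   # [3, 1, 2]

# Interpretation 2: treat the result as a set, where order is irrelevant.
deduped_unordered = set(items)                 # {1, 2, 3}

print(deduped_ordered, deduped_unordered)
```

Two programmers, or one chatbot, can satisfy the same English sentence with programs that behave differently. Eliminating exactly this kind of ambiguity is why formal languages exist.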

Not surprisingly, the quality of AI-generated code is extremely poor. A study by GitClear shows that code churn, the fraction of code that is rewritten or discarded less than two weeks after being written, is set to double this year compared to 2021, before the advent of tools like Copilot. Marcus also notes that using AI will lead to faster accumulation of technical debt, making code harder to modify in the future.

Then there is the problem of data security. Last year, Samsung employees tried using ChatGPT to fix faulty code and ended up leaking confidential data. Data protection is extremely important, especially in industries such as law, finance and healthcare, where regulations are strict and any company using AI risks hefty fines. Some companies, such as JPMorgan and Verizon, have blocked their employees from using the tool entirely for this reason.

Even if these problems are somehow solved, programming is more than just code. It is about understanding what users need, how they expect software to behave, and how to make it intuitive to use. This will always require human developers.

Just Early Days?

You might think that these are just the early days. With more training data or some tweaks to the models, AI will eventually surpass humans. That will not happen, since AI is incapable of learning from experience.

As we go through life, we gain not just explicit knowledge but also tacit knowledge: knowledge that cannot be expressed to other people through words, images or any other means. For example, A. R. Rahman himself cannot give us a step-by-step guide to composing great music. Steven Spielberg cannot tell us how to make blockbuster movies.

Tacit knowledge is why film school graduates don’t instantly make it big in Hollywood. It is also why experienced workers are paid much more than those fresh out of college. Some knowledge can be gained only through experience. By definition, we don’t know exactly when we acquire this knowledge, or the mechanism through which it happens. Therefore, it is impossible to replicate this process so that AI can improve with time.

In some cases, AI’s performance actually deteriorates over time. Researchers at Stanford evaluated the behavior of GPT-3.5 and GPT-4 between March and June 2023. They found that both models produced more errors in code generation in June than they did in March. Both models also got worse at identifying prime numbers, with GPT-4’s accuracy dropping from 97.6% in March to a mere 2.4% in June.
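For contrast, checking whether a number is prime is trivial for ordinary, deterministic code. A minimal sketch like the one below gives the same correct answer on every run, with no drift between March and June.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check: same input, same answer, every run."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 1
    return True

# Unlike a language model, this never "forgets" how to do the task.
print([n for n in range(2, 30) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```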

Then we have the phenomenon of model collapse. This is where AI generated content gets included in datasets used to further train the same models. The researchers who described this problem found that “use of model-generated content in training causes irreversible defects in the resulting models”. They further conclude that data gathered from genuine human interactions will become even more valuable in the near future.

Another research paper similarly concludes that such recursive training loops will cause generative models to get worse over time.
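A deliberately simplified way to see the intuition behind model collapse (this is a toy illustration, not the experiments from the papers above): repeatedly fit a simple statistical model to data, regenerate the data from that fitted model, and watch the diversity of the original data drain away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a broad distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 51):
    # Fit a simple model (just a mean and standard deviation) to the current data,
    # then throw the data away and regenerate it from the fitted model --
    # analogous to training on model-generated content instead of human data.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread (std) = {sigma:.3f}")

# With such small samples, the fitted spread tends to shrink generation after
# generation, until the "model" produces nearly identical outputs: a crude,
# exaggerated analogue of the collapse the papers describe.
```

Each generation sees only what the previous generation produced, never fresh human data, so information about the original distribution is gradually and irreversibly lost.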

While companies like OpenAI, Google and others wildly exaggerate the capabilities of their “AI” models, a careful examination of the facts reveals that they are not intelligent at all. They have no capacity for learning, thought or creativity, and therefore we need not worry about them replacing us.