What Even Is “AI”? Defining Key Terms in Plain Language


Articles in the A Pragmatic Look at AI (LLMs) in Software Development Workflows Series:

You might be wondering, “How are we three years into the hype and still asking something as basic as ‘What even is AI?’”

Well, as we established in the previous post, AI discourse is often like a cable-news shouting match. But it’s also often like late-night freshman dorm philosophizing: misused terms, 101-level conceptualizations, and fierce idealism. Eventually someone in the room starts denying the existence of free will, consciousness, or reality itself, and that’s when the Pragmatist needs to step in and ask:

“Hey friends! Do we even know what we’re talking about anymore?”

What is “AI?”

AI’s been the most written-about tech topic for years now, so defining it should be easy, right? We’ll just look up the universal definition and then we can — ope! What’s that? Not even experts agree on a single definition of “artificial intelligence”?

We could spend the rest of this article (and one wild and precious life) unpacking the debate over whether computers already are, or can ever be, “intelligent.” Unmute the Boomer (our pro-AI pundit), and they’ll claim AI can already outsmart humans—like when it enabled a non-physicist to discover new physics that not even PhD physicists knew about. The Doomer (our anti-AI pundit) will counter by pointing out “the profound limitations [of AI that] give us legitimate reasons to question whether they deserve to be called intelligent at all.”

This debate dates back to the birth of AI. In Empire of AI, Karen Hao reveals how John McCarthy coined “artificial intelligence” in 1956—not for its accuracy in describing a field of study, but (according to Hao) because the alternative option “automata” wouldn’t sound as sexy to grant funders.

From day one, the term “artificial intelligence” was just good branding.

Why is it so difficult to define AI? We’re oversimplifying for the sake of time here, but AI researchers broadly fall into two camps: Team Symbolists, who believe that intelligence is nothing more than explicit if-then rules, and Team Connectionists, who believe intelligence emerges from messy, organic, brain-like pattern learning.

This difference in belief—of whether intelligence is programmable logic or iterative guesswork—leads to drastically different approaches to how one researches and develops AI models. It alters one’s vision of the future. It informs the way one understands their fellow humans: Are we complex object models with mappable properties? Are we gloriously messy, divinely creative freestylers?

We could spiral forever, but let's focus:

How do we define AI? Not in theory. In practice!

Practical AI vs. theoretical AI

To date, when someone says “artificial intelligence,” more often than not—whether they realize it or not—they mean “large language model (LLM)” (which we’ll define shortly). Unfortunately, one out of ten times, certain people (often those with hugely influential voices in tech) say “AI” and mean artificial general intelligence (AGI) or even artificial superintelligence (ASI). When language gets squishy, debates get messy, and this particular mix-up throws any conversation about AI into a muddle. Why?

Because LLMs exist. And AGI and ASI don’t.

“Yet!” mouths the muted Boomer.

“And never will,” silently mutters the Doomer.

Before we talk practical AI, let’s briefly touch on the hypothetical.

What is Artificial General Intelligence (AGI)?
AGI refers to hypothetical tech that would match human-level intelligence—understanding, learning, and making decisions on its own, rather than simply mimicking patterns. It would solve any intellectual problem that a human can. Boomers insist it’s coming any day now; Doomers say don’t hold your breath. Pragmatists see AGI as a useful thought experiment for understanding the limits of present-day AI. But the reality of how much compute you would need for just one instance of AGI—let alone AGI at commercial-scale—means it’s not something devs need to roadmap for anytime soon.

What is Artificial Superintelligence (ASI)?
ASI takes it even further: it is an entirely theoretical, god-like intelligence surpassing the combined genius of all humans across every field, ever. Philosophers, ethicists, and safety researchers love to debate it. Pragmatists see it as science fiction that distracts from reality.

Still confused? Here’s a helpful mental model:

  • AI = all guidance-aided cars, real or hypothetical
  • LLM = a self-driving car
  • AGI = a flying self-driving car
  • ASI = a flying, self-driving, self-building, time-traveling, galaxy-hopping DeLorean that’s also God (in car form)

Now that we’ve set aside hypothetical AI, we can define artificial intelligence in practical, present-day terms:

How to Define AI in Pragmatic Terms
Artificial intelligence is a branch of computer science that uses machine learning (and the statistics it's based on) to find patterns in data. It generates, predicts, or reorganizes text, code, images, audio, or video based on those patterns. It doesn’t “think” or “understand” in a human sense; but, when trained on clean, curated datasets and prompted with expertise, it can be pretty damn good at outputting what should come next.

Let’s unmute here to gather some real-time feedback on this definition:

Doomer: “Farewell, truth! Adieu, artistry! Adios, humanity! It was nice knowin’ ya! Sometimes, anyway!”
Boomer: “YES HAHAHA YES! And that’s just the start. Strap in for permanently augmented cyber-human beings! We’re never gonna die!”
Pragmatist: “Neat. A tool for data, logic, and automation. What a useful class of software.”

From now on, when you hear AI, you can assume (and verify) they mean LLM. This, of course, raises the question:

What’s a Large Language Model (LLM)?

November 2022: OpenAI releases ChatGPT to the public. That’s when “AI” becomes synonymous with “LLM,” and when AI evolves from a black-box technology for Silicon Valley world-builders into an all-knowing, human-seeming companion that even your niece and uncle can use. Since then, LLMs have become AI’s dominant subtype. This is due, in part, to the fact that they run on and output humanity’s freest, most abundant, and endlessly renewable resource: language.

What can LLMs do?

LLMs recognize patterns of meaning and syntax in training texts. Then, they predict the most probable sequence of tokens (next word in a sentence, next line of code, next concept) based on the context they were given. For top-tier results, LLMs need:

  1. Clean, curated datasets
  2. Precise context, informed prompts
  3. Guidance from someone with the expertise to know (1) exactly what they want and (2) how to delegate the ask
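The prediction step above can be sketched as a toy next-token model: a bigram frequency table in plain Python. (Real LLMs use neural networks with billions of parameters; the corpus and the `predict_next` helper here are invented purely for illustration.)

```python
from collections import Counter, defaultdict

# Toy training corpus (stands in for the clean, curated dataset in step 1).
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most probable next token given the context."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often here
```

Swap the corpus for the public internet, the bigram table for a transformer, and single-word context for your entire prompt, and you have the rough shape of an LLM: no comprehension, just very well-informed guessing about what comes next.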

Clean data + precise prompts + an expert’s fingers on home row = supercharged workflows. (We’ll talk about dev-specific tasks shortly.) But first, brake check:

What can’t LLMs do?

  • Form intentions or “understand” anything (especially data they haven’t been trained on)
  • Reason through complex, multi-step problems beyond the patterns in their training data
  • Learn in real time without retraining
  • Solve problems with creative ingenuity
  • Produce truly unbiased outputs

We all need to live in the same reality in order to effectively work with AI, and 90% of “AI talk” is really about LLMs. Speaking of which, let’s see what our pundits have to say (unmutes).

Boomer: “Ahh yes: The LLM! The first embryo of our new semi-digital species. The schema for indexing all of human knowledge! We are going to get so rich off this! Wait—am I not on mute?”
Doomer: “A calculator, but for words. Souped-up autocomplete. The frightening final form of Clippy. HARD PASS.”
Pragmatist: “Cool. A repository for common sense and a proxy for reasoning. What a useful class of software!”

For our purposes here, for the rest of the article—unless otherwise specified—AI = LLM.

Now... the big one: Why Developers Should – and Shouldn't – Use LLMs in Our Development.
