AI Models Are Just Smart Autocomplete

When we think of artificial intelligence, we often overlook the “artificial” part.

We like to believe that AI models are smart and can reason.
Yet, in their current state, they're far from it.

At their core, large language models (LLMs) predict the next token over and over (think of tokens as pieces of words).

This process is like the autocomplete feature on your smartphone (just much better at predicting).
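To make that concrete, here's a minimal sketch of next-token prediction. It assumes the Hugging Face `transformers` library and `torch` are installed and uses GPT-2 purely as an example model; it generates text by repeatedly picking the single most likely next token.

```python
# A minimal sketch: an LLM extends text one token at a time.
# Assumes: pip install transformers torch (GPT-2 is just an illustrative model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Artificial intelligence is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Generate 10 tokens, one at a time, always taking the highest-scoring token.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits          # a score for every token in the vocabulary
    next_token_id = logits[0, -1].argmax()        # greedily pick the most likely next token
    input_ids = torch.cat([input_ids, next_token_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

That loop is the whole trick: predict one token, append it, predict the next. (Real systems usually sample from the predicted probabilities instead of always taking the top token, which is why answers vary between runs.)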

Why should you care?

Well, I’ve seen projects fail because of misconceptions about AI.

Internalizing that generative AI is all about prediction can help you switch on a bullshit filter.
You'll become less vulnerable to the promises that companies make about AI nowadays.

And that, in the end, can save you some money.