LLM
Large Language Model — an AI trained on massive amounts of text to understand and generate human language.
An LLM is the engine behind tools like ChatGPT, Claude, and Gemini. It's trained by feeding it an enormous amount of text — books, websites, code, articles — and teaching it to predict which word comes next, over and over, billions of times. Through this process it doesn't just memorise text; it builds up a deep statistical picture of language, facts, reasoning patterns, and even tone. "Large" refers both to the amount of data it was trained on and to the number of parameters — the internal values it adjusts during training — inside it.
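To make "predict which word comes next" concrete, here is a deliberately tiny sketch in Python. It is not how a real LLM works — real models use neural networks with billions of parameters and look at the whole preceding context — but it shows the same basic idea in miniature: learn statistical patterns from text, then use them to guess the next word. The sample sentences and word choices are purely illustrative.

```python
import random
from collections import defaultdict, Counter

# A tiny "training corpus" — real LLMs train on trillions of words, not a few sentences.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a simple bigram model —
# vastly cruder than a neural network, but the same principle: learn patterns,
# then predict the next word from them).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    candidates = follows[word]
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"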
Imagine someone who has read virtually everything ever written and can hold a conversation on almost any topic. They didn't memorise every book — they absorbed patterns, ideas, and language from all of it. That's roughly what an LLM does, except it happened through computation, not reading.
LLMs don't "know" things the way humans do — they predict likely responses based on patterns in their training data. This is why they can sound confident while being completely wrong: a fluent, plausible-sounding answer and a correct one are not the same thing. They're pattern engines, not knowledge databases.