What AI Actually Is
AI — artificial intelligence — is software that can do things that normally require human intelligence: understanding language, recognizing patterns, making decisions, and generating text, images, or code.
The AI tools everyone talks about — ChatGPT, Claude, Gemini — are powered by Large Language Models (LLMs). These are programs trained on massive amounts of text, from which they learned patterns in how humans write, think, and communicate.
How LLMs Work (Simply)
Think of an LLM as an incredibly well-read assistant. It has read billions of web pages, books, and code repositories. It does not understand like you do — it predicts what words should come next based on patterns.
When you ask "How do I make pasta?", the LLM generates text that matches the pattern of "a helpful answer about making pasta" based on everything it has read about cooking. This prediction ability is surprisingly powerful.
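The prediction idea can be illustrated with a toy model. The sketch below counts which word tends to follow which in a tiny sample of text, then "predicts" the most frequent successor. This is only an analogy — real LLMs use neural networks over tokens, not word counts — and the training text here is invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
training_text = (
    "to make pasta boil water . to make pasta add salt . "
    "to make tea boil water ."
)

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent successor seen in the training text.
    return follows[word].most_common(1)[0][0]

print(predict_next("make"))  # -> "pasta" ("pasta" followed "make" twice, "tea" once)
print(predict_next("boil"))  # -> "water"
```

An LLM does something conceptually similar at vastly larger scale: instead of counting word pairs, it learns statistical patterns across billions of documents, which is why its predictions can look like understanding.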
Why This Matters Right Now
Before 2023, AI was for researchers and large companies. Now anyone can:
- Write: Draft emails, blog posts, reports in seconds
- Code: Build websites and apps by describing what you want
- Research: Summarize documents, find patterns, analyze data
- Automate: Create workflows that run 24/7
Key Concepts
Tokens: AI processes text in chunks called tokens (~4 characters each). Models have a context window — the maximum tokens they can process at once.
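The ~4-characters-per-token rule of thumb is easy to sketch. The helper below is a rough estimator, not a real tokenizer (production systems use model-specific tokenizers, and the ratio varies by language and content):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token rule of thumb."""
    return max(1, len(text) // 4)

print(estimate_tokens("How do I make pasta?"))  # 20 characters -> ~5 tokens

# By the same rule of thumb, a model with an 8,000-token context
# window holds roughly 8,000 * 4 = 32,000 characters at once.
```

This matters in practice because anything beyond the context window is simply not seen by the model — long documents must be split or summarized to fit.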
Prompts: What you type to the AI. Better prompts = better responses. But as you'll learn in Lesson 6, there's something even better than prompting.
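One reason better prompts produce better responses is that structure gives the model more to pattern-match on. The sketch below contrasts a vague prompt with a structured one; `build_prompt` is a hypothetical helper invented for this example, not part of any API:

```python
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt (hypothetical helper for illustration)."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

vague = "write about pasta"

specific = build_prompt(
    role="an Italian chef",
    task="Explain how to cook dried spaghetti al dente",
    constraints=["Keep it under 150 words", "Number each step"],
)
print(specific)
```

The structured version pins down the role, the goal, and the output format — all things the model would otherwise have to guess.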
Models: Claude (nuanced), GPT-4 (versatile), Gemini (multimodal), Llama (open weights). You don't need to commit to just one.
What AI Cannot Do
- Think or reason like a human — it predicts text patterns
- Remember previous conversations (unless you build a system for it)
- Access the internet in real-time (unless connected to tools)
- Guarantee accuracy — it can "hallucinate" (generate wrong but convincing info)
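The "no memory unless you build a system for it" point follows from how LLM APIs work: each request is stateless, so the model only "remembers" what you resend. A minimal memory system just replays the conversation history with every call. In this sketch, `call_model` is a hypothetical stand-in for a real API call:

```python
# LLM APIs are stateless; memory = resending the history each time.
history = []

def call_model(messages):
    # Stand-in for a real LLM API call; a real implementation
    # would send `messages` to a provider and return its reply.
    return f"(model reply to {len(messages)} messages)"

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the full history goes with every request
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Sam.")
chat("What is my name?")  # the earlier message is included, so a real
                          # model could answer "Sam"
```

Because the whole history is resent each turn, long conversations eventually hit the context window — which is why real systems summarize or truncate old messages.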
Understanding these limitations is what separates effective AI users from disappointed ones.