🤖 AI Tools & Concepts

An interactive mindmap for understanding modern AI


💡 How to use this mindmap

Click on any concept title to expand and see its explanation. Click on sub-concepts to dive deeper into specific topics. This mindmap covers the essential AI concepts you need to know in 2026!

Modern AI Ecosystem
🧠 Large Language Models (LLMs)
LLMs are AI models, trained on massive amounts of text, that can understand and generate human-like text. They're the foundation of modern AI assistants like ChatGPT and Claude. (Tags: Foundation • Text Generation)
Training & Architecture
LLMs use the transformer architecture and are trained on billions of text examples to learn patterns in language. They predict the next token (roughly, the next word) in a sequence, which allows them to generate coherent responses.
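To make "predict the next token" concrete, here is a minimal sketch using the Hugging Face transformers library and the small GPT-2 model; both are assumptions chosen for illustration, not tools the mindmap prescribes.

```python
# Sketch: generate text by repeatedly predicting the next token.
# Assumes the Hugging Face `transformers` package and the small GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models work by"
result = generator(prompt, max_new_tokens=20, do_sample=False)

# The model extends the prompt one predicted token at a time.
print(result[0]["generated_text"])
```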
Prompting
The way you ask questions to an LLM matters! Good prompts are clear, specific, and provide context. Techniques like few-shot learning (giving examples) can dramatically improve results.
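Purely as an illustration (the wording below is made up, not taken from the mindmap), compare a vague prompt with a specific one that adds audience, length, and format:

```python
# A vague prompt leaves the model guessing about audience, length, and format.
vague_prompt = "Tell me about RAG."

# A specific prompt states the task, the audience, the constraints, and the format.
specific_prompt = (
    "Explain Retrieval-Augmented Generation (RAG) to a product manager "
    "with no machine-learning background. Use at most 150 words and end "
    "with one concrete example from customer support."
)

# Either string would be sent as the user message to the LLM API of your choice.
print(specific_prompt)
```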
Context Window
The "memory" of an LLM during a conversation. It's measured in tokens (roughly words). Larger context windows allow the AI to remember more of your conversation and process longer documents.
📚 RAG (Retrieval-Augmented Generation)
RAG combines LLMs with external knowledge retrieval. Instead of relying only on training data, the AI searches a database for relevant information before generating a response. (Tags: Knowledge • Accuracy)
Why RAG Matters
RAG mitigates the "hallucination" problem, where LLMs confidently make things up. By retrieving real documents first, the AI can cite sources and provide accurate, up-to-date information that isn't in its training data.
Vector Databases
RAG systems store information as mathematical "embeddings" in vector databases. This allows semantic search—finding information based on meaning, not just keywords.
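A minimal in-memory sketch of semantic search, assuming the sentence-transformers package and the all-MiniLM-L6-v2 model; a production RAG system would store the vectors in a vector database rather than a Python list:

```python
# Semantic search sketch: embed documents, embed the query, rank by cosine similarity.
# Assumes the `sentence-transformers` package; real systems use a vector database.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The warranty covers manufacturing defects for two years.",
    "Shipping usually takes three to five business days.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

query = "How long do I have to send a product back?"
query_vector = model.encode(query, normalize_embeddings=True)

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_vectors @ query_vector
best = int(np.argmax(scores))
print(documents[best])
```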
Common Use Cases
Perfect for company knowledge bases, legal document analysis, customer support with product manuals, and any scenario where you need AI with access to specific, current information.
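Tying retrieval and generation together, a hedged sketch of the RAG flow; retrieve and call_llm are hypothetical placeholders standing in for the vector search above and whatever LLM API you use:

```python
# RAG sketch: retrieve relevant passages, then ask the model to answer from them.
# `retrieve` and `call_llm` are placeholders, not real library functions.

def retrieve(query: str, k: int = 3) -> list[str]:
    """Placeholder for a vector-database search (see the embedding sketch above)."""
    return [
        "Refunds are accepted within 30 days of purchase.",
        "Refunds are issued to the original payment method.",
    ][:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API."""
    return "You can request a refund within 30 days; it goes back to your original payment method."

def answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("What is your refund policy?"))
```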
🔌 MCP (Model Context Protocol)
MCP is a standard way for AI models to connect with external tools and data sources. Think of it as a USB port for AI: a universal way to plug in capabilities. (Tags: Integration • Anthropic)
The Problem MCP Solves
Before MCP, each AI tool integration was custom-built. MCP creates a standard protocol so developers can write one integration that works across multiple AI systems.
MCP Servers
MCP servers expose capabilities (like searching files, accessing databases, or calling APIs) that AI clients can use. Anyone can build an MCP server for their tools.
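A minimal sketch of what an MCP server can look like, assuming the official mcp Python SDK and its FastMCP helper; treat the exact names as assumptions and check the SDK documentation for current details:

```python
# Sketch of an MCP server exposing one tool, assuming the `mcp` Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Search the user's notes for a keyword (toy in-memory example)."""
    notes = ["Buy milk", "Prepare MCP demo", "Call the dentist"]
    hits = [n for n in notes if query.lower() in n.lower()]
    return "\n".join(hits) or "No matching notes."

if __name__ == "__main__":
    # An MCP client (for example Claude Desktop) can connect and call search_notes.
    mcp.run()
```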
Real-World Examples
Connect Claude to Google Drive, Slack, GitHub, or your company's internal systems through MCP. The AI can then read, search, and interact with these tools directly.
🤖 AI Agents
AI agents are autonomous systems that can plan, execute tasks, use tools, and work toward goals with minimal human intervention. They're AI that can "do things," not just answer questions. (Tags: Automation • Action)
Agent vs. Chatbot
A chatbot responds to questions. An agent can break down complex tasks, use multiple tools in sequence, learn from feedback, and complete objectives independently.
Tool Use & Function Calling
Agents can decide when and how to use external tools—search engines, calculators, APIs, databases. They choose the right tool for each step of a task.
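A vendor-neutral sketch of a tool-use loop; decide_next_action is a hard-coded stand-in for the model's real function-calling decision:

```python
# Agent loop sketch: the model picks a tool, the program runs it, the result goes back to the model.
# `decide_next_action` is a hard-coded stand-in for a real model's function-calling output.

def search_web(query: str) -> str:
    return f"(pretend search results for '{query}')"

def calculator(expression: str) -> str:
    return str(eval(expression))  # toy example only; never eval untrusted input

TOOLS = {"search_web": search_web, "calculator": calculator}

def decide_next_action(task: str, history: list[str]) -> dict | None:
    """Placeholder for the LLM's decision; returns a tool call or None when done."""
    if not history:
        return {"tool": "calculator", "args": {"expression": "19.99 * 3"}}
    return None  # the model would now write its final answer

def run_agent(task: str) -> list[str]:
    history: list[str] = []
    while (action := decide_next_action(task, history)) is not None:
        result = TOOLS[action["tool"]](**action["args"])
        history.append(f"{action['tool']} -> {result}")
    return history

print(run_agent("What do three items at $19.99 cost?"))
```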
Multi-Step Reasoning
Agents can plan multi-step workflows: research a topic, synthesize findings, create a document, send it via email—all from a single instruction.
Examples
Research agents that compile reports, coding agents that build software, data analysis agents that process spreadsheets, customer service agents that resolve issues.
✨ Vibe Coding
A new programming paradigm where you describe what you want in natural language and AI writes the code. Less syntax, more creativity. You "vibe" the solution into existence. (Tags: No-Code • Natural Language)
The Philosophy
Instead of memorizing syntax and debugging semicolons, you focus on the creative vision. Describe your app's behavior, design, and features in plain English.
Iterative Development
Start with a rough idea, let AI generate code, test it, then refine with more natural language instructions. Build prototypes in minutes, not days.
Tools & Platforms
Claude Artifacts, ChatGPT Code Interpreter, Replit AI, Cursor, and GitHub Copilot enable vibe coding. Each conversation becomes a collaborative coding session.
Limitations
Best for prototypes and moderate-complexity apps. Large-scale systems still need traditional engineering. The "vibe" works better when you understand what you're asking for.
🎯 Embeddings
Embeddings convert text, images, or other data into numerical vectors that capture meaning. Similar concepts have similar embeddings, enabling semantic search and comparison. (Tags: Vector Math • Semantic Search)
How They Work
An embedding model turns "king" into something like [0.2, 0.8, -0.3, ...] with hundreds of dimensions. Words with similar meanings have vectors that point in similar directions.
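To make the geometry concrete, here is a toy example with made-up three-dimensional vectors (real embedding models produce hundreds or thousands of dimensions):

```python
# Toy cosine-similarity example with made-up 3-dimensional vectors.
# Real embeddings have hundreds or thousands of dimensions.
import numpy as np

vectors = {
    "king":   np.array([0.8, 0.6, 0.1]),
    "queen":  np.array([0.7, 0.7, 0.1]),
    "banana": np.array([0.1, 0.0, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("king vs queen :", round(cosine(vectors["king"], vectors["queen"]), 3))   # high
print("king vs banana:", round(cosine(vectors["king"], vectors["banana"]), 3))  # low
```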
Semantic Similarity
Search for "happy" and find "joyful", "delighted", "cheerful"—not just exact matches. This powers smart search, recommendation systems, and document clustering.
Applications
Powers RAG systems, semantic search engines, content recommendations, duplicate detection, and classification tasks. The backbone of modern AI information retrieval.
🎓 Fine-Tuning
Training a pre-trained model on your specific data to customize its behavior, style, or domain expertise. Like teaching an expert new specialized knowledge. (Tags: Customization • Specialization)
When to Fine-Tune
Use fine-tuning when you need consistent style, domain-specific language, or behavior that prompting alone can't achieve, such as medical diagnosis support, legal writing, or a distinctive brand voice.
Fine-Tuning vs. RAG
RAG adds knowledge, fine-tuning changes behavior. Use RAG for facts that change. Use fine-tuning for consistent tone, format, or reasoning patterns.
Process
Collect examples of desired behavior, format as training pairs (input → desired output), run training, evaluate results. Requires hundreds to thousands of examples.
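A sketch of what those training pairs often look like, written as chat-style JSONL; the exact schema varies by provider, so treat the field names as an assumption:

```python
# Write fine-tuning examples as JSONL training pairs (input -> desired output).
# The chat-style schema shown here is common but provider-specific; check your provider's docs.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write product descriptions in our brand voice."},
            {"role": "user", "content": "Describe a stainless steel water bottle."},
            {"role": "assistant", "content": "Meet your new hydration sidekick: ..."},
        ]
    },
    # ...hundreds to thousands more examples like this one
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```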
💬 Prompt Engineering
The art and science of crafting inputs that get the best results from AI models. Good prompts are clear, specific, and structured to guide the AI's reasoning process. (Tags: Best Practices • Optimization)
Chain-of-Thought
Ask the AI to "think step by step" or "explain your reasoning." This improves accuracy on complex tasks by making the model show its work.
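A hypothetical prompt showing the pattern:

```python
# Chain-of-thought prompting: ask the model to reason before answering.
prompt = (
    "A train leaves at 9:40 and arrives at 13:05. How long is the trip?\n"
    "Think step by step, then give the final answer on its own line."
)
# Without the second sentence, many models jump straight to an answer and make more mistakes.
```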
Few-Shot Learning
Provide 2-5 examples of the task before asking the AI to perform it. Shows the model exactly what format and style you want.
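A hypothetical few-shot prompt for sentiment labeling; the two labeled examples show the model exactly what format to follow:

```python
# Few-shot prompting: show labeled examples, then ask for the next label in the same format.
prompt = """Label the sentiment of each review as Positive or Negative.

Review: "Arrived quickly and works perfectly."
Sentiment: Positive

Review: "Broke after two days, very disappointed."
Sentiment: Negative

Review: "The battery lasts all week, I love it."
Sentiment:"""
```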
System Prompts
Instructions that set the AI's role and behavior for the entire conversation. "You are an expert teacher explaining concepts to beginners..."
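Most chat APIs accept a list of role-tagged messages; here is a sketch of that shape with made-up content:

```python
# System prompts set the assistant's role for the whole conversation.
# The messages-list shape below is common to most chat APIs; the content is made up.
messages = [
    {"role": "system", "content": "You are an expert teacher explaining concepts to beginners. "
                                  "Use short sentences and everyday analogies."},
    {"role": "user", "content": "What is an embedding?"},
]
# `messages` would be passed to the chat endpoint of your chosen provider.
```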
Structured Output
Request specific formats: JSON, markdown tables, XML. Use delimiters and clear instructions to get consistent, parseable responses.
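A sketch of requesting JSON and parsing it; the prompt wording is hypothetical, and a real system should validate the parsed result:

```python
# Structured output: ask for JSON only, then parse and validate it.
import json

prompt = (
    "Extract the following fields from the email below and reply with JSON only, "
    'using exactly these keys: {"sender": string, "topic": string, "urgent": boolean}.\n\n'
    "Email: Hi, this is Dana from billing. The invoice export is failing and "
    "customers are waiting. Can someone look today?"
)

# Pretend this string came back from the model.
model_reply = '{"sender": "Dana", "topic": "invoice export failure", "urgent": true}'

data = json.loads(model_reply)   # raises an error if the model strayed from JSON
print(data["topic"], data["urgent"])
```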