Learning Path

AI Roadmap

A progressive path to understand modern AI, from core concepts to production systems built with LLMs, RAG, agents, and serving stacks.

Foundations

What AI Actually Is

  • AI vs machine learning vs deep learning
  • Narrow AI vs general AI
  • Where modern AI fits in real products

Coming soon

Foundations

Math and Statistics You Need

  • Vectors and matrices intuition
  • Probability basics
  • Train, validation, and test mindset

Coming soon

Data

Data Quality and Preparation

  • Structured vs unstructured data
  • Cleaning and labeling basics
  • Bias and leakage risks

Coming soon

Models

Supervised Learning Basics

  • Classification vs regression
  • Features and targets
  • Overfitting and underfitting

Coming soon

Models

Unsupervised Learning and Embeddings

  • Clustering basics
  • Similarity search intuition
  • Embeddings in modern AI systems

Coming soon

LLMs

Transformer and Token Mental Model

  • Tokens and context window
  • Attention intuition
  • Inference vs training

Coming soon

Prompting

Prompt Engineering Fundamentals

  • Clear instructions
  • Role, context, and examples
  • Output constraints and delimiters

Coming soon

LLMs

Model Selection and Tradeoffs

  • Quality vs latency vs cost
  • Hosted vs local models
  • Open weights vs closed APIs

Coming soon

RAG

Retrieval-Augmented Generation

  • Chunking and indexing
  • Vector search basics
  • Grounding answers in source data

Coming soon

Evaluation

Evals and Failure Analysis

  • Golden datasets
  • Hallucination patterns
  • Measure task success

Coming soon

Safety

Safety, Privacy, and Guardrails

  • PII and sensitive data handling
  • Prompt injection basics
  • Moderation and policy checks

Coming soon

Agents

Tools, Functions, and Agents

  • Function calling
  • Tool orchestration
  • When agents are actually useful

Coming soon

Serving

Inference APIs and Serving

  • Request and response design
  • Batching and streaming
  • Rate limits and retries

Coming soon

Serving

Local AI Stack

  • Ollama and local runtimes
  • CPU and GPU constraints
  • Private deployment patterns

Coming soon

Product

UX for AI Features

  • Human-in-the-loop design
  • Fallback and uncertainty handling
  • Prompt and response UX

Coming soon

MLOps

Versioning and Experiment Tracking

  • Prompt and model versioning
  • Dataset reproducibility
  • Experiment logs and comparisons

Coming soon

Production

Monitoring and Cost Control

  • Latency, token, and error metrics
  • Drift and quality regressions
  • Cache and spend management

Coming soon

Production

Production AI Systems

  • RAG plus tools plus auth
  • Deployment patterns
  • Iterate safely over time

Coming soon