init
Dockerfile (new file, 4 lines)
@@ -0,0 +1,4 @@
FROM docker.io/library/nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
README.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# AI Cheat Sheet

A pink-themed static web app — your quick reference for artificial intelligence terminology, techniques, and real-world applications.

## Pages

| Page                | Content                                                                              |
| ------------------- | ------------------------------------------------------------------------------------ |
| **Home**            | Overview and quick start                                                             |
| **Terminology**     | 20+ key terms from ML, NLP, and model concepts, plus common acronyms                 |
| **Techniques**      | Training, alignment, and optimization methods (RLHF, RAG, LoRA, quantization)        |
| **Use Cases**       | AI applications across 12 industries (healthcare, finance, coding, creative work...) |
| **Model Types**     | Architecture families — LLMs, CNNs, diffusion, GANs, MoE + comparison table          |
| **Prompt Guide**    | 7 prompt patterns with templates and best practices                                  |
| **Math & Concepts** | Core ideas (attention, loss, sampling) explained simply, plus key formulas           |

## Run Locally

Serve the files with any static file server:

```bash
python3 -m http.server 8080
```

Then open `http://localhost:8080`.

## Podman

Build and run with Podman:

```bash
podman build -t alicia-ai-cheatsheet .
podman run -d --name alicia-ai-cheatsheet -p 9090:80 alicia-ai-cheatsheet
```

Then open `http://localhost:9090`.

Stop and remove:

```bash
podman stop alicia-ai-cheatsheet
podman rm alicia-ai-cheatsheet
```

## Structure

```
index.html           Landing page
css/style.css        All styles (pink theme)
pages/
  terminology.html
  techniques.html
  use-cases.html
  model-types.html
  prompts.html
  math.html
Dockerfile           Podman container image
```

## License

MIT
css/style.css (new file, 295 lines)
@@ -0,0 +1,295 @@
:root {
  --pink-50: #fff1f5;
  --pink-100: #ffe4ef;
  --pink-200: #ffcce0;
  --pink-300: #ffa8c8;
  --pink-400: #ff69b4;
  --pink-500: #ff1493;
  --pink-600: #e91082;
  --pink-700: #d40e74;
  --pink-800: #b80c65;
  --pink-900: #9a0a55;
  --pink-neon: #ff3ec4;
  --white: #ffffff;
  --shadow: 0 1px 3px rgba(255,20,147,0.15), 0 1px 2px rgba(255,20,147,0.08);
  --shadow-lg: 0 10px 25px rgba(255,20,147,0.2), 0 4px 10px rgba(255,20,147,0.1);
}

* { margin: 0; padding: 0; box-sizing: border-box; }

body {
  font-family: 'Segoe UI', system-ui, -apple-system, sans-serif;
  background: linear-gradient(180deg, var(--pink-50) 0%, var(--pink-100) 100%);
  color: var(--pink-900);
  line-height: 1.6;
  min-height: 100vh;
}

a { color: var(--pink-500); text-decoration: none; }
a:hover { color: var(--pink-700); text-decoration: underline; }

/* Navigation */
nav {
  background: linear-gradient(90deg, var(--pink-600), var(--pink-500), var(--pink-600));
  padding: 0 2rem;
  position: sticky;
  top: 0;
  z-index: 100;
  box-shadow: 0 4px 20px rgba(255,20,147,0.3);
}

.nav-inner {
  max-width: 1100px;
  margin: 0 auto;
  display: flex;
  align-items: center;
  gap: 1rem;
}

.nav-brand {
  color: var(--white);
  font-weight: 800;
  font-size: 1.4rem;
  letter-spacing: -0.5px;
  padding: 1rem 0;
}

.nav-links { display: flex; gap: 0.25rem; flex-wrap: wrap; }

.nav-links a {
  color: var(--pink-100);
  padding: 0.6rem 1rem;
  border-radius: 8px;
  font-size: 0.9rem;
  font-weight: 500;
  transition: background 0.2s;
}

.nav-links a:hover,
.nav-links a.active {
  background: var(--pink-800);
  color: var(--white);
  text-decoration: none;
  box-shadow: 0 0 10px rgba(255,62,196,0.3);
}

/* Hero */
.hero {
  background: linear-gradient(135deg, var(--pink-500), var(--pink-600), var(--pink-700));
  color: var(--white);
  text-align: center;
  padding: 5rem 2rem;
  position: relative;
  overflow: hidden;
}

.hero::before {
  content: '';
  position: absolute;
  top: -50%;
  left: -50%;
  width: 200%;
  height: 200%;
  background: radial-gradient(circle, rgba(255,255,255,0.1) 0%, transparent 60%);
  animation: heroShine 8s ease-in-out infinite;
}

@keyframes heroShine {
  0%, 100% { transform: translate(0, 0); }
  50% { transform: translate(20%, 10%); }
}

.hero h1 { font-size: 3rem; font-weight: 800; margin-bottom: 0.5rem; position: relative; }
.hero p { font-size: 1.2rem; opacity: 0.95; max-width: 600px; margin: 0 auto; position: relative; }

/* Container */
.container {
  max-width: 1100px;
  margin: 0 auto;
  padding: 2rem 1.5rem 4rem;
}

/* Cards grid */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(320px, 1fr));
  gap: 1.5rem;
  margin-top: 1.5rem;
}

.card {
  background: var(--white);
  border-radius: 16px;
  padding: 1.8rem;
  box-shadow: var(--shadow-lg);
  border: 2px solid var(--pink-300);
  transition: transform 0.2s, box-shadow 0.2s;
}

.card:hover {
  transform: translateY(-4px);
  box-shadow: 0 15px 35px rgba(255,20,147,0.25);
  border-color: var(--pink-400);
}

.card h3 {
  color: var(--pink-600);
  font-size: 1.2rem;
  margin-bottom: 0.5rem;
}

.card p { color: var(--pink-800); font-size: 0.95rem; }

/* Section heading */
h2.section-title {
  font-size: 1.9rem;
  color: var(--pink-700);
  margin: 2.5rem 0 0.8rem;
  border-bottom: 3px solid var(--pink-400);
  padding-bottom: 0.5rem;
}

/* Glossary table */
.glossary-table {
  width: 100%;
  border-collapse: collapse;
  margin-top: 1rem;
  background: var(--white);
  border-radius: 16px;
  overflow: hidden;
  box-shadow: var(--shadow-lg);
  border: 2px solid var(--pink-200);
}

.glossary-table thead {
  background: linear-gradient(90deg, var(--pink-500), var(--pink-600));
  color: var(--white);
}

.glossary-table th,
.glossary-table td {
  padding: 0.9rem 1.2rem;
  text-align: left;
  font-size: 0.95rem;
}

.glossary-table tbody tr { border-bottom: 1px solid var(--pink-200); }
.glossary-table tbody tr:hover { background: var(--pink-100); }
.glossary-table td:first-child { font-weight: 700; color: var(--pink-600); white-space: nowrap; }

/* Definition card */
.def-card {
  background: var(--white);
  border-radius: 16px;
  padding: 1.5rem 2rem;
  margin-bottom: 1rem;
  box-shadow: var(--shadow-lg);
  border: 2px solid var(--pink-200);
  border-left: 5px solid var(--pink-500); /* must come after the shorthand, which would otherwise reset it */
  transition: border-color 0.2s;
}

.def-card:hover {
  border-color: var(--pink-400);
}

.def-card h3 {
  color: var(--pink-700);
  font-size: 1.15rem;
  margin-bottom: 0.3rem;
}

.def-card .category {
  display: inline-block;
  background: linear-gradient(135deg, var(--pink-400), var(--pink-500));
  color: var(--white);
  font-size: 0.72rem;
  font-weight: 700;
  padding: 0.2rem 0.7rem;
  border-radius: 999px;
  margin-bottom: 0.4rem;
  text-transform: uppercase;
  letter-spacing: 0.5px;
}

.def-card p { color: var(--pink-900); font-size: 0.95rem; }

/* Example block */
.example {
  background: linear-gradient(135deg, var(--pink-100), var(--pink-200));
  border-radius: 10px;
  padding: 0.8rem 1rem;
  margin-top: 0.5rem;
  font-family: 'Courier New', monospace;
  font-size: 0.88rem;
  color: var(--pink-900);
  border: 1px solid var(--pink-300);
}

.example strong { font-family: 'Segoe UI', system-ui, sans-serif; color: var(--pink-700); }

/* Use-case grid */
.use-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
  gap: 1.5rem;
  margin-top: 1.5rem;
}

.use-card {
  background: var(--white);
  border-radius: 16px;
  padding: 1.8rem;
  box-shadow: var(--shadow-lg);
  text-align: center;
  border: 2px solid var(--pink-200);
  transition: transform 0.2s, box-shadow 0.2s, border-color 0.2s;
}

.use-card:hover {
  transform: translateY(-4px);
  box-shadow: 0 15px 35px rgba(255,20,147,0.25);
  border-color: var(--pink-400);
}

.use-card .icon { font-size: 2.8rem; margin-bottom: 0.5rem; }
.use-card h3 { color: var(--pink-700); margin-bottom: 0.4rem; }
.use-card p { color: var(--pink-900); font-size: 0.9rem; margin-bottom: 0.8rem; }

/* Prompt examples */
.prompt-block {
  background: var(--white);
  border-radius: 16px;
  padding: 1.5rem 2rem;
  margin-bottom: 1rem;
  box-shadow: var(--shadow-lg);
  border: 2px solid var(--pink-200);
  transition: border-color 0.2s;
}

.prompt-block:hover {
  border-color: var(--pink-400);
}

.prompt-block h3 {
  color: var(--pink-700);
  margin-bottom: 0.5rem;
}

.prompt-block .label {
  font-weight: 700;
  color: var(--pink-500);
  font-size: 0.85rem;
  text-transform: uppercase;
  letter-spacing: 0.5px;
}

/* Footer */
footer {
  background: linear-gradient(90deg, var(--pink-700), var(--pink-600), var(--pink-700));
  color: var(--pink-200);
  text-align: center;
  padding: 1.8rem;
  font-size: 0.9rem;
  box-shadow: 0 -4px 20px rgba(255,20,147,0.3);
}
index.html (new file, 75 lines)
@@ -0,0 +1,75 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Cheat Sheet</title>
  <link rel="stylesheet" href="css/style.css">
</head>
<body>

<nav>
  <div class="nav-inner">
    <a href="index.html" class="nav-brand active">AI Cheat Sheet</a>
    <div class="nav-links">
      <a href="pages/terminology.html">Terminology</a>
      <a href="pages/techniques.html">Techniques</a>
      <a href="pages/use-cases.html">Use Cases</a>
      <a href="pages/model-types.html">Model Types</a>
      <a href="pages/prompts.html">Prompt Guide</a>
      <a href="pages/math.html">Math & Concepts</a>
    </div>
  </div>
</nav>

<div class="hero">
  <h1>AI Cheat Sheet</h1>
  <p>Your quick reference for artificial intelligence terminology, techniques, and real-world applications.</p>
</div>

<div class="container">
  <h2 class="section-title">Browse Topics</h2>
  <div class="cards">
    <div class="card">
      <h3>📖 Terminology</h3>
      <p>Key AI terms from ML and NLP — supervised learning, fine-tuning, tokens, embeddings, and more.</p>
    </div>
    <div class="card">
      <h3>⚙️ Techniques</h3>
      <p>How AI models are trained and improved — backpropagation, RLHF, quantization, RAG, and more.</p>
    </div>
    <div class="card">
      <h3>🎯 Use Cases</h3>
      <p>Where AI is used in the real world — healthcare, finance, creative work, customer support, and more.</p>
    </div>
    <div class="card">
      <h3>🤖 Model Types</h3>
      <p>LLMs, diffusion models, CNNs, GANs, transformers, and other AI architectures explained.</p>
    </div>
    <div class="card">
      <h3>✍️ Prompt Engineering</h3>
      <p>How to write effective prompts — zero-shot, few-shot, chain-of-thought, and structured prompts.</p>
    </div>
    <div class="card">
      <h3>📐 Math & Concepts</h3>
      <p>Underlying concepts — loss functions, attention, temperature, perplexity, and accuracy metrics.</p>
    </div>
  </div>

  <h2 class="section-title">Quick Start</h2>
  <div class="def-card">
    <span class="category">Core Concept</span>
    <h3>What is Artificial Intelligence?</h3>
    <p>AI refers to computer systems designed to perform tasks that normally require human intelligence — including learning, reasoning, problem-solving, perception, and language understanding. Modern AI is powered by machine learning, where models learn patterns from data rather than following explicit rules.</p>
  </div>
  <div class="def-card">
    <span class="category">Quick Fact</span>
    <h3>LLM vs Traditional ML</h3>
    <p>Traditional ML models are built for one specific task (e.g., classify spam). Large Language Models are general-purpose — trained on massive text corpora to understand and generate human language across countless tasks.</p>
  </div>
</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
pages/math.html (new file, 189 lines)
@@ -0,0 +1,189 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Math & Concepts - Cheat Sheet</title>
  <link rel="stylesheet" href="../css/style.css">
</head>
<body>

<nav>
  <div class="nav-inner">
    <a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
    <div class="nav-links">
      <a href="terminology.html">Terminology</a>
      <a href="techniques.html">Techniques</a>
      <a href="use-cases.html">Use Cases</a>
      <a href="model-types.html">Model Types</a>
      <a href="prompts.html">Prompt Guide</a>
      <a href="math.html" class="active">Math & Concepts</a>
    </div>
  </div>
</nav>

<div class="hero">
  <h1>Math & Concepts</h1>
  <p>The underlying ideas that make AI work — explained simply.</p>
</div>

<div class="container">

  <h2 class="section-title">Core Concepts</h2>
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Attention Mechanism</h3>
    <p>A way for the model to weigh the importance of different parts of the input when processing each token. "Attention Is All You Need" is the 2017 paper that launched the transformer revolution.</p>
    <div class="example"><strong>Analogy:</strong> When reading a sentence, you naturally pay more attention to certain words. "The cat that chased the mouse hid" — you attend to "cat" when processing "hid".</div>
  </div>
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Self-Attention</h3>
    <p>Each token in a sequence attends to every other token, creating rich contextual representations. The core of the transformer architecture.</p>
    <div class="example"><strong>Math:</strong> Attention(Q, K, V) = softmax(QKᵀ / √dₖ) V</div>
  </div>
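The attention formula in the card above can be sketched in a few lines of plain Python. This is an illustrative toy on lists, not how real frameworks implement it, and the names (`softmax`, `attention`) are ours:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over plain lists of vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))  # the first value vector gets most of the weight
```

Because the query matches the first key more closely, the softmax gives it the larger weight, so the output leans toward the first value vector.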
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Multi-Head Attention</h3>
    <p>Running multiple self-attention operations in parallel, each learning different types of relationships. Like having multiple "lenses" to view the input.</p>
  </div>
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Positional Encoding</h3>
    <p>Since transformers process all tokens simultaneously (unlike RNNs), position information must be added explicitly so the model knows word order.</p>
  </div>
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Feed-Forward Network (FFN)</h3>
    <p>After attention, each token passes through a small neural network that transforms its representation. Usually two linear layers with a non-linearity in between.</p>
  </div>
  <div class="def-card">
    <span class="category">Architecture</span>
    <h3>Layer Normalization</h3>
    <p>A technique to stabilize training by normalizing the activations of each layer. Helps gradients flow more smoothly through deep networks.</p>
  </div>

  <h2 class="section-title">Training Concepts</h2>
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Loss Function</h3>
    <p>A mathematical measure of how far the model's predictions are from the correct answers. Training = minimizing this value. For language models, cross-entropy loss is standard.</p>
    <div class="example"><strong>Example:</strong> If the correct next word is "cat" but the model assigns it 10% probability, the loss is high. If it assigns 90%, the loss is low.</div>
  </div>
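The 10% vs. 90% example above maps directly onto cross-entropy for a single token, which is just the negative log of the probability assigned to the correct answer (a minimal sketch):

```python
import math

def cross_entropy(p_correct):
    # Negative log-probability of the correct next token.
    return -math.log(p_correct)

print(round(cross_entropy(0.10), 3))  # 2.303 (low confidence, high loss)
print(round(cross_entropy(0.90), 3))  # 0.105 (high confidence, low loss)
```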
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Gradient Descent</h3>
    <p>The optimization algorithm that adjusts model weights in the direction that reduces loss. "Descent" because you're moving down the loss surface toward a minimum.</p>
  </div>
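One update step is just "weight minus learning rate times gradient". A toy sketch minimizing f(w) = w², whose gradient is 2w (illustrative only; the function and learning rate are our choices):

```python
def step(w, grad, lr=0.1):
    # Move opposite to the gradient, scaled by the learning rate.
    return w - lr * grad

w = 4.0
for _ in range(50):
    w = step(w, 2 * w)  # gradient of w^2 at w is 2w
print(w)  # very close to the minimum at 0
```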
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Adam Optimizer</h3>
    <p>The most popular optimizer for training deep learning models. Combines momentum (acceleration) with adaptive learning rates (per-parameter tuning).</p>
  </div>
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Gradient</h3>
    <p>A vector of partial derivatives showing the direction and rate of steepest increase of the loss. We move in the opposite direction to minimize loss.</p>
  </div>
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Regularization</h3>
    <p>Techniques to prevent overfitting: dropout (randomly deactivating neurons), weight decay (penalizing large weights), and early stopping.</p>
  </div>
  <div class="def-card">
    <span class="category">Training</span>
    <h3>Batch Normalization</h3>
    <p>Normalizing layer inputs across each mini-batch. Reduces internal covariate shift and allows higher learning rates.</p>
  </div>

  <h2 class="section-title">Generation & Sampling</h2>
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Temperature</h3>
    <p>Controls randomness in text generation. Low (0.2) = focused and deterministic. High (0.9) = creative and varied. 1.0 = standard sampling.</p>
    <div class="example"><strong>Low temp:</strong> Technical documentation, code generation<br>
    <strong>High temp:</strong> Creative writing, brainstorming</div>
  </div>
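Temperature is applied by dividing the logits before softmax, which is easy to see in a small sketch (our own toy function, not any particular library's API):

```python
import math

def softmax_t(logits, temperature=1.0):
    # Dividing by the temperature sharpens (<1) or flattens (>1) the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_t(logits, 0.2))  # nearly all probability mass on the top token
print(softmax_t(logits, 0.9))  # mass spread more evenly
```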
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Top-K Sampling</h3>
    <p>At each step, only consider the K most likely next tokens. Reduces weird or irrelevant outputs.</p>
  </div>
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Top-P (Nucleus) Sampling</h3>
    <p>Only consider tokens whose cumulative probability reaches P. More adaptive than Top-K — automatically adjusts the number of candidates.</p>
    <div class="example"><strong>Top-P = 0.9:</strong> Include the smallest set of tokens that together cover 90% probability mass.</div>
  </div>
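The "smallest set covering 90% of the mass" rule can be sketched directly (a toy filter over a fixed probability list; real samplers then renormalize and draw from the kept tokens):

```python
def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.9))  # [0, 1, 2] -- the 5% tail token is dropped
```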
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Greedy Decoding</h3>
    <p>Always pick the most likely next token. Fastest but can get stuck in repetitive loops. Often produces the most coherent output for factual tasks.</p>
  </div>
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Beam Search</h3>
    <p>Instead of picking the single best token at each step, keep the top B sequences and pick the best overall. Better quality but slower.</p>
  </div>
  <div class="def-card">
    <span class="category">Sampling</span>
    <h3>Logits</h3>
    <p>The raw, unnormalized scores the model outputs for each token before softmax. Can be adjusted for bias correction, repetition penalties, and custom sampling.</p>
  </div>

  <h2 class="section-title">Evaluation Metrics</h2>
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>Perplexity</h3>
    <p>Measures how "surprised" the model is by test data. Lower is better. A perplexity of 100 means the model is as confused as choosing uniformly from 100 options.</p>
    <div class="example"><strong>Example:</strong> Perplexity 5 on a language model means, on average, it's as uncertain as picking from 5 equally likely options at each step.</div>
  </div>
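The "perplexity 5" example checks out numerically: perplexity is the exponential of the average negative log-probability. A minimal sketch using natural log (with log base 2 you would use 2**nll instead, matching the 2^(cross-entropy) form):

```python
import math

def perplexity(token_probs):
    # Exponential of the average negative log-probability of the observed tokens.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that always gives the observed token probability 1/5 is exactly
# as uncertain as picking from 5 equally likely options:
print(perplexity([0.2, 0.2, 0.2]))  # ~5.0
```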
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>Accuracy</h3>
    <p>Percentage of correct predictions. Simple but can be misleading for imbalanced datasets.</p>
  </div>
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>Precision & Recall</h3>
    <p>Precision = of all positive predictions, how many were correct? Recall = of all actual positives, how many did we find?</p>
    <div class="example"><strong>Spam filter:</strong> High precision = few legitimate emails flagged. High recall = few spam emails missed.</div>
  </div>
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>F1 Score</h3>
    <p>The harmonic mean of precision and recall. A single metric that balances both.</p>
  </div>
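The spam-filter example above can be computed concretely with Python sets (made-up data, purely illustrative):

```python
def precision_recall_f1(predicted, actual):
    tp = len(predicted & actual)        # true positives
    precision = tp / len(predicted)     # of what we flagged, how much was right
    recall = tp / len(actual)           # of the real positives, how many we found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

flagged     = {1, 2, 3, 4}       # email ids the spam filter flagged
actual_spam = {2, 3, 4, 5, 6}    # email ids that really were spam
print(precision_recall_f1(flagged, actual_spam))  # precision 0.75, recall 0.6
```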
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>BLEU / ROUGE</h3>
    <p>Metrics for evaluating text generation quality by comparing model output to reference text. BLEU counts n-gram overlap (used for translation). ROUGE is similar but common for summarization.</p>
  </div>
  <div class="def-card">
    <span class="category">Metrics</span>
    <h3>Tokens per Second (TPS)</h3>
    <p>How many tokens the model generates per second. Measures inference speed. Typical range: 20-100+ TPS depending on model size and hardware.</p>
  </div>

  <h2 class="section-title">Key Formulas</h2>
  <table class="glossary-table">
    <thead>
      <tr><th>Concept</th><th>Formula</th><th>What it means</th></tr>
    </thead>
    <tbody>
      <tr><td>Attention</td><td>softmax(QKᵀ/√dₖ)V</td><td>Weigh inputs by relevance</td></tr>
      <tr><td>Cross-Entropy Loss</td><td>-Σ yᵢ log(pᵢ)</td><td>Penalizes wrong predictions</td></tr>
      <tr><td>Softmax</td><td>eˣⁱ / Σeˣʲ</td><td>Converts scores to probabilities</td></tr>
      <tr><td>ReLU</td><td>max(0, x)</td><td>Activation: passes positive values only</td></tr>
      <tr><td>Layer Norm</td><td>(x - μ) / σ × γ + β</td><td>Normalizes per-sample activations</td></tr>
      <tr><td>F1 Score</td><td>2 × (P×R)/(P+R)</td><td>Harmonic mean of precision & recall</td></tr>
      <tr><td>Perplexity</td><td>2^(cross-entropy)</td><td>Effective branching factor</td></tr>
    </tbody>
  </table>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
pages/model-types.html (new file, 154 lines)
@@ -0,0 +1,154 @@
|
|||||||
|
<!DOCTYPE html>
|
||||||
|
<html lang="en">
|
||||||
|
<head>
|
||||||
|
<meta charset="UTF-8">
|
||||||
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||||
|
<title>Model Types - Cheat Sheet</title>
|
||||||
|
<link rel="stylesheet" href="../css/style.css">
|
||||||
|
</head>
|
||||||
|
<body>
|
||||||
|
|
||||||
|
<nav>
|
||||||
|
<div class="nav-inner">
|
||||||
|
<a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
|
||||||
|
<div class="nav-links">
|
||||||
|
<a href="terminology.html">Terminology</a>
|
||||||
|
<a href="techniques.html">Techniques</a>
|
||||||
|
<a href="use-cases.html">Use Cases</a>
|
||||||
|
<a href="model-types.html" class="active">Model Types</a>
|
||||||
|
<a href="prompts.html">Prompt Guide</a>
|
||||||
|
<a href="math.html">Math & Concepts</a>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</nav>
|
||||||
|
|
||||||
|
<div class="hero">
|
||||||
|
<h1>Model Types</h1>
|
||||||
|
<p>Architectures and families of AI models — what they are and what they do.</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="container">
|
||||||
|
|
||||||
|
<h2 class="section-title">Language Models</h2>
|
||||||
|
<div class="def-card">
|
||||||
|
<span class="category">Transformer</span>
|
||||||
|
<h3>LLM (Large Language Model)</h3>
|
||||||
|
<p>Neural networks based on the transformer architecture, trained on massive text corpora. They predict the next token given a sequence, enabling fluency in language tasks.</p>
|
||||||
|
<div class="example"><strong>Examples:</strong> GPT-4, Claude, Gemini, Llama 3, Mistral, Qwen</div>
|
||||||
|
</div>
|
||||||
|
<div class="def-card">
|
||||||
|
<span class="category">Transformer</span>
|
||||||
|
<h3>Encoder-Only Models</h3>
|
||||||
|
<p>Transformers designed to understand input (not generate text). Used for classification, sentiment analysis, and embedding generation.</p>
|
||||||
|
<div class="example"><strong>Examples:</strong> BERT, RoBERTa, DeBERTa</div>
|
||||||
|
</div>
|
||||||
|
<div class="def-card">
|
||||||
|
<span class="category">Transformer</span>
|
||||||
|
<h3>Decoder-Only Models</h3>
|
||||||
|
<p>Transformers designed to generate text autoregressively — the dominant architecture for modern LLMs.</p>
|
||||||
|
<div class="example"><strong>Examples:</strong> GPT series, Claude, Llama, Mistral</div>
|
||||||
|
</div>
|
||||||
|
<div class="def-card">
|
||||||
|
<span class="category">Transformer</span>
|
||||||
|
<h3>Encoder-Decoder Models</h3>
|
||||||
|
<p>Transformers with both encoder and decoder, used for tasks that transform input to output (translation, summarization).</p>
|
||||||
|
<div class="example"><strong>Examples:</strong> T5, BART, Flan-T5</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<h2 class="section-title">Vision Models</h2>
|
||||||
|
<div class="def-card">
|
||||||
|
<span class="category">Vision</span>
|
||||||
|
<h3>CNN (Convolutional Neural Network)</h3>
<p>Neural networks with layers that scan images with small filters, detecting edges, textures, and patterns hierarchically. The backbone of computer vision for years.</p>
<div class="example"><strong>Examples:</strong> ResNet, EfficientNet, VGG</div>
</div>
<div class="def-card">
<span class="category">Vision</span>
<h3>ViT (Vision Transformer)</h3>
<p>Applying the transformer architecture to images by treating image patches as tokens. Often outperforms CNNs at scale.</p>
<div class="example"><strong>Examples:</strong> CLIP, DINOv2, ViT-Base</div>
</div>
<div class="def-card">
<span class="category">Vision</span>
<h3>Diffusion Models</h3>
<p>Models that generate images by iteratively denoising random noise. The approach behind most state-of-the-art image generators.</p>
<div class="example"><strong>Examples:</strong> Stable Diffusion, DALL-E 3, Midjourney</div>
</div>
<div class="def-card">
<span class="category">Vision</span>
<h3>Multimodal Models</h3>
<p>Models that process multiple input types — text, images, audio — and can generate outputs across modalities.</p>
<div class="example"><strong>Examples:</strong> GPT-4V (vision), Claude 3, Gemini, Qwen-VL</div>
</div>

<h2 class="section-title">Generative Models</h2>
<div class="def-card">
<span class="category">Generative</span>
<h3>GAN (Generative Adversarial Network)</h3>
<p>Two networks compete: a generator creates fake data, and a discriminator tries to detect fakes. Over time both improve, until the generator's outputs are indistinguishable from real data.</p>
<div class="example"><strong>Example:</strong> Creating photorealistic faces that don't exist (StyleGAN).</div>
</div>
<div class="def-card">
<span class="category">Generative</span>
<h3>VQ-VAE (Vector Quantized VAE)</h3>
<p>Combines autoencoders with discrete codebooks to learn compressed representations. Used as a foundation for autoregressive generation.</p>
<div class="example"><strong>Examples:</strong> MusicGen (music generation), SoundStream (audio compression)</div>
</div>
<div class="def-card">
<span class="category">Generative</span>
<h3>Flow Models</h3>
<p>Models that learn a reversible transformation between data and noise, enabling exact likelihood computation and fast generation.</p>
<div class="example"><strong>Examples:</strong> Glow, RealNVP; flow matching (used in newer generators such as Stable Diffusion 3)</div>
</div>

<h2 class="section-title">Other Architectures</h2>
<div class="def-card">
<span class="category">Architecture</span>
<h3>RNN / LSTM</h3>
<p>Recurrent networks that process sequences step-by-step, maintaining a hidden state. Largely replaced by transformers but still used in some applications.</p>
<div class="example"><strong>Use case:</strong> Time series prediction, speech recognition</div>
</div>
<div class="def-card">
<span class="category">Architecture</span>
<h3>Mixture of Experts (MoE)</h3>
<p>A model with multiple "expert" subnetworks. A routing mechanism selects which experts to use for each input, enabling large models that are computationally efficient at inference.</p>
<div class="example"><strong>Examples:</strong> Mixtral 8x7B, Switch Transformer</div>
</div>
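The routing idea fits in a few lines of Python. This toy gate (invented for illustration, not any real framework's API) scores two tiny "experts" and runs only the top-k of them:

```python
# Toy Mixture-of-Experts routing: a learned gate scores each expert,
# only the top-k experts run, and their outputs are mixed.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the k highest-scoring experts and mix their outputs."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    probs = softmax(scores)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top_k)  # renormalize over the chosen experts
    return [
        sum(probs[i] / norm * experts[i](x)[j] for i in top_k)
        for j in range(len(x))
    ]

# Two "experts": one doubles the input, one negates it.
experts = [lambda v: [2 * vi for vi in v], lambda v: [-vi for vi in v]]
gate = [[1.0, 0.0], [0.0, 1.0]]
out = moe_forward([1.0, 0.0], experts, gate, k=1)  # gate picks expert 0
```

In a real MoE layer the experts are feed-forward networks and the gate is trained jointly with them; only the selected experts' parameters are touched per token, which is where the inference savings come from.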
<div class="def-card">
<span class="category">Architecture</span>
<h3>Retrieval Models</h3>
<p>Models designed specifically for semantic search — finding the most relevant documents for a query from a large corpus.</p>
<div class="example"><strong>Examples:</strong> BGE, E5, Cohere embed models</div>
</div>
<div class="def-card">
<span class="category">Architecture</span>
<h3>Small Language Models (SLMs)</h3>
<p>Compact language models (under 7B parameters) optimized for edge devices and low-latency applications. Increasingly capable for their size.</p>
<div class="example"><strong>Examples:</strong> Phi-3, Gemma 2B, Qwen 1.5B, MicroLlama</div>
</div>

<h2 class="section-title">Model Comparison</h2>
<table class="glossary-table">
<thead>
<tr><th>Model</th><th>Type</th><th>Best For</th></tr>
</thead>
<tbody>
<tr><td>GPT-4 / GPT-4o</td><td>Decoder LLM</td><td>General-purpose reasoning, coding, multimodal</td></tr>
<tr><td>Claude 3.5</td><td>Decoder LLM</td><td>Long-context analysis, coding, writing</td></tr>
<tr><td>Gemini 1.5 Pro</td><td>Decoder LLM</td><td>Massive context windows, multimodal</td></tr>
<tr><td>Llama 3</td><td>Decoder LLM</td><td>Open-source, self-hosting, fine-tuning</td></tr>
<tr><td>Mixtral 8x7B</td><td>MoE LLM</td><td>Efficient inference, multilingual</td></tr>
<tr><td>Stable Diffusion</td><td>Diffusion</td><td>Image generation, open-source</td></tr>
<tr><td>CLIP</td><td>Encoder (Vision+Text)</td><td>Image-text matching, embeddings</td></tr>
<tr><td>BERT</td><td>Encoder</td><td>Text classification, search, NLU</td></tr>
<tr><td>Whisper</td><td>Encoder-Decoder</td><td>Speech recognition, transcription</td></tr>
<tr><td>TTS models</td><td>Decoder</td><td>Text-to-speech, voice synthesis</td></tr>
</tbody>
</table>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
157
pages/prompts.html
Normal file
@@ -0,0 +1,157 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Prompt Engineering - Cheat Sheet</title>
<link rel="stylesheet" href="../css/style.css">
</head>
<body>

<nav>
<div class="nav-inner">
<a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
<div class="nav-links">
<a href="terminology.html">Terminology</a>
<a href="techniques.html">Techniques</a>
<a href="use-cases.html">Use Cases</a>
<a href="model-types.html">Model Types</a>
<a href="prompts.html" class="active">Prompt Guide</a>
<a href="math.html">Math & Concepts</a>
</div>
</div>
</nav>

<div class="hero">
<h1>Prompt Engineering Guide</h1>
<p>Techniques for getting the best results from language models.</p>
</div>

<div class="container">

<h2 class="section-title">Prompt Patterns</h2>
<div class="prompt-block">
<span class="label">Zero-Shot</span>
<h3>Just ask — no examples needed</h3>
<p>The simplest approach: give the model a task directly. Works surprisingly well with capable models.</p>
<div class="example"><strong>Prompt:</strong> "Translate the following English text to French: 'Hello, how are you?'"</div>
</div>
<div class="prompt-block">
<span class="label">Few-Shot</span>
<h3>Show examples to guide behavior</h3>
<p>Include a few input-output examples in the prompt to teach the model the desired format or style.</p>
<div class="example"><strong>Prompt:</strong><br>
"Classify the sentiment:<br>
'I love this!' → Positive<br>
'This is terrible.' → Negative<br>
'It's okay, I guess.' → ?"</div>
</div>
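In application code, few-shot prompts are usually assembled from a list of examples rather than written by hand. A minimal sketch (the task string and examples are invented for illustration):

```python
# Build a few-shot prompt from (input, label) pairs plus the new query.
EXAMPLES = [
    ("I love this!", "Positive"),
    ("This is terrible.", "Negative"),
]

def few_shot_prompt(query, examples=EXAMPLES, task="Classify the sentiment:"):
    lines = [task]
    lines += [f"{text!r} -> {label}" for text, label in examples]
    lines.append(f"{query!r} -> ?")  # the model completes this line
    return "\n".join(lines)

prompt = few_shot_prompt("It's okay, I guess.")
```

Keeping the examples in data rather than in the prompt string makes it easy to swap them per task or pick them dynamically based on the query.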
<div class="prompt-block">
<span class="label">Chain-of-Thought</span>
<h3>Think step by step</h3>
<p>Asking the model to reason through a problem before answering improves accuracy on complex tasks.</p>
<div class="example"><strong>Prompt:</strong> "A store has 50 apples. They sell 12 in the morning and receive 30 more. How many do they have?"</div>
<div class="example"><strong>Without CoT:</strong> "80"<br>
<strong>With CoT:</strong> "50 - 12 = 38. 38 + 30 = 68. Answer: 68"</div>
</div>
<div class="prompt-block">
<span class="label">Role Prompting</span>
<h3>Assign a persona</h3>
<p>Telling the model to act as an expert in a domain primes it to use relevant knowledge and tone.</p>
<div class="example"><strong>Prompt:</strong> "You are a senior Python developer. Review this code for best practices and security issues."</div>
</div>
<div class="prompt-block">
<span class="label">Structured Output</span>
<h3>Force a specific format</h3>
<p>Specify the exact output format (JSON, CSV, markdown table) for programmatic use.</p>
<div class="example"><strong>Prompt:</strong> "Extract all product names and prices from this text. Return as a JSON array with keys 'name' and 'price'."</div>
</div>
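When you ask for JSON, parse and validate the reply before your code relies on it. A hedged sketch in which `model_reply` stands in for a real API response:

```python
# Parse a JSON reply from a model and coerce it into known types,
# so a malformed field fails loudly instead of propagating.
import json

model_reply = '[{"name": "Widget", "price": 9.99}, {"name": "Gadget", "price": 19.5}]'

def parse_products(reply):
    items = json.loads(reply)  # raises ValueError on non-JSON replies
    return [
        {"name": str(it["name"]), "price": float(it["price"])}
        for it in items
    ]

products = parse_products(model_reply)
```

In practice you would catch the parse error and retry the request, since models occasionally wrap JSON in extra prose.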
<div class="prompt-block">
<span class="label">Self-Consistency</span>
<h3>Ask multiple times, pick the best</h3>
<p>Generate several answers and take the most common or highest-quality one. Improves reliability on reasoning tasks.</p>
</div>
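Taking the most common of several sampled answers is a one-liner with `collections.Counter`; here `answers` stands in for repeated model calls made with nonzero temperature:

```python
# Self-consistency: sample several answers, keep the majority answer.
from collections import Counter

def majority_vote(answers):
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

answers = ["68", "68", "70", "68", "66"]  # placeholder for repeated model calls
final = majority_vote(answers)
```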
<div class="prompt-block">
<span class="label">ReAct (Reason + Act)</span>
<h3>Think, act, observe, repeat</h3>
<p>Alternate between reasoning about a problem and taking actions (searching, calculating) to gather information.</p>
<div class="example"><strong>Prompt:</strong> "Thought: I need to find the population of Tokyo. Action: search('Tokyo population 2024')<br>Observation: Tokyo has 37 million people.<br>Thought: Now I can answer the question."</div>
</div>

<h2 class="section-title">Prompt Tips</h2>
<div class="def-card">
<span class="category">Best Practice</span>
<h3>Be specific and detailed</h3>
<p>Vague prompts get vague answers. Specify format, length, tone, audience, and constraints.</p>
<div class="example">❌ "Write about AI."<br>
✅ "Write a 200-word blog post about AI in healthcare for a general audience. Use a friendly tone and include one real-world example."</div>
</div>
<div class="def-card">
<span class="category">Best Practice</span>
<h3>Use delimiters for clarity</h3>
<p>Separate instructions from data using quotes, XML tags, or dashes to help the model distinguish them.</p>
<div class="example"><strong>Prompt:</strong> "Summarize the text inside the &lt;data&gt; tags:<br>&lt;data&gt;{paste article here}&lt;/data&gt;"</div>
</div>
<div class="def-card">
<span class="category">Best Practice</span>
<h3>Provide context</h3>
<p>The more background you give, the better the model can tailor its response. Include relevant details, constraints, and goals.</p>
</div>
<div class="def-card">
<span class="category">Best Practice</span>
<h3>Iterate and refine</h3>
<p>First prompts are rarely perfect. Try variations, add examples, adjust constraints, and combine techniques.</p>
</div>
<div class="def-card">
<span class="category">Anti-Pattern</span>
<h3>Avoid ambiguous instructions</h3>
<p>"Make it better" or "fix this" without specifics leads to unpredictable results. State exactly what you want changed.</p>
</div>
<div class="def-card">
<span class="category">Anti-Pattern</span>
<h3>Don't overload the context window</h3>
<p>Pasting entire books or massive documents wastes tokens and can cause the model to miss key information. Summarize or use RAG.</p>
</div>

<h2 class="section-title">Template Examples</h2>
<div class="prompt-block">
<span class="label">Analysis Template</span>
<h3>Structured analysis prompt</h3>
<div class="example"><strong>Prompt:</strong>
"Analyze the following text and provide:
1. Key topics (bullet list)
2. Overall sentiment (positive/negative/neutral) with reasoning
3. Three most important quotes
4. A one-sentence summary
Text: {text}"</div>
</div>
<div class="prompt-block">
<span class="label">Coding Template</span>
<h3>Code generation with constraints</h3>
<div class="example"><strong>Prompt:</strong>
"Write a {language} function that {task}.
Constraints:
- Handle edge cases
- Include type hints
- Add docstring
- Keep it under {N} lines
- No external dependencies"</div>
</div>
<div class="prompt-block">
<span class="label">Critique Template</span>
<h3>Self-reflection prompt</h3>
<div class="example"><strong>Prompt:</strong>
"Here is a draft response. Critique it for:
- Accuracy
- Clarity
- Completeness
- Tone
Then rewrite it incorporating your feedback."</div>
</div>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
131
pages/techniques.html
Normal file
@@ -0,0 +1,131 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Techniques - Cheat Sheet</title>
<link rel="stylesheet" href="../css/style.css">
</head>
<body>

<nav>
<div class="nav-inner">
<a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
<div class="nav-links">
<a href="terminology.html">Terminology</a>
<a href="techniques.html" class="active">Techniques</a>
<a href="use-cases.html">Use Cases</a>
<a href="model-types.html">Model Types</a>
<a href="prompts.html">Prompt Guide</a>
<a href="math.html">Math & Concepts</a>
</div>
</div>
</nav>

<div class="hero">
<h1>AI Techniques</h1>
<p>How AI models are built, trained, and optimized.</p>
</div>

<div class="container">

<h2 class="section-title">Training Techniques</h2>
<div class="def-card">
<span class="category">Training</span>
<h3>Backpropagation</h3>
<p>The core algorithm for training neural networks. It calculates the gradient of the loss function with respect to each weight by chain rule, then adjusts weights to minimize error.</p>
<div class="example"><strong>Analogy:</strong> Like adjusting a radio dial — you turn it slightly, check if the signal is clearer, and keep adjusting in the right direction.</div>
</div>
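The "keep adjusting in the right direction" loop is gradient descent, the update rule that backpropagation feeds. A one-dimensional sketch with a hand-derived gradient:

```python
# Gradient descent on loss(w) = (w - 3)^2: step the weight against
# the gradient until it settles at the minimum, w = 3.
def train(w, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # d/dw of (w - 3)^2
        w -= lr * grad        # move downhill by a learning-rate-sized step
    return w

w = train(w=0.0)  # converges toward 3.0
```

Backpropagation's contribution is computing that `grad` automatically, via the chain rule, for every weight in a deep network instead of one hand-derived scalar.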
<div class="def-card">
<span class="category">Training</span>
<h3>Epoch</h3>
<p>One complete pass through the entire training dataset. Models typically train for many epochs.</p>
</div>
<div class="def-card">
<span class="category">Training</span>
<h3>Batch Size</h3>
<p>The number of training examples processed before the model's weights are updated. Larger batches are more stable but use more memory.</p>
</div>
<div class="def-card">
<span class="category">Training</span>
<h3>Learning Rate</h3>
<p>A hyperparameter that controls how much to adjust weights during each update. Too high → unstable training; too low → slow convergence.</p>
</div>
<div class="def-card">
<span class="category">Training</span>
<h3>Transfer Learning</h3>
<p>Using a model trained on one task as the starting point for a model on a second task. Saves time and data.</p>
<div class="example"><strong>Example:</strong> A model trained on Wikipedia text is fine-tuned for legal document analysis.</div>
</div>
<div class="def-card">
<span class="category">Training</span>
<h3>Data Augmentation</h3>
<p>Artificially expanding a training dataset by applying transformations (e.g., rotation, flipping, synonym replacement) to create new training examples.</p>
</div>

<h2 class="section-title">Alignment & Improvement</h2>
<div class="def-card">
<span class="category">Alignment</span>
<h3>RLHF (Reinforcement Learning from Human Feedback)</h3>
<p>A technique to align model outputs with human preferences. Humans rank model responses, and a reward model is trained on those rankings. The main model is then fine-tuned to maximize the reward.</p>
<div class="example"><strong>Used by:</strong> ChatGPT, Claude, and other conversational AI systems to make them more helpful and harmless.</div>
</div>
<div class="def-card">
<span class="category">Alignment</span>
<h3>SFT (Supervised Fine-Tuning)</h3>
<p>Fine-tuning a model on a dataset of input-output pairs to teach it a specific format or style of response.</p>
<div class="example"><strong>Example:</strong> Training a model to respond in JSON format for API integration.</div>
</div>
<div class="def-card">
<span class="category">Alignment</span>
<h3>Prompt Tuning</h3>
<p>Steering a model's behavior with carefully crafted prompts instead of changing its weights; cheap and reversible. (Strictly, "prompt tuning" also names a method that learns small "soft prompt" vectors while the model stays frozen.)</p>
</div>
<div class="def-card">
<span class="category">Alignment</span>
<h3>LoRA (Low-Rank Adaptation)</h3>
<p>An efficient fine-tuning technique that adds small trainable matrices to a frozen pre-trained model, drastically reducing compute and memory needs.</p>
</div>
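The core trick is easy to show with plain lists: the frozen weight matrix W is combined with a low-rank product A @ B, and only A and B would be trained. A dependency-free sketch with shapes chosen purely for illustration:

```python
# LoRA in miniature: effective weights are W + scale * (A @ B),
# where A (d x r) and B (r x d) have rank r much smaller than d.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_effective_weights(W, A, B, scale=1.0):
    """W stays frozen; only the small adapters A and B would be trained."""
    delta = matmul(A, B)
    return [
        [w + scale * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(W, delta)
    ]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 weight matrix
A = [[1.0], [0.0]]            # 2x1 adapter (rank 1)
B = [[0.5, 0.5]]              # 1x2 adapter
W_eff = lora_effective_weights(W, A, B)
```

For a d x d layer, the adapters hold 2·d·r values instead of d², which is why LoRA checkpoints are tiny compared to full fine-tunes.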

<h2 class="section-title">Deployment & Optimization</h2>
<div class="def-card">
<span class="category">Optimization</span>
<h3>Quantization</h3>
<p>Reducing the precision of model weights (e.g., from 32-bit to 8-bit) to shrink model size and speed up inference with minimal accuracy loss.</p>
<div class="example"><strong>Example:</strong> A 13GB model quantized to 4-bit becomes ~3.5GB, fitting on consumer GPUs.</div>
</div>
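Affine 8-bit quantization can be sketched in a few lines. Real libraries are far more careful (per-channel scales, outlier handling), but the round trip looks like this:

```python
# Map floats onto 0..255 integers with a scale and offset, then back.
# Reconstruction error is bounded by half the scale step.
def quantize(xs, bits=8):
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid zero scale if all equal
    q = [round((x - lo) / scale) for x in xs]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [qi * scale + lo for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now needs one byte instead of four, at the cost of at most half a quantization step of error per value.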
<div class="def-card">
<span class="category">Optimization</span>
<h3>Distillation</h3>
<p>Training a smaller "student" model to mimic the behavior of a larger "teacher" model, capturing its knowledge in a more compact form.</p>
</div>
<div class="def-card">
<span class="category">Optimization</span>
<h3>Speculative Decoding</h3>
<p>Using a small model to draft multiple tokens, then having the large model verify them in parallel — speeding up generation.</p>
</div>
<div class="def-card">
<span class="category">Architecture</span>
<h3>RAG (Retrieval-Augmented Generation)</h3>
<p>Augmenting a language model with an external knowledge retrieval step. The model first searches a knowledge base, then generates a response using both the retrieved info and its own training.</p>
<div class="example"><strong>Example:</strong> A customer support bot that searches your product docs before answering questions — no fine-tuning needed.</div>
</div>
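A toy end-to-end sketch of that retrieval step: score documents by word overlap and stuff the best match into the prompt. The documents are invented for illustration; production systems use embedding search over a vector store instead of word overlap:

```python
# Minimal RAG pipeline: retrieve the most relevant document for a query,
# then build the prompt the generator would receive.
DOCS = [
    "Refunds are available within 30 days of purchase.",
    "The device charges over USB-C in about two hours.",
]

def retrieve(query, docs=DOCS):
    q_words = set(query.lower().split())
    # Rank documents by how many query words they share.
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query):
    context = retrieve(query)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How long does the device take to charge?")
```

Because the knowledge lives in the document store rather than the weights, updating the bot's answers is just updating the documents.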
<div class="def-card">
<span class="category">Architecture</span>
<h3>Agent / Tool Use</h3>
<p>Giving an LLM the ability to call external tools (search, calculators, APIs) to accomplish multi-step tasks.</p>
<div class="example"><strong>Example:</strong> An AI that searches the web, summarizes results, and writes a report — all autonomously.</div>
</div>
<div class="def-card">
<span class="category">Architecture</span>
<h3>Chain-of-Thought</h3>
<p>Asking a model to show its reasoning step-by-step before giving an answer. Dramatically improves performance on reasoning tasks.</p>
<div class="example"><strong>Prompt:</strong> "Let's think step by step. First, ..."</div>
</div>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
167
pages/terminology.html
Normal file
@@ -0,0 +1,167 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Terminology - Cheat Sheet</title>
<link rel="stylesheet" href="../css/style.css">
</head>
<body>

<nav>
<div class="nav-inner">
<a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
<div class="nav-links">
<a href="terminology.html" class="active">Terminology</a>
<a href="techniques.html">Techniques</a>
<a href="use-cases.html">Use Cases</a>
<a href="model-types.html">Model Types</a>
<a href="prompts.html">Prompt Guide</a>
<a href="math.html">Math & Concepts</a>
</div>
</div>
</nav>

<div class="hero">
<h1>AI Terminology</h1>
<p>Essential terms every AI learner should know.</p>
</div>

<div class="container">

<h2 class="section-title">Machine Learning Basics</h2>
<div class="def-card">
<span class="category">ML</span>
<h3>Machine Learning (ML)</h3>
<p>A subset of AI where systems learn patterns from data to make decisions or predictions without being explicitly programmed for each task.</p>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Supervised Learning</h3>
<p>Training a model on labeled data — each example has an input and a known correct output. The model learns to map inputs to outputs.</p>
<div class="example"><strong>Example:</strong> Training on emails labeled "spam" or "not spam" to build a spam filter.</div>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Unsupervised Learning</h3>
<p>Training on unlabeled data — the model finds hidden patterns or groupings on its own.</p>
<div class="example"><strong>Example:</strong> Grouping customers by purchasing behavior without pre-defined categories.</div>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Reinforcement Learning</h3>
<p>An agent learns by interacting with an environment, receiving rewards for good actions and penalties for bad ones, optimizing for maximum cumulative reward.</p>
<div class="example"><strong>Example:</strong> An AI learning to play chess by playing millions of games against itself.</div>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Overfitting</h3>
<p>When a model learns the training data too well — including noise and outliers — and performs poorly on new, unseen data.</p>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Underfitting</h3>
<p>When a model is too simple to capture the patterns in the data, performing poorly on both training and test data.</p>
</div>

<h2 class="section-title">Natural Language Processing</h2>
<div class="def-card">
<span class="category">NLP</span>
<h3>NLP (Natural Language Processing)</h3>
<p>A field of AI focused on enabling computers to understand, interpret, and generate human language.</p>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Token</h3>
<p>The smallest unit of text a model processes. Tokens can be words, subwords, or characters. A single word may be split into multiple tokens.</p>
<div class="example"><strong>Example:</strong> "unhappiness" might become ["un", "happiness"] — 2 tokens.</div>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Embedding</h3>
<p>A numerical representation of text (or other data) in a continuous vector space, where similar items are closer together.</p>
<div class="example"><strong>Example:</strong> "king", "queen", "man", "woman" are embedded so that queen - woman + man ≈ king.</div>
</div>
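"Closer together" is usually measured with cosine similarity. A sketch over hand-made 3-dimensional vectors (toy values, not real model embeddings):

```python
# Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

emb = {  # made-up vectors standing in for a real embedding model's output
    "cat": [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.2, 0.05],
    "car": [0.0, 0.2, 0.95],
}

sim_pet = cosine(emb["cat"], emb["kitten"])
sim_other = cosine(emb["cat"], emb["car"])
```

Real embeddings have hundreds or thousands of dimensions, but the comparison works exactly the same way, which is what powers semantic search.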
<div class="def-card">
<span class="category">NLP</span>
<h3>Context Window</h3>
<p>The maximum number of tokens a model can process at once — both input and output combined.</p>
<div class="example"><strong>Example:</strong> A 128K context window means the model can read ~100,000 words in a single prompt.</div>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Paraphrasing</h3>
<p>Restating text in different words while preserving the original meaning. LLMs excel at this task.</p>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Sentiment Analysis</h3>
<p>Determining the emotional tone behind text — positive, negative, or neutral.</p>
<div class="example"><strong>Example:</strong> "This product is amazing!" → Positive</div>
</div>

<h2 class="section-title">Model Concepts</h2>
<div class="def-card">
<span class="category">Model</span>
<h3>LLM (Large Language Model)</h3>
<p>A neural network with billions of parameters trained on massive text corpora to understand and generate human language. Examples: GPT-4, Claude, Gemini, Llama.</p>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Pre-trained Model</h3>
<p>A model that has already been trained on a large dataset and can be used as-is or fine-tuned for specific tasks.</p>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Fine-tuning</h3>
<p>Taking a pre-trained model and continuing to train it on a smaller, task-specific dataset to adapt its behavior.</p>
<div class="example"><strong>Example:</strong> Fine-tuning GPT-4 on medical texts so it answers healthcare questions more accurately.</div>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Parameters</h3>
<p>The internal variables of a model that are adjusted during training. More parameters generally mean greater capacity to learn complex patterns.</p>
<div class="example"><strong>Example:</strong> GPT-4 is estimated to have on the order of a trillion parameters.</div>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Inference</h3>
<p>The process of using a trained model to generate outputs for new inputs (as opposed to training the model).</p>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Weights</h3>
<p>The numerical values learned during training that determine how input signals are transformed as they pass through the network.</p>
</div>

<h2 class="section-title">Common Acronyms</h2>
<table class="glossary-table">
<thead>
<tr><th>Acronym</th><th>Meaning</th></tr>
</thead>
<tbody>
<tr><td>AI</td><td>Artificial Intelligence</td></tr>
<tr><td>ML</td><td>Machine Learning</td></tr>
<tr><td>DL</td><td>Deep Learning</td></tr>
<tr><td>NLP</td><td>Natural Language Processing</td></tr>
<tr><td>LLM</td><td>Large Language Model</td></tr>
<tr><td>RLHF</td><td>Reinforcement Learning from Human Feedback</td></tr>
<tr><td>RAG</td><td>Retrieval-Augmented Generation</td></tr>
<tr><td>API</td><td>Application Programming Interface</td></tr>
<tr><td>SFT</td><td>Supervised Fine-Tuning</td></tr>
<tr><td>PoC</td><td>Proof of Concept</td></tr>
<tr><td>GAN</td><td>Generative Adversarial Network</td></tr>
<tr><td>CNN</td><td>Convolutional Neural Network</td></tr>
<tr><td>AGI</td><td>Artificial General Intelligence</td></tr>
<tr><td>STT / ASR</td><td>Speech-to-Text / Automatic Speech Recognition</td></tr>
<tr><td>TTS</td><td>Text-to-Speech</td></tr>
</tbody>
</table>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>
158
pages/use-cases.html
Normal file
@@ -0,0 +1,158 @@
<!DOCTYPE html>
|
||||||
|
<html lang="en">
|
||||||
|
<head>
|
||||||
|
<meta charset="UTF-8">
|
||||||
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||||
|
<title>AI Use Cases - Cheat Sheet</title>
|
||||||
|
<link rel="stylesheet" href="../css/style.css">
|
||||||
|
</head>
|
||||||
|
<body>
|
||||||
|
|
||||||
|
<nav>
|
||||||
|
<div class="nav-inner">
|
||||||
|
<a href="../index.html" class="nav-brand">AI Cheat Sheet</a>
|
||||||
|
<div class="nav-links">
|
||||||
|
<a href="terminology.html">Terminology</a>
|
||||||
|
<a href="techniques.html">Techniques</a>
|
||||||
|
<a href="use-cases.html" class="active">Use Cases</a>
|
||||||
|
<a href="model-types.html">Model Types</a>
|
||||||
|
<a href="prompts.html">Prompt Guide</a>
|
||||||
|
<a href="math.html">Math & Concepts</a>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</nav>
|
||||||
|
|
||||||
|
<div class="hero">
|
||||||
|
<h1>AI Use Cases</h1>
|
||||||
|
<p>Real-world applications of AI across industries.</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="container">
|
||||||
|
|
||||||
|
<h2 class="section-title">Content & Creative</h2>
|
||||||
|
<div class="use-grid">
|
||||||
|
<div class="use-card">
|
||||||
|
<div class="icon">✍️</div>
|
||||||
|
<h3>Content Generation</h3>
|
||||||
|
<p>Writing blog posts, marketing copy, emails, social media content, and creative stories at scale.</p>
|
||||||
|
<div class="example"><strong>Prompt:</strong> "Write a 300-word product description for a noise-canceling headphone."</div>
|
||||||
|
</div>
|
||||||
|
<div class="use-card">
|
||||||
|
<div class="icon">🎨</div>
|
||||||
|
<h3>Image Generation</h3>
|
||||||
|
<p>Creating images from text descriptions using diffusion models like DALL-E, Stable Diffusion, and Midjourney.</p>
|
||||||
|
<div class="example"><strong>Prompt:</strong> "A watercolor painting of a cat astronaut floating in space, pink nebula background."</div>
|
||||||
|
</div>
|
||||||
|
<div class="use-card">
|
||||||
|
<div class="icon">🎬</div>
|
||||||
|
<h3>Video & Audio</h3>
|
||||||
|
<p>Generating videos from text, creating music, voice cloning, and dubbing across languages.</p>
|
||||||
|
</div>
|
||||||
|
<div class="use-card">
|
||||||
|
<div class="icon">📝</div>
|
||||||
|
<h3>Summarization</h3>
|
||||||
|
<p>Condensing long documents, articles, meetings, or research papers into concise summaries.</p>
|
||||||
|
<div class="example"><strong>Prompt:</strong> "Summarize this 50-page report in 5 bullet points."</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<h2 class="section-title">Code & Development</h2>

<div class="use-grid">

<div class="use-card">
<div class="icon">💻</div>
<h3>Code Generation</h3>
<p>Writing code in any programming language from natural language descriptions. Tools: GitHub Copilot, Cursor.</p>
<div class="example"><strong>Prompt:</strong> "Write a Python function to sort a list of dictionaries by a given key."</div>
</div>

<div class="use-card">
<div class="icon">🐛</div>
<h3>Debugging & Review</h3>
<p>Identifying bugs, explaining error messages, suggesting improvements, and reviewing code quality.</p>
</div>

<div class="use-card">
<div class="icon">📄</div>
<h3>Documentation</h3>
<p>Auto-generating API docs, README files, inline comments, and technical documentation from code.</p>
</div>

<div class="use-card">
<div class="icon">🔄</div>
<h3>Code Translation</h3>
<p>Converting code from one language to another (e.g., JavaScript to Python, legacy Java to modern Java).</p>
</div>

</div>
<h2 class="section-title">Business & Productivity</h2>

<div class="use-grid">

<div class="use-card">
<div class="icon">🤖</div>
<h3>Chatbots & Assistants</h3>
<p>24/7 customer support agents that handle FAQs, triage issues, and escalate to humans when needed.</p>
</div>

<div class="use-card">
<div class="icon">📊</div>
<h3>Data Analysis</h3>
<p>Writing SQL queries, analyzing spreadsheets, generating charts, and extracting insights from data — no coding required.</p>
<div class="example"><strong>Prompt:</strong> "Plot monthly revenue by region from this CSV."</div>
</div>

<div class="use-card">
<div class="icon">🔍</div>
<h3>Research & Search</h3>
<p>AI-powered search that reads and synthesizes multiple sources instead of just returning links.</p>
</div>

<div class="use-card">
<div class="icon">🌐</div>
<h3>Translation</h3>
<p>High-quality machine translation between 100+ languages, preserving tone and context.</p>
</div>

<div class="use-card">
<div class="icon">📧</div>
<h3>Email & Meeting Assistants</h3>
<p>Drafting emails, scheduling, summarizing meetings, and extracting action items from conversations.</p>
</div>

<div class="use-card">
<div class="icon">📋</div>
<h3>Document Processing</h3>
<p>Extracting structured data from invoices, contracts, forms, and receipts using OCR + AI.</p>
</div>

</div>
<h2 class="section-title">Industry-Specific</h2>

<div class="use-grid">

<div class="use-card">
<div class="icon">🏥</div>
<h3>Healthcare</h3>
<p>Medical image analysis, drug discovery, clinical note generation, symptom triage, and personalized treatment plans.</p>
</div>

<div class="use-card">
<div class="icon">💰</div>
<h3>Finance</h3>
<p>Fraud detection, algorithmic trading, risk assessment, credit scoring, and compliance monitoring.</p>
</div>

<div class="use-card">
<div class="icon">🚗</div>
<h3>Automotive</h3>
<p>Autonomous driving, predictive maintenance, route optimization, and in-car voice assistants.</p>
</div>

<div class="use-card">
<div class="icon">🎓</div>
<h3>Education</h3>
<p>Personalized tutoring, automated grading, curriculum design, and interactive learning experiences.</p>
</div>

<div class="use-card">
<div class="icon">🏭</div>
<h3>Manufacturing</h3>
<p>Quality inspection via computer vision, supply chain optimization, predictive maintenance, and digital twins.</p>
</div>

<div class="use-card">
<div class="icon">⚖️</div>
<h3>Legal</h3>
<p>Contract review, legal research, case prediction, document drafting, and compliance analysis.</p>
</div>

</div>

</div>

<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

</body>
</html>