LLM integration with all the tabs
@@ -59,34 +59,40 @@
<span class="category">ML</span>
<h3>Machine Learning (ML)</h3>
<p>A subset of AI where systems learn patterns from data to make decisions or predictions without being explicitly programmed for each task.</p>
<button class="llm-btn" onclick="explainTerm('Machine Learning (ML)', 'A subset of AI where systems learn patterns from data to make decisions or predictions without being explicitly programmed for each task.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Supervised Learning</h3>
<p>Training a model on labeled data — each example has an input and a known correct output. The model learns to map inputs to outputs.</p>
<div class="example"><strong>Example:</strong> Training on emails labeled "spam" or "not spam" to build a spam filter.</div>
<button class="llm-btn" onclick="explainTerm('Supervised Learning', 'Training a model on labeled data — each example has an input and a known correct output. The model learns to map inputs to outputs.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Unsupervised Learning</h3>
<p>Training on unlabeled data — the model finds hidden patterns or groupings on its own.</p>
<div class="example"><strong>Example:</strong> Grouping customers by purchasing behavior without pre-defined categories.</div>
<button class="llm-btn" onclick="explainTerm('Unsupervised Learning', 'Training on unlabeled data — the model finds hidden patterns or groupings on its own.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Reinforcement Learning</h3>
<p>An agent learns by interacting with an environment, receiving rewards for good actions and penalties for bad ones, optimizing for maximum cumulative reward.</p>
<div class="example"><strong>Example:</strong> An AI learning to play chess by playing millions of games against itself.</div>
<button class="llm-btn" onclick="explainTerm('Reinforcement Learning', 'An agent learns by interacting with an environment, receiving rewards for good actions and penalties for bad ones, optimizing for maximum cumulative reward.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Overfitting</h3>
<p>When a model learns the training data too well — including noise and outliers — and performs poorly on new, unseen data.</p>
<button class="llm-btn" onclick="explainTerm('Overfitting', 'When a model learns the training data too well — including noise and outliers — and performs poorly on new, unseen data.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">ML</span>
<h3>Underfitting</h3>
<p>When a model is too simple to capture the patterns in the data, performing poorly on both training and test data.</p>
<button class="llm-btn" onclick="explainTerm('Underfitting', 'When a model is too simple to capture the patterns in the data, performing poorly on both training and test data.')"><span class="icon">💬</span> Explain deeper</button>
</div>
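The overfitting and underfitting cards above can be sketched in miniature: a "model" that memorizes its training pairs is perfect on them but has no answer for unseen inputs, while a simpler model that captures the underlying trend generalizes. (Illustrative JavaScript; the data and both toy models are invented for the demo.)

```javascript
// Training data roughly follows y = 2x, with a little noise.
const train = [[1, 2.1], [2, 3.9], [3, 6.2], [4, 8.0]];

// Extreme "overfit" model: a lookup table of the exact training pairs.
const memorizer = new Map(train);

// Simple model: least-squares slope through the origin.
const slope =
  train.reduce((s, [x, y]) => s + x * y, 0) /
  train.reduce((s, [x]) => s + x * x, 0);
const linear = x => slope * x;

console.log(memorizer.get(3)); // → 6.2 (perfect on a training point)
console.log(memorizer.get(5)); // → undefined (no generalization)
console.log(linear(5));        // ≈ 10 (generalizes from the trend)
```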

<h2 class="section-title">Natural Language Processing</h2>
@@ -94,35 +100,41 @@
<span class="category">NLP</span>
<h3>NLP (Natural Language Processing)</h3>
<p>A field of AI focused on enabling computers to understand, interpret, and generate human language.</p>
<button class="llm-btn" onclick="explainTerm('NLP - Natural Language Processing', 'A field of AI focused on enabling computers to understand, interpret, and generate human language.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Token</h3>
<p>The smallest unit of text a model processes. Tokens can be words, subwords, or characters. A single word may be split into multiple tokens.</p>
<div class="example"><strong>Example:</strong> "unhappiness" might become ["un", "happiness"] — 2 tokens.</div>
<button class="llm-btn" onclick="explainTerm('Token', 'The smallest unit of text a model processes. Tokens can be words, subwords, or characters. A single word may be split into multiple tokens.')"><span class="icon">💬</span> Explain deeper</button>
</div>
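The subword example above can be sketched with a toy greedy longest-match tokenizer. This is an illustration only: production tokenizers (BPE, WordPiece, etc.) learn their vocabularies from data, and the four-entry vocabulary here is invented for the demo.

```javascript
// Toy greedy longest-match subword tokenizer (illustrative only).
function tokenize(word, vocab) {
  const tokens = [];
  let i = 0;
  while (i < word.length) {
    // Find the longest vocabulary entry matching at position i.
    let match = null;
    for (let end = word.length; end > i; end--) {
      const piece = word.slice(i, end);
      if (vocab.has(piece)) { match = piece; break; }
    }
    if (!match) match = word[i]; // fall back to a single character
    tokens.push(match);
    i += match.length;
  }
  return tokens;
}

const vocab = new Set(["un", "happiness", "happy", "ness"]);
console.log(tokenize("unhappiness", vocab)); // → [ 'un', 'happiness' ]
```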
<div class="def-card">
<span class="category">NLP</span>
<h3>Embedding</h3>
<p>A numerical representation of text (or other data) in a continuous vector space, where similar items are closer together.</p>
<div class="example"><strong>Example:</strong> "king", "queen", "man", "woman" are embedded so that queen - woman + man ≈ king.</div>
<button class="llm-btn" onclick="explainTerm('Embedding', 'A numerical representation of text (or other data) in a continuous vector space, where similar items are closer together.')"><span class="icon">💬</span> Explain deeper</button>
</div>
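The king/queen analogy above can be checked with tiny hand-made vectors. These 2-D toy embeddings are invented for the demo; real embeddings are learned and have hundreds or thousands of dimensions.

```javascript
// Hand-made 2-D "embeddings": one axis roughly encodes royalty, the other gender.
const emb = {
  king:  [0.9, 0.8],
  queen: [0.9, 0.2],
  man:   [0.1, 0.8],
  woman: [0.1, 0.2],
};

const sub = (a, b) => a.map((x, i) => x - b[i]);
const add = (a, b) => a.map((x, i) => x + b[i]);

function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = v => Math.hypot(...v);
  return dot / (norm(a) * norm(b));
}

// queen - woman + man should land closest to king.
const target = add(sub(emb.queen, emb.woman), emb.man);
let best = null, bestSim = -Infinity;
for (const [word, vec] of Object.entries(emb)) {
  const sim = cosine(target, vec);
  if (sim > bestSim) { bestSim = sim; best = word; }
}
console.log(best); // → king
```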
<div class="def-card">
<span class="category">NLP</span>
<h3>Context Window</h3>
<p>The maximum number of tokens a model can process at once — both input and output combined.</p>
<div class="example"><strong>Example:</strong> A 128K context window means the model can read ~100,000 words in a single prompt.</div>
<button class="llm-btn" onclick="explainTerm('Context Window', 'The maximum number of tokens a model can process at once — both input and output combined.')"><span class="icon">💬</span> Explain deeper</button>
</div>
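The ~100,000-word figure follows from the common rule of thumb that one token covers roughly 0.75 English words. This is an approximation; the real ratio depends on the tokenizer and the text.

```javascript
// Rule of thumb: 1 token ≈ 0.75 English words (approximate).
const contextTokens = 128 * 1000; // a "128K" context window
const wordsPerToken = 0.75;
console.log(contextTokens * wordsPerToken); // → 96000, i.e. ~100,000 words
```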
<div class="def-card">
<span class="category">NLP</span>
<h3>Paraphrasing</h3>
<p>Restating text in different words while preserving the original meaning. LLMs excel at this task.</p>
<button class="llm-btn" onclick="explainTerm('Paraphrasing', 'Restating text in different words while preserving the original meaning. LLMs excel at this task.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">NLP</span>
<h3>Sentiment Analysis</h3>
<p>Determining the emotional tone behind text — positive, negative, or neutral.</p>
<div class="example"><strong>Example:</strong> "This product is amazing!" → Positive</div>
<button class="llm-btn" onclick="explainTerm('Sentiment Analysis', 'Determining the emotional tone behind text — positive, negative, or neutral.')"><span class="icon">💬</span> Explain deeper</button>
</div>

<h2 class="section-title">Model Concepts</h2>
@@ -130,33 +142,39 @@
<span class="category">Model</span>
<h3>LLM (Large Language Model)</h3>
<p>A neural network with billions of parameters trained on massive text corpora to understand and generate human language. Examples: GPT-4, Claude, Gemini, Llama.</p>
<button class="llm-btn" onclick="explainTerm('LLM - Large Language Model', 'A neural network with billions of parameters trained on massive text corpora to understand and generate human language.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Pre-trained Model</h3>
<p>A model that has already been trained on a large dataset and can be used as-is or fine-tuned for specific tasks.</p>
<button class="llm-btn" onclick="explainTerm('Pre-trained Model', 'A model that has already been trained on a large dataset and can be used as-is or fine-tuned for specific tasks.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Fine-tuning</h3>
<p>Taking a pre-trained model and continuing to train it on a smaller, task-specific dataset to adapt its behavior.</p>
<div class="example"><strong>Example:</strong> Fine-tuning GPT-4 on medical texts so it answers healthcare questions more accurately.</div>
<button class="llm-btn" onclick="explainTerm('Fine-tuning', 'Taking a pre-trained model and continuing to train it on a smaller, task-specific dataset to adapt its behavior.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Parameters</h3>
<p>The internal variables of a model that are adjusted during training. More parameters generally mean greater capacity to learn complex patterns.</p>
<div class="example"><strong>Example:</strong> GPT-4 is estimated to have trillions of parameters.</div>
<button class="llm-btn" onclick="explainTerm('Parameters', 'The internal variables of a model that are adjusted during training. More parameters generally mean greater capacity to learn complex patterns.')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Inference</h3>
<p>The process of using a trained model to generate outputs for new inputs (as opposed to training the model).</p>
<button class="llm-btn" onclick="explainTerm('Inference', 'The process of using a trained model to generate outputs for new inputs (as opposed to training the model).')"><span class="icon">💬</span> Explain deeper</button>
</div>
<div class="def-card">
<span class="category">Model</span>
<h3>Weights</h3>
<p>The numerical values learned during training that determine how input signals are transformed as they pass through the network.</p>
<button class="llm-btn" onclick="explainTerm('Weights', 'The numerical values learned during training that determine how input signals are transformed as they pass through the network.')"><span class="icon">💬</span> Explain deeper</button>
</div>
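The parameters, inference, and weights cards above boil down to one idea: a model's learned knowledge is stored as numbers, and inference just applies those numbers to new input. A single artificial neuron makes this concrete (toy values chosen for the demo):

```javascript
// A single neuron: its "parameters" are two weights and a bias,
// all just numbers that would be adjusted during training.
const weights = [2, -1];
const bias = 1;

// Inference: apply the learned parameters to a new input (no training involved).
const forward = input =>
  input.reduce((sum, x, i) => sum + x * weights[i], bias);

console.log(forward([3, 4])); // → 1 + 2*3 + (-1)*4 = 3
```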

<h2 class="section-title">Common Acronyms</h2>
@@ -187,5 +205,34 @@
<footer>AI Cheat Sheet — A learning reference for artificial intelligence</footer>

<script src="../lib/modal.js"></script>
<script src="../lib/llm.js"></script>
<script>
(function(){
  function explainTerm(title, definition) {
    LLMModal.open('💬 ' + title);
    var messages = [
      { role: 'system', content: 'You are an AI educator explaining technical terms simply. Keep explanations to 2-3 short paragraphs with a practical example. Use the definition provided as your starting point.' },
      { role: 'user', content: 'Explain this AI term in simple, practical terms: ' + title + '. Definition: ' + definition + '.' }
    ];

    var fullText = '';
    LLM.callAPI(
      messages,
      function(chunk) {
        fullText += chunk;
        LLMModal.update(fullText);
      },
      function() {},
      function(err) {
        LLMModal.error(err);
      }
    );
  }

  window.explainTerm = explainTerm;
})();
</script>
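The script assumes `../lib/llm.js` exposes `LLM.callAPI(messages, onChunk, onDone, onError)`; that signature is inferred from the call site above, not from the library itself. For exercising the modal without a backend, a hypothetical local stub with the same shape might look like:

```javascript
// Hypothetical stand-in for ../lib/llm.js, matching the call shape used by
// explainTerm: LLM.callAPI(messages, onChunk, onDone, onError). It streams a
// canned reply chunk by chunk so the modal UI can be tested offline.
const LLM = {
  callAPI(messages, onChunk, onDone, onError) {
    try {
      const reply = 'Stub reply for: ' + messages[messages.length - 1].content;
      // Deliver the text in small pieces, like a streaming API would.
      for (let i = 0; i < reply.length; i += 8) {
        onChunk(reply.slice(i, i + 8));
      }
      onDone();
    } catch (err) {
      onError(err);
    }
  },
};
```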
</body>
</html>