"From Words to Agents" Presentation
“From Words to Agents” is a 12-slide presentation that walks through the full LLM pipeline — from tokenization to the agentic loop. Unlike the Calculus example, this deck is built entirely with the Builder API in TypeScript, demonstrating how to generate complex presentations programmatically.
Preview: The Agentic Loop
Slides
- From Words to Agents — Title slide with the next-token equation
- Tokenization — Breaking text into BPE tokens
- Embeddings — Token IDs to high-dimensional vectors
- Attention — Query, Key, Value mechanism with softmax
- Transformer Block — Attention → FFN → LayerNorm architecture
- Deep Representations — Stacked transformer layers
- Next Token Prediction — Logits, softmax, and sampling
- Autoregressive Loop — Iterative token generation
- Structured Output — Constrained generation (JSON, code)
- The Agentic Loop — Observe → Think → Act → Observe cycle
- The Complete Pipeline — End-to-end summary diagram
- Thank You — Closing slide
Builder Code Snippet
Here’s how the first few slides are constructed:
```ts
import { presentation, darkTheme } from '@elucim/dsl';

const t = darkTheme;

const doc = presentation('From Words to Agents', darkTheme, { showNotes: true })
  .slide('From Words to Agents', s => {
    s.title('From Words to Agents', { y: 200, fontSize: 36 });
    s.subtitle('How LLMs Think, Speak, and Act', { y: 250, fontSize: 20 });
    s.wait(5);
    s.latex('P(w_t \\mid w_1, w_2, \\ldots, w_{t-1})', {
      y: 330,
      fontSize: 32,
      color: t.warning,
    });
    s.at(75);
    s.text('A visual journey through language models', {
      x: s.cx,
      y: 520,
      fontSize: 14,
      color: t.muted,
    });
  }, { notes: 'Title slide — introduce the journey from tokens to agents.' })
  .slide('Tokenization', s => {
    s.title('Step 1: Tokenization');
    s.subtitle('Breaking text into pieces the model can understand');
    s.wait(5);
    s.text('"The cat sat on the mat"', {
      x: s.cx,
      y: 155,
      fontSize: 18,
      color: t.warning,
      fontFamily: 'monospace',
    });
    s.wait(5);
    s.arrow(s.cx, 170, s.cx, 195, { color: t.muted });
    s.wait(3);
    const tokens = s.boxRow(['The', 'cat', 'sat', 'on', 'the', 'mat'], {
      y: 205,
      boxWidth: 75,
      boxHeight: 36,
      gap: 10,
      fontSize: 14,
      fontFamily: 'monospace',
    });
    s.wait(5);
    s.connectDown(tokens, { color: t.muted });
    // ... token IDs, vocabulary explanation
  }, { notes: 'BPE tokenization — how text becomes numbers.' })
  .slide('Embeddings', s => {
    s.title('Step 2: Token Embeddings');
    s.subtitle('Each token ID becomes a high-dimensional vector');
    s.wait(5);
    s.boxRow(['3797'], {
      y: 150,
      boxWidth: 80,
      boxHeight: 36,
      colors: ['rgba(167,139,250,0.2)'],
      fontSize: 14,
      fontFamily: 'monospace',
    });
    s.wait(5);
    s.arrow(s.cx, 195, s.cx, 225, { color: t.muted });
    s.wait(3);
    s.matrix([['0.12', '-0.45', '0.78', '...']], {
      x: 300,
      y: 240,
      cellSize: 65,
      color: t.primary,
    });
  }, { notes: 'Embedding lookup — from discrete IDs to continuous vectors.' })
  // ... 9 more slides
  .build();
```

Running the Builder
```sh
npx tsx packages/dsl/examples/build-agentic-loop.ts
```

This outputs a valid ElucimDocument JSON that can be rendered with <DslRenderer>.
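If the fluent chaining above is unfamiliar: a builder like this simply accumulates slide definitions on each `.slide()` call and serializes them when `.build()` is invoked. Here is a minimal, self-contained sketch of that pattern — a simplified stand-in for illustration only, not the actual @elucim/dsl API (the real slide callback receives a much richer drawing context):

```typescript
// Simplified fluent-builder sketch (illustration only, not @elucim/dsl).
type Slide = { title: string; notes?: string };
type PresentationDoc = { title: string; slides: Slide[] };

class PresentationBuilder {
  private slides: Slide[] = [];

  constructor(private title: string) {}

  // Each .slide() call appends one slide and returns `this`,
  // which is what makes the chained style possible.
  slide(title: string, opts: { notes?: string } = {}): this {
    this.slides.push({ title, notes: opts.notes });
    return this;
  }

  // .build() finalizes the chain into a plain, serializable document.
  build(): PresentationDoc {
    return { title: this.title, slides: this.slides };
  }
}

const doc = new PresentationBuilder('From Words to Agents')
  .slide('Tokenization', { notes: 'BPE tokenization' })
  .slide('Embeddings')
  .build();

console.log(doc.slides.length); // 2
```

The payoff of the pattern is that the whole deck reads as one declarative expression, while the output remains plain JSON that a renderer can consume.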
Source
The full builder script is available at packages/dsl/examples/build-agentic-loop.ts.