<?xml version="1.0" encoding="UTF-8" ?>
  <rss version="2.0">
    <channel>
      <title>Andrew Mayne Prompts</title>
      <link>https://andrewmayneprompts.pages.dev/</link>
      <description>Prompting notes and AI systems writing.</description>
      
    <item>
      <title>The Stateless AI Guessing Game: A Prompting Lesson in Memory</title>
      <link>https://andrewmayneprompts.pages.dev/posts/01-stateless-ai-guessing-game/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/01-stateless-ai-guessing-game/</guid>
      <description>Stateless models don’t remember between turns, but you can still play a guessing game by encoding the chosen object into the transcript (for example, as a base-10 encoding or a foreign-language rendering) so it persists across questions.</description>
    </item>
    <item>
      <title>Radio Play Scaffolds: A Better Prompt Pattern for Story Generation</title>
      <link>https://andrewmayneprompts.pages.dev/posts/02-radio-play-scaffold-for-stories/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/02-radio-play-scaffold-for-stories/</guid>
      <description>Use a radio-play scaffold with a Narrator, Characters, and an optional Editor to structure prompts so the model generates longer, more coherent narratives with clear direction.</description>
    </item>
    <item>
      <title>Building AI Choose-Your-Own Adventures with Prompt Scaffolding</title>
      <link>https://andrewmayneprompts.pages.dev/posts/03-ai-choose-your-own-adventures/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/03-ai-choose-your-own-adventures/</guid>
      <description>Create AI-driven choose-your-own-adventure experiences by grounding the model with a map, state-tracking, and short scene summaries to preserve continuity and guide branching.</description>
    </item>
    <item>
      <title>Magic Phrases for Moderation: Prompt Patterns That Improve Safety Calls</title>
      <link>https://andrewmayneprompts.pages.dev/posts/04-magic-phrases-for-moderation/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/04-magic-phrases-for-moderation/</guid>
      <description>Use standardized prompts and rating frameworks (like ESRB) along with explicit guidelines and practical examples to achieve more consistent, scalable AI-driven content moderation.</description>
    </item>
    <item>
      <title>Invoking Experts in Prompts: When Persona Framing Improves Results</title>
      <link>https://andrewmayneprompts.pages.dev/posts/05-invoking-experts-in-prompts/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/05-invoking-experts-in-prompts/</guid>
      <description>Invoking an expert persona in prompts steers the model to adopt a relevant reasoning frame, yielding clearer explanations and better solutions.</description>
    </item>
    <item>
      <title>Prompt Size Reduction Checklist: Cut Tokens Without Losing Quality</title>
      <link>https://andrewmayneprompts.pages.dev/posts/06-prompt-size-reduction-checklist/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/06-prompt-size-reduction-checklist/</guid>
      <description>Use a practical prompt-optimization checklist to reduce token usage by cleaning up examples, cutting verbosity, narrowing labels, and batching multiple classifications in a single API call for faster, cheaper results.</description>
    </item>
    <item>
      <title>Small Model Advantages: When Smaller LLMs Outperform Bigger Ones</title>
      <link>https://andrewmayneprompts.pages.dev/posts/07-small-model-advantages/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/07-small-model-advantages/</guid>
      <description>For large documents, extracting key points, phrases, and entities with a small model is cheaper, faster, and often more reliable than generating a full summary.</description>
    </item>
    <item>
      <title>Early Sentence-to-Email Prompts: A Foundational Transformation Pattern</title>
      <link>https://andrewmayneprompts.pages.dev/posts/08-early-sentence-to-email-prompts/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/08-early-sentence-to-email-prompts/</guid>
      <description>Turn a minimal instruction into a polished email by providing a handful of consistent examples and letting the model complete the pattern, illustrating in-context learning and rapid productization.</description>
    </item>
    <item>
      <title>Prompt Maker: How to Teach Prompt Patterns by Example</title>
      <link>https://andrewmayneprompts.pages.dev/posts/09-prompt-maker-teaching-prompts/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/09-prompt-maker-teaching-prompts/</guid>
      <description>Teach prompts by presenting models with a consistent, example-rich pattern so they infer the task and generate high-quality new prompts.</description>
    </item>
    <item>
      <title>Code Refactoring with GPT-3: Practical Prompt Patterns That Work</title>
      <link>https://andrewmayneprompts.pages.dev/posts/10-code-refactoring-with-gpt-3/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/10-code-refactoring-with-gpt-3/</guid>
      <description>Code-capable language models can refactor code and translate it between languages, often yielding faster, more efficient implementations.</description>
    </item>
    <item>
      <title>Using Small Models for Complex Natural-Language Tasks</title>
      <link>https://andrewmayneprompts.pages.dev/posts/11-using-small-models-for-complex-natural-language-tasks/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/11-using-small-models-for-complex-natural-language-tasks/</guid>
      <description>Thoughtful prompting and lightweight schemas let small language models reliably convert flexible natural-language input into structured data for real-world tasks like scheduling, at a fraction of the typical cost.</description>
    </item>
    <item>
      <title>Discovering Useful Libraries with AI Coding Prompts</title>
      <link>https://andrewmayneprompts.pages.dev/posts/12-discovering-libraries-with-models/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/12-discovering-libraries-with-models/</guid>
      <description>Asking models to solve coding problems surfaces unfamiliar libraries and tools, often revealing ready-made solutions you can reuse in projects.</description>
    </item>
    <item>
      <title>Grounding Prompts with Wikidata and SPARQL</title>
      <link>https://andrewmayneprompts.pages.dev/posts/13-grounding-prompts-with-wikidata/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/13-grounding-prompts-with-wikidata/</guid>
      <description>Ground model outputs in Wikidata by constructing SPARQL queries with correct property and entity IDs, optionally aided by a lightweight query generator or retrieval workflow, to fetch real data and reduce hallucinations.</description>
    </item>
    <item>
      <title>GPT-3 for Regex, Bucket Policies, and Solidity Tasks</title>
      <link>https://andrewmayneprompts.pages.dev/posts/14-gpt-3-regex-bucket-policies-and-solidity/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/14-gpt-3-regex-bucket-policies-and-solidity/</guid>
      <description>GPT-3 can convert tedious, syntax-heavy tasks into actionable tooling by generating regex patterns from plain English, crafting precise bucket policies, and explaining or auditing Solidity contracts.</description>
    </item>
    <item>
      <title>Large Text Pattern Analysis with Prompted Models</title>
      <link>https://andrewmayneprompts.pages.dev/posts/15-large-text-pattern-analysis/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/15-large-text-pattern-analysis/</guid>
      <description>Feed large batches of text into a single context window to extract overall patterns and sentiment across many posts, enabling scalable, non-sequential analysis while monitoring for hallucinations.</description>
    </item>
    <item>
      <title>GPT Demo Set List: Early Prompt Patterns That Still Hold Up</title>
      <link>https://andrewmayneprompts.pages.dev/posts/16-gpt-demo-set-list/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/16-gpt-demo-set-list/</guid>
      <description>Curated prompts from a GPT-3 demo set reveal practical capabilities: token-based world view, autocomplete, structured text, translation, summarization, tone and persona control, multi-voice outputs, and turning unstructured text into structured data.</description>
    </item>
    <item>
      <title>Creating Better Quiz Distractors with LLMs</title>
      <link>https://andrewmayneprompts.pages.dev/posts/17-creating-quiz-distractors/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/17-creating-quiz-distractors/</guid>
      <description>Crafting plausible quiz distractors is hard; a practical workaround is to use a smaller model with a higher temperature to generate incorrect-but-plausible options, though results can still vary.</description>
    </item>
    <item>
      <title>Character-Threaded Summarization for Long Documents</title>
      <link>https://andrewmayneprompts.pages.dev/posts/18-character-threaded-summarization/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/18-character-threaded-summarization/</guid>
      <description>For long texts, build per-entity timelines (characters, locations, key events) and then fuse them into a coherent final summary to preserve reversals and changing perspectives.</description>
    </item>
    <item>
      <title>Separating Instruction from Content: A Core Prompt Reliability Pattern</title>
      <link>https://andrewmayneprompts.pages.dev/posts/19-separating-instruction-from-content/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/19-separating-instruction-from-content/</guid>
      <description>Clearly separate instruction from content with a reliable delimiter (three hashtags often being the strongest) and present structured data (Markdown, XML, or JSON) to reduce ambiguity and improve model performance.</description>
    </item>
    <item>
      <title>GPT-3 Grammar and Style Editing in Practice</title>
      <link>https://andrewmayneprompts.pages.dev/posts/20-gpt-3-grammar-and-style/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/20-gpt-3-grammar-and-style/</guid>
      <description>GPT-3 enables advanced grammar and style edits, tone adjustment, coherence improvements, and format transformations across text without explicit training as a dedicated grammar tool.</description>
    </item>
    <item>
      <title>GPT-3 Emoji Story Demo: Narrative Compression in Tokens</title>
      <link>https://andrewmayneprompts.pages.dev/posts/21-gpt-3-emoji-story-demo/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/21-gpt-3-emoji-story-demo/</guid>
      <description>GPT-3&#39;s emoji storytelling demo shows how models compress meaning into simple token choices that render as visuals, revealing how a narrative can be told with emojis and signaling the move from text to visual tokens.</description>
    </item>
    <item>
      <title>Mini Prompts for Trick Questions and Nonsense Inputs</title>
      <link>https://andrewmayneprompts.pages.dev/posts/22-mini-prompts-for-trick-questions/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/22-mini-prompts-for-trick-questions/</guid>
      <description>A brief upfront prompt tells the model to distinguish serious questions from nonsense or trick questions and to respond appropriately.</description>
    </item>
    <item>
      <title>Hackathons and Model Capabilities: What Fast Experiments Reveal</title>
      <link>https://andrewmayneprompts.pages.dev/posts/23-hackathons-and-model-capabilities/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/23-hackathons-and-model-capabilities/</guid>
      <description>Hackathons and collaborative prompt exploration reveal a model&#39;s wide range of capabilities—from diagrams and spreadsheets to SVGs, STL files, 3D scenes, and mini apps—demonstrating practical ways to surface and showcase AI skills.</description>
    </item>
    <item>
      <title>Context vs Retrieval: A Practical Decision Framework</title>
      <link>https://andrewmayneprompts.pages.dev/posts/24-decision-framework-context-vs-retrieval/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/24-decision-framework-context-vs-retrieval/</guid>
      <description>Use a cost-driven framework to decide whether to put data in the prompt, retrieve it via keywords or embeddings, or fine-tune, guided by a spreadsheet that compares input/output costs and time investment.</description>
    </item>
    <item>
      <title>Big and Small Models in Robotics: A Hybrid Architecture</title>
      <link>https://andrewmayneprompts.pages.dev/posts/25-big-and-small-models-in-robotics/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/25-big-and-small-models-in-robotics/</guid>
      <description>Adopt a layered, multi-model architecture in robotics that pairs large, high-level models for complex reasoning with fast, specialized models for real-time perception and control, with coordinated handoffs to balance latency, capability, and safety.</description>
    </item>
    <item>
      <title>The Missing Bracket: How Tiny Formatting Errors Break Outputs</title>
      <link>https://andrewmayneprompts.pages.dev/posts/26-the-missing-bracket/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/26-the-missing-bracket/</guid>
      <description>Ambiguity from a missing closing bracket in a legal passage caused inconsistent model results, showing that thoroughly reading and correcting the input is essential for reliability.</description>
    </item>
    <item>
      <title>Memory in Conversational AI: Why Context Persistence Matters</title>
      <link>https://andrewmayneprompts.pages.dev/posts/27-memory-in-conversational-ai/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/27-memory-in-conversational-ai/</guid>
      <description>Equipping conversational AI with memory of past interactions creates coherent, context-aware dialogue and improves personalization beyond single-turn prompts.</description>
    </item>
    <item>
      <title>Compute at Scale: Growth, Limits, and AI Demand</title>
      <link>https://andrewmayneprompts.pages.dev/posts/28-compute-scale-and-limits/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/28-compute-scale-and-limits/</guid>
      <description>Compute needs will rise with human ambition, potentially to about 1,000× today’s levels, and will be met through strategic, highway-like infrastructure expansion and smarter use rather than by chasing unlimited physical growth.</description>
    </item>
    <item>
      <title>Scaffolding Long-Form Content: Prompt Patterns for Coherence</title>
      <link>https://andrewmayneprompts.pages.dev/posts/29-scaffolding-long-form-content/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/29-scaffolding-long-form-content/</guid>
      <description>Break long-form writing into small, solvable steps, then progressively expand with more scenes, motivations, and reversals to produce a complete piece.</description>
    </item>
    <item>
      <title>How Small Can AI Be? Practical Limits and Opportunities</title>
      <link>https://andrewmayneprompts.pages.dev/posts/30-how-small-can-ai-be/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/30-how-small-can-ai-be/</guid>
      <description>Smaller, compressed AI models trained on task-specific data can be genuinely useful on ordinary hardware, enabling distributed, cooperative intelligence rather than relying solely on ever-larger models.</description>
    </item>
    <item>
      <title>Localization Techniques for Vision Models in Real Workflows</title>
      <link>https://andrewmayneprompts.pages.dev/posts/31-localization-techniques-for-vision-models/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/31-localization-techniques-for-vision-models/</guid>
      <description>Improve localization in vision models by combining prompting strategies (order of description), grid-based coordinates, and tiled, coarse-to-fine analysis, optionally using segmentation to isolate objects.</description>
    </item>
    <item>
      <title>Outcome-Oriented Prompting: Define Success, Then Generate</title>
      <link>https://andrewmayneprompts.pages.dev/posts/32-outcome-oriented-prompting/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/32-outcome-oriented-prompting/</guid>
      <description>Shift prompting from instructing the start to defining verifiable outcomes and success tests, then use reasoning-enabled models to draft, evaluate, and iterate until the result meets objective criteria.</description>
    </item>
    <item>
      <title>The Prompt Context Flywheel for Continuous Improvement</title>
      <link>https://andrewmayneprompts.pages.dev/posts/33-prompt-context-flywheel/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/33-prompt-context-flywheel/</guid>
      <description>Periodically mine conversations, have an LLM propose updated prompts that reflect current context, and deploy the improved prompt as a living prompt context flywheel—either in production or via shadow testing—to steadily improve responses.</description>
    </item>
    <item>
      <title>The Uneven AI Frontier: Why Capabilities Arrive Jagged</title>
      <link>https://andrewmayneprompts.pages.dev/posts/34-the-uneven-frontier-of-ai/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/34-the-uneven-frontier-of-ai/</guid>
      <description>Capabilities often arrive in messy, frame-by-frame forms rather than polished breakthroughs, so valuable insights come from imperfect experiments that hint at real potential.</description>
    </item>
    <item>
      <title>Small Capabilities, Big Ramifications in Prompt Design</title>
      <link>https://andrewmayneprompts.pages.dev/posts/35-small-capabilities-big-ramifications/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/35-small-capabilities-big-ramifications/</guid>
      <description>Expanding capabilities such as larger context windows and structured representations like arrays unlock significant practical gains, enabling handling of large codebases and the creation of more complex games.</description>
    </item>
    <item>
      <title>Prompts to Reduce Hallucinations: Practical Control Patterns</title>
      <link>https://andrewmayneprompts.pages.dev/posts/36-prompts-to-reduce-hallucinations/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/36-prompts-to-reduce-hallucinations/</guid>
      <description>Teach models to say &#39;I don&#39;t know&#39; when unsure by labeling truthful, false, and unknown statements, reducing hallucinations and boosting accuracy through prompting and fine-tuning.</description>
    </item>
    <item>
      <title>Seeded Creativity for LLMs: Controlled Randomness That Helps</title>
      <link>https://andrewmayneprompts.pages.dev/posts/37-seeded-creativity-for-llms/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/37-seeded-creativity-for-llms/</guid>
      <description>Generate random seeds outside the model, feed them into prompts, and let the LLM produce varied yet coherent output.</description>
    </item>
    <item>
      <title>Temperature in LLMs Explained: What It Actually Controls</title>
      <link>https://andrewmayneprompts.pages.dev/posts/38-temperature-in-llms-explained/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/38-temperature-in-llms-explained/</guid>
      <description>Temperature adds a controlled amount of randomness so the model explores alternative paths; it doesn’t boost creativity, but it helps break repetitive outputs, risks nonsensical results at high values, and is often unnecessary with modern models.</description>
    </item>
    <item>
      <title>Prompt Repetition and Rephrasing: A Reliability Tactic That Lasts</title>
      <link>https://andrewmayneprompts.pages.dev/posts/39-prompt-repetition-and-rephrasing/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/39-prompt-repetition-and-rephrasing/</guid>
      <description>Repeat or rephrase the prompt by placing it at the top and/or bottom to keep the model anchored and improve consistency on long or complex inputs.</description>
    </item>
    <item>
      <title>Model-Assisted Data Preprocessing for Better Fine-Tuning</title>
      <link>https://andrewmayneprompts.pages.dev/posts/40-model-assisted-data-preprocessing/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/40-model-assisted-data-preprocessing/</guid>
      <description>Leverage an auxiliary model to preprocess, standardize, and enrich your training data before training, yielding cleaner, more consistent, and more informative data.</description>
    </item>
    <item>
      <title>Model Identity and Statelessness: Why Explicit Context Matters</title>
      <link>https://andrewmayneprompts.pages.dev/posts/41-model-identity-and-statelessness/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/41-model-identity-and-statelessness/</guid>
      <description>LLMs are stateless and may not know their own identity unless explicitly provided in prompts or post-training guidance, and larger context windows make it easier to supply that metadata upfront.</description>
    </item>
    <item>
      <title>Cross-Temperature Hallucination Testing for Sanity Checks</title>
      <link>https://andrewmayneprompts.pages.dev/posts/42-cross-temperature-hallucination-test/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/42-cross-temperature-hallucination-test/</guid>
      <description>Cross-check AI outputs by comparing responses across temperatures and against smaller models to quickly flag hallucinations and verify with real sources.</description>
    </item>
    <item>
      <title>Style Guides for AI Writing: Getting a Specific Voice</title>
      <link>https://andrewmayneprompts.pages.dev/posts/43-style-guide-for-ai-writing/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/43-style-guide-for-ai-writing/</guid>
      <description>To get AI to write in a specific voice, first have it analyze and articulate the target style, then prompt it to write using that explicit style guide.</description>
    </item>
    <item>
      <title>GPT-4 Vision Refrigerator Demo: A Practical Multimodal Moment</title>
      <link>https://andrewmayneprompts.pages.dev/posts/44-gpt-4-vision-refrigerator-demo/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/44-gpt-4-vision-refrigerator-demo/</guid>
      <description>A fridge photo serves as a simple, human-centered demo to show GPT-4&#39;s multimodal understanding and practical usefulness.</description>
    </item>
    <item>
      <title>Fine-Tuning Methods Guide: SFT, DPO, and Beyond</title>
      <link>https://andrewmayneprompts.pages.dev/posts/45-fine-tuning-methods-guide/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/45-fine-tuning-methods-guide/</guid>
      <description>Fine-tuning is a toolbox of SFT, DPO, reinforcement fine-tuning, and vision fine-tuning; pick the method by your goal (memorization vs generalization, explicit behavior, reasoning with graders, or robust augmentation) rather than relying on defaults.</description>
    </item>
    <item>
      <title>Cost Savings via Fine-Tuning Smaller Models</title>
      <link>https://andrewmayneprompts.pages.dev/posts/46-cost-savings-via-fine-tuning/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/46-cost-savings-via-fine-tuning/</guid>
      <description>Fine-tune a smaller model on high-quality examples derived from a larger model to preserve performance while substantially lowering per-call costs, with potential to step down to even smaller models as you scale the dataset.</description>
    </item>
    <item>
      <title>Bracketing Letters for Wordle: Token-Level Prompt Control</title>
      <link>https://andrewmayneprompts.pages.dev/posts/47-bracketing-letters-for-wordle/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/47-bracketing-letters-for-wordle/</guid>
      <description>Token-level input can derail Wordle-like tasks; using a bracketed, character-level representation lets the model track each letter and constraint reliably.</description>
    </item>
    <item>
      <title>Fine-Tuning Fundamentals: When to Use It and When Not To</title>
      <link>https://andrewmayneprompts.pages.dev/posts/48-fine-tuning-fundamentals/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/48-fine-tuning-fundamentals/</guid>
      <description>Fine-tuning is a final option after prompting and RAG, chosen for memorization of facts or generalization of behavior, with practical steps to test on small models first and format data accordingly (facts in the assistant message; behavior in user/assistant pairs) before scaling.</description>
    </item>
    <item>
      <title>Rethinking best_of in GPT-3: Why It Misleads</title>
      <link>https://andrewmayneprompts.pages.dev/posts/49-rethinking-best-of-in-gpt-3/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/49-rethinking-best-of-in-gpt-3/</guid>
      <description>Relying on best_of to improve LLM accuracy is misguided; the practical fix is to define clear task boundaries with better prompts and use outlier examples to ground interpretation, which can let you use smaller models and single-shot prompts while reducing cost.</description>
    </item>
    <item>
      <title>The Fifth-Grade Summary Moment: Audience-Aware Compression</title>
      <link>https://andrewmayneprompts.pages.dev/posts/50-fifth-grade-summary-moment/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/50-fifth-grade-summary-moment/</guid>
      <description>Generative summarization creates original, audience-tailored explanations rather than mere extracts, so specify the target reader and evaluate quality by usefulness to that audience.</description>
    </item>
    <item>
      <title>Crystallized vs Fluid Intelligence in Language Models</title>
      <link>https://andrewmayneprompts.pages.dev/posts/51-crystallized-vs-fluid-intelligence/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/51-crystallized-vs-fluid-intelligence/</guid>
      <description>Distinguish crystallized intelligence (memory of facts) from fluid intelligence (generalization) in language models and tailor evaluation and training to balance recall with robust reasoning.</description>
    </item>
    <item>
      <title>Vision Models at the Frontier: What Changed and Why</title>
      <link>https://andrewmayneprompts.pages.dev/posts/52-vision-models-the-frontier/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/52-vision-models-the-frontier/</guid>
      <description>Vision and video models are the AI frontier, capable of learning from images and sequences to reason about the real world, with synthetic data and multimodal prompts as practical levers.</description>
    </item>
    <item>
      <title>The Evolution of Prompts: From Completion to Systems</title>
      <link>https://andrewmayneprompts.pages.dev/posts/53-evolution-of-prompts/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/53-evolution-of-prompts/</guid>
      <description>Prompts have evolved from pattern-based completion to outcome-focused instructions, and the practical takeaway is to provide the simplest, clearest description of the finished product and its success criteria so the model can deliver the desired outcome.</description>
    </item>
    <item>
      <title>Embedding-Based Retrieval Strategies That Actually Work</title>
      <link>https://andrewmayneprompts.pages.dev/posts/54-embedding-based-retrieval-strategies/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/54-embedding-based-retrieval-strategies/</guid>
      <description>Embeddings are learned, high-dimensional representations used for retrieval, and the practical takeaway is to standardize and synthesize documents into retrieval-optimized representations rather than embedding raw text.</description>
    </item>
    <item>
      <title>Personal AI Evaluation Methods for Real-World Quality</title>
      <link>https://andrewmayneprompts.pages.dev/posts/55-personal-ai-evaluation-methods/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/55-personal-ai-evaluation-methods/</guid>
      <description>Design and run your own diverse, task-specific evaluation suite to gauge AI model improvements beyond benchmarks, tailoring tests to your real use case and including multi-modal reasoning.</description>
    </item>
    <item>
      <title>Lessons from an Ambitious AI Build</title>
      <link>https://andrewmayneprompts.pages.dev/posts/56-lessons-from-an-ambitious-ai-build/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/56-lessons-from-an-ambitious-ai-build/</guid>
      <description>Tackling a truly ambitious AI build forces intense, hands-on learning in prompt design, tool usage, and system design tradeoffs, yielding practical, scalable know-how for real AI apps.</description>
    </item>
    <item>
      <title>Why I Didn&#39;t Launch AI Channels</title>
      <link>https://andrewmayneprompts.pages.dev/posts/57-why-i-didnt-launch-ai-channels/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/57-why-i-didnt-launch-ai-channels/</guid>
      <description>High costs and slow response times with GPT-3 made AI Channels impractical as a consumer product, so I prioritized learning and joined OpenAI instead of launching.</description>
    </item>
    <item>
      <title>Context as an AI Lever: The Compounding Effect of Longer Windows</title>
      <link>https://andrewmayneprompts.pages.dev/posts/58-context-as-ai-lever/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/58-context-as-ai-lever/</guid>
      <description>Expanding context length unlocks new capabilities, enabling reliable handling of long documents, deeper reasoning, and more practical AI tasks.</description>
    </item>
    <item>
      <title>Tool Makers vs Tool Users: Where Product Value Actually Lives</title>
      <link>https://andrewmayneprompts.pages.dev/posts/59-tool-makers-vs-tool-users/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/59-tool-makers-vs-tool-users/</guid>
      <description>Real AI adoption comes from removing friction and focusing on usability for users, not just from expanding capability.</description>
    </item>
    <item>
      <title>Base Models vs Post-Training: What Each Layer Does</title>
      <link>https://andrewmayneprompts.pages.dev/posts/60-base-models-and-post-training/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/60-base-models-and-post-training/</guid>
      <description>Base models are broad, raw text learners, while post-training adds an instruction-driven layer that greatly increases usefulness but can lead to overfitting, so the takeaway is to balance raw capabilities with careful post-training and prompt design.</description>
    </item>
    <item>
      <title>Magic Words in Prompting: Domain Terms That Steer Behavior</title>
      <link>https://andrewmayneprompts.pages.dev/posts/61-magic-words-in-prompting/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/61-magic-words-in-prompting/</guid>
      <description>Anchor prompts with domain-specific terminology and canonical formats to steer the model toward the desired structure and tone.</description>
    </item>
    <item>
      <title>The Frontier Is Wider Than It Looks</title>
      <link>https://andrewmayneprompts.pages.dev/posts/62-the-frontier-is-wider/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/62-the-frontier-is-wider/</guid>
      <description>The frontier is wider than ever, and the key takeaway is to invest in reasoning-based prompting and a middle-layer classification to guide answers, enabling safer, cheaper, and more reliable AI.</description>
    </item>
    <item>
      <title>Challenging AI Paper Claims with Practical Replication</title>
      <link>https://andrewmayneprompts.pages.dev/posts/63-challenging-ai-paper-claims/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/63-challenging-ai-paper-claims/</guid>
      <description>Bold claims of AI limitations are often training artifacts in a fast-moving field; treat them as testable hypotheses and verify by re-running experiments with varied data formats so the model learns relationships in its outputs, not just the prompts.</description>
    </item>
    <item>
      <title>Understanding Embeddings for Better Prompting and Retrieval</title>
      <link>https://andrewmayneprompts.pages.dev/posts/64-understanding-embeddings/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/64-understanding-embeddings/</guid>
      <description>Embeddings are high-dimensional word representations that encode multiple relational axes, and choosing prompt words that sit in the right regions of that space can steer model behavior more effectively than lengthy instructions.</description>
    </item>
    <item>
      <title>Small Models, Big Knowledge: Prompting Past the First Guess</title>
      <link>https://andrewmayneprompts.pages.dev/posts/65-small-models-big-knowledge/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/65-small-models-big-knowledge/</guid>
      <description>Smaller language models aren’t inherently dumb; their true potential shows when prompts steer retrieval away from easy generalizations, unlocking non-obvious knowledge and cutting costs.</description>
    </item>
    <item>
      <title>How I Became OpenAI&#39;s First Prompt Engineer</title>
      <link>https://andrewmayneprompts.pages.dev/posts/66-becoming-openais-first-prompt-engineer/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/66-becoming-openais-first-prompt-engineer/</guid>
      <description>Dive deep into an AI frontier, rigorously test and document prompts, and openly share useful findings to stand out and land a pioneering role like OpenAI&#39;s first prompt engineer.</description>
    </item>
    <item>
      <title>GPT Tools: Fast Prototypes, Real Constraints, and Shipping</title>
      <link>https://andrewmayneprompts.pages.dev/posts/67-gpt-tools/</link>
      <guid>https://andrewmayneprompts.pages.dev/posts/67-gpt-tools/</guid>
      <description>OpenAI’s rapid progress rested on human coordination and practical tooling—such as a token counter and a four-model comparison report—more than perfect code.</description>
    </item>
    </channel>
  </rss>