Make Flashcards from What You Read: Active Recall Prompts

C022 📝 Notes & Memory · 1 Prompt

Make Flashcards from What You Read: Active Recall Prompts

Generate effective flashcards at multiple cognitive levels: test surface facts, deep comprehension, and real-world application.

5 min read · 4 Cognitive Levels · Guide 2 of 5

PR032 The Retrieval Practice Generator
To test yourself on what you read

I just read this: "[paste passage]"

Create a retrieval practice set:
– 3 questions testing surface understanding
– 2 questions testing deeper comprehension
– 1 question requiring me to apply this idea to a new situation
– 1 question connecting this to other knowledge

Don't give answers yet – I'll try first, then ask.
Build Your Flashcard Collection: 365 articles designed for comprehension – perfect material for generating flashcard-worthy content.
Explore Course →

What Makes Good Flashcards

Most flashcards fail because they test recognition instead of recall. You see the question, something feels familiar, you flip the card and say "yeah, I knew that." But you didn't – you recognized it. Recognizing is not remembering.

Good flashcards force active recall: you must produce the answer from memory, not just recognize it when you see it. This retrieval effort is what actually builds lasting memory. It feels harder because it is harder – and that's the point.

The Retrieval Practice Generator (PR032) creates questions at four cognitive levels, not just one. Surface questions test basic facts. Comprehension questions test whether you understand what it means. Application questions test whether you can use the concept in a new situation. Connection questions test whether you can link it to other knowledge.

This multi-level approach prevents a common trap: you can answer surface questions perfectly while having no real understanding. By mixing question types, you discover gaps you didn't know you had.

The Flashcard Prompt

PR032 asks AI to generate 7 questions at four levels – crucially, without providing answers. This matters. The learning happens when you attempt to answer before checking.

Here's the workflow: paste a passage, get questions, try to answer each one out loud or in writing, then ask for answers and compare. Questions you got wrong or struggled with become your actual flashcards. Questions you answered easily? You don't need flashcards for those – you already know them.

This approach is more efficient than flashcarding everything. Most AI flashcard tools generate cards for every fact in a passage. You end up with 50 cards, 40 of which test things you already know. The retrieval practice approach identifies what you actually need to learn.
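If you track your attempts in a script or spreadsheet, that triage step is a one-liner. A minimal sketch, assuming self-scored results (the questions, answers, and scores below are hypothetical):

```python
# Sketch: keep only the questions you failed as flashcards.
# The attempts list represents hypothetical self-scored retrieval practice.

attempts = [
    ("What does 'active recall' mean?", "Producing the answer from memory", True),
    ("Why is retrieval effort what builds memory?", "Effortful retrieval strengthens the trace", False),
    ("How would spacing apply to exam prep?", "Review at increasing intervals", False),
]

def triage(attempts):
    """Return (question, answer) pairs worth turning into flashcards."""
    return [(q, a) for q, a, got_it in attempts if not got_it]

deck = triage(attempts)
print(len(deck))  # 2 – only the missed questions become cards
```

The design choice mirrors the guide: cards you answered correctly never enter the deck, so the deck stays small and targeted.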

💡 Pro Tip

After attempting answers, tell the AI which questions you struggled with. Ask: "I couldn't answer questions 3 and 5. Create 2-3 more questions on those specific concepts at varying difficulty levels." This targets your weak spots directly.

Question Types Explained

Surface questions test basic facts and definitions. "What is the term for...?" or "According to the passage, what percentage...?" These are the easiest to answer and the least valuable for deep learning – but they verify you absorbed the raw information.

Comprehension questions test whether you understand the meaning. "Why does this phenomenon occur?" or "What is the relationship between X and Y?" These require you to explain, not just recall. If you can't answer in your own words, you don't really understand.

Application questions test transfer to new situations. "How would this principle apply to [different context]?" or "If the conditions changed in this way, what would happen?" These are hard – and that's why they're valuable. They reveal whether you can use the concept, not just describe it.

Connection questions test integration with existing knowledge. "How does this relate to [something you already know]?" or "What does this remind you of from [other field]?" These build your knowledge network, linking new ideas to established ones.

For a deeper review system using these question types over time, see Spaced Recall from Articles (C025).
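The review-over-time idea can be sketched as a tiny Leitner-style scheduler: a correct answer promotes a card to a box with a longer interval, a miss sends it back to daily review. The box intervals below are illustrative assumptions, not values prescribed by the guide:

```python
# Minimal Leitner-style scheduler sketch: boxes map to review intervals in days.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # illustrative spacing

def review(box, correct):
    """Return the card's next box after one review."""
    if correct:
        return min(box + 1, max(INTERVALS))  # promote, cap at the last box
    return 1  # a miss sends the card back to daily review

box = 1
for result in (True, True, False, True):
    box = review(box, result)

print(box, INTERVALS[box])  # box 2: due again in 3 days
```

Note how a single miss resets the card: that is the scheduler's version of the retrieval-effort principle, since struggled-with cards get the most frequent practice.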

📌 The Struggle Test

If you can answer a flashcard question instantly without thinking, the card is too easy and wasting your time. Good flashcards should require a moment of effort – that effort is the learning. Delete easy cards, keep challenging ones.

Export Tips: Getting Cards into Your System

Once you've identified questions worth keeping, you need to get them into a spaced repetition system. Here's how to format for the major apps:

For Anki: Ask AI to format as "Question [tab] Answer" with each card on a new line. Import using File → Import, set the field separator to Tab. Or use the semicolon format, "Question;Answer", and set the separator to semicolon.

For Quizlet: Ask AI to format as "Question – Answer" with each card on a new line. Use Quizlet's import feature and set the term-definition separator to " – " (space-dash-space).

For Notion/Obsidian: Ask AI to format cards as toggle blocks (Notion) or callouts (Obsidian) with question visible and answer hidden. This works for quick review within your existing note system.
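If the AI gives you plain question/answer pairs, converting them into these import formats takes a few lines of scripting. A sketch under the conventions described above (the cards themselves are hypothetical, and the Quizlet separator just has to match whatever you set in its import dialog):

```python
# Sketch: render Q&A pairs in Anki's tab-separated format and in a
# custom-separator format for Quizlet. The cards are hypothetical examples.

cards = [
    ("What is active recall?", "Producing an answer from memory, not just recognizing it"),
    ("What is the struggle test?", "If a card takes no effort to answer, delete it"),
]

def to_anki(cards):
    """One card per line, question and answer separated by a tab."""
    return "\n".join(f"{q}\t{a}" for q, a in cards)

def to_quizlet(cards, sep=" - "):
    """One card per line; sep must match the separator chosen in Quizlet's importer."""
    return "\n".join(f"{q}{sep}{a}" for q, a in cards)

print(to_anki(cards).count("\t"))  # 2 – one tab per card
```

Save either output to a .txt file and point the app's importer at it.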

For cards that need more context than simple Q&A, use Cornell Notes (C021) instead – the cue column serves as built-in self-testing without needing a separate app.

Explore more memory systems in the Notes & Memory pillar or start with the complete AI for Reading hub.

Frequently Asked Questions

What's the difference between recognition and recall?
Recognition is "do I know this when I see it?" Recall is "can I produce this from memory?" Recognition is easy – recall is hard. And the hard effort of recall is what builds lasting memory. That's why good flashcards make you produce the answer, not just confirm it looks familiar.

Why shouldn't I see the answers right away?
The retrieval attempt is the learning. If you see the answer before trying, you've lost the learning opportunity. By attempting first, you strengthen memory traces and discover which concepts you actually need to study. Questions you can already answer don't need flashcards.

How many flashcards should I make?
As few as possible while capturing what matters. The goal isn't to flashcard every fact – it's to flashcard concepts you couldn't recall when tested. After trying the prompt's questions, you might find only 2-3 need actual flashcards. Quality beats quantity.

Which flashcard app should I use?
Anki is the most powerful but has a learning curve. Quizlet is simpler and works well for most purposes. RemNote and Obsidian plugins work if you already use those tools. The best app is the one you'll actually use consistently.
📚 The Ultimate Reading Course

Build Active Recall Into Every Article

365 articles designed for comprehension – perfect material for practicing retrieval-based learning.

Start Learning →
1,098 Practice Questions · 365 Articles with Analysis · 6 Courses + Community


Make a Glossary from Any Article: Definitions, Examples & Misconceptions

C014 🧠 Understand Difficult Text · 2 Prompts


Auto-generate a glossary from any text: key terms, contextual definitions, examples, and common misconceptions to avoid.

5 min read · 2 Prompts · Guide 6 of 6

PR019 The "Words I Should Know" Identifier
Step 1: Find key terms to define

Here's a passage I'm reading: "[paste passage]"

Identify vocabulary I should pay attention to:
– Which words are central to understanding this passage?
– Which words might appear in similar texts on this topic?
– Which words have specialized meanings in this context vs. everyday use?
– Rank them by importance for comprehension.

PR015 The Contextual Word Explorer
Step 2: Define each term in context

In this sentence: "[paste sentence]" the word "[word]" is used. Don't just define it. Help me understand:
– What does it mean in THIS specific context?
– What connotations or tone does it carry here?
– What other words could the author have used, and why this one?
– How does this word choice affect meaning or tone?

Choose the Right Terms to Define

Not every unfamiliar word deserves glossary treatment. Some words are peripheral – you can understand the passage without them. Others are central – miss them and the whole argument collapses. PR019 helps you sort the difference.

The "Words I Should Know" Identifier (PR019) ranks vocabulary by importance for comprehension. It identifies which words are central to understanding, which appear in similar texts on the topic, and which have specialized meanings in this context versus everyday use.

This matters because time is limited. If an article has 20 unfamiliar terms, learning all of them equally wastes effort. Focus on the 5-7 that matter most. Those are the ones that unlock understanding – and the ones you'll encounter again.

To create a glossary from an article efficiently, start with PR019 to identify your targets before diving into definitions.

Define Each Term in Context

Dictionary definitions fail for the same reason translations fail: they give general meanings, not specific ones. The word "culture" in a microbiology paper means something different than in an anthropology paper. Context determines meaning.

The Contextual Word Explorer (PR015) goes beyond definitions. For each term, it reveals what the word means in THIS specific context, what connotations or tone it carries, what alternative words the author could have used, and how this word choice affects meaning.

This depth matters for comprehension. When you understand not just what a word means but why the author chose it, you understand the passage at a deeper level. You see the author's choices, not just the content.

💡 Pro Tip

For each term, add a "misconception to avoid" note. Example: "correlation" – misconception: correlation implies causation. These notes prevent common errors when you apply the term later.

Add Examples That Cement Understanding

Definitions tell you what a word means. Examples show you what it looks like. After getting contextual definitions, follow up with: "Give me a concrete example of [term] from a different domain."

Cross-domain examples are especially powerful. If you're learning about "arbitrage" in finance, an example from everyday life (buying cheap concert tickets and reselling them) makes the concept portable. You understand the principle, not just the application.

For deep vocabulary work, see the Vocabulary-in-Context Prompt Pack (C006), which includes collocations, tone analysis, and usage practice beyond what the glossary workflow covers.

Flag Common Misconceptions

Many technical terms carry baggage – common misunderstandings that persist even after you've read the definition. "Theory" in science doesn't mean "guess." "Significant" in statistics doesn't mean "important." "Organic" in chemistry has nothing to do with farming.

For each glossary term, ask: "What do people commonly get wrong about this term?" Then note the misconception explicitly. This preemptive correction saves you from errors that feel correct but aren't.

The Jargon Translator (C010) handles single-term misconceptions well. For systematic glossary work, add the misconception step after defining each term.

📌 The Glossary Workflow

1. Use PR019 to identify which terms matter most.
2. Use PR015 to define each term in context (not dictionary style).
3. Add a concrete example from a different domain.
4. Note the common misconception to avoid.
5. Test yourself by defining terms from memory after reading.
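If you keep your glossary in a notes app or a script, each step of the workflow maps onto one field of a small record. A sketch, using a hypothetical entry for the statistics sense of "significant" (the field names and example text are illustrative assumptions):

```python
from dataclasses import dataclass

# Sketch: one record per glossary term, mirroring steps 2-4 of the workflow.

@dataclass
class GlossaryEntry:
    term: str
    contextual_definition: str  # what it means in THIS text, not the dictionary sense
    cross_domain_example: str   # a concrete example from a different field
    misconception: str          # the common error to flag

entry = GlossaryEntry(
    term="significant",
    contextual_definition="In statistics: unlikely to occur by chance under the null hypothesis",
    cross_domain_example="An A/B test showing a tiny but consistent lift can still be significant",
    misconception="Statistically significant does not mean practically important",
)

print(entry.misconception)
```

Keeping the misconception as a required field forces step 4 on every term, which is the point of the workflow.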

Quick Review: Test Your Glossary

A glossary you never review is a glossary that doesn't help. After building your glossary, close the article and try to define each term from memory. Can you explain what it means in context? Can you give an example? Can you name the misconception to avoid?

If you can't, you've collected definitions – but you haven't learned them. Go back to that section of the article. Re-read it with the definition fresh in mind. The glossary should support comprehension, not replace it.

For long-term retention, revisit the glossary a day later using spaced repetition. The Understand Difficult Text pillar has more tools for building lasting comprehension. Return to the AI for Reading hub for the complete prompt ecosystem.

Frequently Asked Questions

How is this different from using a dictionary?
Dictionaries give general definitions. This workflow gives you contextual meaning – what a word means in THIS passage, why the author chose it, and what connotations it carries. Words often have specialized meanings in specific fields that dictionaries miss.

Should I build a glossary for everything I read?
No – only for dense technical content, unfamiliar topics, or texts you'll need to reference later. For casual reading, use the Jargon Translator (C010) on specific terms instead of building a full glossary.

How many terms should a glossary include?
PR019 ranks terms by importance for comprehension. Focus on the top 5-10 terms that are central to understanding. More than 15 terms suggests you might need background knowledge first – try the Prerequisites Prompt (C011).

How do I review a glossary?
After reading, close the article and try to define each term from memory. If you can't, re-read that section. For long-term retention, revisit the glossary a day later and test yourself again using spaced repetition.


Literary Passage Deep Dive: Close Reading with AI

C037 📋 Prompts Library · Reading Skills


Close reading for literature: surface meaning, literary devices, theme development, and reader effect analysis.

5 min read · 1 Prompt · Genre Guide

PR044 Literary Passage Deep Dive
For fiction, literary nonfiction, poetry

Here's a passage from literature: "[paste passage]"

Guide my close reading:
– What's happening at the surface level?
– What literary devices are at work?
– What themes or ideas are being developed?
– What's the effect on the reader, and how is it achieved?
– What might I miss on a casual read?

What Close Reading Reveals

Surface reading tells you what happens. Close reading tells you how and why. When you close read, every word becomes a choice the author made – and you can ask why that choice and not another. The literary analysis prompt PR044 guides this process systematically.

Start with surface meaning: what's literally happening? Character actions, setting details, dialogue. Don't interpret yet – just make sure you understand. Many reading errors come from rushing to interpretation before establishing the basic facts of the passage.

Then move to literary devices. Not to catalog them like trophies, but to ask: what work is this device doing? That metaphor isn't decoration – it's connecting two ideas. That repetition isn't an accident – it's creating emphasis. The prompt helps you identify devices, but your job is to connect them to meaning.

Literary Devices: The Author's Toolkit

Literary devices are techniques writers use to create meaning beyond the literal. The close reading prompt helps you identify which ones are active in your passage. Some common devices to watch for:

Imagery – sensory details that create pictures in the mind. Not just visual; listen for sounds, textures, smells, tastes.
Metaphor and simile – comparisons that illuminate by connecting unlike things.
Symbolism – objects or images that carry meaning beyond themselves.
Irony – gaps between appearance and reality, between what's said and what's meant.
Foreshadowing – hints planted early that bloom later.

Beyond these, pay attention to diction – why "trudged" instead of "walked"? Why "crimson" instead of "red"? Word choice reveals tone and attitude. Sentence structure matters too: short sentences create urgency; long, winding sentences can mimic confusion or drowning. For more on detecting author attitude, see Read Between the Lines.

💡 The "Why This?" Question

The core move of close reading is simple: ask "Why this?" Why this word and not another? Why this image here? Why does the sentence break at this point? Why is this character named that? Every "why" opens a door. AI can generate possibilities; you decide which matter most.

Theme Analysis: What's Really Being Said

Themes are the ideas a work explores – not the plot, but what the plot is about. A story might follow a soldier returning from war (plot), while exploring themes of trauma, homecoming, and the impossibility of returning to who you were. Themes are usually abstract: love, death, identity, power, freedom, memory, belonging.

In close reading, you connect devices to themes. That recurring water imagery? It connects to emotional overwhelm. That metaphor comparing the house to a prison? It advances the theme of entrapment. Patterns matter: not just one metaphor, but which metaphors cluster together, how they develop, where they peak and resolve.

PR044 asks about themes being "developed" because themes aren't usually stated – they emerge through accumulation. By the end of a work, the theme should feel inevitable, built from a hundred deliberate choices. For strategies to maintain this kind of engagement through longer texts, see Active Reading Prompts.

📌 Passage Selection

You can't close read an entire novel at this depth. Select key passages: openings (which establish tone and themes), climaxes (where tensions peak), pivotal scenes (where characters change), and endings (which resolve or deliberately don't). These are where authors concentrate their craft. PR044 works best on 500-1000 word passages.
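Since the prompt works best on passages of that size, a longer text can be pre-split by word count before prompting. A rough sketch – the 500-1000 word bound comes from the guide, while splitting on paragraph breaks is my own heuristic:

```python
# Sketch: split a long text into passages sized for close reading.
# Packs whole paragraphs together until a target word count is reached.

def split_passages(text, target_words=800):
    passages, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > target_words:
            passages.append("\n\n".join(current))  # flush the full passage
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        passages.append("\n\n".join(current))
    return passages

sample = "\n\n".join("word " * 300 for _ in range(5))  # five 300-word paragraphs
chunks = split_passages(sample)
print([len(c.split()) for c in chunks])  # [600, 600, 300]
```

Because paragraphs are never cut mid-way, each chunk stays a coherent unit you can paste into PR044 as-is.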

Reader Effect: How and Why It Works

Literature isn't just about meaning – it's about experience. Close reading examines what the passage makes you feel and how that effect is achieved. Tension, sympathy, unease, hope, dread, recognition – these aren't accidents. They're engineered through specific techniques.

The prompt asks: "What's the effect on the reader, and how is it achieved?" This separates what you feel from why you feel it. You might feel dread, but why? The clipped sentences? The isolated protagonist? The detail that doesn't quite fit? Once you understand the mechanism, you understand the craft – and can appreciate it more fully, or recognize when it's being used to manipulate you.

The final question – "What might I miss on a casual read?" – surfaces the buried elements: structural echoes between beginning and end, intertextual references to other works, unreliable narration you didn't catch, symbolic patterns that only emerge on reflection. This is where AI excels: pattern recognition across the passage, connections to literary tradition, context you might lack.

For the complete collection of genre-specific reading approaches, explore the AI Reading Prompts Library. For the broader framework connecting all reading skills, see AI for Reading.

Frequently Asked Questions

What's the difference between surface reading and close reading?
Surface reading focuses on what happens: plot, events, who said what. Close reading asks why and how: Why did the author choose that word? How does that image connect to the theme? What effect does this structure create? Close reading treats every choice as deliberate and meaningful. It's slower but reveals layers that casual reading misses.

What is AI actually good at in close reading?
AI excels at pattern recognition and catalog tasks: listing every metaphor, tracking recurring images, noting structural parallels. It's also good at providing context (historical, biographical, genre conventions) and generating interpretive possibilities. It's less good at deciding which interpretation matters most – that's where your judgment comes in. Use AI to surface options, then select what resonates.

How long should a passage be?
Close reading works best on short, dense passages: a paragraph, a scene, a poem, a page. Longer passages require too much compression – you'll get surface-level analysis spread thin. For novels or long works, select key passages (openings, climaxes, pivotal scenes, endings) and close read those. PR044 handles about 500-1000 words well.

Which literary devices should I look for?
Common devices include: imagery (sensory details), metaphor and simile (comparisons), symbolism (objects/images carrying meaning), irony (gap between appearance and reality), foreshadowing (hints at future events), diction (word choice for effect), syntax (sentence structure), point of view (who narrates and what they know), and tone (attitude conveyed). PR044 will flag the specific devices present in your passage.

Read Deeper. See More.

Pick a passage from whatever you're reading. Run PR044. Notice what layers emerge beneath the surface – the devices, the themes, the craft. That's where literature lives.


Limitations & Assumptions Prompt: What the Paper Admits

C063 🔬 Research Papers · 1 Prompt


Every study has boundaries. This prompt helps you find the limitations authors acknowledge – and the ones they don't – so you know exactly how far to trust the findings.

6 min read · 1 Prompt · Guide 3 of 6

PR040 Academic Paper Navigator
Use before reading a research paper

I'm reading an academic paper. Here's the abstract: "[paste abstract]"

Before I read the full paper, help me:
– Identify the research question and why it matters
– Understand what to pay attention to in each section (intro, methods, results, discussion)
– Flag jargon I should look up first
– Tell me what questions to keep in mind while reading

The Two Types of Weaknesses in Any Paper

Every research paper has boundaries. The question is whether the authors acknowledge them – and whether you can spot the ones they don't.

Stated limitations are boundaries the researchers acknowledge directly. Look in the discussion section for phrases like "one limitation of this study..." or "future research should address..." These show intellectual honesty.

Unstated assumptions are things the researchers take for granted without proving. These are often more important because they suggest blind spots. Did the authors assume their sample represents the population? That participants answered honestly? That the measurement tool is valid?

How to Use the Prompt for Limitations Analysis

Start with PR040 to map the paper's structure. Then add a targeted follow-up: "Now focus on limitations and assumptions. What does this paper explicitly acknowledge as limitations? What assumptions does it make without proving them?"

AI will separate stated from unstated weaknesses, giving you a clearer picture of how much to trust the findings.

⚡ Pro Tip

After identifying limitations, ask: "For each limitation, does it weaken the main finding, narrow its applicability, or invalidate it entirely?" Not all limitations are equal.

Common Categories of Limitations

Sample limitations: Too small, not representative, convenience sampling, attrition bias

Measurement limitations: Self-report bias, instrument validity, operationalization choices

Design limitations: No control group, correlational (not causal), short timeframe

Scope limitations: Single context, narrow population, artificial setting

💡 Example: Stated vs. Unstated

Stated: "Our sample of university students may not generalize to the broader adult population."

Unstated: The study assumes participants didn't change their behavior because they knew they were being observed (the Hawthorne effect). This assumption is never mentioned.

How to Evaluate Impact on Conclusions

Once you've identified limitations, assess their severity:

Fatal flaws invalidate the core finding. If the control group isn't comparable to the treatment group, the comparison is meaningless.

Scope restrictions narrow the applicability. The finding might be true, but only for specific populations or contexts.

Minor caveats are worth noting but don't threaten the main conclusion. Every study has these.

⚠ Important Limitation

AI can identify potential weaknesses, but it can't judge whether a limitation is fatal without domain expertise. Use AI's output as a checklist for your own evaluation.

Build Your Critical Reading Stack

The Limitations & Assumptions Prompt works best with:

Methods Decoder – Understand what the study did before evaluating its weaknesses

Reproducibility Checklist – Assess transparency and documentation quality

Paper Map Prompt – Get the full picture before zooming in on limitations

Frequently Asked Questions

Why do papers include a limitations section?
Limitations sections exist because no study is perfect. Responsible researchers acknowledge boundaries – sample size constraints, methodological trade-offs, scope restrictions. A well-written limitations section actually increases a paper's credibility.

What's the difference between a limitation and an assumption?
Limitations are boundaries the researchers acknowledge – what the study didn't or couldn't do. Assumptions are things taken for granted without proving – like assuming survey respondents answered honestly. Limitations are usually stated; assumptions often need to be inferred.

Does a long list of limitations mean the paper is weak?
Not necessarily. Every study has limitations – that's the nature of research. What matters is whether the limitations undermine core findings. Judge papers by whether conclusions are proportional to evidence, not by the number of limitations listed.

How do I find limitations the authors don't mention?
Look at three areas: the methodology (what alternative approaches could have been used?), the sample (who was excluded?), and the scope (what questions remain unanswered?). AI can help surface these by comparing the study's approach to standard research practices.


Key Takeaways vs Key Quotes: Extract Both

C019 📝 Summarize Articles


Two outputs in one: main takeaways in your words plus the exact quotes worth saving, with clear separation.

5 min read · 2 Prompts · Guide 5 of 6

PR057 The Quote Extractor
To capture key quotes with context

Here's an article: "[paste article]"

Extract the most valuable quotes:
– Identify 3-5 quotes worth saving (exact text)
– For each quote, explain:
  – Why this quote matters
  – What it captures that a paraphrase would lose
  – How I might use this quote
– Also give me the key takeaways that DON'T need direct quoting

PR030 The Layered Summary
When you need different summary depths

Here's a text I want to remember: "[paste text]"

Create three versions:
– Tweet version (under 280 characters): The absolute core
– Paragraph version: Core idea + key supporting points
– Teaching version: How I would explain this to someone unfamiliar with the topic

Takeaways vs Quotes: Why You Need Both

Most readers do one of two things: they highlight everything (creating a sea of yellow with no signal), or they paraphrase everything (losing the author's exact words when those words matter). Neither approach serves you well.

To extract key takeaways from an article effectively, you need to separate two distinct outputs: ideas you can restate in your own words, and quotes you should preserve exactly as written. The difference isn't about importance – it's about what gets lost in translation.

Takeaways are concepts you understand well enough to explain differently. They become part of your mental model. Quotes are language so precise, memorable, or authoritative that paraphrasing would weaken them. They stay in the author's voice because that voice adds something.

The Quote Extractor prompt (PR057) forces this separation. It asks AI to identify quotes worth saving, explain why each one matters, and separately deliver the takeaways that don't need direct quoting. You get both outputs, clearly distinguished.

The Two-Prompt Workflow

Start with the Quote Extractor (PR057) when you suspect an article has quotable material – opinion pieces, thought leadership, research with memorable findings. The prompt asks for 3-5 quotes with context for each.

For each quote, you get three things: why it matters, what a paraphrase would lose, and how you might use it. This context transforms random highlighting into purposeful collection. You're not just saving words – you're building a library of evidence, examples, and language you can deploy later.

The prompt also delivers key takeaways that don't need direct quoting. These are the ideas you should internalize and be able to explain in your own voice. They're no less important than the quotes – they're just better served by paraphrase.

If you need additional summary formats after extracting quotes, follow up with the Layered Summary (C015). Use the quotes for citation and evidence; use the summaries for comprehension and memory.

💡 Pro Tip

Before using the Quote Extractor, ask yourself: "Will I ever need to cite this source?" If yes, extract quotes. If you're reading purely for learning and won't reference the text again, skip quotes and use the Layered Summary instead.

Scoring Your Output: What Makes a Good Quote

Not all quotes are equal. Here's how to evaluate whether a quote is worth keeping:

Memorable phrasing: The author said it in a way that sticks. "Move fast and break things" is a quote; "iterate quickly and accept failures" is a paraphrase. The first one is worth saving; the second you can reconstruct anytime.

Technical precision: Definitions, formulas, or specific claims where exact wording matters. "Inflation is always and everywhere a monetary phenomenon" (Friedman) makes a specific claim that paraphrase would dilute.

Authorial authority: When who said it matters as much as what they said. A quote from the CEO about company strategy carries different weight than your summary of their strategy.

Evidence and data: Specific numbers, statistics, or findings you might cite. "Revenue grew 47% YoY" is worth preserving exactly; "revenue grew significantly" loses the precision.

If a quote doesn't hit at least one of these criteria, it's probably a takeaway in disguise. Paraphrase it and move on.

📌 The Quote Test

Ask: "Would a paraphrase lose something important?" If yes, save the quote. If you can say it equally well in your own words, paraphrase. This simple test prevents over-quoting (cluttered notes) and under-quoting (lost gems).

Example: Quotes + Takeaways in Action

Say you read an article about remote work productivity. Here's what the output might look like:

QUOTES WORTH SAVING:

"Productivity isn't about hours logged – it's about clarity achieved." Why it matters: Reframes the entire productivity debate. How to use: Opening line for a presentation on async work.

"Teams that document decisions outperform teams that discuss decisions by 34%." Why it matters: Specific, citable statistic. How to use: Evidence for a documentation culture proposal.

TAKEAWAYS (no quote needed):

Remote work success depends more on communication norms than on tools. Async communication reduces interruptions but requires intentional social connection. Managers should measure outcomes, not activity.

Notice the separation: quotes carry language or data you’d lose by paraphrasing; takeaways carry ideas you can express yourself. Both matter. Both deserve their own treatment.

For building a more sophisticated note-taking system with these extractions, see Highlight Smarter (C026) or explore the full Summarize Articles pillar.

Frequently Asked Questions

When should I save exact quotes instead of paraphrasing?
Save exact quotes when the specific wording matters β€” memorable phrasing, technical precision, or when you’ll cite the source. Paraphrase when you need the idea but not the exact words. The Quote Extractor prompt helps you identify which is which, so you’re not over-quoting (cluttered notes) or under-quoting (losing powerful language).

How many quotes should I extract from an article?
Three to five quotes is usually optimal for a standard article (1,000-3,000 words). More than five suggests you’re highlighting too much β€” if everything is important, nothing is. Fewer than three might mean you’re missing genuinely quotable insights. The prompt asks for this range specifically to force prioritization.

What makes a quote worth saving?
A quote is worth saving when paraphrasing would lose something important: memorable phrasing that sticks, precise technical language, a surprising insight that needs the author’s exact framing, or evidence you might cite later. If you can say it equally well in your own words, paraphrase instead.

How should I store the quotes I extract?
Store quotes with context: the source, why it matters, and how you might use it. The prompt provides this context automatically. For note-taking systems like Zettelkasten (C023), quotes become atomic notes with links. For research, they become evidence with citations. For writing, they become supporting material you can weave into your arguments.

One More Summary Guide Awaits

You’ve mastered quote extraction. Next, learn to summarize for different purposes: learning, deciding, or sharing.

Summarize Articles Pillar

How to Simplify Complex Text with AI: 3-Step Workflow

C009 🧠 Understand Difficult Text 2 Prompts

A 3-step workflow to decode any complex text: identify thesis, paraphrase systematically, and generate clarifying examples.

6 min read 2 Prompts 3-Step Workflow
PR006 The Confusion Unpacker
Step 1: When a passage confuses you
I’m reading a passage and this part confuses me: “[paste confusing section]” Don’t simplify or summarize yet. Instead: – Identify what makes this difficult (complex syntax, assumed knowledge, abstract concepts, unfamiliar references?) – Break down the logic step by step – Explain any implicit assumptions the author is making – Only then restate the core idea in plain language
PR009 The Dense Passage Decoder
Step 2: For information-dense text
This passage is information-dense: “[paste passage]” Create a layered explanation: – Layer 1: The single core point in one sentence – Layer 2: The 3-4 key supporting elements – Layer 3: The nuances, qualifications, and exceptions – Layer 4: What’s deliberately left unsaid or simplified by the author

Step 1: Identify What Makes the Text Difficult

Most people approach complex text by immediately asking AI to “simplify this” or “explain in simple terms.” That’s backwards. You skip the most valuable step: understanding why the text is hard in the first place.

The Confusion Unpacker prompt (PR006) starts by diagnosing the difficulty. Is it complex syntax with nested clauses? Assumed background knowledge you’re missing? Abstract concepts that need grounding? Unfamiliar references or jargon?

This matters because different sources of difficulty require different solutions. Complex syntax needs untangling. Missing background knowledge needs filling. Abstract concepts need examples. Jargon needs the Jargon Translator. If you skip diagnosis, you get generic simplification that often loses important nuance.

When you simplify complex text with AI using a structured workflow, you preserve what matters. The thesis stays intact. The logic remains visible. You understand not just what the text says but why it was hard to understand.

Step 2: Paraphrase Systematically with Layers

After diagnosing the difficulty, use the Dense Passage Decoder prompt (PR009) to create a layered explanation. This isn’t just simplification β€” it’s systematic unpacking from simple to nuanced.

Layer 1 gives you the single core point in one sentence. This is the thesis, the main claim, the key takeaway. If you can’t state this clearly, you haven’t understood the passage.

Layer 2 adds the 3-4 supporting elements: the main reasons, evidence, or sub-points that hold up the core claim. These are the structural pillars.

Layer 3 captures nuances, qualifications, and exceptions. This is where complexity lives β€” the “but,” “however,” and “except when” that make ideas true rather than oversimplified.

Layer 4 reveals what the author deliberately left unsaid or simplified. This is expert-level reading: recognizing the gaps and assumptions baked into any explanation.

πŸ’‘ Pro Tip

Don’t skip Layer 4. Understanding what’s left out is often more valuable than what’s included. Authors make choices about what to simplify β€” knowing those choices makes you a critical reader.

Step 3: Generate Examples and Analogies

Abstract ideas become concrete through examples. After getting a layered explanation, follow up with: “Give me a concrete example of this concept” or “Create an analogy using [something I’m familiar with].”

This step bridges the gap between understanding words and understanding ideas. You might correctly paraphrase a passage about “opportunity cost” but not truly grasp it until you see an example about choosing between studying and socializing.

For more on this technique, see the ELI5 to Expert prompt which generates explanations at multiple levels, or the Analogy Builder for domain-specific comparisons.

πŸ“Œ The 3-Step Workflow

1. Diagnose (PR006) β€” What makes this hard? Don’t simplify yet.
2. Layer (PR009) β€” Core point β†’ Supporting elements β†’ Nuances β†’ What’s left out.
3. Ground β€” Request concrete examples or analogies.

This sequence preserves nuance while building genuine understanding.
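The sequence can be sketched as a chain of prompt templates. Function names are illustrative; the wording condenses the PR006 and PR009 boxes above:

```python
# Sketch of the 3-step workflow as a prompt-template chain.
# Function names are mine; prompt wording is condensed from PR006/PR009.

def diagnose_prompt(passage):
    """Step 1 (PR006): diagnose why the text is hard before simplifying."""
    return (
        f"I'm reading a passage and this part confuses me: \"{passage}\" "
        "Don't simplify or summarize yet. Instead: identify what makes this "
        "difficult, break down the logic step by step, explain any implicit "
        "assumptions, and only then restate the core idea in plain language."
    )

def layer_prompt(passage):
    """Step 2 (PR009): layered explanation, core point down to what's unsaid."""
    return (
        f"This passage is information-dense: \"{passage}\" Create a layered "
        "explanation: Layer 1: the single core point in one sentence. "
        "Layer 2: the 3-4 key supporting elements. Layer 3: the nuances, "
        "qualifications, and exceptions. Layer 4: what's deliberately left "
        "unsaid or simplified by the author."
    )

def ground_prompt(concept, familiar_domain):
    """Step 3: ground the abstract idea in something familiar."""
    return (
        f"Give me a concrete example of {concept}, "
        f"as an analogy using {familiar_domain}."
    )
```

Running the three functions in order on the same passage reproduces the diagnose-layer-ground sequence without retyping the prompts each time.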

When to Use Each Prompt

Use PR006 (Confusion Unpacker) when you’re genuinely confused β€” when you’ve read a passage twice and still can’t figure out what it means. The prompt helps you figure out what you’re confused about, which is half the battle.

Use PR009 (Dense Passage Decoder) when text is information-dense but not necessarily confusing. Academic papers, technical documentation, policy briefs β€” content that packs a lot of meaning into few words. The layered structure extracts the hierarchy of ideas.

For straightforward jargon translation without the full workflow, the Jargon Translator (C010) handles technical terminology directly. The full Understand Difficult Text pillar has prompts for every level of complexity.

Common Mistakes to Avoid

Mistake 1: Asking for simplification without diagnosis. “Simplify this” gives you generic output. “What makes this difficult, then simplify” gives you targeted help.

Mistake 2: Stopping at Layer 1. The core point is necessary but not sufficient. Nuances (Layer 3) and gaps (Layer 4) are where real understanding develops.

Mistake 3: Not testing your understanding. After getting an explanation, try restating it in your own words without looking at the AI output. If you can’t, you’ve only read the simplification β€” you haven’t learned from it.

Return to the AI for Reading hub for the complete prompt ecosystem, or explore the Understand Difficult Text pillar for more comprehension tools.

Frequently Asked Questions

Why not just ask AI to “simplify this”?
Generic simplification often loses important nuance. A structured workflow preserves what matters: the thesis stays intact, the logic remains visible, and you understand not just what the text says but why it’s hard to understand in the first place.

What’s the difference between PR006 and PR009?
PR006 (Confusion Unpacker) diagnoses WHY something is difficult before simplifying. PR009 (Dense Passage Decoder) creates layered explanations from simple to complex. Use PR006 when you’re confused; use PR009 when text is information-dense but not necessarily confusing.

How do I know whether I actually understood the text?
Try explaining it to yourself without looking at the AI output. If you can restate the core idea, the supporting points, and one nuance or exception, you understand it. If you can’t, you’ve only read the simplification β€” not learned from it.

When should I add Step 3 (examples and analogies)?
Always add it for abstract concepts, theoretical frameworks, or anything you need to remember and apply later. Skip it for straightforward technical content where you just need to decode jargon β€” the Jargon Translator prompt handles that better.

5 More Comprehension Guides Await

You’ve mastered simplification. Next, explore jargon translation, analogies, and sentence-level analysis.

Understand Difficult Text Pillar

Highlight Smarter: What to Highlight and Why

C026 🧠 Notes & Memory 2 Prompts

Stop highlighting everything: AI prompts that identify what’s actually worth marking and why.

5 min read Selective Marking Guide 6 of 6
PR019 The Key Term Identifier
Find what’s worth highlighting
Here’s a passage I’m reading: “[paste passage]” Identify vocabulary I should pay attention to: – Which words are central to understanding this passage? – Which words might appear in similar texts on this topic? – Which words have specialized meanings in this context vs. everyday use? – Rank them by importance for comprehension.
PR035 The Highlight Validator
Check if your highlights capture the core
Here’s a passage: “[paste passage]” Here are the parts I highlighted: “[paste your highlights]” Evaluate my highlighting: – Did I capture the core argument or main claim? – Did I miss any crucial supporting evidence? – Did I highlight too much context/setup? – What should I add or remove from my highlights?

What to Highlight: The Four Categories

Most people highlight too much. They mark anything that seems interesting, ending up with pages of yellow that offer no more guidance than unmarked text. Research consistently shows that heavy highlighters remember no better than non-highlighters. The problem isn’t highlighting itself β€” it’s indiscriminate highlighting.

What to highlight while reading comes down to four categories:

Core arguments: The main claims the author is making. Not the setup, not the examples β€” the actual thesis and key supporting points. If you had to explain what this text argues in two sentences, what would you quote?

Surprising facts: Information that challenged your existing beliefs or taught you something you didn’t know. If nothing surprises you, either you already knew this material or you weren’t reading actively.

Key term definitions: Specialized vocabulary or familiar words used in specialized ways. These are the words you’d need to understand to discuss this topic with an expert.

Passages you’ll revisit: Quotes you might use in your own writing, reference points for future projects, or ideas you want to develop further. The test: will you actually come back to this?

Everything else β€” context, transitions, examples that illustrate without adding new information β€” can stay unmarked.
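One way to calibrate yourself before reaching for the validator is a crude coverage check: if your highlights cover a large share of the passage, the selection isn't doing any work. A hypothetical sketch β€” the 20% threshold is an illustrative default, not a number from the research mentioned above:

```python
# Crude calibration check (not from the guide): measure how much of a
# passage your highlights cover. Heavy coverage suggests indiscriminate
# marking. The 0.20 threshold is an illustrative default.

def highlight_ratio(text, highlights):
    """Fraction of the passage covered by highlighted spans."""
    return sum(len(h) for h in highlights) / max(len(text), 1)

def over_highlighting(text, highlights, threshold=0.20):
    return highlight_ratio(text, highlights) > threshold

passage = "x" * 1000
print(over_highlighting(passage, ["x" * 400]))  # True: 40% marked
print(over_highlighting(passage, ["x" * 100]))  # False: 10% marked
```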

The Prompts: Before and After

PR019: Identify Before Marking

The Key Term Identifier helps you spot what’s actually central before you start marking. Ask AI to identify which words are central to understanding, which might appear in similar texts, and which have specialized meanings. This creates a mental filter: when you see these terms, they’re candidates for highlighting. When you don’t, they’re probably not.

The ranking by importance is particularly useful. Not all key terms are equally important β€” some are foundational concepts, others are supporting vocabulary. Highlight the foundational ones first.

PR035: Validate After Marking

The Highlight Validator checks your work. Share the original passage and your highlights, and ask AI to evaluate: Did you capture the core? Did you miss crucial evidence? Did you highlight too much setup?

This feedback loop trains your judgment. After a few rounds, you’ll naturally start making better selections without needing the validator.

πŸ’‘ Pro Tip

Read a section completely before highlighting anything. Context changes what seems important. What looks crucial in paragraph one might be just setup for the real insight in paragraph five. First-pass highlighting often marks the wrong things.

Examples: Good vs Bad Highlighting

Bad highlighting: Marking entire paragraphs. Highlighting introductory phrases like “Research shows that…” or “It’s important to note that…” Marking anything that sounds impressive without checking if it’s actually the core claim.

Good highlighting: Marking just the specific claim within a longer paragraph. Highlighting the number or finding, not the framing around it. Marking the term being defined, not the full definition (you can reconstruct context from the term).

The test: if you looked at only your highlights, could you reconstruct the main argument? If yes, you highlighted well. If you’d need the surrounding text to make sense of them, you highlighted too narrowly. If the highlights alone feel overwhelming, you highlighted too much.

The Key Takeaways vs Key Quotes prompt (C019) can help distinguish between passages worth paraphrasing (takeaways) and passages worth preserving verbatim (quotes for highlighting).

πŸ“Œ From Highlights to Notes

Highlights are raw material, not finished product. Use the Zettelkasten prompt (C023) to convert highlights into atomic notes. Each highlight becomes a standalone idea with a title, core concept in your words, and connections to other ideas.

Building the Habit

Selective highlighting is a skill that improves with practice. Start by deliberately under-highlighting β€” you can always add more on a second pass, but removing mental clutter is harder. Use the prompts to calibrate your judgment, and over time you’ll internalize what’s worth marking.

You’ve now completed the Notes & Memory pillar. For the complete toolkit, return to the pillar page or explore the full AI for Reading hub.

Frequently Asked Questions

Why does highlighting everything backfire?
When you highlight everything, nothing stands out. Research shows heavy highlighters remember no better than non-highlighters. The act of selecting forces processing β€” if you skip the selection, you skip the thinking. Highlight less, remember more.

What should I highlight?
Four categories: core arguments (the main claims), surprising facts (things that challenged your assumptions), definitions of key terms (specialized vocabulary you’ll need), and passages you’ll revisit (for notes, quotes, or reference). Everything else can be skipped.

Should I highlight during or after reading?
After, or at least after you’ve finished a section. Reading first gives you context that changes what seems important. What looks crucial in paragraph one might be just setup for the real insight in paragraph five.

How do I turn highlights into notes?
Use the Zettelkasten prompt (C023) to convert highlights into atomic notes. Each highlight becomes a standalone idea with a title, core concept in your words, and connections to other ideas. Highlights are raw material β€” notes are the finished product.

Notes & Memory Pillar Complete

6 guides mastered. Explore AI Reading Coach routines, critical reading, or exam prep next.

Notes & Memory Pillar

Fact-Check Mode: What to Verify and How

C042 βš–οΈ Critical Reading 2 Prompts

Generate verification checklists with AI, then verify claims yourself using authoritative sources. AI organizes; you verify.

7 min read Verification Workflow Guide 2 of 5
PR022 The Source Interrogator
Use to evaluate sources and generate research questions
I’m reading this source: “[paste article or key excerpts]” Help me evaluate it critically: – What perspective or bias might the author/publication have? – What’s the author’s expertise or authority on this topic? – What audience is this written for, and how might that shape content? – What questions should I research independently after reading this?
PR023 The Evidence Evaluator
Use to assess evidence quality for specific claims
This passage uses evidence to support a claim: “[paste passage]” Evaluate the evidence: – What type of evidence is used (data, anecdote, expert opinion, analogy, example)? – How strong is the connection between evidence and conclusion? – What would stronger evidence look like? – Is this evidence representative or cherry-picked?

The Limits of AI for Fact-Checking

Let’s start with what you need to know: AI cannot reliably fact-check. This isn’t a limitation we’ll overcome soon β€” it’s structural. AI models generate responses based on patterns in training data, not real-time verification against authoritative sources.

This creates a dangerous situation: AI can confidently state incorrect information. It can cite sources that don’t exist. It can present outdated data as current. When you ask AI “is this true?”, the answer you get might be wrong β€” and you’d have no way of knowing without checking yourself.

So why use AI for fact-checking at all? Because AI excels at a different task: generating verification checklists. AI can identify which claims in an article are verifiable, prioritize them by importance, and suggest where to find authoritative sources. AI does the organizing; you do the checking.

⚠️ Critical Warning

Never ask AI “is this claim true?” and trust the answer. AI will confidently respond β€” but that confidence doesn’t correlate with accuracy. Use AI to identify what to verify and where to look, then verify yourself.

Building a Verification Checklist

The first prompt (PR022 β€” Source Interrogator) generates research questions. Its final output β€” “what questions should I research independently” β€” is your verification checklist. But not all claims deserve equal attention.

Prioritize central claims. If the article’s main argument depends on a specific statistic being accurate, that statistic goes to the top of your list. Peripheral details matter less.

Flag surprising claims. Information that seems too convenient, too dramatic, or too perfectly aligned with the author’s thesis deserves extra scrutiny.

Check attributed statements. When an article quotes someone or cites a study, verify both the existence of the source and the accuracy of the representation.

Verify numbers first. Statistics, percentages, dates, and quantities are the easiest claims to verify β€” and the most commonly wrong.
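If you extract claims into a structured list, the four rules above can be applied as a simple scoring pass. A sketch β€” the field names and weights are my own illustration, not part of the prompts:

```python
import re

# The four prioritization rules as a scoring pass over extracted claims.
# Field names and weights are illustrative.

def priority(claim):
    score = 0
    if claim.get("central"):
        score += 4  # the argument depends on it
    if claim.get("surprising"):
        score += 2  # too convenient, too dramatic, too well-aligned
    if claim.get("attributed"):
        score += 2  # quoted person or cited study to check
    if re.search(r"\d", claim["text"]):
        score += 1  # numbers: easiest to verify, most commonly wrong
    return score

claims = [
    {"text": "Remote work is popular.", "central": False},
    {"text": "Documented teams outperform by 34%.", "central": True, "attributed": True},
]
claims.sort(key=priority, reverse=True)
print(claims[0]["text"])  # the central, attributed statistic comes first
```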

Where to Look: Matching Claims to Sources

Different claim types require different verification sources:

Government statistics: Go to official databases. For economic data, the source is usually a government statistics bureau. For health data, it’s the health ministry or WHO.

Scientific claims: Peer-reviewed papers are the standard. Google Scholar, PubMed, and university library databases help. Check if the cited study actually supports the claim.

Quotes and statements: Look for original transcripts, video recordings, or official press releases. Be wary of quotes that appear only in secondary sources.

Company and financial data: Public companies file mandatory disclosures (SEC filings in the US). Press releases come from company newsrooms.

πŸ’‘ Pro Tip

After generating your verification checklist with AI, add one more prompt: “For each claim I should verify, suggest the most authoritative primary source where I could find the original data or statement.”

The Two Prompts in Action

PR022 (Source Interrogator) operates at the article level β€” it evaluates the source, identifies potential biases, and generates research questions. Run this first.

PR023 (Evidence Evaluator) operates at the claim level β€” it examines specific evidence supporting specific claims. Use this when you’ve identified a central claim and want to understand how well it’s supported.

A typical workflow: Run PR022 on the entire article. Extract the verification checklist. For the highest-priority claims, run PR023 to assess evidence quality. Then verify externally using suggested sources.
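The two-level workflow can be sketched as template functions run in sequence. Function names and the placeholder claims are mine; the wording is condensed from the PR022 and PR023 boxes above:

```python
# Sketch of the two-prompt workflow: PR022 once at the article level,
# then PR023 per high-priority claim. Names are mine; wording condensed
# from the prompt boxes in this guide.

def source_interrogator(article):
    """PR022: evaluate the source and generate research questions."""
    return (
        f"I'm reading this source: \"{article}\" Help me evaluate it "
        "critically: What perspective or bias might the author/publication "
        "have? What's the author's expertise on this topic? What audience "
        "is this written for? What questions should I research "
        "independently after reading this?"
    )

def evidence_evaluator(passage):
    """PR023: assess the evidence behind one specific claim."""
    return (
        f"This passage uses evidence to support a claim: \"{passage}\" "
        "Evaluate the evidence: What type of evidence is used? How strong "
        "is the connection between evidence and conclusion? What would "
        "stronger evidence look like? Is this evidence representative or "
        "cherry-picked?"
    )

# Typical sequence: article-level first, then claim-level on the top claims.
article = "pasted article text"
top_claims = ["claim 1", "claim 2"]
prompts = [source_interrogator(article)] + [evidence_evaluator(c) for c in top_claims]
```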

This connects to other critical reading skills in this pillar. The Argument Map Prompt (C043) separates claims from evidence structure. The What’s Missing Prompt (C044) identifies gaps in coverage.

Frequently Asked Questions

Can AI fact-check claims for me?
No. AI can confidently state incorrect information. Use AI to identify what to verify and where to look, but always verify claims yourself using authoritative primary sources. AI organizes your verification workflow; it doesn’t replace it.

Which claims should I verify first?
Prioritize: (1) central claims the argument depends on, (2) surprising or dramatic claims, (3) statistics and numbers, and (4) quoted statements and cited studies. Ask: if this claim were false, would it change the article’s conclusion?

What counts as an authoritative source?
Primary sources over secondary. Institutional accountability (government agencies, peer-reviewed journals). Methodological transparency. Domain expertise. A company’s quarterly report is more authoritative than a news article about that report.

In what order should I run the two prompts?
PR022 (Source Interrogator) first, on the whole article β€” generates your verification checklist. Then PR023 (Evidence Evaluator) on specific high-priority claims β€” assesses how well each claim is supported. Then verify externally.

3 More Critical Reading Guides Await

You’ve learned verification workflows. Next, map argument structures, find what’s missing, and hunt for hidden assumptions.

All Critical Reading Guides

Executive Summary Prompt for Busy Readers

C017 πŸ“ Summarize Articles

Get decision-ready summaries: context, key findings, implications, and recommended action β€” all in under 300 words.

5 min read 1 Prompt Guide 3 of 6
PR030 The Layered Summary (Executive Mode)
When you need decision-ready output
Here’s a text I need to act on: “[paste text]” I’m a [your role] deciding [your decision context]. Create an executive summary (under 300 words) with these sections: – CONTEXT: Why this matters right now (1-2 sentences) – KEY FINDINGS: The 3-5 most important facts or insights – IMPLICATIONS: What this means for my situation – RECOMMENDED ACTION: What I should do next

Why Executive Summaries Exist

Regular summaries tell you what a text says. Executive summaries tell you what to do about it. The difference is intent: one is for understanding, the other is for action.

When you’re reading to make decisions β€” whether to approve a proposal, change strategy, invest resources, or advise a team β€” you don’t need comprehensive understanding. You need the minimum information required to act wisely. Executive summaries strip everything else.

The base summary prompt (C015) works for general comprehension. This variant restructures output for decision-making contexts.

The Executive Summary Template

A good executive summary prompt produces four distinct sections, each serving a specific purpose in moving from information to action.

Context (1-2 sentences): Why does this matter right now? This grounds the reader in urgency or relevance. Without context, even important findings feel abstract. Example: “The FDA announced new approval guidelines that take effect Q2, affecting our three pending submissions.”

Key Findings (3-5 points): The most important facts, data points, or insights from the reading. Not everything important β€” just what’s most relevant to the decision at hand. Strip adjectives, keep numbers, prioritize actionable information.

Implications: What this means for your specific situation. This is where you add your role context to the prompt β€” “I’m a product manager evaluating market entry” produces different implications than “I’m an investor assessing risk.” Generic implications are useless implications.

Recommended Action: What you should do next. One concrete step, not a vague suggestion. “Schedule a meeting with legal to review compliance requirements” not “Consider the legal implications.”

πŸ’‘ Pro Tip

The 300-word limit is intentional. If your executive summary exceeds 300 words, you haven’t identified what actually matters. Constraints force prioritization β€” embrace them.
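If you reuse PR030 often, it can be templated, and the 300-word constraint checked mechanically on whatever comes back. A sketch β€” the function names are mine; the section wording follows the prompt box above:

```python
# The PR030 template as a function, plus the 300-word check from the tip.
# Function names are mine; section wording follows the prompt box.

def exec_summary_prompt(text, role, decision):
    return (
        f"Here's a text I need to act on: \"{text}\" "
        f"I'm a {role} deciding {decision}. "
        "Create an executive summary (under 300 words) with these sections: "
        "CONTEXT: Why this matters right now (1-2 sentences). "
        "KEY FINDINGS: The 3-5 most important facts or insights. "
        "IMPLICATIONS: What this means for my situation. "
        "RECOMMENDED ACTION: What I should do next."
    )

def within_limit(summary, max_words=300):
    """Enforce the constraint: over 300 words means prioritization failed."""
    return len(summary.split()) <= max_words

print(within_limit("word " * 250))  # True
print(within_limit("word " * 350))  # False
```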

Output Sections in Detail

Let’s examine each section more closely, because the structure is what makes executive summaries actionable.

Context answers “why now?” It creates urgency or establishes relevance. Bad context: “This article discusses market trends.” Good context: “Consumer behavior shifted post-pandemic, and our Q3 strategy assumes the old patterns still hold.”

Key Findings are facts, not interpretations. They should be verifiable from the source text. Avoid: “The author argues convincingly…” Include: “Revenue grew 23% YoY, but customer acquisition cost increased 40%.” Numbers, dates, names, specific claims.

Implications connect findings to your situation. This requires you to tell the AI who you are and what decision you’re facing. The same article about interest rate changes has different implications for a borrower vs. a lender vs. an investor. Be specific about your context.

Recommended Action is singular and concrete. Not “consider various options” but “proceed with Option B and schedule review for March 15.” If you can’t recommend one action, you haven’t processed the information fully β€” or the reading genuinely doesn’t support a clear direction (in which case, say that).

For workplace applications, the Reading for Work pillar has additional prompts for memos, briefs, and stakeholder updates.

πŸ“Œ The Four-Section Rule

Context β†’ Findings β†’ Implications β†’ Action. This sequence moves from “what happened” to “what it means” to “what to do.” Skipping sections or reordering them breaks the logic chain. Train yourself (and your AI) to follow this structure consistently.
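The four-section sequence can also be checked mechanically on the AI's output. A minimal sketch, assuming the summary uses the uppercase section headers from the prompt:

```python
# Verify a generated summary contains the four sections in the required
# order (Context -> Findings -> Implications -> Action). Assumes the
# uppercase headers used in this guide's prompt.

SECTIONS = ["CONTEXT", "KEY FINDINGS", "IMPLICATIONS", "RECOMMENDED ACTION"]

def sections_in_order(summary):
    positions = [summary.find(s) for s in SECTIONS]
    return all(p >= 0 for p in positions) and positions == sorted(positions)

good = "CONTEXT: ... KEY FINDINGS: ... IMPLICATIONS: ... RECOMMENDED ACTION: ..."
bad = "KEY FINDINGS: ... CONTEXT: ..."
print(sections_in_order(good))  # True
print(sections_in_order(bad))   # False: missing sections, wrong order
```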

Example: From Article to Executive Summary

Say you’re a startup founder reading an article about enterprise sales cycles lengthening in 2024. Here’s what the output might look like:

CONTEXT: Enterprise sales cycles extended 27% in 2024 according to Gartner data. Our current runway assumes 6-month cycles; if this trend holds, we need to revisit forecasts.

KEY FINDINGS: Average B2B deal now takes 8.2 months (up from 6.5). Budget freezes affected 64% of enterprises. Security reviews added 3-4 weeks to procurement. Companies with existing vendor relationships saw minimal impact.

IMPLICATIONS: Our Q3 revenue projections assumed closing 4 enterprise deals. At current pace, we may close 2-3. This creates a potential $400K shortfall. Our existing customer expansion strategy becomes more valuable than new logo acquisition.

RECOMMENDED ACTION: Shift Q3 focus from new enterprise acquisition to expanding existing accounts. Present revised forecast at next board meeting with scenario planning for 9-month average cycles.

Notice how each section builds on the previous. The context frames the findings; the findings inform the implications; the implications drive the action. This is what makes executive summaries useful β€” they’re designed for momentum.

For longer documents where you need more detailed extraction before summarizing, use the Article to Action Memo prompt (C047) or explore the full AI for Reading hub.

Frequently Asked Questions

When should I use an executive summary instead of a regular one?
Use an executive summary when you need to make a decision or take action based on the reading. Regular summaries capture what the text says; executive summaries translate that into what it means for you. If you’re reading for general knowledge, use the Layered Summary (C015). If you’re reading to decide or act, use the executive format.

What sections should an executive summary include?
Context (why this matters now), Key Findings (the 3-5 most important facts), Implications (what this means for your situation), and Recommended Action (what you should do next). This structure ensures you move from understanding to action, not just comprehension.

How long should an executive summary be?
Under 300 words for most articles, under 500 for longer reports. The constraint is the point β€” if you can’t fit it in 300 words, you haven’t identified what actually matters. Busy readers need density, not completeness. Every word should earn its place.

Can I tailor the summary to my role and decision?
Absolutely β€” this is where the executive summary format shines. Add your role and decision context to the prompt: ‘I’m a product manager deciding whether to enter this market’ or ‘I’m an investor evaluating this sector.’ The AI will tailor implications and recommendations to your specific situation.

3 More Summary Guides Await

You’ve mastered executive summaries. Next, learn to verify accuracy, extract key quotes, and summarize for different purposes.

Summarize Articles Pillar

Evidence Check Prompt: Data vs Opinion vs Anecdote

C040 βš–οΈ Critical Reading Critical Reading

Classify every claim: is it data, expert opinion, anecdote, or unsupported assertion? Plus red flags to watch.

5 min read 1 Prompt Evidence Analysis
PR023 The Evidence Evaluator
For assessing claim quality
This passage uses evidence to support a claim: “[paste passage]” Evaluate the evidence: – What type of evidence is used (data, anecdote, expert opinion, analogy, example)? – How strong is the connection between evidence and conclusion? – What would stronger evidence look like? – Is this evidence representative or cherry-picked?

Evidence Types: A Quick Taxonomy

Not all evidence is created equal. When a passage makes a claim, the evidence checklist asks: what type of evidence supports it? Here’s the quick taxonomy:

Data β€” Numbers, statistics, study results, measured outcomes. The strongest type when the methodology is sound, but watch for p-hacking, small samples, and results that haven’t been replicated. “Studies show 70% of users prefer X” is data.

Anecdote β€” Individual stories, personal experiences, specific cases. Compelling and memorable, but not necessarily representative. “My friend tried X and loved it” is anecdote. Good for illustration, weak for proving patterns.

Expert opinion β€” What authorities in the field say. Varies enormously based on whether the expert is actually in the relevant field, whether there’s expert consensus, and whether they’re citing evidence or just asserting. “Dr. X believes…” is weaker than “The scientific consensus is…”

Analogy β€” Comparison to something similar. “Just like X, this situation…” Useful for understanding, weak for proving. Analogies illuminate but don’t establish β€” the situations might differ in crucial ways.

Example β€” Specific instances cited as representative. Stronger than anecdotes when carefully chosen, weaker when cherry-picked to support a predetermined conclusion. Ask: is this example typical or exceptional?

πŸ’‘ The Evidence Hierarchy

For most claims, evidence strength roughly runs: meta-analyses > randomized controlled trials > large observational studies > small studies > expert consensus > individual expert opinion > examples > analogies > anecdotes > unsupported assertions. Context matters, but this hierarchy is a useful starting point.
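As a rough sketch, the hierarchy above can be encoded as an ordered list so two evidence types can be compared at a glance. The labels and the `stronger` helper are our own illustration, not a standard taxonomy:

```python
# Illustrative encoding of the rough evidence hierarchy described above.
# Earlier in the list = stronger evidence. Labels are our own, not a standard.
EVIDENCE_HIERARCHY = [
    "meta-analysis",
    "randomized controlled trial",
    "large observational study",
    "small study",
    "expert consensus",
    "individual expert opinion",
    "example",
    "analogy",
    "anecdote",
    "unsupported assertion",
]

def stronger(a: str, b: str) -> bool:
    """Return True if evidence type `a` ranks above `b` in this rough hierarchy."""
    return EVIDENCE_HIERARCHY.index(a) < EVIDENCE_HIERARCHY.index(b)
```

So `stronger("meta-analysis", "anecdote")` is True, with the usual caveat from the text: context matters, and the ranking is a starting point, not a verdict.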

Using PR023: What the Prompt Reveals

The evidence checklist in PR023 asks four questions. First: what type of evidence is being used? Just naming it is clarifying. “This is an anecdote” or “This is a single study” immediately calibrates your confidence.

Second: how strong is the connection between evidence and conclusion? This is where many arguments fail. The evidence might be solid, but the leap to the conclusion unjustified. “Studies show people who eat breakfast are healthier” doesn’t prove “Eating breakfast makes you healthy” β€” there could be confounding factors.

Third: what would stronger evidence look like? This question surfaces what’s missing. If someone cites one study, stronger evidence would be a meta-analysis. If they cite expert opinion, stronger evidence would be data. Knowing what better evidence looks like helps you assess what you have.

Fourth: is this evidence representative or cherry-picked? This is crucial. Cherry-picking β€” selecting only evidence that supports your conclusion while ignoring contradictory evidence β€” is one of the most common failures in argumentation. For more on detecting bias in source selection, see the Bias Scanner Prompt.
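If you run PR023 often, you can fill the template programmatically before pasting it into your AI tool. A minimal sketch; the function name is our own, and the wording follows the PR023 card above:

```python
def evidence_check_prompt(passage: str) -> str:
    """Wrap a passage in the PR023 Evidence Evaluator template."""
    return (
        f'This passage uses evidence to support a claim: "{passage}"\n'
        "Evaluate the evidence:\n"
        "- What type of evidence is used (data, anecdote, expert opinion, analogy, example)?\n"
        "- How strong is the connection between evidence and conclusion?\n"
        "- What would stronger evidence look like?\n"
        "- Is this evidence representative or cherry-picked?\n"
    )
```

Paste the returned string into any AI chat; the four questions arrive in the same order as the walkthrough above.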

Red Flags: Evidence Problems to Watch

Beyond classification, watch for these common evidence problems:

Correlation as causation: “X and Y occur together, therefore X causes Y.” Maybe. Or maybe Y causes X. Or Z causes both. Or it’s coincidence. Correlation suggests investigation, not conclusion.

Unrepresentative samples: Evidence drawn from unusual populations generalized to everyone. College students, online survey respondents, and people willing to participate in studies may not represent the general population.

Outdated evidence: Especially in fast-moving fields. Evidence from 2010 about social media, AI, or medical treatments may not apply to 2024 realities. Ask: has anything changed that would affect this evidence?

Single studies: One study proves little. Science advances through replication. Be especially skeptical of dramatic single-study findings that haven’t been reproduced.

Conflicts of interest: Evidence from sources with financial or ideological stakes in the conclusion. Doesn’t automatically invalidate it, but warrants extra scrutiny. For verification strategies, see Fact-Check Mode.

⚠️ The Unsupported Assertion

Watch for claims presented as fact with zero evidence β€” not even anecdotes. Phrases like “Everyone knows…”, “It’s obvious that…”, and “Clearly…” often flag unsupported assertions. The author expects you to accept the claim without evidence. Maybe it’s true. But it’s not argued.

From Evidence Check to Decision

After running PR023, you’ll have a clearer picture of how well-supported each claim is. Use this to calibrate your confidence. Claims supported by strong, representative data deserve more weight than claims supported only by anecdotes and analogies.

But don’t reject everything that lacks perfect evidence. Most real-world decisions happen under uncertainty. The goal isn’t to accept only proven claims β€” it’s to know which parts of an argument you’re accepting on faith versus which are supported by evidence.

For the complete critical reading toolkit, explore the Critical Reading pillar. For the broader reading improvement framework, see AI for Reading.

Frequently Asked Questions

What’s the difference between data and anecdotes?
Data is systematic: collected across many cases, often with controls, representing patterns. Anecdotes are individual stories or examples β€” vivid but not necessarily representative. “Studies show 70% of users prefer X” is data. “My friend tried X and loved it” is anecdote. Both can be true, but they support conclusions differently. Data suggests patterns; anecdotes illustrate possibilities.

When is expert opinion reliable?
It depends. Expert opinion is stronger when the expert has relevant credentials (not just general authority), the claim is within their specialty, there’s consensus among experts, and they’re citing evidence rather than just asserting. It’s weaker when the expert is outside their field, there’s significant expert disagreement, or they have conflicts of interest. Expert opinion points toward truth but doesn’t guarantee it.

How do I spot cherry-picked evidence?
Watch for: single studies cited when meta-analyses exist, time periods chosen to support a narrative (why start the graph there?), examples that are unusually dramatic or favorable, absence of counterexamples that likely exist, and evidence that only comes from sources with a stake in the conclusion. Ask: “If I looked at ALL the evidence, would this pattern hold?”

What is an unsupported assertion?
An unsupported assertion is a claim presented as fact with no evidence offered β€” not even an anecdote or appeal to authority. Watch for confident statements that assume rather than argue: “Everyone knows…”, “It’s obvious that…”, “Clearly…” These phrases often flag claims the author expects you to accept without evidence. They might be true, but they’re not argued.
365 Articles β€’ RC Questions

Don’t Get Fooled. Get Evidence Literate.

Practice evaluating evidence across articles from business, science, policy, and culture. Build the instincts that protect you from weak arguments.

Join the Course β€” β‚Ή2,499 β†’
Evidence Types Red Flag Training Critical Thinking

Know What You’re Trusting

Next time you read a persuasive piece, run PR023. Classify the evidence. Note the red flags. Decide which claims are supported and which you’re accepting on faith. That’s reading with your eyes open.

Critical Reading Pillar

ELI5 to Expert: A Multi-Level Explanation Prompt

C004 πŸ“‹ AI Reading Prompts 1 Prompt

ELI5 to Expert: A Multi-Level Explanation Prompt

Get explanations at any level: from 5-year-old simple to expert nuanced, using one adaptive AI prompt template.

5 min read 1 Prompt Guide 4 of 8
PR009 The Dense Passage Decoder
Use for information-dense academic or technical writing
This passage is information-dense: “[paste passage]” Create a layered explanation: – Layer 1: The single core point in one sentence – Layer 2: The 3-4 key supporting elements – Layer 3: The nuances, qualifications, and exceptions – Layer 4: What’s deliberately left unsaid or simplified by the author
πŸŽ“
Practice Decoding Dense Text Daily The Ultimate Reading Course includes 365 articles across difficulty levels β€” perfect for building layered comprehension skills.
Explore Course β†’

How the Layered Approach Works

Dense text creates a specific problem: everything seems equally important. An academic paper might pack three arguments, two qualifications, a methodology note, and an implied critique into a single paragraph. Without a hierarchy, you’re overwhelmed.

The ELI5 prompt for complex topics solves this by requesting explanation in layers. Instead of one flat response, you get four levels of depth β€” and you can stop at any level.

Layer 1 is the “explain like I’m five” level. One sentence. The absolute core. If someone asked “what’s this about?” in an elevator, this is your answer.

Layer 2 adds structure. The 3-4 supporting elements that hold up the core point. Still simple, but now you understand why the core point is true or important.

Layer 3 brings nuance. The qualifications, exceptions, and edge cases. This is where “it’s complicated” becomes specific β€” you learn under what conditions the main claim might not hold.

Layer 4 goes meta. What did the author simplify? What’s implied but not stated? What would experts notice that beginners would miss? This layer is for when you need to critically evaluate the text, not just understand it.

The Template in Action

Let’s say you paste a dense paragraph about monetary policy. Here’s what the layers might look like:

Layer 1: “Central banks raise interest rates to slow inflation by making borrowing more expensive.”

Layer 2: “Supporting elements: (1) Higher rates reduce consumer spending, (2) businesses delay investments, (3) currency strengthens making imports cheaper, (4) expectations shift as people anticipate slower growth.”

Layer 3: “Nuances: This works with demand-driven inflation but not supply shocks. Effects lag by 12-18 months. Small economies can’t raise rates independently without capital flight. The relationship breaks down at very low or very high rate levels.”

Layer 4: “Unsaid: The author assumes inflation expectations are ‘anchored’ and doesn’t address what happens when they’re not. Also omits distributional effects β€” rate hikes hurt borrowers and help savers, which has political implications the passage doesn’t mention.”

Notice how each layer adds complexity without contradicting the previous one. You build understanding progressively instead of drowning in detail.
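If you save these AI responses for later review, a small helper can split them back into layers. This is a sketch that assumes each layer starts on a line like “Layer 2: …”, as in the example above; real responses may need looser matching:

```python
import re

def split_layers(response: str) -> dict:
    """Split an AI response into its numbered layers.

    Assumes each layer begins on a line like 'Layer 2: ...', as in the
    PR009 output sketched above. Continuation lines are appended to the
    most recent layer.
    """
    layers = {}
    current = None
    for line in response.splitlines():
        m = re.match(r"\s*Layer\s+(\d+):\s*(.*)", line)
        if m:
            current = int(m.group(1))
            layers[current] = m.group(2)
        elif current is not None and line.strip():
            layers[current] += " " + line.strip()
    return layers
```

With the output parsed, you can keep only `layers[1]` for a quick grasp, or pull `layers[4]` when you need the critical-reading view.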

πŸ’‘ Pro Tip

If you only need a quick grasp, read Layer 1 and stop. If you’re writing about the topic, you need Layer 3. If you’re critiquing or fact-checking, Layer 4 is essential. Match the layer to your purpose.

Examples by Subject Area

For Scientific Papers

Layer 1 gives you the main finding. Layer 2 explains the evidence. Layer 3 reveals the limitations the authors acknowledge. Layer 4 exposes what they didn’t test or assumed without proof.

For Philosophy

Layer 1 states the thesis. Layer 2 outlines the argument structure. Layer 3 addresses counterarguments and qualifications. Layer 4 shows what the philosopher takes for granted β€” the hidden premises.

For Legal Documents

Layer 1 gives you the bottom line β€” what you can or can’t do. Layer 2 explains the conditions and requirements. Layer 3 covers exceptions and edge cases. Layer 4 reveals what’s ambiguous or likely to be contested.

For Technical Documentation

Layer 1 tells you what the thing does. Layer 2 explains how to use it. Layer 3 covers configuration options and limitations. Layer 4 reveals what the docs don’t tell you β€” common pitfalls, implicit requirements.

Common Mistakes to Avoid

Mistake 1: Skipping Layer 1. Even if you’re an expert, start with the core point. Dense text can bury the main idea in qualifications. Layer 1 ensures you haven’t missed the forest for the trees.

Mistake 2: Treating all layers as equally important. They’re not. Layer 1 is always essential. Layer 4 is only for deep analysis. Don’t spend time on nuances if you just need the gist.

Mistake 3: Using this for simple text. If the passage is already clear, layered explanation adds complexity without benefit. Use the no-fluff prompt instead for straightforward content.

Mistake 4: Not following up. If a layer confuses you, ask AI to expand just that layer. “Can you give me more examples for Layer 3?” is better than re-running the whole prompt.

πŸ“Œ When to Use This vs Other Prompts

Use the layered prompt when text is dense β€” lots of information packed tightly. Use the simplify complex text workflow when text is difficult β€” unclear language or assumed knowledge. Dense and difficult are different problems. Sometimes you face both β€” use both prompts.

The AI Reading Prompts Library has tools for every comprehension challenge. The article understanding prompts give you a full toolkit. But for dense academic or technical writing, start here β€” with layers.

Frequently Asked Questions

What does ELI5 mean?
ELI5 stands for ‘Explain Like I’m 5’ β€” a request for the simplest possible explanation of a complex topic. The layered prompt approach starts at ELI5 level (Layer 1) and builds up to expert nuance (Layer 4), letting you stop wherever you need.

How do I get an explanation at multiple levels?
Use the Dense Passage Decoder prompt (PR009), which creates four layers: core point in one sentence, key supporting elements, nuances and exceptions, and what’s deliberately left unsaid. Each layer adds complexity β€” stop at whatever level serves your purpose.

When should I use layered explanations instead of a regular summary?
Use layered explanations for information-dense academic or technical writing where you need to understand the core idea before tackling complexity. Use regular summaries for news articles or straightforward content where depth isn’t the challenge.

Does the layered approach work for any subject?
Yes. The layered approach works for scientific papers, legal documents, philosophy, economics, technical documentation β€” any text where understanding the hierarchy of ideas matters more than just getting the gist.
πŸ“š The Ultimate Reading Course

Simple to Complex. Layer by Layer.

Build comprehension skills progressively with 365 articles across difficulty levels β€” from accessible to expert-level dense text.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

4 More Prompt Guides Await

You’ve learned layered explanations. Next, explore Socratic questioning, vocabulary building, and structured reading methods.

All AI Reading Prompts

Competitive Intel Prompt: Positioning, Pricing & Weak Spots

C051 πŸ’Ό Reading for Work 1 Prompt

Competitive Intel Prompt: Extract Positioning, Pricing & Weak Spots

Turn competitor content into strategic intelligence β€” extract positioning claims, pricing signals, strategic assumptions, and vulnerabilities from any source.

6 min read 5 Dimensions Guide 5 of 6
PR043 Competitive Intel Extractor
Use for competitor analysis from any source
I’m reading competitor content (press release, product page, earnings call, or industry report): “[paste excerpt]” Help me extract competitive intelligence: – What’s the key positioning claim or value proposition? – What pricing signals are present (premium, budget, enterprise tiers, discounts)? – What assumptions underlie their strategy? – What potential weak spots or vulnerabilities can I identify? – What questions should I research further before acting on this?
πŸ“Š
Read Business Content Like a Strategist The Ultimate Reading Course includes 365 articles with analysis questions that sharpen inference, evidence evaluation, and strategic thinking.
Explore Course β†’

What to Look For in Competitive Analysis

Most people read competitor content passively β€” skimming for interesting facts, noting features, maybe flagging a price point. But competitive analysis from article content requires active extraction. You’re not reading for entertainment. You’re reading to answer specific strategic questions.

The challenge is that competitors rarely state their strategy directly. They don’t announce “we’re positioning as the premium option” or “our weakness is enterprise scalability.” Instead, these insights hide in word choice, emphasis patterns, and what’s conspicuously absent.

Effective competitive intelligence extraction focuses on four core dimensions: positioning (how they want to be perceived), pricing signals (what their monetization strategy reveals), strategic assumptions (what must be true for their approach to work), and vulnerabilities (where their armor has gaps). The prompt adds a fifth output: the questions you should research further before acting.

The Prompt: How to Use It

The Competitive Intel Extractor (PR043) works with any competitor content β€” press releases, product pages, earnings call transcripts, industry reports, job postings, customer reviews, or news coverage. Each source type reveals different intelligence:

Press releases reveal positioning and strategic priorities. Look for how they describe themselves versus competitors.

Product pages expose feature emphasis and target customer profiles. What’s highlighted? What’s buried?

Earnings calls surface pricing pressure, market concerns, and strategic pivots that don’t appear in marketing materials.

Job postings reveal where they’re investing. Hiring for AI engineers? Enterprise sales? International expansion? Strategy follows hiring.

Customer reviews expose real-world weak spots that marketing hides.

πŸ’‘ Pro Tip

After running the prompt, ask this follow-up: “Based on these findings, what’s one strategic move our company could make that exploits their weak spots while leveraging our strengths?” This forces the analysis into actionable recommendations.

Output Format: What You’ll Get

The prompt generates structured output across five dimensions:

1. Key Positioning Claim: AI identifies the competitor’s core value proposition β€” how they want the market to perceive them.

2. Pricing Signals: Even without explicit pricing, competitor content reveals monetization strategy through language about “enterprise,” “starter,” “custom pricing,” or “free forever.”

3. Strategic Assumptions: Every strategy rests on assumptions about the market, customers, or technology. AI surfaces what must be true for the competitor’s approach to work.

4. Potential Weak Spots: Vulnerabilities emerge from overreliance, missing capabilities, or strategic blind spots.

5. Questions for Further Research: Good competitive analysis generates more questions than answers. The prompt identifies what to investigate next.
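For note-keeping, one way to hold a PR043 run is a small record with one slot per dimension. This is only a sketch; the class and field names are our own labels, not part of the prompt:

```python
from dataclasses import dataclass, field

@dataclass
class CompetitiveIntel:
    """One PR043 run, structured along the five output dimensions above.

    Field names are our own note-keeping labels, not part of the prompt.
    """
    source: str  # e.g. "product page" or "Q3 earnings call transcript"
    positioning_claim: str
    pricing_signals: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    weak_spots: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)
```

One record per source keeps runs comparable, which matters when you later synthesize several sources into a unified competitive profile.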

πŸ“Œ Multiple Sources

If you’re analyzing multiple sources on the same competitor, run them separately first. The Research Brief prompt can then synthesize findings into a unified competitive profile.

For the complete work-reading toolkit, explore the Reading for Work pillar or return to the AI for Reading hub.

Frequently Asked Questions

Which competitor sources reveal the most intelligence?
Each source reveals different intelligence. Press releases show positioning priorities. Earnings calls expose market concerns. Job postings reveal investment areas. Product pages show feature emphasis. Customer reviews expose real weaknesses. Use multiple source types for a complete picture.

How do I read pricing signals when no prices are listed?
Look for language patterns: “enterprise,” “custom pricing,” “contact sales” signal premium positioning. “Free forever,” “starter,” “pay as you go” signal accessibility. Absence of pricing often means sales-driven deals at premium rates. Mentions of discounting or bundles reveal competitive pressure.

How do I turn the analysis into action?
After running the prompt, ask: “Based on these findings, what’s one strategic move that exploits their weak spots while leveraging our strengths?” This forces the analysis into recommendations. Also use the “questions for further research” to validate findings before major decisions.

How do I analyze multiple competitors at once?
Run the prompt separately on each competitor’s content first. Then use the Research Brief prompt (C052) to synthesize findings across competitors, identifying patterns in positioning, shared vulnerabilities, and market gaps no one is addressing.
πŸ“š The Ultimate Reading Course

Read Like a Strategist

365 articles with analysis questions that build inference, evidence evaluation, and strategic thinking β€” exactly what competitive analysis demands.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

Turn Reading into Competitive Advantage

Every competitor press release, every earnings call, every product page contains intelligence. The Competitive Intel Extractor helps you find it.

All Reading for Work Guides

Complete Bundle - Exceptional Value

Everything you need for reading mastery in one comprehensive package

Why This Bundle Is Worth It

πŸ“š

6 Complete Courses

100-120 hours of structured learning from theory to advanced practice. Worth β‚Ή5,000+ individually.

πŸ“„

365 Premium Articles

Each with 4-part analysis (PDF + RC + Podcast + Video). 1,460 content pieces total. Unmatched depth.

πŸ’¬

1 Year Community Access

1,000-1,500+ fresh articles, peer discussions, instructor support. Practice until exam day.

❓

2,400+ Practice Questions

Comprehensive question bank covering all RC types. More practice than any other course.

🎯

Multi-Format Learning

Video, audio, PDF, quizzes, discussions. Learn the way that works best for you.

πŸ† Complete Bundle
β‚Ή2,499

One-time payment. No subscription.

✨ Everything Included:

  • βœ“ 6 Complete Courses
  • βœ“ 365 Fully-Analyzed Articles
  • βœ“ 1 Year Community Access
  • βœ“ 1,000-1,500+ Fresh Articles
  • βœ“ 2,400+ Practice Questions
  • βœ“ FREE Diagnostic Test
  • βœ“ Multi-Format Learning
  • βœ“ Progress Tracking
  • βœ“ Expert Support
  • βœ“ Certificate of Completion
Enroll Now β†’
πŸ”’ 100% Money-Back Guarantee
Prashant Chadha

Connect with Prashant

Founder, WordPandit & The Learning Inc Network

With 18+ years of teaching experience and a passion for making learning accessible, I'm here to help you navigate competitive exams. Whether it's UPSC, SSC, Banking, or CAT prepβ€”let's connect and solve it together.

18+
Years Teaching
50,000+
Students Guided
8
Learning Platforms

Stuck on a Topic? Let's Solve It Together! πŸ’‘

Don't let doubts slow you down. Whether it's reading comprehension, vocabulary building, or exam strategyβ€”I'm here to help. Choose your preferred way to connect and let's tackle your challenges head-on.

🌟 Explore The Learning Inc. Network

8 specialized platforms. 1 mission: Your success in competitive exams.

Trusted by 50,000+ learners across India
×