Limitations & Assumptions Prompt: What the Paper Admits

C063 🔬 Research Papers 1 Prompt

Every study has boundaries. This prompt helps you find the limitations authors acknowledge — and the ones they don’t — so you know exactly how far to trust the findings.

6 min read 1 Prompt Guide 3 of 6
PR040 Academic Paper Navigator
Use before reading a research paper
I’m reading an academic paper. Here’s the abstract: “[paste abstract]”
Before I read the full paper, help me:
– Identify the research question and why it matters
– Understand what to pay attention to in each section (intro, methods, results, discussion)
– Flag jargon I should look up first
– Tell me what questions to keep in mind while reading
🔬
Build Critical Reading Skills
365 articles with RC questions that train you to spot assumptions and evaluate evidence quality.
Explore Course →

The Two Types of Weaknesses in Any Paper

Every research paper has boundaries. The question is whether the authors acknowledge them — and whether you can spot the ones they don’t.

Stated limitations are boundaries the researchers acknowledge directly. Look in the discussion section for phrases like “one limitation of this study…” or “future research should address…” These show intellectual honesty.

Unstated assumptions are things the researchers take for granted without proving. These are often more important because they suggest blind spots. Did the authors assume their sample represents the population? That participants answered honestly? That the measurement tool is valid?

How to Use the Prompt for Limitations Analysis

Start with PR040 to map the paper’s structure. Then add a targeted follow-up: “Now focus on limitations and assumptions. What does this paper explicitly acknowledge as limitations? What assumptions does it make without proving them?”

AI will separate stated from unstated weaknesses, giving you a clearer picture of how much to trust the findings.

⚡ Pro Tip

After identifying limitations, ask: “For each limitation, does it weaken the main finding, narrow its applicability, or invalidate it entirely?” Not all limitations are equal.

Common Categories of Limitations

Sample limitations: Too small, not representative, convenience sampling, attrition bias

Measurement limitations: Self-report bias, instrument validity, operationalization choices

Design limitations: No control group, correlational (not causal), short timeframe

Scope limitations: Single context, narrow population, artificial setting

💡 Example: Stated vs. Unstated

Stated: “Our sample of university students may not generalize to the broader adult population.”

Unstated: The study assumes participants didn’t change their behavior because they knew they were being observed (Hawthorne effect). This assumption is never mentioned.

How to Evaluate Impact on Conclusions

Once you’ve identified limitations, assess their severity:

Fatal flaws invalidate the core finding. If the control group isn’t comparable to the treatment group, the comparison is meaningless.

Scope restrictions narrow the applicability. The finding might be true, but only for specific populations or contexts.

Minor caveats are worth noting but don’t threaten the main conclusion. Every study has these.

⚠ Important Limitation

AI can identify potential weaknesses, but it can’t judge whether a limitation is fatal without domain expertise. Use AI’s output as a checklist for your own evaluation.

Build Your Critical Reading Stack

The Limitations & Assumptions Prompt works best with:

Methods Decoder — Understand what the study did before evaluating its weaknesses

Reproducibility Checklist — Assess transparency and documentation quality

Paper Map Prompt — Get the full picture before zooming in on limitations

Frequently Asked Questions

Limitations sections exist because no study is perfect. Responsible researchers acknowledge boundaries — sample size constraints, methodological trade-offs, scope restrictions. A well-written limitations section actually increases a paper’s credibility.

Limitations are boundaries the researchers acknowledge — what the study didn’t or couldn’t do. Assumptions are things taken for granted without proving — like assuming survey respondents answered honestly. Limitations are usually stated; assumptions often need to be inferred.

Not necessarily. Every study has limitations — that’s the nature of research. What matters is whether the limitations undermine core findings. Judge papers by whether conclusions are proportional to evidence, not by the number of limitations listed.

Look at three areas: the methodology (what alternative approaches could have been used?), the sample (who was excluded?), and the scope (what questions remain unanswered?). AI can help surface these by comparing the study’s approach to standard research practices.
📚 The Ultimate Reading Course

Read Research Papers Critically

Build the skills to evaluate evidence quality and spot assumptions across 365 real articles.

Start Learning →
1,098 Practice Questions 365 Articles 6 Courses

Never Miss a Hidden Assumption

You’ve got the Limitations & Assumptions toolkit. Next, check reproducibility and find related work.

All Research Paper Guides

Key Takeaways vs Key Quotes: Extract Both

C019 📝 Summarize Articles

Two outputs in one: main takeaways in your words plus the exact quotes worth saving, with clear separation.

5 min read 2 Prompts Guide 5 of 6
PR057 The Quote Extractor
To capture key quotes with context
Here’s an article: “[paste article]”
Extract the most valuable quotes:
– Identify 3-5 quotes worth saving (exact text)
– For each quote, explain:
  – Why this quote matters
  – What it captures that a paraphrase would lose
  – How I might use this quote
– Also give me the key takeaways that DON’T need direct quoting
PR030 The Layered Summary
When you need different summary depths
Here’s a text I want to remember: “[paste text]”
Create three versions:
– Tweet version (under 280 characters): The absolute core
– Paragraph version: Core idea + key supporting points
– Teaching version: How I would explain this to someone unfamiliar with the topic

Takeaways vs Quotes: Why You Need Both

Most readers do one of two things: they highlight everything (creating a sea of yellow with no signal), or they paraphrase everything (losing the author’s exact words when those words matter). Neither approach serves you well.

To extract key takeaways from an article effectively, you need to separate two distinct outputs: ideas you can restate in your own words, and quotes you should preserve exactly as written. The difference isn’t about importance — it’s about what gets lost in translation.

Takeaways are concepts you understand well enough to explain differently. They become part of your mental model. Quotes are language so precise, memorable, or authoritative that paraphrasing would weaken them. They stay in the author’s voice because that voice adds something.

The Quote Extractor prompt (PR057) forces this separation. It asks AI to identify quotes worth saving, explain why each one matters, and separately deliver the takeaways that don’t need direct quoting. You get both outputs, clearly distinguished.

The Two-Prompt Workflow

Start with the Quote Extractor (PR057) when you suspect an article has quotable material — opinion pieces, thought leadership, research with memorable findings. The prompt asks for 3-5 quotes with context for each.

For each quote, you get three things: why it matters, what a paraphrase would lose, and how you might use it. This context transforms random highlighting into purposeful collection. You’re not just saving words — you’re building a library of evidence, examples, and language you can deploy later.

The prompt also delivers key takeaways that don’t need direct quoting. These are the ideas you should internalize and be able to explain in your own voice. They’re no less important than the quotes — they’re just better served by paraphrase.

If you need additional summary formats after extracting quotes, follow up with the Layered Summary (C015). Use the quotes for citation and evidence; use the summaries for comprehension and memory.

💡 Pro Tip

Before using the Quote Extractor, ask yourself: “Will I ever need to cite this source?” If yes, extract quotes. If you’re reading purely for learning and won’t reference the text again, skip quotes and use the Layered Summary instead.

Scoring Your Output: What Makes a Good Quote

Not all quotes are equal. Here’s how to evaluate whether a quote is worth keeping:

Memorable phrasing: The author said it in a way that sticks. “Move fast and break things” is a quote; “iterate quickly and accept failures” is a paraphrase. The first one is worth saving; the second you can reconstruct anytime.

Technical precision: Definitions, formulas, or specific claims where exact wording matters. “Inflation is always and everywhere a monetary phenomenon” (Friedman) makes a specific claim that paraphrase would dilute.

Authorial authority: When who said it matters as much as what they said. A quote from the CEO about company strategy carries different weight than your summary of their strategy.

Evidence and data: Specific numbers, statistics, or findings you might cite. “Revenue grew 47% YoY” is worth preserving exactly; “revenue grew significantly” loses the precision.

If a quote doesn’t hit at least one of these criteria, it’s probably a takeaway in disguise. Paraphrase it and move on.

📌 The Quote Test

Ask: “Would a paraphrase lose something important?” If yes, save the quote. If you can say it equally well in your own words, paraphrase. This simple test prevents over-quoting (cluttered notes) and under-quoting (lost gems).

Example: Quotes + Takeaways in Action

Say you read an article about remote work productivity. Here’s what the output might look like:

QUOTES WORTH SAVING:

“Productivity isn’t about hours logged — it’s about clarity achieved.”
Why it matters: Reframes the entire productivity debate.
How to use: Opening line for a presentation on async work.

“Teams that document decisions outperform teams that discuss decisions by 34%.”
Why it matters: Specific, citable statistic.
How to use: Evidence for documentation culture proposal.

TAKEAWAYS (no quote needed):

Remote work success depends more on communication norms than on tools.
Async communication reduces interruptions but requires intentional social connection.
Managers should measure outcomes, not activity.

Notice the separation: quotes carry language or data you’d lose by paraphrasing; takeaways carry ideas you can express yourself. Both matter. Both deserve their own treatment.

For building a more sophisticated note-taking system with these extractions, see Highlight Smarter (C026) or explore the full Summarize Articles pillar.

Frequently Asked Questions

Save exact quotes when the specific wording matters — memorable phrasing, technical precision, or when you’ll cite the source. Paraphrase when you need the idea but not the exact words. The Quote Extractor prompt helps you identify which is which, so you’re not over-quoting (cluttered notes) or under-quoting (losing powerful language).

Three to five quotes is usually optimal for a standard article (1,000-3,000 words). More than five suggests you’re highlighting too much — if everything is important, nothing is. Fewer than three might mean you’re missing genuinely quotable insights. The prompt asks for this range specifically to force prioritization.

A quote is worth saving when paraphrasing would lose something important: memorable phrasing that sticks, precise technical language, a surprising insight that needs the author’s exact framing, or evidence you might cite later. If you can say it equally well in your own words, paraphrase instead.

Store quotes with context: the source, why it matters, and how you might use it. The prompt provides this context automatically. For note-taking systems like Zettelkasten (C023), quotes become atomic notes with links. For research, they become evidence with citations. For writing, they become supporting material you can weave into your arguments.
365 Articles • RC Questions

Build Your Quote Collection

Practice extracting the gems from diverse, high-quality content. Develop the instinct for what’s worth saving versus what to paraphrase.

Join the Course — ₹2,499 →
Quotable Content Analysis Practice Memory Systems

One More Summary Guide Awaits

You’ve mastered quote extraction. Next, learn to summarize for different purposes: learning, deciding, or sharing.

Summarize Articles Pillar

How to Simplify Complex Text with AI: 3-Step Workflow

C009 🧠 Understand Difficult Text 2 Prompts

A 3-step workflow to decode any complex text: identify thesis, paraphrase systematically, and generate clarifying examples.

6 min read 2 Prompts 3-Step Workflow
PR006 The Confusion Unpacker
Step 1: When a passage confuses you
I’m reading a passage and this part confuses me: “[paste confusing section]”
Don’t simplify or summarize yet. Instead:
– Identify what makes this difficult (complex syntax, assumed knowledge, abstract concepts, unfamiliar references?)
– Break down the logic step by step
– Explain any implicit assumptions the author is making
– Only then restate the core idea in plain language
PR009 The Dense Passage Decoder
Step 2: For information-dense text
This passage is information-dense: “[paste passage]”
Create a layered explanation:
– Layer 1: The single core point in one sentence
– Layer 2: The 3-4 key supporting elements
– Layer 3: The nuances, qualifications, and exceptions
– Layer 4: What’s deliberately left unsaid or simplified by the author

Step 1: Identify What Makes the Text Difficult

Most people approach complex text by immediately asking AI to “simplify this” or “explain in simple terms.” That’s backwards. You skip the most valuable step: understanding why the text is hard in the first place.

The Confusion Unpacker prompt (PR006) starts by diagnosing the difficulty. Is it complex syntax with nested clauses? Assumed background knowledge you’re missing? Abstract concepts that need grounding? Unfamiliar references or jargon?

This matters because different sources of difficulty require different solutions. Complex syntax needs untangling. Missing background knowledge needs filling. Abstract concepts need examples. Jargon needs the Jargon Translator. If you skip diagnosis, you get generic simplification that often loses important nuance.

When you simplify complex text with AI using a structured workflow, you preserve what matters. The thesis stays intact. The logic remains visible. You understand not just what the text says but why it was hard to understand.

Step 2: Paraphrase Systematically with Layers

After diagnosing the difficulty, use the Dense Passage Decoder prompt (PR009) to create a layered explanation. This isn’t just simplification — it’s systematic unpacking from simple to nuanced.

Layer 1 gives you the single core point in one sentence. This is the thesis, the main claim, the key takeaway. If you can’t state this clearly, you haven’t understood the passage.

Layer 2 adds the 3-4 supporting elements: the main reasons, evidence, or sub-points that hold up the core claim. These are the structural pillars.

Layer 3 captures nuances, qualifications, and exceptions. This is where complexity lives — the “but,” “however,” and “except when” that make ideas true rather than oversimplified.

Layer 4 reveals what the author deliberately left unsaid or simplified. This is expert-level reading: recognizing the gaps and assumptions baked into any explanation.

💡 Pro Tip

Don’t skip Layer 4. Understanding what’s left out is often more valuable than what’s included. Authors make choices about what to simplify — knowing those choices makes you a critical reader.

Step 3: Generate Examples and Analogies

Abstract ideas become concrete through examples. After getting a layered explanation, follow up with: “Give me a concrete example of this concept” or “Create an analogy using [something I’m familiar with].”

This step bridges the gap between understanding words and understanding ideas. You might correctly paraphrase a passage about “opportunity cost” but not truly grasp it until you see an example about choosing between studying and socializing.

For more on this technique, see the ELI5 to Expert prompt which generates explanations at multiple levels, or the Analogy Builder for domain-specific comparisons.

📌 The 3-Step Workflow

1. Diagnose (PR006) — What makes this hard? Don’t simplify yet.
2. Layer (PR009) — Core point → Supporting elements → Nuances → What’s left out.
3. Ground — Request concrete examples or analogies.
This sequence preserves nuance while building genuine understanding.

When to Use Each Prompt

Use PR006 (Confusion Unpacker) when you’re genuinely confused — when you’ve read a passage twice and still can’t figure out what it means. The prompt helps you figure out what you’re confused about, which is half the battle.

Use PR009 (Dense Passage Decoder) when text is information-dense but not necessarily confusing. Academic papers, technical documentation, policy briefs — content that packs a lot of meaning into few words. The layered structure extracts the hierarchy of ideas.

For straightforward jargon translation without the full workflow, the Jargon Translator (C010) handles technical terminology directly. The full Understand Difficult Text pillar has prompts for every level of complexity.

Common Mistakes to Avoid

Mistake 1: Asking for simplification without diagnosis. “Simplify this” gives you generic output. “What makes this difficult, then simplify” gives you targeted help.

Mistake 2: Stopping at Layer 1. The core point is necessary but not sufficient. Nuances (Layer 3) and gaps (Layer 4) are where real understanding develops.

Mistake 3: Not testing your understanding. After getting an explanation, try restating it in your own words without looking at the AI output. If you can’t, you’ve only read the simplification — you haven’t learned from it.

Return to the AI for Reading hub for the complete prompt ecosystem, or explore the Understand Difficult Text pillar for more comprehension tools.

Frequently Asked Questions

Generic simplification often loses important nuance. A structured workflow preserves what matters: the thesis stays intact, the logic remains visible, and you understand not just what the text says but why it’s hard to understand in the first place.

PR006 (Confusion Unpacker) diagnoses WHY something is difficult before simplifying. PR009 (Dense Passage Decoder) creates layered explanations from simple to complex. Use PR006 when you’re confused; use PR009 when text is information-dense but not necessarily confusing.

Try explaining it to yourself without looking at the AI output. If you can restate the core idea, the supporting points, and one nuance or exception, you understand it. If you can’t, you’ve only read the simplification — not learned from it.

Always add it for abstract concepts, theoretical frameworks, or anything you need to remember and apply later. Skip it for straightforward technical content where you just need to decode jargon — the Jargon Translator prompt handles that better.

5 More Comprehension Guides Await

You’ve mastered simplification. Next, explore jargon translation, analogies, and sentence-level analysis.

Understand Difficult Text Pillar

Highlight Smarter: What to Highlight and Why

C026 🧠 Notes & Memory 2 Prompts

Stop highlighting everything: AI prompts that identify what’s actually worth marking and why.

5 min read Selective Marking Guide 6 of 6
PR019 The Key Term Identifier
Find what’s worth highlighting
Here’s a passage I’m reading: “[paste passage]”
Identify vocabulary I should pay attention to:
– Which words are central to understanding this passage?
– Which words might appear in similar texts on this topic?
– Which words have specialized meanings in this context vs. everyday use?
– Rank them by importance for comprehension.
PR035 The Highlight Validator
Check if your highlights capture the core
Here’s a passage: “[paste passage]”
Here are the parts I highlighted: “[paste your highlights]”
Evaluate my highlighting:
– Did I capture the core argument or main claim?
– Did I miss any crucial supporting evidence?
– Did I highlight too much context/setup?
– What should I add or remove from my highlights?

What to Highlight: The Four Categories

Most people highlight too much. They mark anything that seems interesting, ending up with pages of yellow that offer no more guidance than unmarked text. Research consistently shows that heavy highlighters remember no better than non-highlighters. The problem isn’t highlighting itself — it’s indiscriminate highlighting.

What to highlight while reading comes down to four categories:

Core arguments: The main claims the author is making. Not the setup, not the examples — the actual thesis and key supporting points. If you had to explain what this text argues in two sentences, what would you quote?

Surprising facts: Information that challenged your existing beliefs or taught you something you didn’t know. If nothing surprises you, either you already knew this material or you weren’t reading actively.

Key term definitions: Specialized vocabulary or familiar words used in specialized ways. These are the words you’d need to understand to discuss this topic with an expert.

Passages you’ll revisit: Quotes you might use in your own writing, reference points for future projects, or ideas you want to develop further. The test: will you actually come back to this?

Everything else — context, transitions, examples that illustrate without adding new information — can stay unmarked.

The Prompts: Before and After

PR019: Identify Before Marking

The Key Term Identifier helps you spot what’s actually central before you start marking. Ask AI to identify which words are central to understanding, which might appear in similar texts, and which have specialized meanings. This creates a mental filter: when you see these terms, they’re candidates for highlighting. When you don’t, they’re probably not.

The ranking by importance is particularly useful. Not all key terms are equally important — some are foundational concepts, others are supporting vocabulary. Highlight the foundational ones first.

PR035: Validate After Marking

The Highlight Validator checks your work. Share the original passage and your highlights, and ask AI to evaluate: Did you capture the core? Did you miss crucial evidence? Did you highlight too much setup?

This feedback loop trains your judgment. After a few rounds, you’ll naturally start making better selections without needing the validator.

💡 Pro Tip

Read a section completely before highlighting anything. Context changes what seems important. What looks crucial in paragraph one might be just setup for the real insight in paragraph five. First-pass highlighting often marks the wrong things.

Examples: Good vs Bad Highlighting

Bad highlighting: Marking entire paragraphs. Highlighting introductory phrases like “Research shows that…” or “It’s important to note that…” Marking anything that sounds impressive without checking if it’s actually the core claim.

Good highlighting: Marking just the specific claim within a longer paragraph. Highlighting the number or finding, not the framing around it. Marking the term being defined, not the full definition (you can reconstruct context from the term).

The test: if you looked at only your highlights, could you reconstruct the main argument? If yes, you highlighted well. If you’d need the surrounding text to make sense of them, you highlighted too narrowly. If the highlights alone feel overwhelming, you highlighted too much.

The Key Takeaways vs Key Quotes prompt (C019) can help distinguish between passages worth paraphrasing (takeaways) and passages worth preserving verbatim (quotes for highlighting).

📌 From Highlights to Notes

Highlights are raw material, not finished product. Use the Zettelkasten prompt (C023) to convert highlights into atomic notes. Each highlight becomes a standalone idea with a title, core concept in your words, and connections to other ideas.

Building the Habit

Selective highlighting is a skill that improves with practice. Start by deliberately under-highlighting — you can always add more on a second pass, but removing mental clutter is harder. Use the prompts to calibrate your judgment, and over time you’ll internalize what’s worth marking.

You’ve now completed the Notes & Memory pillar. For the complete toolkit, return to the pillar page or explore the full AI for Reading hub.

Frequently Asked Questions

When you highlight everything, nothing stands out. Research shows heavy highlighters remember no better than non-highlighters. The act of selecting forces processing — if you skip the selection, you skip the thinking. Highlight less, remember more.

Four categories: core arguments (the main claims), surprising facts (things that challenged your assumptions), definitions of key terms (specialized vocabulary you’ll need), and passages you’ll revisit (for notes, quotes, or reference). Everything else can be skipped.

After, or at least after you’ve finished a section. Reading first gives you context that changes what seems important. What looks crucial in paragraph one might be just setup for the real insight in paragraph five.

Use the Zettelkasten prompt (C023) to convert highlights into atomic notes. Each highlight becomes a standalone idea with a title, core concept in your words, and connections to other ideas. Highlights are raw material — notes are the finished product.

Notes & Memory Pillar Complete

6 guides mastered. Explore AI Reading Coach routines, critical reading, or exam prep next.

Notes & Memory Pillar

Fact-Check Mode: What to Verify and How

C042 ⚖️ Critical Reading 2 Prompts

Generate verification checklists with AI, then verify claims yourself using authoritative sources. AI organizes; you verify.

7 min read Verification Workflow Guide 2 of 5
PR022 The Source Interrogator
Use to evaluate sources and generate research questions
I’m reading this source: “[paste article or key excerpts]”
Help me evaluate it critically:
– What perspective or bias might the author/publication have?
– What’s the author’s expertise or authority on this topic?
– What audience is this written for, and how might that shape content?
– What questions should I research independently after reading this?
PR023 The Evidence Evaluator
Use to assess evidence quality for specific claims
This passage uses evidence to support a claim: “[paste passage]”
Evaluate the evidence:
– What type of evidence is used (data, anecdote, expert opinion, analogy, example)?
– How strong is the connection between evidence and conclusion?
– What would stronger evidence look like?
– Is this evidence representative or cherry-picked?

The Limits of AI for Fact-Checking

Let’s start with what you need to know: AI cannot reliably fact-check. This isn’t a limitation we’ll overcome soon — it’s structural. AI models generate responses based on patterns in training data, not real-time verification against authoritative sources.

This creates a dangerous situation: AI can confidently state incorrect information. It can cite sources that don’t exist. It can present outdated data as current. When you ask AI “is this true?”, the answer you get might be wrong — and you’d have no way of knowing without checking yourself.

So why use AI for fact-checking at all? Because AI excels at a different task: generating verification checklists. AI can identify which claims in an article are verifiable, prioritize them by importance, and suggest where to find authoritative sources. AI does the organizing; you do the checking.

⚠️ Critical Warning

Never ask AI “is this claim true?” and trust the answer. AI will confidently respond — but that confidence doesn’t correlate with accuracy. Use AI to identify what to verify and where to look, then verify yourself.

Building a Verification Checklist

The first prompt (PR022 — Source Interrogator) generates research questions. Its final output — “what questions should I research independently” — is your verification checklist. But not all claims deserve equal attention.

Prioritize central claims. If the article’s main argument depends on a specific statistic being accurate, that statistic goes to the top of your list. Peripheral details matter less.

Flag surprising claims. Information that seems too convenient, too dramatic, or too perfectly aligned with the author’s thesis deserves extra scrutiny.

Check attributed statements. When an article quotes someone or cites a study, verify both the existence of the source and the accuracy of the representation.

Verify numbers first. Statistics, percentages, dates, and quantities are the easiest claims to verify — and the most commonly wrong.

Where to Look: Matching Claims to Sources

Different claim types require different verification sources:

Government statistics: Go to official databases. For economic data, the source is usually a government statistics bureau. For health data, it’s the health ministry or WHO.

Scientific claims: Peer-reviewed papers are the standard. Google Scholar, PubMed, and university library databases help. Check if the cited study actually supports the claim.

Quotes and statements: Look for original transcripts, video recordings, or official press releases. Be wary of quotes that appear only in secondary sources.

Company and financial data: Public companies file mandatory disclosures (SEC filings in the US). Press releases come from company newsrooms.

💡 Pro Tip

After generating your verification checklist with AI, add one more prompt: “For each claim I should verify, suggest the most authoritative primary source where I could find the original data or statement.”

The Two Prompts in Action

PR022 (Source Interrogator) operates at the article level — it evaluates the source, identifies potential biases, and generates research questions. Run this first.

PR023 (Evidence Evaluator) operates at the claim level — it examines specific evidence supporting specific claims. Use this when you’ve identified a central claim and want to understand how well it’s supported.

A typical workflow: Run PR022 on the entire article. Extract the verification checklist. For the highest-priority claims, run PR023 to assess evidence quality. Then verify externally using suggested sources.
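If you'd rather script this workflow than paste prompts by hand, here's a minimal Python sketch. The `ask_llm` function is a placeholder stub for whatever chat-model client you use, and the prompt wording paraphrases PR022/PR023 as described in this guide rather than quoting an official template:

```python
# PR022 -> PR023 workflow sketch. `ask_llm` is a stub; swap in your real
# model client. Prompt wording paraphrases this guide, not an official API.

def build_source_interrogator(article_text: str) -> str:
    """PR022-style prompt: article-level source evaluation."""
    return (
        "Evaluate this article's source, flag potential biases, and "
        "generate a prioritized verification checklist:\n\n" + article_text
    )

def build_evidence_evaluator(claim: str, passage: str) -> str:
    """PR023-style prompt: claim-level evidence assessment."""
    return (
        f"This passage supports the claim '{claim}':\n\n{passage}\n\n"
        "Evaluate: the evidence type, the strength of the evidence-to-"
        "conclusion link, what stronger evidence would look like, and "
        "whether the evidence is representative or cherry-picked."
    )

def ask_llm(prompt: str) -> str:
    # Placeholder: returns a canned string instead of calling a model.
    return f"[model response to {len(prompt)} chars of prompt]"

# Step 1: article-level checklist. Step 2: claim-level evidence check.
checklist = ask_llm(build_source_interrogator("[paste article]"))
evidence = ask_llm(build_evidence_evaluator("[central claim]", "[passage]"))
```

The external verification step stays manual: the script only organizes which prompts run in which order.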

This connects to other critical reading skills in this pillar. The Argument Map Prompt (C043) separates claims from evidence structure. The What’s Missing Prompt (C044) identifies gaps in coverage.

Frequently Asked Questions

No. AI can confidently state incorrect information. Use AI to identify what to verify and where to look, but always verify claims yourself using authoritative primary sources. AI organizes your verification workflow; it doesn’t replace it.
Prioritize: (1) central claims the argument depends on, (2) surprising or dramatic claims, (3) statistics and numbers, and (4) quoted statements and cited studies. Ask: if this claim were false, would it change the article’s conclusion?
Primary sources over secondary. Institutional accountability (government agencies, peer-reviewed journals). Methodological transparency. Domain expertise. A company’s quarterly report is more authoritative than a news article about that report.
PR022 (Source Interrogator) first, on the whole article β€” generates your verification checklist. Then PR023 (Evidence Evaluator) on specific high-priority claims β€” assesses how well each claim is supported. Then verify externally.
πŸ“š The Ultimate Reading Course

Train Your Verification Instinct

365 articles with expert analysis β€” learn to spot claims that need checking and evaluate evidence quality.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

3 More Critical Reading Guides Await

You’ve learned verification workflows. Next, map argument structures, find what’s missing, and hunt for hidden assumptions.

All Critical Reading Guides

Executive Summary Prompt for Busy Readers

C017 πŸ“ Summarize Articles


Get decision-ready summaries: context, key findings, implications, and recommended action β€” all in under 300 words.

5 min read 1 Prompt Guide 3 of 6
PR030 The Layered Summary (Executive Mode)
When you need decision-ready output
Here’s a text I need to act on: “[paste text]” I’m a [your role] deciding [your decision context]. Create an executive summary (under 300 words) with these sections: – CONTEXT: Why this matters right now (1-2 sentences) – KEY FINDINGS: The 3-5 most important facts or insights – IMPLICATIONS: What this means for my situation – RECOMMENDED ACTION: What I should do next
πŸ’Ό
Read Faster. Decide Better. 365 articles across business, science, and policy β€” perfect material for executive summary practice.
Explore Course β†’

Why Executive Summaries Exist

Regular summaries tell you what a text says. Executive summaries tell you what to do about it. The difference is intent: one is for understanding, the other is for action.

When you’re reading to make decisions β€” whether to approve a proposal, change strategy, invest resources, or advise a team β€” you don’t need comprehensive understanding. You need the minimum information required to act wisely. Executive summaries strip everything else.

The base summary prompt (C015) works for general comprehension. This variant restructures output for decision-making contexts.

The Executive Summary Template

A good executive summary prompt produces four distinct sections, each serving a specific purpose in moving from information to action.

Context (1-2 sentences): Why does this matter right now? This grounds the reader in urgency or relevance. Without context, even important findings feel abstract. Example: “The FDA announced new approval guidelines that take effect Q2, affecting our three pending submissions.”

Key Findings (3-5 points): The most important facts, data points, or insights from the reading. Not everything important β€” just what’s most relevant to the decision at hand. Strip adjectives, keep numbers, prioritize actionable information.

Implications: What this means for your specific situation. This is where you add your role context to the prompt β€” “I’m a product manager evaluating market entry” produces different implications than “I’m an investor assessing risk.” Generic implications are useless implications.

Recommended Action: What you should do next. One concrete step, not a vague suggestion. “Schedule a meeting with legal to review compliance requirements” not “Consider the legal implications.”
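The four-section template is easy to parameterize. Here's a minimal Python sketch that assembles the PR030 prompt from your role and decision context; the function name is illustrative, but the section wording follows the prompt above:

```python
# Assemble the PR030 executive-summary prompt from role + decision context.
# One convenient way to fill in the template; not an official tool.

def executive_summary_prompt(text: str, role: str, decision: str) -> str:
    return (
        f'Here\'s a text I need to act on: "{text}"\n'
        f"I'm a {role} deciding {decision}. "
        "Create an executive summary (under 300 words) with these sections:\n"
        "- CONTEXT: Why this matters right now (1-2 sentences)\n"
        "- KEY FINDINGS: The 3-5 most important facts or insights\n"
        "- IMPLICATIONS: What this means for my situation\n"
        "- RECOMMENDED ACTION: What I should do next"
    )

prompt = executive_summary_prompt(
    text="[paste text]",
    role="startup founder",
    decision="whether to revise Q3 forecasts",
)
```

Changing only `role` and `decision` is what makes the implications section specific instead of generic.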

πŸ’‘ Pro Tip

The 300-word limit is intentional. If your executive summary exceeds 300 words, you haven’t identified what actually matters. Constraints force prioritization β€” embrace them.

Output Sections in Detail

Let’s examine each section more closely, because the structure is what makes executive summaries actionable.

Context answers “why now?” It creates urgency or establishes relevance. Bad context: “This article discusses market trends.” Good context: “Consumer behavior shifted post-pandemic, and our Q3 strategy assumes the old patterns still hold.”

Key Findings are facts, not interpretations. They should be verifiable from the source text. Avoid: “The author argues convincingly…” Include: “Revenue grew 23% YoY, but customer acquisition cost increased 40%.” Numbers, dates, names, specific claims.

Implications connect findings to your situation. This requires you to tell the AI who you are and what decision you’re facing. The same article about interest rate changes has different implications for a borrower vs. a lender vs. an investor. Be specific about your context.

Recommended Action is singular and concrete. Not “consider various options” but “proceed with Option B and schedule review for March 15.” If you can’t recommend one action, you haven’t processed the information fully β€” or the reading genuinely doesn’t support a clear direction (in which case, say that).

For workplace applications, the Reading for Work pillar has additional prompts for memos, briefs, and stakeholder updates.

πŸ“Œ The Four-Section Rule

Context β†’ Findings β†’ Implications β†’ Action. This sequence moves from “what happened” to “what it means” to “what to do.” Skipping sections or reordering them breaks the logic chain. Train yourself (and your AI) to follow this structure consistently.

Example: From Article to Executive Summary

Say you’re a startup founder reading an article about enterprise sales cycles lengthening in 2024. Here’s what the output might look like:

CONTEXT: Enterprise sales cycles extended 27% in 2024 according to Gartner data. Our current runway assumes 6-month cycles; if this trend holds, we need to revisit forecasts.

KEY FINDINGS: Average B2B deal now takes 8.2 months (up from 6.5). Budget freezes affected 64% of enterprises. Security reviews added 3-4 weeks to procurement. Companies with existing vendor relationships saw minimal impact.

IMPLICATIONS: Our Q3 revenue projections assumed closing 4 enterprise deals. At current pace, we may close 2-3. This creates a potential $400K shortfall. Our existing customer expansion strategy becomes more valuable than new logo acquisition.

RECOMMENDED ACTION: Shift Q3 focus from new enterprise acquisition to expanding existing accounts. Present revised forecast at next board meeting with scenario planning for 9-month average cycles.

Notice how each section builds on the previous. The context frames the findings; the findings inform the implications; the implications drive the action. This is what makes executive summaries useful β€” they’re designed for momentum.

For longer documents where you need more detailed extraction before summarizing, use the Article to Action Memo prompt (C047) or explore the full AI for Reading hub.

Frequently Asked Questions

Use an executive summary when you need to make a decision or take action based on the reading. Regular summaries capture what the text says; executive summaries translate that into what it means for you. If you’re reading for general knowledge, use the Layered Summary (C015). If you’re reading to decide or act, use the executive format.
Context (why this matters now), Key Findings (the 3-5 most important facts), Implications (what this means for your situation), and Recommended Action (what you should do next). This structure ensures you move from understanding to action, not just comprehension.
Under 300 words for most articles, under 500 for longer reports. The constraint is the point β€” if you can’t fit it in 300 words, you haven’t identified what actually matters. Busy readers need density, not completeness. Every word should earn its place.
Absolutely β€” this is where the executive summary format shines. Add your role and decision context to the prompt: ‘I’m a product manager deciding whether to enter this market’ or ‘I’m an investor evaluating this sector.’ The AI will tailor implications and recommendations to your specific situation.
365 Articles β€’ RC Questions

Turn Reading into Decisions

Practice extracting what matters with diverse, challenging content. Build the skill that separates information consumers from decision makers.

Join the Course β€” β‚Ή2,499 β†’
Business Content Policy Analysis Research Reports

3 More Summary Guides Await

You’ve mastered executive summaries. Next, learn to verify accuracy, extract key quotes, and summarize for different purposes.

Summarize Articles Pillar

Evidence Check Prompt: Data vs Opinion vs Anecdote

C040 βš–οΈ Critical Reading Critical Reading

Evidence Check Prompt

Classify every claim: is it data, expert opinion, anecdote, or unsupported assertion? Plus red flags to watch.

5 min read 1 Prompt Evidence Analysis
PR023 The Evidence Evaluator
For assessing claim quality
This passage uses evidence to support a claim: “[paste passage]” Evaluate the evidence: – What type of evidence is used (data, anecdote, expert opinion, analogy, example)? – How strong is the connection between evidence and conclusion? – What would stronger evidence look like? – Is this evidence representative or cherry-picked?
πŸ”
Build Evidence Literacy 365 articles with evidence analysis β€” train yourself to spot weak arguments automatically.
Explore Course β†’

Evidence Types: A Quick Taxonomy

Not all evidence is created equal. When a passage makes a claim, the evidence checklist asks: what type of evidence supports it? Here’s the quick taxonomy:

Data β€” Numbers, statistics, study results, measured outcomes. The strongest type when the methodology is sound, but watch for p-hacking, small samples, and results that haven’t been replicated. “Studies show 70% of users prefer X” is data.

Anecdote β€” Individual stories, personal experiences, specific cases. Compelling and memorable, but not necessarily representative. “My friend tried X and loved it” is anecdote. Good for illustration, weak for proving patterns.

Expert opinion β€” What authorities in the field say. Varies enormously based on whether the expert is actually in the relevant field, whether there’s expert consensus, and whether they’re citing evidence or just asserting. “Dr. X believes…” is weaker than “The scientific consensus is…”

Analogy β€” Comparison to something similar. “Just like X, this situation…” Useful for understanding, weak for proving. Analogies illuminate but don’t establish β€” the situations might differ in crucial ways.

Example β€” Specific instances cited as representative. Stronger than anecdotes when carefully chosen, weaker when cherry-picked to support a predetermined conclusion. Ask: is this example typical or exceptional?

πŸ’‘ The Evidence Hierarchy

For most claims, evidence strength roughly runs: meta-analyses > randomized controlled trials > large observational studies > small studies > expert consensus > individual expert opinion > examples > analogies > anecdotes > unsupported assertions. Context matters, but this hierarchy is a useful starting point.
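The same hierarchy can be written down as a lookup table. The numeric ranks below are an illustrative ordering only (higher means stronger), not measured weights:

```python
# The evidence hierarchy as a lookup table. Ranks encode relative order
# only; context can reshuffle them, as the guide notes.

EVIDENCE_STRENGTH = {
    "meta-analysis": 10,
    "randomized controlled trial": 9,
    "large observational study": 8,
    "small study": 7,
    "expert consensus": 6,
    "individual expert opinion": 5,
    "example": 4,
    "analogy": 3,
    "anecdote": 2,
    "unsupported assertion": 1,
}

def stronger(a: str, b: str) -> str:
    """Return whichever of two evidence types the hierarchy ranks higher."""
    return a if EVIDENCE_STRENGTH[a] >= EVIDENCE_STRENGTH[b] else b
```

Naming the type is the point: once a claim is tagged "anecdote" or "small study," your confidence calibrates itself.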

Using PR023: What the Prompt Reveals

The evidence checklist in PR023 asks four questions. First: what type of evidence is being used? Just naming it is clarifying. “This is an anecdote” or “This is a single study” immediately calibrates your confidence.

Second: how strong is the connection between evidence and conclusion? This is where many arguments fail. The evidence might be solid, but the leap to the conclusion unjustified. “Studies show people who eat breakfast are healthier” doesn’t prove “Eating breakfast makes you healthy” β€” there could be confounding factors.

Third: what would stronger evidence look like? This question surfaces what’s missing. If someone cites one study, stronger evidence would be a meta-analysis. If they cite expert opinion, stronger evidence would be data. Knowing what better evidence looks like helps you assess what you have.

Fourth: is this evidence representative or cherry-picked? This is crucial. Cherry-picking β€” selecting only evidence that supports your conclusion while ignoring contradictory evidence β€” is one of the most common failures in argumentation. For more on detecting bias in source selection, see the Bias Scanner Prompt.

Red Flags: Evidence Problems to Watch

Beyond classification, watch for these common evidence problems:

Correlation as causation: “X and Y occur together, therefore X causes Y.” Maybe. Or maybe Y causes X. Or Z causes both. Or it’s coincidence. Correlation suggests investigation, not conclusion.

Unrepresentative samples: Evidence drawn from unusual populations generalized to everyone. College students, online survey respondents, and people willing to participate in studies may not represent the general population.

Outdated evidence: Especially in fast-moving fields. Evidence from 2010 about social media, AI, or medical treatments may not apply to 2024 realities. Ask: has anything changed that would affect this evidence?

Single studies: One study proves little. Science advances through replication. Be especially skeptical of dramatic single-study findings that haven’t been reproduced.

Conflicts of interest: Evidence from sources with financial or ideological stakes in the conclusion. Doesn’t automatically invalidate it, but warrants extra scrutiny. For verification strategies, see Fact-Check Mode.

⚠️ The Unsupported Assertion

Watch for claims presented as fact with zero evidence β€” not even anecdotes. Phrases like “Everyone knows…” “It’s obvious that…” “Clearly…” often flag unsupported assertions. The author expects you to accept the claim without evidence. Maybe it’s true. But it’s not argued.

From Evidence Check to Decision

After running PR023, you’ll have a clearer picture of how well-supported each claim is. Use this to calibrate your confidence. Claims supported by strong, representative data deserve more weight than claims supported only by anecdotes and analogies.

But don’t reject everything that lacks perfect evidence. Most real-world decisions happen under uncertainty. The goal isn’t to accept only proven claims β€” it’s to know which parts of an argument you’re accepting on faith versus which are supported by evidence.

For the complete critical reading toolkit, explore the Critical Reading pillar. For the broader reading improvement framework, see AI for Reading.

Frequently Asked Questions

Data is systematic: collected across many cases, often with controls, representing patterns. Anecdotes are individual stories or examples β€” vivid but not necessarily representative. “Studies show 70% of users prefer X” is data. “My friend tried X and loved it” is anecdote. Both can be true, but they support conclusions differently. Data suggests patterns; anecdotes illustrate possibilities.
It depends. Expert opinion is stronger when: the expert has relevant credentials (not just general authority), the claim is within their specialty, there’s consensus among experts, and they’re citing evidence rather than just asserting. It’s weaker when: the expert is outside their field, there’s significant expert disagreement, or they have conflicts of interest. Expert opinion points toward truth but doesn’t guarantee it.
Watch for: single studies cited when meta-analyses exist, time periods chosen to support a narrative (why start the graph there?), examples that are unusually dramatic or favorable, absence of counterexamples that likely exist, and evidence that only comes from sources with a stake in the conclusion. Ask: “If I looked at ALL the evidence, would this pattern hold?”
An unsupported assertion is a claim presented as fact with no evidence offered β€” not even an anecdote or appeal to authority. Watch for confident statements that assume rather than argue: “Everyone knows…” “It’s obvious that…” “Clearly…” These phrases often flag claims the author expects you to accept without evidence. They might be true, but they’re not argued.
365 Articles β€’ RC Questions

Don’t Get Fooled. Get Evidence Literate.

Practice evaluating evidence across articles from business, science, policy, and culture. Build the instincts that protect you from weak arguments.

Join the Course β€” β‚Ή2,499 β†’
Evidence Types Red Flag Training Critical Thinking

Know What You’re Trusting

Next time you read a persuasive piece, run PR023. Classify the evidence. Note the red flags. Decide which claims are supported and which you’re accepting on faith. That’s reading with your eyes open.

Critical Reading Pillar

ELI5 to Expert: A Multi-Level Explanation Prompt

C004 πŸ“‹ AI Reading Prompts 1 Prompt


Get explanations at any level: from 5-year-old simple to expert nuanced, using one adaptive AI prompt template.

5 min read 1 Prompt Guide 4 of 8
PR009 The Dense Passage Decoder
Use for information-dense academic or technical writing
This passage is information-dense: “[paste passage]” Create a layered explanation: – Layer 1: The single core point in one sentence – Layer 2: The 3-4 key supporting elements – Layer 3: The nuances, qualifications, and exceptions – Layer 4: What’s deliberately left unsaid or simplified by the author
πŸŽ“
Practice Decoding Dense Text Daily The Ultimate Reading Course includes 365 articles across difficulty levels β€” perfect for building layered comprehension skills.
Explore Course β†’

How the Layered Approach Works

Dense text creates a specific problem: everything seems equally important. An academic paper might pack three arguments, two qualifications, a methodology note, and an implied critique into a single paragraph. Without a hierarchy, you’re overwhelmed.

The ELI5 prompt for complex topics solves this by requesting explanation in layers. Instead of one flat response, you get four levels of depth β€” and you can stop at any level.

Layer 1 is the “explain like I’m five” level. One sentence. The absolute core. If someone asked “what’s this about?” in an elevator, this is your answer.

Layer 2 adds structure. The 3-4 supporting elements that hold up the core point. Still simple, but now you understand why the core point is true or important.

Layer 3 brings nuance. The qualifications, exceptions, and edge cases. This is where “it’s complicated” becomes specific β€” you learn under what conditions the main claim might not hold.

Layer 4 goes meta. What did the author simplify? What’s implied but not stated? What would experts notice that beginners would miss? This layer is for when you need to critically evaluate the text, not just understand it.
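Once the model replies, the four layers can be pulled apart programmatically. A minimal sketch, assuming the reply labels each layer as "Layer N:" the way this guide's examples do:

```python
import re

def split_layers(response: str) -> dict:
    """Split a 'Layer 1: ... Layer 2: ...' reply into numbered parts."""
    parts = re.split(r"Layer (\d+):", response)
    # re.split with a capturing group yields ['', '1', ' text', '2', ' text', ...]
    return {int(n): text.strip() for n, text in zip(parts[1::2], parts[2::2])}

reply = ("Layer 1: Core point. Layer 2: Supporting elements. "
         "Layer 3: Nuances. Layer 4: What's unsaid.")
layers = split_layers(reply)  # dict with keys 1..4
```

Useful when you only want to keep Layer 1 for notes, or re-prompt on a single layer that confused you.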

The Template in Action

Let’s say you paste a dense paragraph about monetary policy. Here’s what the layers might look like:

Layer 1: “Central banks raise interest rates to slow inflation by making borrowing more expensive.”

Layer 2: “Supporting elements: (1) Higher rates reduce consumer spending, (2) businesses delay investments, (3) currency strengthens making imports cheaper, (4) expectations shift as people anticipate slower growth.”

Layer 3: “Nuances: This works with demand-driven inflation but not supply shocks. Effects lag by 12-18 months. Small economies can’t raise rates independently without capital flight. The relationship breaks down at very low or very high rate levels.”

Layer 4: “Unsaid: The author assumes inflation expectations are ‘anchored’ and doesn’t address what happens when they’re not. Also omits distributional effects β€” rate hikes hurt borrowers and help savers, which has political implications the passage doesn’t mention.”

Notice how each layer adds complexity without contradicting the previous one. You build understanding progressively instead of drowning in detail.

πŸ’‘ Pro Tip

If you only need a quick grasp, read Layer 1 and stop. If you’re writing about the topic, you need Layer 3. If you’re critiquing or fact-checking, Layer 4 is essential. Match the layer to your purpose.

Examples by Subject Area

For Scientific Papers

Layer 1 gives you the main finding. Layer 2 explains the evidence. Layer 3 reveals the limitations the authors acknowledge. Layer 4 exposes what they didn’t test or assumed without proof.

For Philosophy

Layer 1 states the thesis. Layer 2 outlines the argument structure. Layer 3 addresses counterarguments and qualifications. Layer 4 shows what the philosopher takes for granted β€” the hidden premises.

For Legal Documents

Layer 1 gives you the bottom line β€” what you can or can’t do. Layer 2 explains the conditions and requirements. Layer 3 covers exceptions and edge cases. Layer 4 reveals what’s ambiguous or likely to be contested.

For Technical Documentation

Layer 1 tells you what the thing does. Layer 2 explains how to use it. Layer 3 covers configuration options and limitations. Layer 4 reveals what the docs don’t tell you β€” common pitfalls, implicit requirements.

Common Mistakes to Avoid

Mistake 1: Skipping Layer 1. Even if you’re an expert, start with the core point. Dense text can bury the main idea in qualifications. Layer 1 ensures you haven’t missed the forest for the trees.

Mistake 2: Treating all layers as equally important. They’re not. Layer 1 is always essential. Layer 4 is only for deep analysis. Don’t spend time on nuances if you just need the gist.

Mistake 3: Using this for simple text. If the passage is already clear, layered explanation adds complexity without benefit. Use the no-fluff prompt instead for straightforward content.

Mistake 4: Not following up. If a layer confuses you, ask AI to expand just that layer. “Can you give me more examples for Layer 3?” is better than re-running the whole prompt.

πŸ“Œ When to Use This vs Other Prompts

Use the layered prompt when text is dense β€” lots of information packed tightly. Use the simplify complex text workflow when text is difficult β€” unclear language or assumed knowledge. Dense and difficult are different problems. Sometimes you face both β€” use both prompts.

The AI Reading Prompts Library has tools for every comprehension challenge. The article understanding prompts give you a full toolkit. But for dense academic or technical writing, start here β€” with layers.

Frequently Asked Questions

ELI5 stands for ‘Explain Like I’m 5’ β€” a request for the simplest possible explanation of a complex topic. The layered prompt approach starts at ELI5 level (Layer 1) and builds up to expert nuance (Layer 4), letting you stop wherever you need.
Use the Dense Passage Decoder prompt (PR009) which creates four layers: core point in one sentence, key supporting elements, nuances and exceptions, and what’s deliberately left unsaid. Each layer adds complexity β€” stop at whatever level serves your purpose.
Use layered explanations for information-dense academic or technical writing where you need to understand the core idea before tackling complexity. Use regular summaries for news articles or straightforward content where depth isn’t the challenge.
Yes. The layered approach works for scientific papers, legal documents, philosophy, economics, technical documentation β€” any text where understanding the hierarchy of ideas matters more than just getting the gist.
πŸ“š The Ultimate Reading Course

Simple to Complex. Layer by Layer.

Build comprehension skills progressively with 365 articles across difficulty levels β€” from accessible to expert-level dense text.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

4 More Prompt Guides Await

You’ve learned layered explanations. Next, explore Socratic questioning, vocabulary building, and structured reading methods.

All AI Reading Prompts

Competitive Intel Prompt: Positioning, Pricing & Weak Spots

C051 πŸ’Ό Reading for Work 1 Prompt

Competitive Intel Prompt: Extract Positioning, Pricing & Weak Spots

Turn competitor content into strategic intelligence β€” extract positioning claims, pricing signals, strategic assumptions, and vulnerabilities from any source.

6 min read 5 Dimensions Guide 5 of 6
PR043 Competitive Intel Extractor
Use for competitor analysis from any source
I’m reading competitor content (press release, product page, earnings call, or industry report): “[paste excerpt]” Help me extract competitive intelligence: – What’s the key positioning claim or value proposition? – What pricing signals are present (premium, budget, enterprise tiers, discounts)? – What assumptions underlie their strategy? – What potential weak spots or vulnerabilities can I identify? – What questions should I research further before acting on this?
πŸ“Š
Read Business Content Like a Strategist The Ultimate Reading Course includes 365 articles with analysis questions that sharpen inference, evidence evaluation, and strategic thinking.
Explore Course β†’

What to Look For in Competitive Analysis

Most people read competitor content passively β€” skimming for interesting facts, noting features, maybe flagging a price point. But competitive analysis from article content requires active extraction. You’re not reading for entertainment. You’re reading to answer specific strategic questions.

The challenge is that competitors rarely state their strategy directly. They don’t announce “we’re positioning as the premium option” or “our weakness is enterprise scalability.” Instead, these insights hide in word choice, emphasis patterns, and what’s conspicuously absent.

Effective competitive intelligence extraction focuses on four dimensions: positioning (how they want to be perceived), pricing signals (what their monetization strategy reveals), strategic assumptions (what must be true for their approach to work), and vulnerabilities (where their armor has gaps).

The Prompt: How to Use It

The Competitive Intel Extractor (PR043) works with any competitor content β€” press releases, product pages, earnings call transcripts, industry reports, customer reviews, or news coverage. Each source type reveals different intelligence:

Press releases reveal positioning and strategic priorities. Look for how they describe themselves versus competitors.

Product pages expose feature emphasis and target customer profiles. What’s highlighted? What’s buried?

Earnings calls surface pricing pressure, market concerns, and strategic pivots that don’t appear in marketing materials.

Job postings reveal where they’re investing. Hiring for AI engineers? Enterprise sales? International expansion? Strategy follows hiring.

Customer reviews expose real-world weak spots that marketing hides.

πŸ’‘ Pro Tip

After running the prompt, ask this follow-up: “Based on these findings, what’s one strategic move our company could make that exploits their weak spots while leveraging our strengths?” This forces the analysis into actionable recommendations.

Output Format: What You’ll Get

The prompt generates structured output across five dimensions:

1. Key Positioning Claim: AI identifies the competitor’s core value proposition β€” how they want the market to perceive them.

2. Pricing Signals: Even without explicit pricing, competitor content reveals monetization strategy through language about “enterprise,” “starter,” “custom pricing,” or “free forever.”

3. Strategic Assumptions: Every strategy rests on assumptions about the market, customers, or technology. AI surfaces what must be true for the competitor’s approach to work.

4. Potential Weak Spots: Vulnerabilities emerge from overreliance, missing capabilities, or strategic blind spots.

5. Questions for Further Research: Good competitive analysis generates more questions than answers. The prompt identifies what to investigate next.
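If you track several competitors, it helps to capture the five dimensions in a structured record you can compare later. A minimal Python sketch; the class and the "Acme Corp" example are hypothetical, and the field names simply mirror the dimensions above:

```python
from dataclasses import dataclass, field

@dataclass
class CompetitiveIntel:
    """One record per source analyzed; fields mirror the five dimensions."""
    competitor: str
    source_type: str                      # press release, product page, ...
    positioning_claim: str = ""
    pricing_signals: list = field(default_factory=list)
    strategic_assumptions: list = field(default_factory=list)
    weak_spots: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)

# Hypothetical example, filled in from a single press release.
intel = CompetitiveIntel(
    competitor="Acme Corp",
    source_type="press release",
    positioning_claim="Premium enterprise option",
    pricing_signals=["'contact sales' only", "no public tiers"],
)
```

Running the prompt per source and filling one record each makes the later synthesis step mechanical.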

πŸ“Œ Multiple Sources

If you’re analyzing multiple sources on the same competitor, run them separately first. The Research Brief prompt can then synthesize findings into a unified competitive profile.

For the complete work-reading toolkit, explore the Reading for Work pillar or return to the AI for Reading hub.

Frequently Asked Questions

Each source reveals different intelligence. Press releases show positioning priorities. Earnings calls expose market concerns. Job postings reveal investment areas. Product pages show feature emphasis. Customer reviews expose real weaknesses. Use multiple source types for a complete picture.
Look for language patterns: “enterprise,” “custom pricing,” “contact sales” signal premium positioning. “Free forever,” “starter,” “pay as you go” signal accessibility. Absence of pricing often means sales-driven deals at premium rates. Mentions of discounting or bundles reveal competitive pressure.
After running the prompt, ask: “Based on these findings, what’s one strategic move that exploits their weak spots while leveraging our strengths?” This forces the analysis into recommendations. Also use the “questions for further research” to validate findings before major decisions.
Run the prompt separately on each competitor’s content first. Then use the Research Brief prompt (C052) to synthesize findings across competitors, identifying patterns in positioning, shared vulnerabilities, and market gaps no one is addressing.
πŸ“š The Ultimate Reading Course

Read Like a Strategist

365 articles with analysis questions that build inference, evidence evaluation, and strategic thinking β€” exactly what competitive analysis demands.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

Turn Reading into Competitive Advantage

Every competitor press release, every earnings call, every product page contains intelligence. The Competitive Intel Extractor helps you find it.

All Reading for Work Guides

Difficulty Calibrator: Know What Makes a Passage Hard

C072 πŸŽ“ RC Exam Prep 1 Prompt

Difficulty Calibrator: Why This Passage Is Hard

Passages are hard for specific, identifiable reasons. Know the four difficulty factors, adjust your strategy before your first read, and turn hard passages into manageable ones.

6 min read Calibration Guide 6 of 6
PR051 The Difficulty Calibrator
Use to assess passage difficulty before diving in
Here is a passage: “[paste passage]” Assess its difficulty level and tell me: – What makes this easy, medium, or hard? – What specific challenges does this passage present (dense vocabulary, complex argument, abstract topic)? – What type of test-taker would struggle with this, and why? – How should I adjust my approach based on difficulty?
πŸŽ“
365 Passages at Every Difficulty Level The Ultimate Reading Course gives you structured practice from easy to expert β€” with difficulty progressions built in.
Explore Course β†’

The Four Difficulty Factors That Actually Matter

Most test-takers treat passage difficulty as binary: a passage is either easy or hard. But RC difficulty analysis reveals something far more useful: passages are hard for specific, identifiable reasons. Once you know the reason, you can adjust your strategy before you even finish your first read.

Vocabulary density β€” When a passage uses technical terms, jargon, or rare words, comprehension slows at the sentence level. Easiest factor to work around: you do not need to know every word, just which words are critical to the argument.

Argument complexity β€” Multiple viewpoints, nested claims, or reasoning where each step depends on the last. You are tracking three positions simultaneously.

Abstraction level β€” Ideas, theories, and concepts without concrete examples. A passage about epistemological implications is harder than one about why mirrors do not reverse left and right, even when the vocabulary is identical.

Information density β€” How much content per sentence. Dense passages require slower reading and more working memory.

πŸ“Œ Key Insight

A passage can be hard for one factor and easy for the others. A science passage might have dense vocabulary but a simple argument. A philosophy passage might use simple words but have extremely abstract reasoning. Knowing which factor is driving the difficulty tells you exactly where to invest your reading time.

You have now completed the full RC Exam Prep toolkit. For the complete AI reading system, return to the AI for Reading hub.

Frequently Asked Questions

How do I know if my difficulty calibration is improving?
Rate passages with PR051 before attempting them, then track your accuracy. Over time, you will notice patterns: which difficulty factors trip you up, which you handle well.

Should I skip hard passages on test day?
Only if your exam allows you to choose passage order. The goal of difficulty calibration is to adjust your approach, not to avoid hard passages entirely.

Which difficulty factor should I work on first?
Track which factor costs you the most points. Most test-takers struggle with either abstraction or argument complexity β€” vocabulary is the easiest to work around.
πŸ“š The Ultimate Reading Course

You Have Completed the RC Exam Prep Pillar

All 6 guides mastered: passage strategy, question types, trap answers, inference questions, timed practice, and difficulty calibration.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

You Have Completed the RC Exam Prep Pillar

All 6 guides mastered: passage strategy, question types, trap answers, inference questions, timed practice, and difficulty calibration. Ready to explore other pillars?

All RC Exam Prep Guides

Decision Matrix from Reading: Options, Tradeoffs & Recommendation

C050 πŸ’Ό Reading for Work 1 Prompt

Decision Matrix from Reading: Options, Tradeoffs & Recommendation

Turn complex reading into clear decisions: extract options, map tradeoffs, and get AI-generated recommendations with explicit reasoning.

5 min read 1 Prompt Guide 4 of 6
PR043 Business/Report Reader
Use to extract decisions & tradeoffs
I’m reading a business report or case study: “[paste excerpt]” Help me extract value: – What’s the key takeaway for decision-making? – What data matters vs. what’s noise? – What assumptions underlie the analysis? – What questions should I ask before acting on this?
βš–οΈ
Make Better Decisions from Better Reading Practice extracting insights and weighing tradeoffs across 365 real articles.
Explore Course β†’

When to Use a Decision Matrix (Not Just a Summary)

A summary tells you what the reading said. A decision matrix tells you what to do about it. Use this decision matrix prompt when you face:

Multiple viable options that aren’t clearly ranked.

Tradeoffs that need to be made explicit.

Stakeholders who want to see the reasoning laid out.

Decisions where “it depends” is honest but not useful.

The Two-Step Prompt Workflow

Step 1: Run PR043 to extract the decision landscape. This surfaces what matters for decision-making, separates signal from noise, and identifies assumptions.

Step 2: Request the matrix format. After AI returns the extraction, add: “Now structure this as a decision matrix. Rows = options. Columns = key criteria (cost, time, risk, impact). Include a recommendation with reasoning.”

⚑ Pro Tip

Add decision constraints to Step 2: “We prioritize speed over cost. Budget is fixed at $50K. The decision-maker is risk-averse.” These constraints let AI weight the matrix appropriately.

Example: What the Matrix Output Looks Like

πŸ’‘ Example Output

Decision Question: Which CRM vendor should we select?

Options: Vendor A (enterprise), Vendor B (mid-market), Vendor C (startup-focused)

Criteria: Implementation time | Total cost (3-year) | Integration complexity | User adoption risk

Recommendation: Vendor B β€” best balance of cost and implementation speed given our 6-month timeline. Vendor A is stronger long-term but requires 9+ months. Vendor C is cheapest but integration risk is high.

What would change this: If timeline extended to 12 months β†’ Vendor A. If budget cut by 40% β†’ Vendor C with risk mitigation.
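For readers who want the weighting made explicit, the matrix logic above can be sketched as a simple weighted-score calculation. The vendor scores and weights below are hypothetical illustrations of the CRM example, not data from any real evaluation; the weights encode the stated “speed over cost” priority.

```python
# Minimal weighted decision matrix (hypothetical scores, 1-5, higher is better).
options = {
    "Vendor A": {"time": 2, "cost": 3, "integration": 4, "adoption": 4},
    "Vendor B": {"time": 4, "cost": 4, "integration": 3, "adoption": 4},
    "Vendor C": {"time": 5, "cost": 5, "integration": 1, "adoption": 2},
}
# Weights sum to 1.0; "time" dominates because speed is the stated priority.
weights = {"time": 0.4, "cost": 0.2, "integration": 0.2, "adoption": 0.2}

def score(criteria):
    # Weighted sum of criterion scores for one option.
    return sum(weights[c] * v for c, v in criteria.items())

ranked = sorted(options, key=lambda o: score(options[o]), reverse=True)
print(ranked[0])  # prints "Vendor B"
```

Changing the weights answers the “what would change this” question directly: shift weight from time to cost and the ranking can flip toward Vendor C, which mirrors how decision constraints reshape the matrix.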

Customizing Criteria for Your Context

Default criteria (cost, time, risk, impact) work for most business decisions, but customize them based on what your decision requires:

For technical decisions: Add scalability, maintenance burden, team capability match

For hiring decisions: Add culture fit, growth potential, compensation expectations

For product decisions: Add customer impact, competitive differentiation, development effort

⚠ Important Limitation

AI can structure options and identify tradeoffs, but it can’t weigh your organization’s priorities. The matrix is a decision aid, not a decision maker.

What If the Reading Doesn’t List Options?

Ask AI to infer options: “Based on this reading, what are the implicit options being considered?” Often articles present a recommended path without explicitly listing alternatives. AI can surface what the author chose not to emphasize.

Build Your Decision Toolkit

The Decision Matrix works alongside Action Memo for recommendations, Stakeholder Update for communication, and the broader Reading for Work pillar.

Frequently Asked Questions

When should I use a decision matrix instead of a summary?
Use a decision matrix when you have multiple options to compare, when tradeoffs aren’t obvious, or when you need to present options to stakeholders who want to see the reasoning laid out.

How do I choose the right criteria?
Ask: what matters most to the decision-makers? Common criteria include cost, time, risk, impact, and feasibility. Add context when running the prompt.

Should AI recommend an option or just lay out the choices?
AI can do both. Ask for “options only” for neutrality, or “recommendation with reasoning” when you want AI to take a stance. Treat it as a starting point.

What if the reading doesn’t list explicit options?
Ask AI to infer options. Often articles present a recommended path without listing alternatives. AI can surface what the author chose not to emphasize.
πŸ“š The Ultimate Reading Course

Turn Reading Into Better Decisions

Practice extracting options, weighing tradeoffs, and building decision-ready outputs across 365 real articles.

Start Learning β†’
1,098 Practice Questions 365 Articles 6 Courses

Decide With Confidence

You’ve got the decision matrix workflow. Add competitive intel and research briefs to complete your professional reading stack.

All Reading for Work Guides

Comprehension Check-In: Mid-Reading Self-Test Prompts

C034 🎯 Reading Coach 1 Prompt

Comprehension Check-In: Mid-Reading Self-Test Prompts

Catch confusion early with mid-reading checks β€” verify your understanding, get fix-up strategies, and stop wasted reading time.

5 min read Mid-Reading Tool Guide 4 of 4
PR035 The Comprehension Check-In
Mid-reading to verify understanding
I’m reading this text: “[paste passage]” My current understanding: [what you think it means] My confidence level: [high/medium/low] Help me check my comprehension: – Is my understanding accurate? – What signals should tell me if I’m on track or lost? – What should I re-read or look up? – What fix-up strategies would help here?
βœ…
Built-In Comprehension Checks 365 articles with questions after each section β€” never finish confused again.
Explore Course β†’

Why Check Mid-Reading?

Most readers wait until the end to discover they didn’t understand. By then, it’s too late β€” you’ve wasted time reading without comprehending, and now you have to start over.

A comprehension check prompt catches confusion early. Each misunderstood section compounds confusion in later sections. If you misunderstand paragraph 2, paragraphs 3-10 make even less sense. Catching the problem at paragraph 2 saves you from reading 8 paragraphs confused.

This is called self-test reading or metacognitive monitoring β€” actively checking whether you understand as you go. Skilled readers do this automatically. The prompt teaches you to do it deliberately until it becomes habit.

The Prompt: PR035

PR035 has three components that work together:

1. State your understanding: Force yourself to articulate what you think the passage means. Vague feelings of “I get it” often mask actual confusion. Writing it out exposes gaps.

2. Rate your confidence: High, medium, or low. This creates calibration data. Over time, you’ll learn when your confidence matches reality and when you’re overconfident.

3. Get AI verification: The prompt asks AI to check your accuracy, suggest signals for being on/off track, and recommend fix-up strategies if needed.

πŸ’‘ Pro Tip

Rate your confidence BEFORE getting AI feedback. If you rate after, you’ll unconsciously adjust based on how you think you’ll score. The learning comes from the mismatch between your prediction and reality.

Fix-Up Strategies

Fix-up strategies are what you do when comprehension breaks down. Having a toolkit ready prevents the default response of “just push through confused.”

Re-read: Sometimes one careful re-reading clears confusion. But if the same section confuses you twice, re-reading isn’t enough β€” you need a different strategy.

Look up vocabulary: If unfamiliar words caused confusion, define them and re-read with understanding.

Get background context: If you lack the prerequisite knowledge, read an overview of the topic first, then return to the original text.

Slow your pace: For dense material, reading faster than you can process creates the illusion of progress. Slow down deliberately.

Take notes: Writing forces processing. If passive reading isn’t working, switch to active note-taking.

Ask questions: Turn to Socratic Reading Prompts (C005) to generate questions that drive understanding.

πŸ“Œ Prevention vs Recovery

Fix-up strategies recover from confusion. But prevention is better. Use the Strategy Advisor (C033) before reading to set an appropriate pace and approach. Prevention reduces how often you need recovery.

When to Use Check-Ins

Don’t check after every sentence β€” that’s too disruptive. Instead, check at natural break points:

End of sections: If the text has headings, check after each section.

After complex points: When something feels important or difficult, pause and verify.

When you notice confusion: The moment you think “wait, what?” β€” that’s the signal to stop and check.

At page/time intervals: Every 5-10 minutes or every few pages, even if you feel fine.

For the full coaching toolkit, explore the Reading Coach pillar or return to the AI for Reading hub.

Frequently Asked Questions

How often should I check my comprehension?
At natural breaks: end of sections, after complex points, or when you notice confusion. For difficult material, every 5-10 minutes. For easier content, less frequently. The goal is catching problems early without disrupting flow.

What if I feel confident but AI says I’m wrong?
That’s overconfidence β€” the most dangerous calibration error. It means you don’t know what you don’t know. Track these instances. Over time, you’ll learn which content types trigger your overconfidence and can adjust by checking more frequently.

How do I pick the right fix-up strategy?
Diagnose the confusion first. Vocabulary problem β†’ look up words. Missing background β†’ get context. Reading too fast β†’ slow down. General fuzziness β†’ take notes. The AI feedback will often suggest which strategy matches your specific confusion.

Doesn’t checking slow down my reading?
Short-term, yes. But confused reading that requires re-reading is slower overall. Catching problems early reduces total time spent. And as checking becomes habitual, you’ll do it automatically and faster. Investment now, returns later.
πŸ“š The Ultimate Reading Course

Built-In Comprehension Checks

365 articles with questions after each section β€” train metacognitive monitoring automatically.

Start Learning β†’
1,098 Practice Questions 365 Articles with Analysis 6 Courses + Community

You’ve Completed the Reading Coach Pillar

All 4 coaching guides mastered. Ready to explore comprehension, summarization, or critical reading?

Reading Coach Pillar

Complete Bundle - Exceptional Value

Everything you need for reading mastery in one comprehensive package

Why This Bundle Is Worth It

πŸ“š

6 Complete Courses

100-120 hours of structured learning from theory to advanced practice. Worth β‚Ή5,000+ individually.

πŸ“„

365 Premium Articles

Each with 4-part analysis (PDF + RC + Podcast + Video). 1,460 content pieces total. Unmatched depth.

πŸ’¬

1 Year Community Access

1,000-1,500+ fresh articles, peer discussions, instructor support. Practice until exam day.

❓

2,400+ Practice Questions

Comprehensive question bank covering all RC types. More practice than any other course.

🎯

Multi-Format Learning

Video, audio, PDF, quizzes, discussions. Learn the way that works best for you.

πŸ† Complete Bundle
β‚Ή2,499

One-time payment. No subscription.

✨ Everything Included:

  • βœ“ 6 Complete Courses
  • βœ“ 365 Fully-Analyzed Articles
  • βœ“ 1 Year Community Access
  • βœ“ 1,000-1,500+ Fresh Articles
  • βœ“ 2,400+ Practice Questions
  • βœ“ FREE Diagnostic Test
  • βœ“ Multi-Format Learning
  • βœ“ Progress Tracking
  • βœ“ Expert Support
  • βœ“ Certificate of Completion
Enroll Now β†’
πŸ”’ 100% Money-Back Guarantee
Prashant Chadha

Connect with Prashant

Founder, WordPandit & The Learning Inc Network

With 18+ years of teaching experience and a passion for making learning accessible, I'm here to help you navigate competitive exams. Whether it's UPSC, SSC, Banking, or CAT prepβ€”let's connect and solve it together.

18+ Years Teaching
50,000+ Students Guided
8 Learning Platforms

Stuck on a Topic? Let's Solve It Together! πŸ’‘

Don't let doubts slow you down. Whether it's reading comprehension, vocabulary building, or exam strategyβ€”I'm here to help. Choose your preferred way to connect and let's tackle your challenges head-on.

🌟 Explore The Learning Inc. Network

8 specialized platforms. 1 mission: Your success in competitive exams.

Trusted by 50,000+ learners across India