Evidence Check Prompt
Classify every claim: is it data, expert opinion, anecdote, or unsupported assertion? Plus red flags to watch.
Evidence Types: A Quick Taxonomy
Not all evidence is created equal. When a passage makes a claim, the evidence checklist asks: what type of evidence supports it? Here’s the quick taxonomy:
Data: Numbers, statistics, study results, measured outcomes. The strongest type when the methodology is sound, but watch for p-hacking, small samples, and results that haven’t been replicated. “Studies show 70% of users prefer X” is data.
Anecdote: Individual stories, personal experiences, specific cases. Compelling and memorable, but not necessarily representative. “My friend tried X and loved it” is anecdote. Good for illustration, weak for proving patterns.
Expert opinion: What authorities in the field say. Varies enormously based on whether the expert is actually in the relevant field, whether there’s expert consensus, and whether they’re citing evidence or just asserting. “Dr. X believes…” is weaker than “The scientific consensus is…”
Analogy: Comparison to something similar. “Just like X, this situation…” Useful for understanding, weak for proving. Analogies illuminate but don’t establish; the situations might differ in crucial ways.
Example: Specific instances cited as representative. Stronger than anecdotes when carefully chosen, weaker when cherry-picked to support a predetermined conclusion. Ask: is this example typical or exceptional?
For most claims, evidence strength roughly runs: meta-analyses > randomized controlled trials > large observational studies > small studies > expert consensus > individual expert opinion > examples > analogies > anecdotes > unsupported assertions. Context matters, but this hierarchy is a useful starting point.
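The rough hierarchy above can be sketched as a simple lookup table. This is an illustrative sketch only: the numeric ranks are assumptions for comparison purposes, not part of PR023.

```python
# Rough evidence-strength ranks (higher = stronger).
# The ordering follows the hierarchy above; the numbers themselves
# are illustrative, not a calibrated scale.
EVIDENCE_STRENGTH = {
    "meta-analysis": 10,
    "randomized controlled trial": 9,
    "large observational study": 8,
    "small study": 7,
    "expert consensus": 6,
    "individual expert opinion": 5,
    "example": 4,
    "analogy": 3,
    "anecdote": 2,
    "unsupported assertion": 1,
}

def stronger(evidence_a: str, evidence_b: str) -> str:
    """Return whichever evidence type ranks higher in the rough hierarchy."""
    return max(evidence_a, evidence_b, key=EVIDENCE_STRENGTH.__getitem__)
```

For instance, `stronger("anecdote", "meta-analysis")` returns `"meta-analysis"` — the point being that naming the evidence type makes the comparison mechanical, even though context can still override the default ordering.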
Using PR023: What the Prompt Reveals
The evidence checklist in PR023 asks four questions. First: what type of evidence is being used? Just naming it is clarifying. “This is an anecdote” or “This is a single study” immediately calibrates your confidence.
Second: how strong is the connection between evidence and conclusion? This is where many arguments fail. The evidence might be solid, but the leap to the conclusion unjustified. “Studies show people who eat breakfast are healthier” doesn’t prove “Eating breakfast makes you healthy”; there could be confounding factors.
Third: what would stronger evidence look like? This question surfaces what’s missing. If someone cites one study, stronger evidence would be a meta-analysis. If they cite expert opinion, stronger evidence would be data. Knowing what better evidence looks like helps you assess what you have.
Fourth: is this evidence representative or cherry-picked? This is crucial. Cherry-picking (selecting only evidence that supports your conclusion while ignoring contradictory evidence) is one of the most common failures in argumentation. For more on detecting bias in source selection, see the Bias Scanner Prompt.
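One practical way to apply the four questions is as a reusable prompt template. The wording below is a sketch built from the questions above, not PR023’s actual text, and the function name is hypothetical:

```python
# A sketch of an evidence-check prompt assembled from the four questions
# above. The exact wording of PR023 is assumed, not quoted from the course.
EVIDENCE_CHECK_TEMPLATE = """For each claim in the passage below, answer:
1. What type of evidence supports it (data, expert opinion, anecdote,
   analogy, example, or unsupported assertion)?
2. How strong is the connection between the evidence and the conclusion?
3. What would stronger evidence look like?
4. Is the evidence representative, or does it appear cherry-picked?

Passage:
{passage}
"""

def build_evidence_check(passage: str) -> str:
    """Fill the template with the passage you want analyzed."""
    return EVIDENCE_CHECK_TEMPLATE.format(passage=passage)
```

You would paste the result into whatever assistant you use; the value is in forcing all four questions to be asked of every claim, not in the specific phrasing.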
Red Flags: Evidence Problems to Watch
Beyond classification, watch for these common evidence problems:
Correlation as causation: “X and Y occur together, therefore X causes Y.” Maybe. Or maybe Y causes X. Or Z causes both. Or it’s coincidence. Correlation suggests investigation, not conclusion.
Unrepresentative samples: Evidence drawn from unusual populations generalized to everyone. College students, online survey respondents, and people willing to participate in studies may not represent the general population.
Outdated evidence: Especially in fast-moving fields. Evidence from 2010 about social media, AI, or medical treatments may not apply to 2024 realities. Ask: has anything changed that would affect this evidence?
Single studies: One study proves little. Science advances through replication. Be especially skeptical of dramatic single-study findings that haven’t been reproduced.
Conflicts of interest: Evidence from sources with financial or ideological stakes in the conclusion. Doesn’t automatically invalidate it, but warrants extra scrutiny. For verification strategies, see Fact-Check Mode.
Unsupported assertions: Claims presented as fact with zero evidence, not even anecdotes. Phrases like “Everyone knows…”, “It’s obvious that…”, and “Clearly…” often flag them. The author expects you to accept the claim without evidence. Maybe it’s true. But it’s not argued.
From Evidence Check to Decision
After running PR023, you’ll have a clearer picture of how well-supported each claim is. Use this to calibrate your confidence. Claims supported by strong, representative data deserve more weight than claims supported only by anecdotes and analogies.
But don’t reject everything that lacks perfect evidence. Most real-world decisions happen under uncertainty. The goal isn’t to accept only proven claims; it’s to know which parts of an argument you’re accepting on faith versus which are supported by evidence.
For the complete critical reading toolkit, explore the Critical Reading pillar. For the broader reading improvement framework, see AI for Reading.
Don’t Get Fooled. Get Evidence Literate.
Practice evaluating evidence across articles from business, science, policy, and culture. Build the instincts that protect you from weak arguments.
Join the Course: ₹2,499
Know What You’re Trusting
Next time you read a persuasive piece, run PR023. Classify the evidence. Note the red flags. Decide which claims are supported and which you’re accepting on faith. That’s reading with your eyes open.