# Readability Scoring
Readability scoring is table stakes; every writing tool does it. What DraftLift adds is slop detection: active quality enforcement that catches the patterns that make your content sound machine-generated.
Metrics update as you type, so you see the impact of every edit in real time.
## Scoring methodology
DraftLift uses two established readability formulas:
### Flesch-Kincaid Grade Level
Estimates the U.S. school grade level required to understand the text. Based on average sentence length and syllables per word.
| Grade level | Audience | Example platforms |
|---|---|---|
| 5-6 | General public | Social media, X posts |
| 7-8 | Broad professional audience | LinkedIn, email newsletters |
| 9-10 | Educated readers | Blog posts, articles |
| 11+ | Specialist audience | Technical or academic content |
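The Flesch-Kincaid formula itself is public: grade = 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch in Python, using a rough vowel-group syllable heuristic (DraftLift's actual syllable counter is not documented here, so treat `count_syllables` as an approximation):

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups; drop one for a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Very simple text can score below grade 0; that is expected behavior of the formula, not a bug.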
### Flesch Reading Ease
A 0-100 score where higher means easier to read:
| Score | Readability |
|---|---|
| 90-100 | Very easy — understood by an 11-year-old |
| 60-70 | Standard — plain English |
| 30-60 | Difficult — college-level readers |
| 0-30 | Very difficult — academic or technical writing |
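Reading Ease is the companion formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). A sketch under the same assumed syllable heuristic as above:

```python
import re

def syllables(word: str) -> int:
    # Heuristic: vowel groups, minus a trailing silent 'e', minimum 1.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))
```

Note the score is only nominally 0-100: extremely simple text can exceed 100, and dense jargon can go negative.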
## Additional metrics
Beyond the two core scores, the editor sidebar tracks:
- Word count and character count — Essential for platform-specific length requirements
- Reading time — Estimated time to read at average speed
- Average words per sentence — Shorter sentences improve readability; aim for 15-20 words on average
- Paragraph count — More paragraphs with fewer sentences generally improve scannability
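These sidebar counts are straightforward to compute. A sketch of how such metrics could be derived (the 238 words-per-minute reading speed is a commonly cited average for silent reading, not necessarily the figure DraftLift uses):

```python
import re

def sidebar_metrics(text: str, wpm: int = 238) -> dict:
    # wpm default is an assumed average silent-reading speed.
    words = re.findall(r"\S+", text)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return {
        "words": len(words),
        "characters": len(text),
        "reading_time_min": round(len(words) / wpm, 2),
        "avg_words_per_sentence": round(len(words) / sentences, 1),
        "paragraphs": max(len(paragraphs), 1),
    }
```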
## Anti-slop quality system
This is where DraftLift goes beyond scoring into active quality enforcement. The Anti-Slop Quality System analyzes content for patterns that mark generic AI writing:
### Critical patterns
Issues that make content sound obviously machine-generated:
- Robotic sentence structures (“It’s not X, it’s Y”)
- AI vocabulary crutches (utilize, leverage, synergy, paradigm, myriad)
- Infomercial hooks (“The best part?”, “Ready to level up?”, “Here’s the thing”)
- Vague attributions and fabricated statistics
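Detection of patterns like these can be done with rule-based matching. A hypothetical sketch (the pattern names, regexes, and word lists here are illustrative; DraftLift's actual rule set is not published in this doc):

```python
import re

# Illustrative rules only; not DraftLift's internal pattern list.
CRITICAL_PATTERNS = {
    "robotic contrast": re.compile(r"\b[Ii]t'?s not \w+[^.!?]*,\s*it'?s\b"),
    "ai vocabulary": re.compile(r"\b(utilize|leverage|synergy|paradigm|myriad)\b", re.I),
    "infomercial hook": re.compile(r"(The best part\?|Ready to level up\?|Here's the thing)", re.I),
    "vague attribution": re.compile(r"\b([Ss]tudies show|[Ee]xperts (say|agree))\b"),
}

def find_critical_issues(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched text) for every critical-pattern hit."""
    issues = []
    for name, pattern in CRITICAL_PATTERNS.items():
        for match in pattern.finditer(text):
            issues.append((name, match.group(0)))
    return issues
```

Returning the matched span alongside the rule name is what makes inline highlighting possible: the editor knows exactly which characters to flag.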
### Warning patterns
Issues that weaken writing quality:
- Em-dash overuse
- Arrow symbols and emoji clusters
- Formulaic triple-item lists
- Binary opposite structures (“It’s not about X, it’s about Y”)
### Style signals
Subtler patterns worth reviewing:
- Filler word density
- Adverb overuse
- Copula avoidance (unnecessarily replacing “is/are” with action verbs)
- Synonym cycling (using different words for the same concept to sound varied)
Each flagged pattern includes a severity level and a specific suggestion. Patterns are highlighted inline in the editor so you can fix them in context.
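The density-based style signals reduce to simple ratios over the word list. A sketch with illustrative word lists (the filler vocabulary and the `-ly` adverb heuristic are assumptions, not DraftLift's actual detectors):

```python
import re

# Illustrative filler list; DraftLift's real vocabulary is not documented here.
FILLERS = {"just", "really", "very", "actually", "basically", "quite", "simply"}

def style_signals(text: str) -> dict:
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    total = max(len(words), 1)
    fillers = sum(1 for w in words if w in FILLERS)
    # Crude adverb check: words ending in -ly, skipping short false positives.
    adverbs = sum(1 for w in words if w.endswith("ly") and len(w) > 4)
    return {
        "filler_density": round(fillers / total, 3),
        "adverb_density": round(adverbs / total, 3),
    }
```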
## Platform targets

Readability targets vary by platform. Match your score to your audience:
| Platform | Target grade level | Why |
|---|---|---|
| X / Twitter | 5-7 | Must be instantly understood while scrolling |
| LinkedIn | 7-9 | Professional but accessible |
| Short-form video | 5-7 | Scripts should sound natural when spoken |
| Email newsletter | 7-8 | Scannable in a busy inbox |
| Blog post | 8-10 | Readers expect more depth |
| YouTube script | 6-8 | Spoken content should feel conversational |
Don’t chase the lowest possible score. A blog post about database architecture should be at a higher grade level than a LinkedIn motivational post. Match readability to your audience and platform.