## Quick Summary
- What: A tool that compares actual API responses against documented specifications and calls out discrepancies with sarcastic precision.
## The Great Documentation Charade
Let's talk about the corporate documentation lifecycle. It goes something like this:
- Developer writes actual working API
- Product manager demands documentation "for compliance"
- Developer hastily writes docs during their last hour on Friday
- API evolves over six months
- Documentation becomes a historical artifact, like cave paintings or your college GPA
- New developers treat it as gospel and waste days of their lives
The beautiful irony? We developers, who pride ourselves on logic and evidence-based thinking, will trust a README.md file written two years ago more than the actual HTTP response staring us in the face. It's like continuing to use a map from 1998 because "it was official at the time."
I've personally witnessed developers:
- Debug their authentication code for hours because the docs said "Bearer token" but the API actually wanted "Token Bearer" (apparently APIs have grammar preferences)
- Implement pagination that never worked because the documented `page` parameter was actually called `p` in production
- Parse JSON that was supposed to have a `data` field but actually returned `result`, `items`, or sometimes just raw disappointment
## Enter the Lie Detector
After one particularly egregious incident involving a "RESTful" API that returned HTML error pages (yes, really), I decided enough was enough. I built API Docs Lie Detector to solve this exact problem. It's the digital equivalent of bringing a fact-checker to a political debate.
The concept is beautifully simple: give it your API documentation (OpenAPI/Swagger, Postman collection, or even just a simple JSON schema) and point it at your actual API endpoints. The tool will:
- Make real HTTP requests to your endpoints
- Compare responses against what's documented
- Calculate a "truthiness score" (0-100%, where 100% means "the docs might actually be accurate for once")
- Generate hilariously sarcastic error messages when lies are detected
It's not just about catching errors - it's about providing the cathartic satisfaction of having proof that you're not crazy. That endpoint really DOES return a 418 "I'm a teapot" status code instead of the documented 200 OK.
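The core comparison is simple to reason about. Here's a minimal sketch of that "documented vs. actual" check in TypeScript; the interfaces and `findLies` function are illustrative assumptions for this post, not the tool's real internals:

```typescript
// Hypothetical shapes: what the docs promise vs. what the wire delivers.
interface DocumentedResponse {
  status: number;
  fields: string[]; // top-level keys the docs claim will be present
}

interface ActualResponse {
  status: number;
  body: Record<string, unknown>;
}

// Collect every discrepancy between the contract and reality.
function findLies(doc: DocumentedResponse, actual: ActualResponse): string[] {
  const lies: string[] = [];
  if (doc.status !== actual.status) {
    lies.push(`Status mismatch: documented ${doc.status}, got ${actual.status}`);
  }
  for (const field of doc.fields) {
    if (!(field in actual.body)) {
      lies.push(`Missing documented field: "${field}"`);
    }
  }
  return lies;
}

// The teapot scenario: docs promise 200 with id/name, API has other ideas.
const lies = findLies(
  { status: 200, fields: ['id', 'name'] },
  { status: 418, body: { error: "I'm a teapot" } }
);
// → 3 lies: wrong status, missing "id", missing "name"
console.log(lies);
```

A real audit does this per endpoint and rolls the results up into the truthiness score.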
## How to Use It (Without Losing Your Mind)
Installation is as straightforward as the documentation should be (but never is):
```shell
npm install -g api-docs-lie-detector
# or
pip install api-docs-lie-detector
# or just clone the repo because we're all friends here
```
Basic usage looks something like this:
```typescript
import { LieDetector } from 'api-docs-lie-detector';

const detector = new LieDetector({
  docsPath: './openapi.yaml',
  baseUrl: 'https://api.your-service.com',
  sarcasmLevel: 'maximum' // optional, but recommended
});

const results = await detector.audit();
console.log(results.truthinessScore);
// Output: 42% (The docs are almost as accurate as a weather forecast)
```
Check out the full source code on GitHub for more advanced configurations, like ignoring certain endpoints or customizing the lie detection thresholds.
## Key Features That Actually Work
- Compares documented endpoints with actual responses: Like a detective comparing a suspect's alibi to security camera footage
- Flags mismatched status codes, parameters, and response schemas: Because nothing says "professional" like returning a 500 error for valid input
- Generates a 'truthiness score' for documentation accuracy: 0-100%, where anything above 70% is probably a typo
- Creates sarcastic error messages when lies are detected: "The docs say this endpoint returns user data. It actually returns what appears to be XML. Are we in 2004?"
- Exportable reports: Perfect for attaching to JIRA tickets with the comment "See? I told you so"
- CI/CD integration: Automatically fail builds when documentation truthiness drops below your team's shame threshold
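The CI/CD gate boils down to one comparison. A minimal sketch of that gate logic, assuming the score arrives as a plain percentage (the threshold and helper name here are made up for illustration):

```typescript
// Hypothetical CI gate: fail the build when documentation truthiness
// drops below the team's shame threshold.
const SHAME_THRESHOLD = 70; // percent; tune to your team's tolerance for lies

function buildShouldFail(truthinessScore: number, threshold: number = SHAME_THRESHOLD): boolean {
  return truthinessScore < threshold;
}

// Pretend these scores came from detector.audit() in a pipeline step:
console.log(buildShouldFail(42)); // true  — docs are lying, block the merge
console.log(buildShouldFail(85)); // false — grudgingly acceptable
```

In a real CI script you would set `process.exitCode = 1` when `buildShouldFail` returns true, so the pipeline step fails and the lies stay out of main.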
## The Brutal Honesty You Deserve
Here's what a typical lie detection report looks like:
```
🔍 Auditing: GET /api/users/{id}

❌ LIE DETECTED: Status Code Mismatch
   Documented: 200 OK
   Actual: 404 Not Found
   Sarcasm: "The user doesn't exist. Or maybe the endpoint doesn't. Who can say?"

❌ LIE DETECTED: Response Schema Mismatch
   Documented: { "id": "string", "name": "string" }
   Actual: { "error": "Please contact support", "ticketId": 69420 }
   Sarcasm: "Nothing says 'RESTful' like returning a support ticket instead of data"

📊 Truthiness Score: 33%
   Verdict: "These docs are about as reliable as a chocolate teapot."
```
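The scoring math behind a report like this is nothing exotic: truthiness is just the share of checks that passed. A sketch, assuming a flat list of pass/fail checks (the `AuditCheck` shape is an illustration, not the tool's actual data model):

```typescript
// Hypothetical scoring: truthiness = percentage of checks that passed.
interface AuditCheck {
  endpoint: string;
  passed: boolean;
}

function truthinessScore(checks: AuditCheck[]): number {
  if (checks.length === 0) return 100; // no checks, no detectable lies
  const passed = checks.filter((c) => c.passed).length;
  return Math.round((passed / checks.length) * 100);
}

// A report like the one above: 1 of 3 checks passed.
const score = truthinessScore([
  { endpoint: 'GET /api/users/{id} status', passed: false },
  { endpoint: 'GET /api/users/{id} schema', passed: false },
  { endpoint: 'GET /api/users/{id} headers', passed: true },
]);
console.log(`${score}%`); // → "33%"
```

Whether you weight status codes more heavily than schema fields is a matter of taste; the flat average is the simplest defensible choice.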
## Why This Actually Matters
Beyond the cathartic humor, this tool solves real problems:
- Reduces debugging time: Instead of wondering if your code is wrong, you'll KNOW the docs are wrong
- Improves onboarding: New developers won't waste their first week fighting phantom bugs
- Creates accountability: When documentation accuracy becomes measurable, teams actually maintain it
- Saves sanity: Priceless
The best part? You can run this against third-party APIs too. Imagine sending a report to that SaaS vendor with "Your documentation is 28% accurate" and watching them scramble to fix it. It's the kind of power that should probably come with a supervillain origin story.
## Conclusion: Stop Trusting, Start Verifying
API documentation will continue to lie to us. It's in its nature, like cats ignoring you or JavaScript having another framework of the week. But now, you have a weapon against the deception.
API Docs Lie Detector won't write your documentation for you (we're not miracle workers), but it will tell you exactly how badly it sucks. And sometimes, that's the first step toward making it suck less.
Try it out today: https://github.com/BoopyCode/api-docs-lie-detector
Your future self will thank you when you're debugging actual code problems instead of documentation fantasies. And if the tool reports 100% accuracy? Well, you should probably run it again - clearly something's wrong with the lie detector.