About Limpid
Clear signal. No hype.
Why Limpid exists
The AI tool landscape is moving faster than anyone can evaluate safely. Most coverage is either hype — product launches dressed up as journalism — or highly technical CVE reporting that lands too late and assumes a security background most developers don't have. Nobody was doing independent, structured security and privacy reviews of the fast-moving grassroots tools: the ones trending on GitHub, surfacing on Hacker News, getting quietly adopted by small teams before anyone has scrutinised them. Limpid exists to fill that gap.
The methodology
What gets reviewed
Limpid focuses on fast-growing emerging AI tools: the ones trending on GitHub, surfacing on Hacker News, or suggested by readers. Established platforms are out of scope — the goal is to cover tools before the broader ecosystem has had time to scrutinise them.
The four dimensions
Every tool is scored across four dimensions: Transparency, Security Posture, Data & Privacy, and Claim Accuracy. Each is scored 1–5 with explicit anchors so scores mean the same thing across different tools and different reviewers.
How scores become grades
Dimension scores are combined using a weighted formula that reflects what matters most for security-conscious adoption: Security Posture and Data & Privacy each carry 30%, Transparency and Claim Accuracy each carry 20%. The result maps to a grade from A to F.
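The weights above can be sketched in code. This is a minimal illustration, not Limpid's actual implementation: the 30/30/20/20 weights come from the text, but the grade cutoffs below are assumptions, since the article only says the weighted result maps to A through F.

```python
# Hypothetical sketch of the weighted scoring described above.
# Weights are from the article; the grade cutoffs are assumed.

WEIGHTS = {
    "security_posture": 0.30,
    "data_privacy": 0.30,
    "transparency": 0.20,
    "claim_accuracy": 0.20,
}

# Assumed cutoffs on the 1-5 weighted scale (the article does not publish these).
GRADE_CUTOFFS = [(4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D")]

def grade(scores: dict[str, int]) -> tuple[float, str]:
    """Combine 1-5 dimension scores into a weighted score and a letter grade."""
    weighted = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    for cutoff, letter in GRADE_CUTOFFS:
        if weighted >= cutoff:
            return weighted, letter
    return weighted, "F"

# Example: strong security and privacy lift the grade more than
# transparency or claim accuracy, reflecting the 30% vs 20% weights.
score, letter = grade({
    "security_posture": 4,
    "data_privacy": 5,
    "transparency": 3,
    "claim_accuracy": 4,
})
print(score, letter)
```

Under these assumed cutoffs, the example tool's weighted score of 4.1 lands a B: its weaker Transparency score drags it below the A threshold despite strong Security Posture and Data & Privacy marks.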
Human approval
Reviews are AI-assisted: the research, scoring, and initial recommendations are generated by Claude. Before anything is published, a human reads, validates, and approves every review. This is a deliberate choice: automation assists; judgment stays human.
What this is not
- Not affiliated with any tool reviewed.
- Not sponsored or paid to review any tool: no affiliate links, no sponsored content.
- Not comprehensive: coverage focuses on emerging tools, not established platforms.
- Not legal or security advice: reviews are informed independent assessments, not guarantees.
- Not fully automated: every published review has been read and approved by a human.
Where things are now
Reviews are currently researched with AI assistance and published after human review and approval. This takes time and has real costs — primarily API usage for research and scoring. Every review you read has been read by a human before it went live.
Automating the full pipeline is on the roadmap. When funding allows, the goal is faster turnaround, more tools covered, and cross-validation across multiple AI models. Until then, slower and human-checked is the right tradeoff.
Support independent AI security research
Limpid is free to read and always will be. If it has saved you from a bad tool decision or helped your team evaluate something safely, consider supporting it.
No subscription required — one-time contributions welcome on Ko-fi.
See all support options →
Get in touch
If there's a tool you think should be reviewed, the best place to start is the suggestion page. Feedback on existing reviews is also welcome.
Suggest a tool for review →