Hi HN — I built AgentCheck to make the agentic internet more legible.
AgentCheck is a public “AI bot posture leaderboard” built from declared public signals:

- robots.txt allow/deny rules
- public capability/interface files (e.g. /llms.txt and /.well-known/agents.json where present)
- weekly deltas so you can see policy changes over time
It answers three questions: which bots a site declares it blocks or allows (checked against a fixed reference bot set), whether agent-readable interface files exist, and how posture changes week to week.
Important: this is not a claim about actual crawling activity — it’s posture + public interface signals.
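To make the "declared posture" idea concrete, here's a minimal sketch of how the robots.txt signal could be classified with the standard library. The reference bot names and the "can it fetch /" rule are my assumptions for illustration, not AgentCheck's actual methodology.

```python
# Hedged sketch: classify a site's declared posture for a fixed
# reference bot set, assuming the robots.txt text is already fetched.
# REFERENCE_BOTS is a hypothetical set, not AgentCheck's real list.
import urllib.robotparser

REFERENCE_BOTS = ["GPTBot", "CCBot", "ClaudeBot"]

def classify_posture(robots_txt: str, bots=REFERENCE_BOTS) -> dict:
    """Return {bot: "allow" | "deny"} based on whether the bot may fetch "/"."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: ("allow" if rp.can_fetch(bot, "/") else "deny") for bot in bots}

example = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(classify_posture(example))
# → {'GPTBot': 'deny', 'CCBot': 'allow', 'ClaudeBot': 'allow'}
```

Note this only reads the declaration, which is exactly why the result is posture, not observed crawling behavior.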
Link: https://www.agentcheck.com/leaderboard/ai-bots
I’d love feedback on:

- other public signals worth adding
- how you’d define “agent readiness”
- edge cases where robots.txt parsing should be handled differently
by inferno22
If you're a non-technical founder, you probably have no idea what your developers did last week. You ask; they say "refactored the auth module," and you nod, pretending to understand. Gitmore reads your GitHub activity and turns it into a simple report: what was built, what was fixed, what's stuck. Written for humans, not engineers.
It shows up in your inbox. You read it in 2 minutes. Done.
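The core transformation can be sketched roughly like this. The commit-prefix mapping and the built/fixed/stuck categories here are illustrative assumptions, not Gitmore's actual pipeline (which reads richer GitHub activity than commit messages).

```python
# Hedged sketch: turn raw commit messages into a plain-English digest.
# The PREFIXES mapping is a hypothetical convention for illustration only.
from collections import defaultdict

PREFIXES = {"feat": "Built", "fix": "Fixed", "wip": "Stuck"}

def weekly_digest(commit_messages):
    """Group commit messages by prefix and emit one human-readable line each."""
    buckets = defaultdict(list)
    for msg in commit_messages:
        prefix, _, rest = msg.partition(":")
        buckets[PREFIXES.get(prefix.strip().lower(), "Other")].append(rest.strip() or msg)
    lines = []
    for label in ("Built", "Fixed", "Stuck", "Other"):
        lines += [f"{label}: {item}" for item in buckets.get(label, [])]
    return "\n".join(lines)

print(weekly_digest([
    "feat: password reset flow",
    "fix: login timeout",
    "wip: billing migration",
]))
```

The real value is in the prose summarization layer on top of a grouping like this, but the grouping shows the shape of the report.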
Here's what a report looks like: https://www.gitmore.io/example.html
Quick demo: https://demo.arcade.software/5tZyFDhp1myCosw6e1po
Free tier available. Happy to hear what you'd want from something like this.