Today, I launched blunder.clinic, a daily chess puzzle app that gives you realistic positions where your job is simply not to blunder. These are similar to traditional chess puzzles (i.e., tactics), but differ in a few key ways.
There are two popular ways to self-study chess: solving tactics and following along with professional games or an engine. Both are helpful, but both have downsides.
When solving puzzles, just knowing that you are solving a puzzle biases you toward looking for specific types of moves (checkmates, queen sacrifices, etc.). But in real games, you don't know which positions actually have tactics available, so you can waste time looking for them or, even worse, blunder by assuming a tactic exists when it doesn't.
When following along with an engine, there are tons of positions where the engine's move is one you would never have found and can't possibly understand. These are very low signal for learners, and it is hard to distinguish them from high-signal positions that are right at the edge of your ability.
blunder.clinic addresses both of these problems by giving you positions where players at your skill level actually blundered, but where the best move isn't too far beyond your ability to find and learn from. We do this by using Stockfish for position evaluation and Maia[1] for difficulty estimation.
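To make the idea concrete, here is a minimal sketch of such a selection criterion. The function name, thresholds, and the assumption that Stockfish evaluations and a Maia probability have already been computed upstream are all hypothetical; the real pipeline may work differently.

```python
def is_good_puzzle(eval_before_cp, eval_after_cp, p_best_at_rating,
                   min_swing_cp=200, min_p_best=0.10, max_p_best=0.70):
    """Decide whether a position is a candidate puzzle.

    eval_before_cp / eval_after_cp: Stockfish evaluations (centipawns,
        from the mover's perspective) before and after the move played.
    p_best_at_rating: Maia's estimated probability that a player at the
        target rating plays the engine's best move.

    A candidate needs a real blunder (large evaluation swing) and a best
    move that is findable, but not trivial, at that rating.
    """
    swing = eval_before_cp - eval_after_cp
    return swing >= min_swing_cp and min_p_best <= p_best_at_rating <= max_p_best

# A 350-centipawn blunder whose best move a player at the target rating
# finds ~35% of the time -- learnable, so it qualifies:
print(is_good_puzzle(50, -300, 0.35))   # True
# Same blunder, but the best move is an engine-only idea (2% find rate):
print(is_good_puzzle(50, -300, 0.02))   # False
```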
Overall, the main purpose of blunder.clinic is to help you stop blundering easy positions!
You can read a bit more about it here: https://mcognetta.github.io/posts/blunder-clinic/
[1]: Maia (https://www.maiachess.com/) is a family of chess models trained on real human games. The inputs are a board position and a player rating, and the output is a probability distribution over moves. You can use this to answer queries like "How likely is a player rated XYZ to find the best move?"
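Given such a distribution, the "find rate" query above is just a lookup. A small sketch (the policy dict here is made-up illustrative output; in practice Maia models are run through a chess engine framework, not this hypothetical wrapper):

```python
def p_finds_best(maia_policy, best_move):
    """Probability that a player at the rating Maia was conditioned on
    plays the engine's best move.

    maia_policy: dict mapping UCI move strings to probabilities, as a
        rating-conditioned Maia model would output for one position.
    """
    return maia_policy.get(best_move, 0.0)

# Hypothetical Maia output for one position at a fixed rating:
policy = {"e2e4": 0.55, "d2d4": 0.30, "g1f3": 0.15}
print(p_finds_best(policy, "d2d4"))  # 0.3
```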
Hey HN, I'm Shubh, Co-Founder of Raccoon AI.
Raccoon AI is something between Claude Code and Cursor, but on the web.
The agent has its own computer with a terminal, browser, and internet access, and it is built to balance collaboration and autonomy.
You can talk to it mid-task, send it more files while it's still running, or just let it go and come back to a finished result.
It's the kind of product where you open it to try one thing and end up spending two hours because you keep thinking of more things to throw at it.
The thing that most people get excited about is that sessions chain across completely unrelated task types. You can go from market research (real citations, generated charts) to raw data analysis (dump your DB, ask questions) to a full interactive app, all in one conversation sharing the same context.
It has effectively unlimited context through automatic summarization, which works especially well with Ace Max.
It connects to Gmail, GitHub, Google Drive, Notion, Outlook, and 40+ other tools. You can add your own via custom MCP servers.
Raccoon AI is built on top of our own agents SDK, ACE, which hit SOTA on the GAIA benchmark with a score of 92.67.
A bit of background: We're a team of 3. We started about 1.5 years ago to build the best possible browser agent; after a couple of pivots we arrived at this, and we've been shipping and growing constantly since October.
Happy to go deep on the architecture or talk about the limitations. Excited to hear your feedback.
Site: https://raccoonai.tech