Hello everyone!
I'm currently on a journey to learn and improve my Elixir and Go skills (my daily job uses C++), and looking through my backlog for projects to take on, I decided Elixir is the perfect language for writing a highly parallel BitTorrent tracker. So I've spent my free time over the last 3 months writing one! I now think it has enough features to present it to the world (and there's a Docker image if you want to give it a quick try).
I know some people see trackers as relics of the past now that DHT and PEX are common, but I think they still serve a purpose on today's Internet (purely talking about public trackers). That said, there is not a lot going on in terms of new development, since everyone just throws opentracker on a VPS and calls it a day (honorable exceptions: aquatic and torrust).
I plan to continue development for the foreseeable future and add some (optional) esoteric features along the way, so if anyone currently operates a tracker, please give it a try and enjoy the lack of crashes.
Note: only swarm_printout.ex was vibe-coded; the rest was all written by hand.
I've found it helps PR reviewers when they can look through a set of commits with clear messages and logically organized changes. Reviewers typically prefer a larger quantity of smaller changes over a smaller quantity of larger ones. Sometimes it gets really messy to break a change up into sufficiently small PRs, so thoughtful commits are a great way of further subdividing changes within PRs. Doing this by hand can be pretty time-consuming, though, so this tool automates the process with the help of AI.
The tool sends the diff of your git branch against a base branch to an LLM provider. The LLM responds with a set of suggested commits with sensible commit messages, change groupings, and descriptions. When you explicitly accept the proposed changes, the tool rewrites the commit history on your branch to match the LLM's suggestion. Then you can force push your branch so your remote matches too.
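For the curious, the rewrite step itself boils down to plain git plumbing. Here's a minimal Python sketch of that step, assuming the LLM returns file-level groupings with messages (my assumption about the response shape; the real tool may well split at the hunk level instead):

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    # The diff that gets sent to the LLM provider.
    diff = git("diff", "main...HEAD")

    # Hypothetical response shape: groups of files plus commit messages.
    plan = [
        {"message": "refactor: extract config loader",
         "files": ["config.py"]},
        {"message": "feat: add retry logic",
         "files": ["client.py", "tests/test_client.py"]},
    ]

    # Safety net before any history rewriting.
    git("branch", "backup/my-feature")

    # Move the branch back to the base while keeping all changes in the
    # working tree, then re-commit them group by group.
    # (Assumes the plan covers every changed file.)
    git("reset", "main")
    for commit in plan:
        git("add", *commit["files"])
        git("commit", "-m", commit["message"])

After that, a force push makes the remote match the rewritten history.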
The default AI provider is your locally running Ollama server. Cloud providers can be explicitly configured via a CLI argument or in a config file, but keeping local models as the default helps protect against unintentional data sharing. The tool always creates a backup branch so you can easily revert if you change your mind or something goes wrong during the rewrite. Note that pushing rewritten history to a remote branch requires a force push, which is something your team/org will need to be OK with. As long as you are working on a feature branch this is usually fine, but it's always worth checking if you are not sure.
To try it out, simply build the project yourself from source, or use the attached bootable ISO image of the system (in Releases on GitHub) and run it in QEMU.
I've been working with the Featureform team on their new open-source project, [EnrichMCP][1], a Python ORM framework that helps AI agents understand and interact with your data in a structured, semantic way.
EnrichMCP is built on top of [MCP][2] and acts like an ORM, but for agents instead of humans. You define your data model using SQLAlchemy, APIs, or custom logic, and EnrichMCP turns it into a type-safe, introspectable interface that agents can discover, traverse, and invoke.
It auto-generates tools from your models, validates all I/O with Pydantic, handles relationships, and supports schema discovery. Agents can go from user → orders → product naturally, just like a developer navigating an ORM.
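As a concrete (and purely illustrative) example of the kind of data model you'd hand it, here are plain SQLAlchemy models for that user → orders → product chain; per the description above, EnrichMCP generates the agent-facing, schema-discoverable tools from relationships like these. This shows only standard SQLAlchemy, not EnrichMCP's actual API:

    from sqlalchemy import ForeignKey
    from sqlalchemy.orm import (DeclarativeBase, Mapped,
                                mapped_column, relationship)

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        orders: Mapped[list["Order"]] = relationship(back_populates="user")

    class Order(Base):
        __tablename__ = "orders"
        id: Mapped[int] = mapped_column(primary_key=True)
        user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
        product_id: Mapped[int] = mapped_column(ForeignKey("products.id"))
        user: Mapped["User"] = relationship(back_populates="orders")
        product: Mapped["Product"] = relationship()

    class Product(Base):
        __tablename__ = "products"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

An agent can then walk from a user to their orders to each order's product through the generated interface, the same way a developer would navigate the ORM.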
We use this internally to let agents query production systems, call APIs, apply business logic, and even integrate ML models. It works out of the box with SQLAlchemy and is easy to extend to any data source.
If you're building agentic systems or anything AI-native, I'd love your feedback. Code and docs are here: https://github.com/featureform/enrichmcp. Happy to answer any questions.
RM2000 Tape Recorder makes it stupid simple to grab audio samples and organize them: just record the sample, give it a title (and maybe some tags), and it is saved neatly into a directory of your choosing.
I'm a huge datahoarder and have always appreciated tools and services like PureRef and Are.na, which help me make sense of everything I collect. Those services concern themselves with images and video, so I wondered: why can't the same be done with music and audio files?
I actually got the inspiration for the file-naming scheme from the Emacs Denote package: every sample is saved in the format title--tag1--tag2.mp3. Denote does something similar, for example identifier--title--keywords.org.
I chose this method because any file browser with fuzzy search can search through the samples, e.g. the Ableton file browser. Just search for some of the tags and a title, and you'll be able to find your sample.
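The scheme is also trivial to generate and parse programmatically. A minimal sketch of the convention (my own illustration, not the app's actual code):

    from pathlib import Path

    def sample_filename(title: str, tags: list[str], ext: str = "mp3") -> str:
        # title--tag1--tag2.mp3, the Denote-inspired scheme described above
        return "--".join([title, *tags]) + f".{ext}"

    def parse_sample(path: Path) -> tuple[str, list[str]]:
        # Split the stem back into a title and its tags.
        title, *tags = path.stem.split("--")
        return title, tags

    print(sample_filename("vinyl-crackle", ["lofi", "texture"]))
    # -> vinyl-crackle--lofi--texture.mp3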
I wanted this app to look good as well (which is why I spent so much time making it!). The app is made with a mix of SwiftUI and AppKit, while the assets were rendered in Sketch.
I appreciate your time and I'd love to hear your thoughts on it. If you do download it and have suggestions or find bugs, please let me know!
Cheers
I kept slamming into Claude Code limits mid-session and couldn’t find a quick way to see how close I was getting, so I hacked together a tiny local tracker.
Streams your prompt + completion usage in real time
Predicts whether you'll hit the cap before the session ends (rough approach sketched below)
Runs 100% locally (no auth, no server)
Presets for Pro, Max × 5, Max × 20; tweak a JSON if your plan's different
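The prediction is nothing fancy; conceptually it's a linear burn-rate extrapolation along these lines (a sketch of the idea, not the tool's exact code; the 5-hour default reflects Claude's session windows):

    import time

    def will_hit_cap(used_tokens: int, cap: int, session_start: float,
                     session_length: float = 5 * 3600) -> bool:
        # Extrapolate the burn rate observed so far to the session's end.
        elapsed = time.time() - session_start
        if elapsed <= 0:
            return False
        rate = used_tokens / elapsed  # tokens per second so far
        projected = used_tokens + rate * (session_length - elapsed)
        return projected >= cap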
GitHub: https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor
It’s already spared me a few “why did my run just stop?” moments, but it’s still rough around the edges. Feedback, bug reports, and PRs welcome!
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.
In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts: Docker's own image storage.
So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.
docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done. I built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.
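The "only the missing layers" part falls out of the standard registry protocol: before uploading a blob, a client can HEAD it on the remote. A rough sketch of that check (the endpoint is the stock Distribution API; the tunnel address and digests are made-up examples, and unregistry's internals may differ):

    import requests

    def missing_layers(registry: str, repo: str, digests: list[str]) -> list[str]:
        # HEAD /v2/<name>/blobs/<digest> returns 200 if the remote
        # already has the layer, 404 if it still needs to be uploaded.
        return [d for d in digests
                if requests.head(
                    f"http://{registry}/v2/{repo}/blobs/{d}"
                ).status_code == 404]

    # e.g. against the unregistry endpoint tunnelled over SSH:
    # missing_layers("localhost:5000", "myapp",
    #                ["sha256:3f4e...", "sha256:a1b2..."])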
Would love to hear your thoughts and use cases!
I was the main contributor to workout.lol, an open-source fitness app to easily build a workout routine. The project had traction (1.4k GitHub stars, 95 forks, ~20K visits/month), but was eventually sold due to video licensing hurdles. The new owner stopped maintaining it, and the repo went abandoned.
Over the next 9 months, I sent 15 emails trying to save it: no replies. Feature requests and issues were ignored. The community was left with what you could call a "broken" tool.
I couldn't just let it die. So I built a new version from scratch with the same open-source spirit, but with a better architecture, a long-term vision, more features, and no license problems.
It's called Workout.cool (https://workout.cool). What it offers: 100% open-source, MIT-licensed - 1200+ exercises (with videos, attributes, translations) - Progress tracking - Multilingual-ready - Self-hostable
I'm not doing this for money. I'm doing it because I believe in open fitness tools, and I’ve been passionate about strength training for 15+ years.
If this resonates with you, feel free to: - Star the repo - Share with fitness/tech friends - Suggest features - Contribute code/design/docs
Together, we can build the open-source fitness platform we all wanted: an easy way to build a workout routine and get in shape.
Website: https://workout.cool GitHub: https://github.com/Snouzy/workout-cool