dlgo is a pure Go deep learning inference engine. It loads GGUF models and runs them on CPU with no dependencies beyond the standard library (SIMD acceleration is optional via CGo).
I built this because I wanted to add local LLM inference to a Go project without shelling out to Python or linking against llama.cpp. The whole thing is go get github.com/computerex/dlgo and you're running models.
It supports LLaMA, Qwen 2/3/3.5, Gemma 2/3, Phi-2/4, SmolLM2, Mistral, and Whisper speech-to-text. Architectures are expressed as a declarative per-layer spec resolved at load time, so adding a new model family is mostly just describing its layer structure rather than writing a new forward pass.
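To illustrate the declarative-spec idea (the type and function names here are invented for illustration, not dlgo's actual API): each model family lists its per-layer ops and GGUF tensor name templates, and a generic loader resolves them per block, so a new family is mostly a new spec literal. The `blk.%d.*` tensor names follow standard GGUF conventions:

```go
package main

import "fmt"

// LayerOp describes one step of a transformer block. Names are a sketch,
// not dlgo's real types.
type LayerOp struct {
	Kind   string // e.g. "rmsnorm", "attention", "mlp"
	Tensor string // GGUF tensor name template, e.g. "blk.%d.attn_q.weight"
}

// ArchSpec is a declarative description of one model family.
type ArchSpec struct {
	Name   string
	Layers []LayerOp
}

var llamaSpec = ArchSpec{
	Name: "llama",
	Layers: []LayerOp{
		{Kind: "rmsnorm", Tensor: "blk.%d.attn_norm.weight"},
		{Kind: "attention", Tensor: "blk.%d.attn_q.weight"},
		{Kind: "rmsnorm", Tensor: "blk.%d.ffn_norm.weight"},
		{Kind: "mlp", Tensor: "blk.%d.ffn_gate.weight"},
	},
}

// resolve expands the spec's tensor templates for a concrete block index
// at load time; a generic forward pass can then walk the resolved list.
func resolve(spec ArchSpec, block int) []string {
	names := make([]string, 0, len(spec.Layers))
	for _, op := range spec.Layers {
		names = append(names, fmt.Sprintf(op.Tensor, block))
	}
	return names
}

func main() {
	fmt.Println(resolve(llamaSpec, 0))
}
```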
Performance on a single CPU thread with Q4_K_M quantization: ~31 tok/s for LLaMA 3.2 1B, ~48 tok/s for Qwen3 0.6B, ~16 tok/s for Qwen3.5 2B (which has a hybrid attention + Gated Delta Network architecture). Not going to beat llama.cpp on raw speed, but it's fast enough to be useful and the ergonomics of a native Go library are hard to beat.
Supports 25+ GGML quantization formats (Q4_0 through Q8_0, all K-quants, I-quants, F16, BF16, F32). The GGUF parser, dequantization, tokenizer, forward pass, and sampling are all implemented from scratch.
by usmangurowa
Hey, I’m Usman.
I spent years using time trackers that made me feel like I was just clocking in for a factory shift. I'd have these incredible "flow" sessions: it's 1 AM, I'm deep in the zone, I've just refactored an entire auth system and pushed 6 clean commits. Then I'd look at my tracker only to see: "3 hours: TypeScript."
No context. No story. No soul. Just a cold bar on a chart.
Meanwhile, my friend finishes a 10K run, and Strava celebrates it with maps, elevation splits, and a flood of kudos from the community. It’s motivating. It’s human. I started wondering: Why don’t we have that for the work we’re actually proud of?
So, I built Kodo.
It’s not a tracker; it’s a narrative
The core philosophy behind Kodo is shifting the question from "Did you work enough?" to "Look what you achieved."
It runs passively in your IDE, but instead of just logging minutes, it uses AI to turn your raw activity into a story. If you’ve spent two hours jumping between three different files and a specific branch, Kodo doesn't just say "Coding." It says: "Refactored the authentication flow and killed that critical login bug." It’s the summary you wish you could write for your standup, generated for you.
Solving the "Surveillance Ick"
As developers, we have a collective allergy to trackers because they usually exist for managers, not us. I built Kodo with a few "non-negotiables":
Privacy First: Kodo never reads your source code. Period. If you're feeling private, "Stealth Mode" logs timestamps and nothing else.

Social, not Competitive: We have a social feed where teammates can see you're online or drop a "kudos" on a big session. It's not a leaderboard to see who worked the most; it's a way to feel less lonely when you're shipping at midnight.

The Burnout Nudge: I've been the guy coding at 3 AM on fumes, thinking I'm a hero when I'm actually just breaking things. Kodo gives you a Cognitive Freshness Score, and it'll actually nudge you to take a break after 90 minutes of high-intensity work.
The Stack (For the curious)
I’m a huge fan of the T3/Supabase ecosystem, so I kept the engine modern and fast:
Frontend/API: Next.js (App Router) + Hono.

Database/Auth: Postgres + Drizzle ORM + Better-Auth.

Styling: Tailwind CSS v4 and shadcn/ui (because DX matters).

Extensions: Pure TypeScript for the VS Code family (Cursor, Windsurf, Antigravity), Kotlin for JetBrains IDEs, and TypeScript again for Claude Code (yes, Kodo also tracks your Claude Code sessions).
The AI layer uses OpenAI and Anthropic to parse your metadata into those human-readable summaries, and I even added 5 different "AI Coach" personalities (from "Hype" to "Wellness") so you can choose the vibe that fits your team's culture.
Give it a spin
I built this because we deserve better than a punch card. We deserve a tool that recognizes the craft, the flow, and the effort it takes to build things.
You can try it out at [kodo.codes](https://kodo.codes).
It supports basically everything (VS Code, Cursor, IntelliJ, even Claude Code). Create an API key, drop it in your editor, and just... code. Kodo handles the rest.
I'd love to hear what you think, especially from anyone else who's felt that "productivity tool burnout."
by alnah
Because the industry is using more and more agents to write code, produce tools, and build systems, I'm curious: how do you all manage docstrings? Do you still write them to be consumed by humans first? Do you write them for AIs, and if so, how? Or have you stopped writing docstrings entirely because AIs don't need them? What are your experiments with docstrings, what are your results, and what did you discover? I've also been wondering what a good docstring for an AI would even look like. Is it no docstring at all? Or a docstring written specifically for AIs? And is human readability even compatible with AI readability when it comes to docstrings? Thank you.
Hi HN! I built CodeTrackr, an open-source, privacy-first alternative to WakaTime.
It tracks coding activity and provides real-time analytics, global leaderboards, and a plugin system. The goal is to give developers useful productivity insights while keeping full ownership of their data.
Key features:
- WakaTime-compatible API (works with existing editor extensions)
- Real-time dashboard using WebSockets
- Global and language leaderboards
- Self-hostable (Docker included)
- Plugin system using JavaScript snippets
- GitHub and GitLab login
Stack: Rust + Axum + PostgreSQL + Redis + Vanilla JS
Repository: https://github.com/livrasand/CodeTrackr
I would really appreciate feedback, especially regarding:
- security
- architecture
- potential areas for improvement
If you're interested, you're also welcome to build plugins for the plugin store or create IDE extensions for your favorite editors.
Thanks for taking a look!
PS: I used ChatGPT to translate this; my native language is Spanish, and my English is limited.