
Show

    Show HN: Craftplan – I built my wife a production management tool for her bakery

    by deofoo · 2 days ago

    My wife was planning to open a micro-bakery. We looked at production management software and it was all either expensive or way too generic. The actual workflows for a small-batch manufacturer aren't that complex, so I built one and open-sourced it.

    Craftplan handles recipes (versioned BOMs with cost rollups), inventory (lot traceability, demand forecasting, allergen tracking), orders, production batch planning, and purchasing. Built with Elixir, Ash Framework, Phoenix LiveView, and PostgreSQL.
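    Craftplan itself is built in Elixir/Ash, but the core of a versioned-BOM cost rollup is easy to illustrate. Here is a minimal Python sketch of the idea, with hypothetical recipe data and function names that are not Craftplan's actual code:

```python
# Hypothetical BOM cost rollup: each recipe (BOM) is a list of
# (component, quantity) pairs; components are either raw materials
# with a known unit cost, or sub-recipes that roll up recursively.
raw_costs = {"flour": 0.8, "butter": 9.0, "sugar": 1.5}  # cost per kg

boms = {
    "croissant_dough": [("flour", 0.5), ("butter", 0.25)],
    "croissant_batch": [("croissant_dough", 1.0), ("sugar", 0.05)],
}

def rollup_cost(item: str, qty: float = 1.0) -> float:
    """Recursively compute the material cost of `qty` units of `item`."""
    if item in raw_costs:
        return raw_costs[item] * qty
    return qty * sum(rollup_cost(child, child_qty)
                     for child, child_qty in boms[item])
```

    Versioning then amounts to keeping multiple dated BOM entries per recipe and rolling up against the version in effect for a given batch.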

    Live demo: https://craftplan.fly.dev (test@test.com / Aa123123123123)

    GitHub: https://github.com/puemos/craftplan

    42 points | github.com | 5 comments

    Show HN: Octosphere, a tool to decentralise scientific publishing

    by crimsoneer · about 8 hours ago

    Hey HN! I went to an ATProto meetup last week, and as a burnt-out semi-academic who hates academic publishing, I thought there might be a cool opportunity to build on Octopus (https://www.octopus.ac/), so I got a bit excited over the weekend and built Octosphere.

    Hopefully some of you find it interesting! Blog post here: https://andreasthinks.me/posts/octosphere/octosphere.html

    46 points | octosphere.social | 13 comments

    Show HN: Sandboxing untrusted code using WebAssembly

    by mavdol04 · about 11 hours ago

    Hi everyone,

    I built a runtime to isolate untrusted code using wasm sandboxes.

    Basically, it protects your host system from the damage untrusted code can cause. There was a great HN discussion about sandboxing in Python recently that elaborates on the problem [1]. In TypeScript, wasm integration is even more natural, given how close the two ecosystems are.

    The core is built in Rust. On top of that, I use WASI 0.2 via wasmtime and the component model, along with custom SDKs that keep things as idiomatic as possible.

    For example, in Python we have a simple decorator:

      from capsule import task
    
      @task(
          name="analyze_data", 
          compute="MEDIUM",
          ram="512mb",
          allowed_files=["./authorized-folder/"],
          timeout="30s", 
          max_retries=1
      )
      def analyze_data(dataset: list) -> dict:
          """Process data in an isolated, resource-controlled environment."""
          # Your code runs safely in a Wasm sandbox
          return {"processed": len(dataset), "status": "complete"}
    
    And in TypeScript we have a wrapper:

      import { task } from "@capsule-run/sdk"
    
      export const analyze = task({
          name: "analyzeData", 
          compute: "MEDIUM", 
          ram: "512mb",
          allowedFiles: ["./authorized-folder/"],
          timeout: 30000, 
          maxRetries: 1
      }, (dataset: number[]) => {
          return {processed: dataset.length, status: "complete"}
      });
    
    You can set CPU (with compute), memory, filesystem access, and retries to keep precise control over your tasks.

    It's still quite early, but I'd love feedback. I’ll be around to answer questions.

    GitHub: https://github.com/mavdol/capsule

    [1] https://news.ycombinator.com/item?id=46500510

    71 points | github.com | 19 comments

    Show HN: Latex-wc – Word count and word frequency for LaTeX projects

    by sethbarrettAU · about 23 hours ago

    I was revising my proposal defense and kept feeling like I was repeating the same term. In a typical LaTeX project split across many .tex files, it’s awkward to get a quick, clean word-frequency view without gluing everything together or counting LaTeX commands/math as “words”.

    So I built latex-wc, a small Python CLI that:

    - extracts tokens from LaTeX while ignoring common LaTeX “noise” (commands, comments, math, refs/cites, etc.)

    - can take a single .tex file or a directory and recursively scan all *.tex files

    - prints a combined report once (total words, unique words, top-N frequencies)

    Fastest way to try it is `uvx latex-wc [path]` (file or directory). Feedback welcome, especially on edge cases where you think the heuristic filters are too aggressive or not aggressive enough.
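    The core idea behind this kind of filtering (strip comments, math, and commands before tokenizing) can be sketched with a few regexes. This is a crude, hypothetical version, not latex-wc's actual heuristics:

```python
import re
from collections import Counter

def latex_words(tex: str) -> Counter:
    """Rough word extraction: drop LaTeX noise, then count tokens."""
    tex = re.sub(r"(?<!\\)%.*", "", tex)                        # % comments
    tex = re.sub(r"\$\$.*?\$\$|\$.*?\$", " ", tex, flags=re.S)  # math
    tex = re.sub(r"\\(?:cite|ref|eqref|label)\*?\{[^{}]*\}", " ", tex)  # refs/cites
    tex = re.sub(r"\\[a-zA-Z]+\*?", " ", tex)                   # command names
    return Counter(w.lower() for w in re.findall(r"[a-zA-Z']+", tex))
```

    A real tool has to handle many more cases (environments, nested braces, verbatim blocks), which is where the "too aggressive or not aggressive enough" trade-offs show up.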

    7 points | www.piwheels.org | 3 comments

    Show HN: C discrete event SIM w stackful coroutines runs 45x faster than SimPy

    by ambonvik · about 9 hours ago

    Hi all,

    I have built Cimba, a multithreaded discrete event simulation library in C.

    Cimba uses POSIX pthread multithreading for parallel execution of multiple simulation trials, while coroutines provide concurrency inside each simulated trial universe. The simulated processes are based on asymmetric stackful coroutines with the context switching hand-coded in assembly.

    The stackful coroutines make it natural to express agentic behavior by conceptually placing oneself "inside" that process and describing what it does. A process can run in an infinite loop or just act as a one-shot customer passing through the system, yielding and resuming execution from any level of its call stack, acting both as an active agent and a passive object as needed. This is inspired by my own experience programming in Simula67, many moons ago, where I found the coroutines more important than the deservedly famous object-orientation.

    Cimba turned out to run really fast. In a simple benchmark, 100 trials of an M/M/1 queue run for one million time units each, it ran 45 times faster than an equivalent model built in SimPy + Python multiprocessing. The running time was reduced by 97.8 % vs the SimPy model. Cimba even processed more simulated events per second on a single CPU core than SimPy could do on all 64 cores.

    The speed is not only due to the efficient coroutines. Other parts are also designed for speed, such as a hash-heap event queue (binary heap plus Fibonacci hash map), fast random number generators and distributions, memory pools for frequently used object types, and so on.
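    The "Fibonacci hash" part of that event queue refers to Fibonacci hashing: multiply the key by roughly 2^64/φ and keep the top bits, which spreads even sequential keys well across buckets. A quick Python illustration of the technique (Cimba's implementation is in C):

```python
# Fibonacci hashing: multiply by the nearest odd 64-bit integer to
# 2^64 / phi (the golden ratio), then take the top `bits` bits.
GOLDEN = 0x9E3779B97F4A7C15
MASK64 = (1 << 64) - 1

def fib_hash(key: int, bits: int) -> int:
    """Map `key` to one of 2**bits buckets."""
    return ((key * GOLDEN) & MASK64) >> (64 - bits)
```

    Because the multiplier is irrational-like relative to powers of two, consecutive keys (common for event IDs and timestamps) land in well-separated buckets without any modulo by a prime.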

    The initial implementation supports the AMD64/x86-64 architecture for Linux and Windows. I plan to target Apple Silicon next, then probably ARM.

    I believe this may interest the HN community. I would appreciate your views on both the API and the code. Any thoughts on future target architectures to consider?

    Docs: https://cimba.readthedocs.io/en/latest/

    Repo: https://github.com/ambonvik/cimba

    49 points | github.com | 16 comments

    Show HN: AnsiColor, resilient ANSI color codes for your TUI

    by gurgeous · about 2 hours ago

    Hi HN. AnsiColor constructs resilient ANSI color codes for your TUI, CLI app, or prompt: colors that work regardless of the user's terminal theme.

    I built this after experiencing the hilarious illegibility of Codex CLI when running with Solarized Dark. If a zillion-dollar company can't get it right, we need better tools.
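    One way to make a color "resilient" is to require it to clear a WCAG contrast threshold against both a near-black and a near-white background, so it stays legible on dark and light themes alike. A hedged sketch of that idea in Python (this is an assumed approach, not necessarily AnsiColor's actual algorithm):

```python
def luminance(rgb):
    """WCAG relative luminance of an (r, g, b) tuple with 0-255 channels."""
    def chan(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(a, b):
    """WCAG contrast ratio between two colors (1.0 to 21.0)."""
    hi, lo = sorted((luminance(a), luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def theme_resilient(rgb, min_ratio=3.0):
    """Legible on both a pure-dark and a pure-light background?"""
    return (contrast(rgb, (0, 0, 0)) >= min_ratio and
            contrast(rgb, (255, 255, 255)) >= min_ratio)
```

    Mid-luminance colors pass this test; very light or very dark ones fail against one of the two backgrounds.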

    It comes with these themes:

      Andromeda
      Ayu Dark/Light
      Bearded Dark/Light
      Catppuccin Frappe
      Catppuccin Latte
      Catppuccin Macchiato
      Catppuccin Mocha
      Dracula
      GitHub Dark
      Gruvbox
      Monokai Dark/Light
      Nord
      One Dark/Light
      Palenight
      Panda
      Solarized Dark/Light
      Synthwave 84
      Tailwind
      Tokyo Night Dark/Light

    2 points | ansicolor.com | 0 comments

    Show HN: Safe-now.live – Ultra-light emergency info site (<10KB)

    by tinuviel · about 16 hours ago

    After reading "During Helene, I Just Wanted a Plain Text Website" on Sparkbox (https://news.ycombinator.com/item?id=46494734), I built safe-now.live, a text-first emergency info site for the USA and Canada. No JavaScript, no images, under 10KB. It pulls live FEMA disasters, NWS alerts, weather, and local resources. This is my first live website ever, so I'm looking for critical feedback. Please feel free to look around.

    https://safe-now.live

    172 points | safe-now.live | 76 comments

    Show HN: I built an AI twin recruiters can interview

    by Charlie112 · about 2 hours ago

    https://chengai.me

    The problem: Hiring new grads is broken. Thousands of identical resumes, but we're all different people. Understanding someone takes time: assessments, phone screens, multiple interviews. Most candidates never get truly seen.

    I didn't want to be just another PDF. So I built an AI twin that recruiters can actually interview.

    What you can do:

    - Interview my AI about anything: https://chengai.me/chat

    - Paste your JD to see if we match: https://chengai.me/jd-match

    - Explore my projects, code, and writing

    What happened: Sent it to one recruiter on LinkedIn. Next day, traffic spiked as it spread internally. Got interview invites within 24 hours.

    The bigger vision: What if this became standard? Instead of resume spam → keyword screening → interview rounds that still miss good fits, let recruiter AI talk to candidate AI for deep discovery. Build a platform where anyone can create their AI twin for genuine matching.

    I'm seeking Software/AI/ML Engineering roles and can build production-ready solutions from scratch.

    The site itself proves what I can do. Would love HN's thoughts on both the execution and the vision.

    2 points | chengai.me | 3 comments

    Show HN: I built "AI Wattpad" to eval LLMs on fiction

    by jauws · about 8 hours ago

    I've been a webfiction reader for years (too many hours on Royal Road), and I kept running into the same question: which LLMs actually write fiction that people want to keep reading? That's why I built Narrator (https://narrator.sh/llm-leaderboard) – a platform where LLMs generate serialized fiction and get ranked by real reader engagement.

    Turns out this is surprisingly hard to answer. Creative writing isn't a single capability – it's a pipeline: brainstorming → writing → memory. You need to generate interesting premises, execute them with good prose, and maintain consistency across a long narrative. Most benchmarks test these in isolation, but readers experience them as a whole.

    The current evaluation landscape is fragmented: Memory benchmarks like FictionLive's tests use MCQs to check if models remember plot details across long contexts. Useful, but memory is necessary for good fiction, not sufficient. A model can ace recall and still write boring stories.

    Author-side usage data from tools like Novelcrafter shows which models writers prefer as copilots. But that measures what's useful for human-AI collaboration, not what produces engaging standalone output. Authors and readers have different needs.

    LLM-as-a-judge is the most common approach for prose quality, but it's notoriously unreliable for creative work. Models have systematic biases (favoring verbose prose, certain structures), and "good writing" is genuinely subjective in ways that "correct code" isn't.

    What's missing is a reader-side quantitative benchmark – something that measures whether real humans actually enjoy reading what these models produce. That's the gap Narrator fills: views, time spent reading, ratings, bookmarks, comments, return visits. Think of it as an "AI Wattpad" where the models are the authors.

    I shared an early DSPy-based version here 5 months ago (https://news.ycombinator.com/item?id=44903265). The big lesson: one-shot generation doesn't work for long-form fiction. Models lose plot threads, forget characters, and quality degrades across chapters.

    The rewrite: from one-shot to a persistent agent loop

    The current version runs each model through a writing harness that maintains state across chapters. Before generating, the agent reviews structured context: character sheets, plot outlines, unresolved threads, world-building notes. After generating, it updates these artifacts for the next chapter. Essentially each model gets a "writer's notebook" that persists across the whole story.

    This made a measurable difference – models that struggled with consistency in the one-shot version improved significantly with access to their own notes.
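    The harness loop described above (review structured context before generating, update the artifacts after) has roughly this shape. A hypothetical sketch with a stub model, not Narrator's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Notebook:
    """Persistent 'writer's notebook' carried across chapters."""
    characters: dict = field(default_factory=dict)
    outline: list = field(default_factory=list)
    unresolved: list = field(default_factory=list)

class StubModel:
    """Stand-in for the real LLM calls; returns canned output."""
    def generate(self, prompt: str) -> str:
        return f"(chapter text, prompted with {len(prompt)} chars of context)"
    def update_threads(self, chapter: str, threads: list) -> list:
        return threads + ["thread introduced in latest chapter"]

def write_chapter(model, notebook: Notebook, chapter_no: int) -> str:
    # Before generating: surface the structured context to the model.
    context = (f"Characters: {notebook.characters}\n"
               f"Outline: {notebook.outline}\n"
               f"Unresolved threads: {notebook.unresolved}")
    chapter = model.generate(f"{context}\n\nWrite chapter {chapter_no}.")
    # After generating: update the artifacts for the next chapter.
    notebook.unresolved = model.update_threads(chapter, notebook.unresolved)
    return chapter
```

    The key property is that the notebook, not the raw chapter text, is what persists between generations, which keeps the context budget bounded as the story grows.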

    Granular filtering instead of a single score:

    We classify stories upfront by language, genre, tags, and content rating. Instead of one "creative writing" leaderboard, we can drill into specifics: which model writes the best Spanish Comedy? Which handles LitRPG stories with Male Leads the best? Which does well with romance versus horror?

    The answers aren't always what you'd expect from general benchmarks. Some models that rank mid-tier overall dominate specific niches.

    A few features I'm proud of:

    Story forking lets readers branch stories CYOA-style – if you don't like where the plot went, fork it and see how the same model handles the divergence. Creates natural A/B comparisons.

    Visual LitRPG was a personal itch to scratch. Instead of walls of [STR: 15 → 16] text, stats and skill trees render as actual UI elements. Example: https://narrator.sh/novel/beware-the-starter-pet/chapter/1

    What I'm looking for:

    More readers to build out the engagement data. Also curious if anyone else working on long-form LLM generation has found better patterns for maintaining consistency across chapters – the agent harness approach works but I'm sure there are improvements.

    23 points | narrator.sh | 27 comments

    Show HN: PII-Shield – Log Sanitization Sidecar with JSON Integrity (Go, Entropy)

    by aragoss · about 8 hours ago

    What PII-Shield does: It's a K8s sidecar (or CLI tool) that pipes application logs, detects secrets using Shannon entropy (catching unknown keys like "sk-live-..." without predefined patterns), and redacts them deterministically using HMAC.

    Why deterministic? So that "pass123" always hashes to the same "[HIDDEN:a1b2c]", allowing QA/Devs to correlate errors without seeing the raw data.

    Key features:

    1. JSON Integrity: It parses JSON, sanitizes values, and rebuilds it, guaranteeing valid JSON output for your SIEM (ELK/Datadog).

    2. Entropy Detection: Uses context-aware entropy analysis to catch high-randomness strings.

    3. Fail-Open: Designed as a transparent pipe wrapper to preserve app uptime.

    The project is open-source (Apache 2.0).

    Repo: https://github.com/aragossa/pii-shield

    Docs: https://pii-shield.gitbook.io/docs/

    I'd love your feedback on the entropy/threshold logic!

    15 points | github.com | 8 comments