Modern News

New

Show HN: XML, Markdown, or JSON: Which gives LLMs the most reliable boundaries?

by systima · 1 minute ago

1|systima.ai|0 comments

Activision put awkward pressure to make a game about Iran invading Israel

by spaghetdefects · 2 minutes ago

1|www.eurogamer.net|0 comments

Ascend: Run Python Functions on Kubernetes

by todsacerdoti · 4 minutes ago

1|ocramz.github.io|0 comments

BYD rolls out EV batteries with 5-minute 'flash charging.' But there's a catch

by jmercouris · 7 minutes ago

1|techcrunch.com|1 comment

Ask HN: Anyone using "Deep Agents" for production or operational tasks?

by codecracker3001 · 9 minutes ago

Is anyone using deep agents (LangChain or others like Claude Code, Codex) internally in their company or for real work that is non-coding?

I'm building one for a specific task, mainly to run through cron (but users can also chat/ask/provide feedback), and I'm trying to understand best practices.

Can anyone share any examples? I'd like to see something that is not a research agent, a coding agent, or a data/financial analyst agent, but something that does real work.

1||0 comments

ChatGPT for Excel and new financial data integrations

by surprisetalk · 11 minutes ago

1|openai.com|0 comments

'ATM jackpotting' leads FBI to issue warning. Here's what to know

by rmason · 12 minutes ago

2|www.usatoday.com|0 comments

Show HN: AgentShield – Real-time risk monitoring for AI agents

by jairooh · 13 minutes ago

1|useagentshield.com|0 comments

Parenting as a Solo Founder

by speckx · 15 minutes ago

1|www.benjaminoakes.com|0 comments

The Cost of Simple

by falsename · 16 minutes ago

2|www.metateam.ai|0 comments

The AI Industry's Moment of Gloom, Doom, and Profit

by cdrnsf · 17 minutes ago

1|www.motherjones.com|0 comments

FBI Nabs Contractor for Allegedly Stealing Crypto from Marshals

by pilingual · 19 minutes ago

2|www.bloomberg.com|0 comments

Show HN: Docker pulls more than it needs to - and how we can fix it

by a_t48 · 19 minutes ago

Hi all!

I've built a small tool to visualize how inefficient `docker pull` is, in preparation for standing up a new Docker registry + transport. It's bugged me for a while that updating one dependency with Docker drags along many other changes. It's a huge problem with Docker+robotics. With dozens or hundreds of dependencies, there's no "right" way to organize the layers that doesn't end up invalidating a bunch of layers on a single dependency update - and this is ignoring things like compiled code, embedded ML weights, etc. Even worse, many robotics deployments are on terrible internet, either due to being out in the boonies or due to customer shenanigans. I've been up at 4AM before supporting a field tech who needed to pull 100MB of mostly unchanged Docker layers to 8 robots on a 1Mbps connection. (And I don't think robotics is the only industry that runs into this, either - see the ollama example; that's a painful pull.)

What if Docker were smarter and knew which files were already on disk? How many copies of `python3.10` do I have floating around `/var/lib/docker`? For that matter, how many copies of it does DockerHub have? A registry that could address and deduplicate at the file level rather than just the layer level would surely be cheaper to run.
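As a rough illustration of the file-level idea, here's a minimal Python sketch that compares two already-unpacked image roots and estimates how many bytes a content-aware pull could skip. All names here (`file_hashes`, `pull_savings`) are hypothetical illustrations, not the tool's actual implementation:

```python
import hashlib
from pathlib import Path

def file_hashes(root: str) -> dict[str, int]:
    """Map content hash -> file size for every regular file under root."""
    hashes = {}
    for p in Path(root).rglob("*"):
        if p.is_file() and not p.is_symlink():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            hashes[digest] = p.stat().st_size
    return hashes

def pull_savings(have_root: str, want_root: str) -> tuple[int, int]:
    """Bytes the wanted image contains vs. bytes actually missing locally."""
    have = file_hashes(have_root)
    want = file_hashes(want_root)
    total = sum(want.values())
    missing = sum(size for h, size in want.items() if h not in have)
    return total, missing
```

A real registry would need to handle permissions, symlinks, and hardlinks too, but the core accounting is just this kind of content-addressed set difference.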

This tool:

    - Given two Docker images, one you have and one you are pulling, finds how much data `docker pull` would transfer, as well as how much data is _actually_ required

    - Shows an estimate of how much time you would save on various levels of cruddy internet

    - Includes a bunch of examples of situations where more intelligent pulls would help, but the two image names are free text - feel free to enter your own values and try it out (one at a time though; there's a work queue to analyze new image pairs)

The one thing I wish it had, but haven't gotten around to fitting into the UI, is a visualization of the files that _didn't_ change but are getting pulled anyway.

It was written entirely in Claude Code, which is a new experience for me. I don't know Next.js at all, and I don't generally write frontends. I could have written the backend maybe a little slower than Claude, but the frontend would have taken me 4x as long and wouldn't have been as pretty. It helped that I knew what I wanted on the backend, I think.

The registry/transport/snapshotter(?) I'm building will allow sharing files across Docker layers both on your local machine and in the registry. There's a bit of prior art here, but only on the client side: the eStargz format allows splitting apart the metadata for a filesystem and the contents while remaining OCI compliant - but it does lazy pulls of the contents and has no deduplication. I think it could easily compete with other image providers both on cost (due to using less storage and bandwidth...everywhere) and on speed.

If you'd be interested, please reach out.

3|dockerpull.com|1 comment

GrapheneOS: Microsoft Authenticator does not support secure Android OS

by RachelF · 20 minutes ago

2|www.heise.de|1 comment

Show HN: Stoneforge – Open-source orchestration for parallel AI coding agents

by adamjking3 · 21 minutes ago

I built this because I was running 3-5 Claude Code instances on the same repo and burning out from constantly context-switching between terminal windows: carefully making sure agents' work didn't overlap, preventing context windows from degrading, manually enforcing documentation/memory policies, and re-explaining decisions across sessions.

Stoneforge is the coordination layer I wanted. A Director agent breaks goals into tasks. A dispatch daemon assigns them to workers when available, and each task runs in its own git worktree. Stewards review completed tasks and squash-merge to main if everything passes inspection; otherwise they hand off the task with review comments to be picked up by a new agent. When a worker hits its context limit, it commits, writes handoff notes, and exits, so the next worker can pick up on the same branch with a fresh context window and the important notes from the previous agent's work.
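The handoff step could look something like this minimal sketch - all names here (`HandoffNote`, `write_handoff`, `resume`) are hypothetical illustrations, not Stoneforge's actual API:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HandoffNote:
    task_id: str
    branch: str            # the git branch the next worker resumes on
    summary: str           # what this session accomplished
    next_steps: list[str]  # what the fresh-context worker should do first

def write_handoff(note: HandoffNote, path: str) -> None:
    """Persist notes alongside the final commit so a new worker can resume."""
    with open(path, "w") as f:
        json.dump(asdict(note) | {"ts": time.time()}, f)

def resume(path: str) -> HandoffNote:
    """Load the previous worker's notes when picking up the branch."""
    with open(path) as f:
        d = json.load(f)
    d.pop("ts")
    return HandoffNote(**d)
```

The point is that state lives in the repo (commits plus a notes file), not in any one agent's context window.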

Some design decisions that might be interesting to this crowd:

- Fully event-sourced with a complete audit log.

- Supports syncing tasks to GitHub or Linear, and documents to Notion, Obsidian, or a local folder. Custom providers are also supported.

- JSONL as source of truth, SQLite as a disposable cache. JSONL diffs, merges across branches, and survives corruption. SQLite gives you FTS5 and indexed queries. The SQLite .db can be rebuilt on different devices in seconds.

- No approval gates by default. If five agents each need confirmation for every file write, you won't be moving any faster. Review happens at the merge-steward level.

- Worktrees over containers. The conflict surface for coding agents is git and the file system; containers or remote instances are overkill. Worktrees create in milliseconds, share node_modules and build caches, and don't need Docker or separate servers.

- You can run multiple Claude Code / Codex plans simultaneously on the same codebase.
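The JSONL-as-truth / SQLite-as-cache split can be sketched roughly like this (a plain indexed table stands in for FTS5 for brevity; the event schema and names are illustrative, not Stoneforge's actual format):

```python
import json
import sqlite3

def rebuild_cache(jsonl_path: str, db_path: str = ":memory:") -> sqlite3.Connection:
    """Replay the JSONL event log (source of truth) into a disposable SQLite cache."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE tasks (task_id TEXT PRIMARY KEY, title TEXT, status TEXT)")
    db.execute("CREATE INDEX idx_status ON tasks(status)")
    with open(jsonl_path) as f:
        for line in f:
            ev = json.loads(line)  # one event per line; later events win
            db.execute(
                "INSERT INTO tasks VALUES (:task_id, :title, :status) "
                "ON CONFLICT(task_id) DO UPDATE SET title=:title, status=:status",
                ev,
            )
    db.commit()
    return db
```

Because the cache is derived entirely from the log, it can be deleted and rebuilt on any device without losing anything - which is what makes it safe to treat as disposable.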

Works with Claude Code, OpenAI Codex, and OpenCode. Apache 2.0. GitHub: https://github.com/stoneforge-ai/stoneforge

Happy to discuss the architecture or any of the tradeoffs.

1|stoneforge.ai|0 comments

ChatGPT vs. MOSQUITO Trolley Problem [YouTube] [video]

by sydney6 · 22 minutes ago

1|www.youtube.com|1 comment

Attempted Hack of Water Treatment Plant in 2021 [pdf]

by sans_souse · 22 minutes ago

1|vault.fbi.gov|0 comments

Mac Studio 512GB RAM Option Disappears Amid Global DRAM Shortage

by ashivkum · 23 minutes ago

5|www.macrumors.com|1 comment

Cluely Retracts June 2025 Revenue Statement

by tech234a · 24 minutes ago

1|twitter.com|0 comments

Auto update and visualize your AI chat context

by nickk81 · 25 minutes ago

1|99helpers.com|0 comments