Hey, I'm Mario. After chatting with a colleague about how AI agents are changing dev work, we got stuck on a question: why share code when prompts can generate it on demand? I wanted to explore this further, so I built "Open Prompt Hub" (think GitHub, but for prompts): https://openprompthub.io
How it Works:
Instead of shipping binaries or source code, you share instructions and specs in the form of a prompt. You can take this prompt, paste it into your agent or IDE, and watch it build. Not a perfect fit? Fork it, tweak it, and generate your custom version.
All metadata (version, description, test cases, etc.) is stored in a frontmatter block at the start of the prompt, so it's one file containing all the info you need. (https://openprompthub.io/docs)
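For illustration, a prompt file might look something like this. The field names here are just an example I made up for this post, not the actual schema (see the docs for that):

    ---
    name: todo-cli            # hypothetical fields, not the real schema
    version: 1.2.0
    description: Generate a minimal todo-list CLI in Python
    models: [claude-sonnet, gpt-4o]
    tests:
      - "`todo add foo` followed by `todo list` prints foo"
    ---
    You are building a small command-line todo app.
    Requirements:
    - Store tasks in a local JSON file.
    - Support add / list / done subcommands.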
Features of the platform:

- Versioned prompts, with info on which models work best
- Forking for customization
- Security scans: prompts are scanned for security issues and prompt injections
- User feedback on whether the prompt successfully built what was promised (scoped per model, so you know which one works best for execution)
- A flagging mechanism
It's an MVP, but the core features (versioning, model-specific build status, and security scanning) are live.
I'm currently looking into further features, such as:

- a git-like CLI for publishing prompts and downloading/piping them directly to your agent (sketched below)
- multi-stage/multi-file prompts for more complex applications
- configurable prompts, e.g. for switching programming languages, features, etc.
- better spec and test definitions for build verification
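To sketch what that CLI could feel like (none of these commands exist yet, it's just the direction I'm thinking in):

    # publish a prompt from your working directory (hypothetical command)
    oph publish ./todo-cli.prompt.md

    # fetch a prompt and pipe it straight into your agent's CLI
    oph get mario/todo-cli | claude -p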
I'd love your feedback on the idea, the spec, and the platform.
by civichalls
I’ve been working on a social product recently, and something keeps bothering me.
Most feeds today are optimized for engagement. That makes sense from a business perspective, but it doesn’t really match how I actually want to use them.
Sometimes I want to go deep into a topic. Sometimes I want to see what people in my city are saying. Sometimes I want to understand opposing viewpoints.
But instead of letting me choose that, the system just keeps reinforcing what I’ve already interacted with. It feels less like exploration and more like being nudged into a loop.
After a while, discovery starts to feel narrow. You see more of the same, even when you don’t want to.
So I've been wondering: is the issue the algorithm itself, or the fact that users don't really have control over it?
What would it look like if people could choose how their feed works instead of having it decided for them?
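To make the question concrete, here's a toy sketch of what "choosing how your feed works" could mean in practice: the same ranking signals, but the user picks the objective. This is purely illustrative, not any real system:

    # Toy example: let the user pick the feed's objective, not the platform.
    from dataclasses import dataclass

    @dataclass
    class Post:
        topic_match: float    # 0..1, closeness to a chosen topic
        locality: float       # 0..1, how local the author is
        opposing_view: float  # 0..1, distance from the user's usual stance
        engagement: float     # 0..1, predicted clicks/likes

    # Each mode is just a different weighting over the same signals.
    MODES = {
        "deep_dive": lambda p: 0.8 * p.topic_match + 0.2 * p.engagement,
        "my_city":   lambda p: 0.8 * p.locality + 0.2 * p.engagement,
        "steelman":  lambda p: 0.7 * p.opposing_view + 0.3 * p.topic_match,
        "default":   lambda p: p.engagement,  # what most feeds optimize today
    }

    def rank(posts, mode="default"):
        return sorted(posts, key=MODES[mode], reverse=True)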
Curious how others here think about this.
This is a social (agentic) experiment I'm really excited about.
AI agents are growing so fast, and their capabilities are evolving at a crazy pace almost every month. It feels inevitable that, at some point, agents will need a financial layer to operate in the real world alongside humans. For example, an agent might eventually need to hire a human to perform a physical task (like setting up a new server for itself).
But before that, I think the first step is enabling economic behavior between agents themselves. What happens when agents can trade their specialized capabilities? Could we start to see dynamics like arbitrage, subcontracting, or even power-law distributions, where the top 20% of agents capture 80% of the value LOL?
To explore this, I built Openstall, an experimental marketplace where agents can trade capabilities with each other. It's completely free (no escrow fees) and still very early.
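To make "trading capabilities" concrete, here's a toy sketch of the mechanics I mean. The data shapes are invented for illustration and are not Openstall's actual API:

    # Toy model of agent-to-agent capability trading
    # (invented shapes, not Openstall's real API).
    from dataclasses import dataclass

    @dataclass
    class Listing:
        seller: str
        capability: str   # e.g. "translate:en->ja", "scrape:public-web"
        price: float      # in whatever credit the marketplace settles in

    book = [
        Listing("agent-a", "translate:en->ja", 0.10),
        Listing("agent-b", "translate:en->ja", 0.25),
    ]

    def best_offer(book, capability):
        offers = [l for l in book if l.capability == capability]
        return min(offers, key=lambda l: l.price) if offers else None

    # An "arbitrage" agent could buy from agent-a at 0.10 and resell
    # below agent-b's 0.25, subcontracting the actual work.
    print(best_offer(book, "translate:en->ja"))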
I'd really appreciate any thoughts, feedback, or ideas.
by anqer
by romanhn
Yes, I get it, you're impacted and it sucks. But given how frequently it occurs, these posts are starting to feel very spam-like. HN is not your favorite tool's status page, and in fact Claude already has one (even if it doesn't update as quickly as you'd like). I much prefer interesting discussions under Ask HN to yet another status question with a chorus of me-toos. And yes, I recognize this meta-rant might be one as well. Thank you!