I built MCPX: https://github.com/lydakis/mcpx
Core idea: MCPX turns MCP servers into Unix-composable commands for agent workflows. It is primarily for agents that are shell-first, and secondarily useful for humans running tools directly.
For me, a practical use case is OpenClaw: OpenClaw can call `mcpx` like any normal CLI and use MCP servers immediately, without implementing custom MCP transport/auth plumbing in OpenClaw itself. This also fits well with Codex Apps mode, where connected apps can be exposed as MCP servers through the same command contract.
Command contract:
- mcpx
- mcpx <server>
- mcpx <server> <tool>
Design choices:
- schema-aware `--help` (inputs + declared outputs)
- native flag surface from MCP `inputSchema`
- pass-through tool output for normal shell composition (`jq`, `head`, pipes)
- explicit exit mapping (`0/1/2/3`)
- stdio + HTTP server support
- optional caching
- `mcpx add` for bootstrapping server config from install links/manifests/endpoints
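The explicit exit mapping is what makes scripted error handling possible. A sketch of the pattern, using a stub function in place of a real `mcpx <server> <tool>` invocation (the specific meanings of codes 1–3 are defined by mcpx and not restated here):

```shell
# Sketch: scripting against mcpx-style exit codes (0/1/2/3).
# `fake_mcpx` stands in for a real mcpx call; it is not part of mcpx.
fake_mcpx() { return 3; }

fake_mcpx
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "tool succeeded"
else
  echo "tool failed with mapped exit code $rc"
fi
```

Because the codes are fixed and documented, wrapper scripts can branch on `$?` instead of parsing error text.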
Examples:
- mcpx github search-repositories --help
- mcpx github search-repositories --query=mcp
- echo '{"query":"mcp"}' | mcpx github search-repositories
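Because tool output passes through unchanged, the examples above compose with ordinary shell tools. A sketch, with simulated output standing in for a real mcpx call (the JSON shape here is an assumption for illustration; the real GitHub server's output may differ):

```shell
# Simulated tool output piped through jq, as mcpx pass-through output would be.
# The payload shape is illustrative only, not the real server's schema.
echo '{"repositories":[{"name":"mcpx"},{"name":"other-repo"}]}' \
  | jq -r '.repositories[].name' \
  | head -1
```

The same pipeline works with `mcpx github search-repositories --query=mcp` in place of the `echo`.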
Install:
- brew tap lydakis/mcpx && brew install --cask mcpx
- npm i -g mcpx-go
- pip install mcpx-go
I would love feedback on command UX, schema/flag edge cases, and where this should stop vs expand.
I built this infinite city for you to fly a helicopter through. Check it out: https://fly.yolopush.com/ I'm planning to connect it to data from yolopush.com, a platform I've built to connect founders.
I'm also planning to add missions between founders, startup buildings, and more.
We kept running into this across teams. A project starts organized, then the tracker slowly drifts from what’s actually happening. Most PM tools assume someone will keep them updated. In practice, that’s where things fall apart.
Decisions stay in Slack. A blocker gets mentioned and never written down. A PR sits open in GitHub. Linear says things are on track, but something isn’t.
We’re trying to see if project management can happen more in the background.
Voca connects to tools teams already use (currently Slack, GitHub, and Linear) and keeps an up-to-date project knowledge base in real time. People can query it, and set up “skills” and automations. Once set, it just runs in the background.
Still early. We’re piloting with a few companies and mostly trying to understand where this helps and where it breaks.
Happy to answer questions or hear how teams here handle this today.
Agents are getting persistent. They have endpoints, capabilities, and uptime. But there's no standard way for one agent to find another.
AgentLookup is a public registry for AI agents. Any agent can register itself with a single POST, search by capability, and discover other agents — no API key, no account, no human in the loop.
The root endpoint is designed for LLMs to read directly:

curl -H "Accept: application/json" https://agentlookup.dev
Returns the full API spec in one response (~4,500 tokens). An agent can read it, understand every endpoint, and register itself without documentation or a tutorial.
For humans, the homepage has a live terminal — you can type real curl commands against the production API.
How it works:
- POST /api/register — register an agent, get back an agent_id and secret
- GET /api/search?capability=code-review — find agents by what they do
- GET /api/discover — browse new, active, and popular agents
- GET /api/a/{agent_id} — look up any agent's full profile
No auth needed for reads. Registration is free. Rate limits are tiered (anonymous → registered → verified) so the registry stays usable as a public good.
There's also a .well-known/agents.json convention so domains can declare which agents they host, similar to .well-known/security.txt.
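A minimal agents.json along the lines of that convention might look like this (the field names below are assumptions for illustration; the post only specifies the well-known location, not the schema):

```json
{
  "agents": [
    {
      "agent_id": "example-agent",
      "endpoint": "https://example.com/agent",
      "capabilities": ["code-review"]
    }
  ]
}
```

Served at https://example.com/.well-known/agents.json, this lets any crawler or peer agent discover what a domain hosts, the same way .well-known/security.txt declares a security contact.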
Built with Next.js on Vercel and Postgres on Neon. The whole thing is live now. An MCP server is coming so agents in Claude/Cursor can query the registry natively.
Interested in what HN thinks about the gap this fills. The idea is that as agents become autonomous and long-running they need addressable identity and discovery the same way websites needed domains and services needed DNS SRV records. Whether that's a registry, a protocol, or something else entirely is an open question.
Built a small Go library called pending for a pattern I kept rewriting: in-process deferred tasks keyed by ID.
> DEMO: