by ALMOIZ_MOHMED
I've been building Energy-Guard OS for the past several months, and I want an honest opinion from people who actually understand the tradeoffs, because I'm stuck at a decision point.

What is it? It's not a fine-tuned LLM. It's a production application of Energy-Based Models (EBMs), an architecture that assigns an energy score to inputs rather than predicting tokens. Low energy = normal. High energy = threat or anomaly.

The core use case: a real-time data gateway that sits between your organization and any AI service, blocking sensitive data (PII, financials, strategic documents) from leaking out while still allowing legitimate AI use. Think of it as a firewall, but one that understands semantic context, not just regex patterns.

More about EBMs:

- No hallucination (it scores, it doesn't generate)
- Calibrated risk score, not a binary block/allow
- Runs on modest hardware: currently 192.8 req/s on a single 4 vCPU / 16 GB RAM machine
- 411 MB model size, under 700 MB memory usage
- Built from scratch on 7 production data sources

The honest test results (10,000+ cases, independent test suite):

- Total Tests: 13,000
- Valid Responses: 13,000
- Success Rate: 100.0%
- Overall Accuracy: 88.74%
- Duration: 18.4s
- Throughput: 704.5 req/s
- Avg Latency: 17.6ms
- P50 Latency: 17.9ms
- P95 Latency: 32.0ms
- P99 Latency: 33.8ms

Accuracy by category:

- Financial Leak Detection: 100%
- PII / Private Data: 100%
- Strategic Data: 100%
- Malicious Code: 95%
- OWASP LLM Top 10: 87%
- Multi-Turn Attacks: 67%
- General Benign (False Positives): 66%
- Overall: 88.7%

F1: 0.927 | Precision: 0.922 | Recall: 0.932 | Specificity: 0.740

The problem I'm facing: after 2 months of tuning, I've gone from 74% to 88.7% overall accuracy. But I've hit a wall where improving one category hurts another. Specifically:

- The false positive rate is too high for general/technical content (the system over-blocks benign code and text)
- Multi-turn conversation attacks are at 67%; the model doesn't fully leverage conversation context yet
- Every time I push one metric up, something else drops

My actual question: do I ship a limited beta now, restricted to the use cases where it performs at 95-100% (financial data, PII, strategic leaks), or do I keep tuning before any real-world exposure?

Why I want to ship:

- Real-world data will teach me more than synthetic test cases
- The high-value use cases already work extremely well
- I've been optimizing against synthetic benchmarks for 2 months

Why I want to wait:

- A 34% false positive rate on general content will frustrate users
- Multi-turn is a known attack vector that's currently weak
- First impressions matter

Website if you want to see more details: https://ebmsovereign.com/

All forms on the website are currently disabled except for email, which will be available for testing within 24 hours.

Genuinely want to hear from people who've shipped security products or ML systems in production. What would you do?
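To make the "calibrated risk score, not binary block/allow" idea concrete, here's a minimal sketch of what the gateway's decision path could look like. Everything here is a hypothetical illustration: the `energyScore` heuristic stands in for real EBM inference, and the threshold, scale, and tier cutoffs are made-up parameters, not Energy-Guard's actual values.

```typescript
// Sketch (hypothetical) of an EBM-style gateway decision: map a raw energy
// score to a calibrated risk in [0, 1], then to a tiered decision instead
// of a hard block/allow.

type Decision = "allow" | "review" | "block";

interface GatewayResult {
  energy: number; // raw EBM score: low = normal, high = anomalous
  risk: number;   // calibrated via a sigmoid around a threshold
  decision: Decision;
}

// Stand-in for the real model; in practice this would run EBM inference.
function energyScore(text: string): number {
  // Toy heuristic for illustration only: count sensitive-looking tokens.
  const hits = (text.match(/\b(ssn|password|account|confidential)\b/gi) ?? []).length;
  return hits * 2.0;
}

function gate(text: string, threshold = 3.0, scale = 1.0): GatewayResult {
  const energy = energyScore(text);
  const risk = 1 / (1 + Math.exp(-(energy - threshold) / scale));
  const decision: Decision = risk < 0.3 ? "allow" : risk < 0.7 ? "review" : "block";
  return { energy, risk, decision };
}

console.log(gate("Here is my password and account number").decision); // "block"
console.log(gate("How do I sort an array in TypeScript?").decision);  // "allow"
```

The middle "review" tier is one way to attack the false-positive problem described above: borderline benign content gets flagged for a second look rather than hard-blocked.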
React Trace is an open-source (MIT) devtool for React apps. You add one component during development, then hover any element to see which component rendered it, inspect props, and take actions on the source code.
Actions include: copying the file:line path, opening the file in your local editor (VS Code, Cursor, Windsurf, WebStorm, IntelliJ), previewing / editing the source in a Monaco editor directly in the browser, and adding review comments that can be sent to an AI agent (copy & paste or native integration with OpenCode).
It runs entirely client-side using source maps to resolve file locations. The plugin system is designed so you can build custom actions. In production, conditional exports swap everything for no-ops at zero cost.
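The production no-op swap described above relies on Node-style conditional exports, which bundlers like Vite and webpack resolve at build time. A sketch of what such a `package.json` could look like (file names are illustrative; React Trace's actual configuration may differ):

```json
{
  "name": "some-devtool",
  "exports": {
    ".": {
      "development": "./dist/index.dev.js",
      "default": "./dist/index.noop.js"
    }
  }
}
```

In development mode the bundler resolves the `"development"` condition and ships the real tooling; production builds fall through to `"default"`, so only the no-op module ends up in the bundle.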
Site: https://react-trace.js.org
Source: https://github.com/buzinas/react-trace
When a human delegates work to an agent, some services may want to verify the agent is actually authorized by a human and potentially even see attributes of the human.
I've built and designed several things that come together to solve this. It's real and can be used now.
The first thing is a proof-of-person check and an attestation, like a stamp in your wallet. Zipwire Attest will do that, and since we have key-values from the ID doc, we attest to a Merkle root too, and link the attestations.
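To illustrate the "attest to a Merkle root over ID key-values" step, here's a minimal sketch. The leaf format, ordering, and hashing rules are assumptions for illustration; Zipwire's actual scheme (salting, hash choice, pairing rules) may differ.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: build a Merkle root over ID-document key-values so
// the root can be attested on-chain while individual fields stay private.

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Each key-value becomes a leaf hash, in a deterministic order.
function leaves(fields: Record<string, string>): string[] {
  return Object.entries(fields)
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([k, v]) => sha256(`${k}:${v}`));
}

// Pairwise-hash up to a single root (duplicate the last node on odd levels).
function merkleRoot(hashes: string[]): string {
  if (hashes.length === 1) return hashes[0];
  const next: string[] = [];
  for (let i = 0; i < hashes.length; i += 2) {
    const right = hashes[i + 1] ?? hashes[i];
    next.push(sha256(hashes[i] + right));
  }
  return merkleRoot(next);
}

const root = merkleRoot(leaves({ nationality: "GB", dateOfBirth: "1990-01-01" }));
console.log(root); // 64-hex-char digest, suitable for an on-chain attestation
```

Revealing a single field later (like `nationality`) then only requires disclosing that leaf's value plus the sibling hashes needed to recompute the root.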
Next comes an attestation from your human-attested wallet to your bot's wallet. I've not gotten around to making a UI for it yet, but this is easily done from Ethereum Attestation Service using the IsDelegate schema (on Base).
Then you can make a special JWT. This is where ProofPack comes in. It's an open-source data format and library for reading and writing proofs / verifiable data exchanges. The proof (JSON or JWT) has a Merkle tree of data and a 'pointer' to an attestation.
You can make a JWT via Zipwire's API and choose any key-values to reveal, like `nationality`.
You'd then present it in a header, and the API you're calling can be like, 'Wass iss dis?' because nobody supports it yet. But if an API dev wanted to, the ProofPack lib can read and check the JWT, walk up the delegation attestation chain (so one agent can delegate to a sub-agent), and verify the human and the claims/Merkle hash.
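Here's a rough sketch of the chain-walking step an API could perform after parsing the JWT. Every name below is a made-up illustration, not the ProofPack library's actual API; a real implementation would also check signatures, revocation, and that the disclosed key-values hash into the attested Merkle root.

```typescript
// Hypothetical sketch of walking a delegation attestation chain from an
// agent's wallet up to a human-attested (proof-of-person) wallet.

interface Attestation {
  attester: string;  // wallet that signed the attestation
  recipient: string; // wallet being attested (agent or sub-agent)
  schema: string;    // e.g. an "IsDelegate" schema id
}

function walkDelegationChain(
  agentWallet: string,
  lookup: (recipient: string) => Attestation | undefined,
  isHumanAttested: (wallet: string) => boolean,
  maxDepth = 5,
): boolean {
  let current = agentWallet;
  for (let depth = 0; depth < maxDepth; depth++) {
    if (isHumanAttested(current)) return true; // reached a proof-of-person wallet
    const att = lookup(current);               // who delegated to this wallet?
    if (!att) return false;                    // chain broken
    current = att.attester;                    // step up one level
  }
  return false;                                // chain too deep
}

// Usage: human -> agent -> sub-agent, so a check on the sub-agent succeeds.
const chain = new Map<string, Attestation>([
  ["0xagent", { attester: "0xhuman", recipient: "0xagent", schema: "IsDelegate" }],
  ["0xsub", { attester: "0xagent", recipient: "0xsub", schema: "IsDelegate" }],
]);
const isHuman = (w: string) => w === "0xhuman";
console.log(walkDelegationChain("0xsub", (r) => chain.get(r), isHuman)); // true
```

The depth limit is just a guard against cycles; production code would track visited wallets explicitly.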
https://docs.zipwire.io/tools-and-integrations/proofpack-age... https://github.com/zipwireapp/ProofPack