Hey HN,
I built Nexus because I kept asking why developers share their work on Twitter when GitHub already has everything that matters — contributions, repos, streaks, stack.
Nexus uses GitHub OAuth so your profile is built automatically. No bios to write, no follower games. Features so far: project showcases with repo previews, syntax-highlighted code snippets in the feed, threaded discussions, and a trending algorithm.
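For a sense of what "built automatically" could mean in practice, here is a minimal sketch of assembling a profile from GitHub's public REST API after OAuth. The build_profile helper and the fields it picks are illustrative assumptions on my part, not Nexus's actual code.

    # Hypothetical sketch (not Nexus's code): assemble a developer profile
    # from GitHub's REST API using the OAuth access token.
    import requests

    API = "https://api.github.com"

    def build_profile(username: str, token: str) -> dict:
        headers = {
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        }
        user = requests.get(f"{API}/users/{username}", headers=headers).json()
        repos = requests.get(
            f"{API}/users/{username}/repos",
            headers=headers,
            params={"sort": "updated", "per_page": 10},
        ).json()
        return {
            "name": user.get("name") or username,
            "stack": sorted({r["language"] for r in repos if r["language"]}),
            "showcase": [{"repo": r["name"], "stars": r["stargazers_count"]}
                         for r in repos],
        }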
Just shipped the social feed (Phase 3). Very early, very few users. Looking for honest feedback from people who actually build things.
What would make you use this over just tweeting about your projects?
Hi everyone, I'm kinda into retrogaming, and during some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"
This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
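If anyone wants to picture the direct GPU-to-NVMe path, here is a minimal sketch using NVIDIA's kvikio bindings for GPUDirect Storage (cuFile). The file name and tensor shape are placeholders, and this is my assumption about the approach, not code from the linked repository.

    # Illustrative sketch (not the linked repo's code): read a weight tensor
    # from NVMe straight into GPU memory via GPUDirect Storage (cuFile).
    import cupy as cp
    import kvikio

    # Pre-allocate the destination tensor in device memory.
    weights = cp.empty((4096, 4096), dtype=cp.float16)

    # With GDS available, the read DMAs bytes from the NVMe drive into GPU
    # memory without staging them in CPU RAM first.
    with kvikio.CuFile("layer0.attn.weight.bin", "r") as f:
        f.read(weights)

On consumer cards, which generally lack official GDS support, kvikio falls back to a compatibility path through host memory, which may be part of why professional GPUs should do better here.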
Medical ML research and competitions often optimize ROC-AUC as the primary performance metric.
However, in real hospital environments, the central question is not classification accuracy — it is escalation timing.
In deterioration detection systems:
• A noisy alert creates alarm fatigue.
• A late alert costs lives.
• A static classifier may fail to reflect dynamic physiology.
I’ve been exploring a framework that introduces:
• Dual-threshold activation (high/low)
• Temporal stability validation
• False-alarm suppression logic
• Governed escalation timing
The aim is to shift from probability scoring toward structured decision triggering.
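To make the discussion concrete, here is a minimal sketch of what dual-threshold activation with temporal stability and a false-alarm cooldown could look like. The thresholds, window length, and cooldown below are invented placeholders, not the framework described above.

    # Illustrative sketch only: dual-threshold escalation with a temporal
    # stability window and false-alarm suppression. Parameters are placeholders.
    from collections import deque

    class EscalationTrigger:
        def __init__(self, high=0.85, low=0.60, window=3, cooldown=10):
            self.high, self.low = high, low       # dual thresholds
            self.scores = deque(maxlen=window)    # temporal stability buffer
            self.cooldown, self.quiet = cooldown, 0

        def update(self, risk_score: float) -> str:
            """Return 'escalate', 'watch', or 'none' for one new score."""
            self.scores.append(risk_score)
            if self.quiet > 0:                    # false-alarm suppression
                self.quiet -= 1
                return "none"
            # Escalate only if the score stays above the high threshold for
            # the whole stability window, not on a single spike.
            if (len(self.scores) == self.scores.maxlen
                    and all(s >= self.high for s in self.scores)):
                self.quiet = self.cooldown
                return "escalate"
            if risk_score >= self.low:            # low threshold: soft watch state
                return "watch"
            return "none"

In this toy version "stability" just means N consecutive readings above the high threshold; a time-aware window or per-vital-sign logic would change the picture, and that is exactly the part I'd like input on.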
I’m curious how others here would approach modeling escalation timing in a clinically responsible way.
Would love perspectives from ML engineers and clinicians.
Hi HN, I'm an independent researcher. Over the last several months, I worked alongside a neuro-symbolic AI daemon to formally verify the Clay Millennium Prize "Yang-Mills Mass Gap" problem directly in the Coq theorem prover.
We mapped the finite lattice topology entirely to the ℝ⁴ continuum by reconstructing the 5 Osterwalder-Schrader axioms, isolating the Millennium formulation into exactly 657 sequential Qed proofs.
We aggressively removed every single heuristic Admitted gap from the main topology. The entire framework now rests on exactly 4 standard textbook axioms (e.g., finite-dimensional Perron-Frobenius theorem, standard statistical mechanics).
The repository contains the raw coqc logic. The formally timestamped preprint is on Zenodo (DOI: 10.5281/zenodo.18726858).
I decided to open-source the kernel execution rather than fight arXiv gatekeepers. Happy to answer any questions about theorem proving, the physics, or the AI methodology.