Modern News

Ask

    Tell HN: Russians may soon lose access to the global internet

    by taminka · 6 minutes ago

    internet censorship has been going on for a while here and most people have adopted xray and other vpn solutions in response

    however, ISPs have begun rolling out white list (essentially an allow list of like a hundred websites) blocks, with mobile internet being essentially completely gone in many places, next step is white list blocks on home broadband ISPs, which has already started happening

    these are extremely difficult if not impossible to bypass, with currently working solutions relying on being deployed to domestic cloud providers' whitelisted subnets

    however, authorities have already started cracking down on this, and with KYC requirements for those VPSs, these solutions are likely to soon vanish too (running a VPN service carries jail time)

    there are some other fringe solutions, like encoding TCP traffic into a video signal, and streaming it over a call via a Russian service like VK video calls, however that relies on those websites being available abroad, and there is no telling how long this will remain a viable solution
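    the core trick behind those video-signal tunnels can be sketched in a few lines: map data symbols to widely spaced pixel levels so they survive lossy video compression, then snap back to the nearest level on the receiving end (a toy illustration of the idea, not any specific tool; the names and level spacing are made up):

```python
# toy sketch: one gray pixel per 4-bit nibble, levels spaced far apart so
# lossy video codec noise can be snapped back to the nearest legal level
LEVELS = [i * 17 for i in range(16)]  # 0, 17, 34, ..., 255

def encode_frame(data: bytes) -> list[int]:
    pixels = []
    for byte in data:
        pixels.append(LEVELS[byte >> 4])    # high nibble
        pixels.append(LEVELS[byte & 0x0F])  # low nibble
    return pixels

def nearest_level(p: int) -> int:
    return min(range(16), key=lambda i: abs(p - LEVELS[i]))

def decode_frame(pixels: list[int]) -> bytes:
    nibbles = [nearest_level(p) for p in pixels]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

# codec noise of up to +/-8 per pixel still decodes cleanly
noisy = [min(255, p + 7) for p in encode_frame(b"hello")]
assert decode_frame(noisy) == b"hello"
```

    real tools in this space add error-correcting codes and sync markers on top, but the robustness-via-sparse-symbols idea is the same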

    i'm not sure what to do to be honest, just thought i'd share, if anyone has any solutions, i'd be very thankful, since i'm out of ideas, outside of going near a border and setting up a point to point wifi signal via a directed antenna (is that even viable anyways?)

    thanks

    3 points | 0 comments

    Does it make sense to ask Blackberry to re-license ancient QNX sources?

    by ymz5 · about 3 hours ago

    The 17-year-old sources of QNX (found at github.com/vocho/openqnx) don't have a clearly defined license file or status. In theory one can use and experiment with them, but they're neither free software nor completely open source.

    Does it make sense to ask QSS/Blackberry to re-license them under e.g. Apache 2.0 license -- the same license they use for their startup code sources?

    If yes, does it make sense to write/publish an open petition?

    4 points | 2 comments

    Ask HN: Analog Model of Transformers

    by JPLeRouzic · about 4 hours ago

    (Sorry if this is a stupid question)

    Roughly 80 years ago many computers were analog; for example, an amplifier with variable resistors could multiply and divide.

    Matrix multiplication seems to be central to the Transformer architecture.

    I wonder if it would make sense to build a sort of Transformer on a vaguely similar analog concept?

    If you wonder why I'm asking: there are still people who design digital computers with tubes or even mechanical relays, so why not analog Transformers?

    https://hackaday.com/2023/12/05/a-single-board-computer-with-vacuum-tubes/

    https://hackaday.io/project/189725-homebrew-16-bit-relay-computer
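    Analog matrix multiplication is in fact an active research area (resistive crossbar arrays): each weight is stored as a conductance, inputs are applied as voltages, and each output is read as a summed current. A toy digital simulation of that idea (the function name and values are mine, not from any specific chip):

```python
# Simulate a resistive crossbar: Ohm's law (I = G * V) does each multiply,
# Kirchhoff's current law (currents summing on a shared wire) does each add.
def crossbar_matvec(G, v):
    """G[r][c] = conductance at row r, column c; v[r] = input voltage on row r."""
    return [sum(G[r][c] * v[r] for r in range(len(v)))
            for c in range(len(G[0]))]

G = [[0.5, 1.0],
     [2.0, 0.0]]
v = [3.0, 4.0]
print(crossbar_matvec(G, v))  # [9.5, 3.0]
```

    The appeal for Transformers is that the whole multiply-accumulate happens in one physical step instead of many digital ones; the catch is noise and limited precision, which is why real analog accelerators mix analog matmul with digital everything-else.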

    7 points | 1 comment

    LLMs learn what programmers create, not how programmers work

    by noemit · about 23 hours ago

    I ran an experiment to see whether a CLI really is the most intuitive format for tool calling (as claimed by an ex-Manus AI backend engineer). I gave my model random scenarios and a single tool, "run", told it that it worked like a CLI, and told it to guess commands.

    it guessed great commands, but it always formatted them with a colon up front, like :help :browser :search :curl

    It was trained on how terminals look, not on what you actually type (you don't type the ":").

    I have since updated my code in my agent tool to stop fighting against this intuition.
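    "stop fighting the intuition" can be as small as normalizing the model's output before dispatching it; a hypothetical sketch (the function name and shape are mine, not the author's actual code):

```python
import shlex

def parse_tool_call(raw: str) -> list[str]:
    """Accept both ':search foo' and 'search foo' from the model."""
    raw = raw.strip()
    if raw.startswith(":"):
        raw = raw[1:]  # drop the vim/less-style prompt marker the model imitates
    return shlex.split(raw)

assert parse_tool_call(":search foo bar") == ["search", "foo", "bar"]
assert parse_tool_call("curl -s https://example.com") == ["curl", "-s", "https://example.com"]
```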

    LLMs learn what commands look like in documentation and artifacts, not what the human actually typed on the keyboard.

    Seems so obvious. This is why you have to test your LLM and see how it naturally works, so you don't have to fight it with your system prompt.

    This is Kimi K2.5, btw.

    36 points | 12 comments

    Ask HN: Founders of Estonian e-businesses – is it worth it?

    by udl · about 10 hours ago

    Hey there,

    I'm currently considering opening an Estonian e-business for a small SaaS project. For somebody from Germany, establishing a company at home is tedious and bureaucratic. Now I've come across the Estonian e-residency program and the option to run a business there. I don't care so much about the tax implications, more about the bureaucracy aspect. It all sounds quite good, but marketing is marketing and real life is often something else. So, long story short: I would be happy if somebody could share their real-life experiences. Was it/is it worth it? Are there any pitfalls?

    Thanks!

    9 points | 4 comments

    Ask HN: $50 monthly budget, which coding models would you recommend now?

    by klueinc · about 7 hours ago

    I currently have a Claude Pro monthly subscription ($20) which I use for coding. It's been useful, but I'm fatigued from optimising my work around its session limits. There are so many choices and providers out there today, but it's hard to get a good signal about what's good. I'm not looking for another Opus-level model, just something reliable enough that it can follow TDD well.

    8 points | 13 comments

    Ask HN: AI productivity gains – do you fire devs or build better products?

    by Bleiglanz · 2 days ago

    i was rolling my eyes at the hype, but reading about this is totally different from experiencing it. if you have any old repos out there - try it, you might actually be amazed.

    i'm not sure i buy the long-term "*90% productivity*" claims for complex, legacy enterprise systems, but for the boilerplate, libraries, build-tools, and refactoring? the gain is gigantic. all the time-consuming, nerve-wracking stuff is mostly taken care of.

    you start off checking every diff like a hawk, expecting it to break things, but honestly, soon you see it's not necessary most of the time. you just keep your IDE open and feed the "analyze code" output back into it. in java, telling it to "add checkstyle, run mvn verify and repair" works well enough that you can actually go grab a coffee instead of fighting linter warnings.
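    that "run mvn verify and repair" loop is easy to sketch; a hypothetical outline, where `run_build` and `ask_agent` stand in for your build command and whatever model call you use:

```python
import subprocess

def verify_repair_loop(run_build, ask_agent, max_rounds: int = 5) -> bool:
    """run_build() -> (ok, log); on failure, feed the log back to the agent."""
    for _ in range(max_rounds):
        ok, log = run_build()
        if ok:
            return True    # build and checks pass; go grab that coffee
        ask_agent(log)     # let the agent patch the code, then retry
    return False

def mvn_verify():
    # example wiring for maven (assumes mvn is on PATH)
    r = subprocess.run(["mvn", "verify"], capture_output=True, text=True)
    return r.returncode == 0, r.stdout + r.stderr
```

    the max_rounds cap matters: without it, an agent stuck on the same linter error will happily burn tokens forever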

    the theory is that what remains is just the logic and ideas. we'll see how that holds up when the architecture gets genuinely tangled. but for now, letting it branch off, create boilerplate, and write a simple test while you just iterate on the spec works shockingly well. you only write source code when it's too annoying to write down the spec in plain english.

    it raises the real question: if your competitor Y just fired 90% of their developers to save a buck, would you blindly follow suit? or would you keep your team, use this massive leverage, and just *dwarf* Y with a vastly better product?

    104 points | 196 comments

    Ask HN: Is anyone here also developing "perpetual AI psychosis" like Karpathy?

    by jawerty · 1 day ago

    I read on Reddit about a podcast where Karpathy described how he went from writing 80% of his own code to 0%, being in a constant state of “AI psychosis” because the possibilities feel infinite.

    I’ve personally found that my workflow has become very “opportunistic”—I feel like I can do anything with AI, so I try everything. That might be good…or bad. I’d be curious to see what HN has to say, or whether anyone else has experienced something similar.

    Here’s the Reddit post for context: https://www.reddit.com/r/ClaudeAI/comments/1s08r1c/karpathy_says_he_hasnt_written_a_line_of_code/

    Anyone else feeling this way?? If not psychosis, which may be an exaggeration, then more stressed, frazzled, whatever.

    26 points | 22 comments

    Ask HN: Is using AI tooling for a PhD literature review dishonest?

    by latand6 · about 22 hours ago

    I'm a PhD student in structural engineering. My dissertation topic is about using LLM agents in automating FEA calculations on common Ukrainian software that companies use. I'm writing my literature review now and I've vibecoded a personal local dashboard that helps me manage the literature review process.

    I use LLM agents to fill in the LaTeX template in a GitHub repo (to automate formatting; an IDE also lets you view diffs). Then I run ChatGPT Pro to collect all the papers relevant to my topic, and how they are relevant. Then I collect the ones available online, where the PDFs are accessible. I keep a special folder structure with plain files like Markdown and JSON.

    The idea of the dashboard is the following: I run Codex through a web chat to identify the quotes relevant to my dissertation topic, and how they are relevant; it combines them into a number of claims, each connected to a quote with a link. Then I review each quote and each claim manually and tick the boxes. There is also a button that runs a verification script, which validates that the exact quote really IS in the PDF. This way I can collect real evidence and gain new insights while reading.
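    The core of such a verification script can be quite small: a whitespace-tolerant substring check against the extracted PDF text. A sketch (the helper names are mine; extracting pdf_text with a library like pypdf is assumed, shown only as a comment):

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and rejoin words hyphenated across PDF line breaks."""
    text = text.replace("-\n", "")
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_in_text(quote: str, pdf_text: str) -> bool:
    return normalize(quote) in normalize(pdf_text)

# pdf_text would come from e.g.:
#   from pypdf import PdfReader
#   pdf_text = "\n".join(p.extract_text() or "" for p in PdfReader("paper.pdf").pages)
page = "Finite element analy-\nsis is widely   used\nin structural design."
assert quote_in_text("Finite element analysis is widely used", page)
```

    The normalization step is what makes it practical: raw PDF extraction breaks lines and hyphenates words, so an exact string match would reject genuine quotes.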

    I remember doing all of this manually during my master's degree in the UK. That was a terrible and tedious experience, partly because I have ADHD.

    So my question is, is it dishonest?

    Because I can defend every claim in the review: I built the verification pipeline and manually reviewed each quote. I arguably understand the literature better than if I had read everything myself and highlighted it all. But I know that many universities would consider any AI-generated text academic misconduct.

    I don't quite understand the principle behind this position. If you outsource proofreading, nobody cares; when you use Grammarly, same thing. But if I use an LLM to create text from verified, structured, human-reviewed evidence, it might be considered dishonest.

    8 points | 22 comments

    Anonymize / de-identify LLM chat history export, post-processing

    by msiraj1 · about 19 hours ago

    Hi all, this is my first question! I have found a lot of pre-processing tools for anonymizing prompt data, but I was wondering if anyone knows of tools that can be used to post-process LLM chat history files.

    I want to conduct a study and would like the participants' chat histories to be more easily anonymized, so that when I receive them, PII risk is reduced.

    Another step I will need is dropping chats that discuss personal health, or rather summarizing chats that touch on personal health topics? I really don't know, hence asking here before just developing it on my own!
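    For the structured-PII part, a minimal regex pass over exported chats might look like this (a toy sketch; the patterns and labels are mine, and a real study would likely want an NER-based tool such as Microsoft Presidio on top of pattern matching):

```python
import re

# Simple PII patterns, applied in order; each match is replaced by its label.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matching spans with their category label."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

assert scrub("mail me at jane.doe@example.com") == "mail me at [EMAIL]"
assert scrub("call 555-123-4567 tonight") == "call [PHONE] tonight"
```

    Regexes alone will miss names and addresses, which is exactly where the NER-based tools earn their keep; the two approaches are usually layered.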

    2 points | 0 comments