by chrisecker ·
I've built a rich text data model for a desktop word processor in Python, based on a persistent balanced n-ary tree with cached weights for O(log n) index translation. The document model uses only four element types: Text, Container, Single, and Group — where Group is purely structural (for balancing) and has no semantic meaning in the document. Individual elements are immutable; insert and takeout return new trees rather than mutating the old one. This guarantees that old indices remain valid as long as the old tree exists. I'm aware of Ropes, Finger Trees, and ProseMirror's flat index model. Is there prior art I should know about — specifically for rich text document models with these properties?
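To make the question concrete, here is a minimal sketch of the idea as I understand it (not the actual implementation): immutable `Text` leaves and a structural `Group` node that caches the total character count ("weight") of its subtree, so a flat index can be routed to a leaf in O(log n) on a balanced tree, and `insert` rebuilds only the spine and returns a new tree while the old one stays valid. Rebalancing and the `Container`/`Single` element types are omitted.

```python
class Text:
    """Leaf holding a run of characters. Immutable."""
    def __init__(self, s):
        self.s = s
        self.weight = len(s)  # cached length of this leaf

    def text(self):
        return self.s

    def insert(self, i, s):
        # Returns a NEW leaf; the original is untouched.
        return Text(self.s[:i] + s + self.s[i:])


class Group:
    """Purely structural node (no semantic meaning); caches the
    summed weight of its children for O(log n) index translation."""
    def __init__(self, childs):
        self.childs = tuple(childs)  # immutable child sequence
        self.weight = sum(c.weight for c in self.childs)

    def text(self):
        return "".join(c.text() for c in self.childs)

    def insert(self, i, s):
        # Walk the children, subtracting weights to find the one
        # containing index i, and rebuild only that path. All other
        # children are shared between old and new tree.
        new_childs = list(self.childs)
        for k, c in enumerate(self.childs):
            if i <= c.weight:
                new_childs[k] = c.insert(i, s)
                break
            i -= c.weight
        return Group(new_childs)
```

Because the old root still references the untouched children, indices computed against the old tree remain valid for as long as that tree is kept alive, which is the persistence property described above.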
by smudgy3746 ·
Hi HN,
This is a tool I've been working on for the past few months.
Instead of giving LLM tools SSH access or installing them on a server, the following command:
$ promptctl ssh user@server
makes a set of locally defined prompts "magically" appear within the remote shell as executable command-line programs. For example, I have locally defined prompts for `llm-analyze-config` and `askai`. Then on (any) remote host I can:
$ promptctl ssh user@host
# Now on remote host
$ llm-analyze-config /etc/nginx.conf
$ cat docker-compose.yml | askai "add a load balancer"
The prompts behind `llm-analyze-config` and `askai` execute on my local computer (even though they're invoked remotely) via the LLM of my choosing. This way LLM tools are never granted SSH access to the server, and nothing needs to be installed on the server. In fact, the server doesn't even need outbound internet access.
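Neat. For readers wondering how the commands can "appear" remotely with nothing installed, here is one plausible mechanism (a hypothetical sketch, not promptctl's actual implementation): for each local prompt, emit a tiny bash function into the remote session that forwards its arguments and stdin over an extra file descriptor wired back across the SSH channel, then streams the reply. The LLM call happens on the local side; the `PROMPTS` names and fd 9 framing below are assumptions for illustration.

```python
# Prompt names taken from the post; the fd-9 protocol is invented here.
PROMPTS = {"askai", "llm-analyze-config"}


def remote_shim(name):
    """Return bash code defining `name` as a command in the remote shell.

    The shim writes a request header, its arguments, and stdin to fd 9
    (which the local controller would attach to an SSH channel), then
    prints whatever the local side sends back. Framing is simplified.
    """
    return (
        f'{name}() {{\n'
        f'  printf "PROMPT {name} %s\\n" "$*" >&9\n'  # request header + args
        f'  cat >&9\n'                                # forward stdin, if any
        f'  printf "END\\n" >&9\n'                    # end-of-request marker
        f'  cat <&9\n'                                # stream the local reply
        f'}}\n'
    )


# Bootstrap script sourced into the remote shell after `promptctl ssh`.
bootstrap = "\n".join(remote_shim(p) for p in sorted(PROMPTS))
```

Hyphenated function names like `llm-analyze-config` work in bash but not in strict POSIX sh, so a real tool would probably also ship a fallback spelling or use a dispatcher function.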
Eager to get feedback!
by VWWHFSfQ ·
It seems that as LLM coding agents become more and more sophisticated and capable, that adage may no longer hold. It seems increasingly likely that you can have all three.