I'm working on an iPhone app and I'm impressed with how well Claude generates decent, working code from plain-English prompts. I don't have any previous experience building apps or writing Swift, but I do have a C++ background. Working in smaller chunks and incrementally adding features, rather than writing one large prompt for the whole app, seems more practical: it's easier to review and builds confidence.
Adding/prompting features one by one, reviewing the code, and then testing the resulting binary feels like the new programming workflow.
The lines of defense are different. All my Linux applications are installed either via Flatpak (which runs them in a sandbox) or from the official package repository (which requires programs to be open source and has a strong track record).
Funny story… I used to whistle and imitate bird calls. Some birds would go along and keep making the same sound with me (more likely it was me making the same sound as them). A few others, after I repeated the call a few times, popped on me. It happened enough times for it not to be a coincidence.
Wasn’t Joyent set up as a Solaris-first stack alternative after Sun’s acquisition? Meaning they were Solaris-first, and the solutions built on top came after.
Why can’t browsers/servers just store a standard English dictionary and communicate via indexes? Anything that isn’t in the dictionary can be sent raw. I’ve always had this thought but don’t see why it isn’t implemented. It might get a bit more involved for other languages, but the principle remains the same.
Thinking about it a bit more: we already do this at the character level with the Unicode table, so why can’t we look up words, or maybe even common sentences?
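Here is a minimal sketch of what such a scheme could look like. The tiny DICTIONARY, the marker bytes, and the length-prefix framing are all my own illustrative choices, not any real protocol:

```python
import struct
import zlib

# Toy shared dictionary -- a real deployment would ship ~100k words.
DICTIONARY = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
INDEX = {w: i for i, w in enumerate(DICTIONARY)}

def encode(text):
    """Known words become marker 0x01 + a 2-byte index; unknown words are
    sent raw behind marker 0x00 and a 1-byte length."""
    out = bytearray()
    for word in text.split():
        if word in INDEX:
            out += b"\x01" + struct.pack(">H", INDEX[word])  # 3 bytes per word
        else:
            raw = word.encode()
            out += b"\x00" + bytes([len(raw)]) + raw
    return bytes(out)

def decode(blob):
    words, i = [], 0
    while i < len(blob):
        if blob[i] == 1:
            words.append(DICTIONARY[struct.unpack(">H", blob[i+1:i+3])[0]])
            i += 3
        else:
            n = blob[i + 1]
            words.append(blob[i+2:i+2+n].decode())
            i += 2 + n
    return " ".join(words)

text = "the quick brown fox jumps over the lazy dog"
packed = encode(text)
assert decode(packed) == text
# 43 raw bytes vs 27 packed; note every known word costs a flat 3 bytes.
print(len(text), len(packed), len(zlib.compress(text.encode())))
```

Interestingly, something close to this does exist: Brotli (RFC 7932) ships a predefined static dictionary of common strings alongside its dynamic compression.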
Compression is limited by the pigeonhole principle. You can't get any compression for free.
There's every possible text in Pi, but on average it costs as much as or more to encode the location of the text than to send the text itself.
To get compression, you can only shift costs around, by making some things take fewer bits to represent, at the cost of making everything else take more bits to disambiguate (e.g. instead of all bytes taking 8 bits, you can make a specific byte take 1 bit, but all other bytes will need 9 bits).
To be able to reference words from an English dictionary, you will have to dedicate some sequences of bits to them in the compressed stream.
If you use your best and shortest sequences, you're wasting them on picking from an inflexible fixed dictionary instead of representing data in some more sophisticated, more frequently useful way (which decoders already do, by building adaptive dictionaries on the fly and using other dynamic techniques).
If you instead try to avoid hurting normal compression by assigning the less valuable, longer sequences of bits to the dictionary words, those sequences will likely end up being longer than the words themselves.
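The 1-bit/9-bit trade-off in the parenthetical can be checked with the Kraft–McMillan inequality, which every uniquely decodable binary code must satisfy:

```python
# Kraft-McMillan: summing 2**(-length) over all codewords must give <= 1.
# Baseline: all 256 byte values at 8 bits each exactly fills the budget.
assert sum(2 ** -8 for _ in range(256)) == 1.0

# Promote one byte to a 1-bit code while leaving the rest at 8 bits:
# the budget is blown, so no such uniquely decodable code exists.
assert 2 ** -1 + 255 * 2 ** -8 > 1

# The other 255 bytes must grow to at least 9 bits ("1" prefix + 8 bits):
budget = 2 ** -1 + 255 * 2 ** -9
assert budget <= 1
print(budget)  # 0.998046875
```

Any bits you reserve for dictionary words come out of this same fixed budget, which is why the shortest codes are too precious to spend on a static word list.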
Agreed. I used to say that documenting a program precisely and comprehensively ends up being code itself. We either need a DSL that can specify programs at a higher level, or domain-specific LLMs.
A long time ago, I had the same problem while ray tracing with Monte Carlo techniques. Switching to the Mersenne Twister fixed the clustering and grid-like patterns in the samples.
https://en.m.wikipedia.org/wiki/Mersenne_Twister
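For anyone curious what grid-like clustering from a weak generator looks like, here is a sketch using RANDU, a classic LCG with exactly this flaw (RANDU is my illustrative stand-in; the generator originally at fault isn't named above). CPython's random module happens to use the Mersenne Twister (MT19937):

```python
import random

def randu(seed, n):
    """RANDU, a notoriously bad LCG: x_{k+1} = 65539 * x_k mod 2**31."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        xs.append(x)
    return xs

xs = randu(1, 1000)
# Every consecutive triple satisfies x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2**31),
# so in 3D the points collapse onto a handful of planes -- the grid-like
# artifacts that show up in naive Monte Carlo sampling.
assert all((xs[k+2] - 6*xs[k+1] + 9*xs[k]) % 2**31 == 0 for k in range(998))

# The Mersenne Twister has no such low-dimensional lattice structure
# (it is equidistributed up to 623 dimensions for 32-bit outputs).
rng = random.Random(1)
ys = [rng.getrandbits(31) for _ in range(1000)]
assert any((ys[k+2] - 6*ys[k+1] + 9*ys[k]) % 2**31 != 0 for k in range(998))
```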
Prompt/Review/Test - PRET.