Hacker News | longhaul's comments

I'm working on an iPhone app and am impressed with how well Claude generates decent, working code from prompts in plain English. I don't have prior experience building apps or with Swift, but I have a C++ background. Working in smaller chunks and adding features incrementally, rather than using one large prompt for the whole app, seems more practical: it's easier to review and builds confidence.

Adding/prompting features one by one, reviewing the code, and then testing the resulting binary feels like the new programming workflow.

Prompt/Review/Test - PRET.


How do you deal with malware etc.? Are there reliable products available?


The lines of defense are different. All my Linux applications are installed either via Flatpak (which runs in a sandbox) or via the official package registry (which requires programs to be open source and has a strong track record).


This was my thesis as well, but only insiders can confirm how much of it is true.


Funny story… I used to whistle and imitate bird calls. Some birds would go along and keep making the same sound with me (or rather, I was making the same sound as them). A few others, after I repeated the call a few times, pooped on me. It happened enough times for it not to be a coincidence.


I read a paper a long time ago; it had something to do with an increase of sulfur in the brain.


Wasn't Joyent set up as a Solaris-first stack alternative after Sun's acquisition? Meaning they were Solaris-first, and the solutions built on top came after.


Why can't browsers/servers just store a standard English dictionary and communicate via indexes? Anything that isn't in the dictionary can be sent raw. I've always had this thought but don't see why it isn't implemented. It might get a bit more involved with other languages, but the principle remains the same.

Thinking about it a bit more: we already do this at the character level with a Unicode table, so why can't we look up words or maybe even common sentences?
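The scheme being proposed can be sketched in a few lines. This is a toy, with a hypothetical eight-word shared list standing in for a real dictionary: known words go over the wire as small indexes, everything else as raw text.

```python
# Toy sketch of the proposed scheme: both sides share a fixed word list
# and exchange indexes for known words, raw strings for everything else.
# SHARED_DICT is a stand-in; a real shared dictionary would be far larger.
SHARED_DICT = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
INDEX = {w: i for i, w in enumerate(SHARED_DICT)}

def encode(text):
    out = []
    for word in text.split():
        if word in INDEX:
            out.append(("idx", INDEX[word]))   # send a small index
        else:
            out.append(("raw", word))          # send the word as-is
    return out

def decode(tokens):
    return " ".join(SHARED_DICT[v] if kind == "idx" else v
                    for kind, v in tokens)

msg = "the quick brown fox greets the lazy dog"
assert decode(encode(msg)) == msg  # round-trips; "greets" travels raw
```

The catch, as the replies below point out, is that the index stream still has to be encoded in bits somehow, and those bit patterns compete with everything else you might want to represent.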


Compression is limited by the pigeonhole principle. You can't get any compression for free.

There's every possible text in Pi, but on average it's going to cost the same or more to encode the location of the text than the text itself.

To get compression, you can only shift costs around, by making some things take fewer bits to represent, at the cost of making everything else take more bits to disambiguate (e.g. instead of all bytes taking 8 bits, you can make a specific byte take 1 bit, but all other bytes will need 9 bits).
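That 1-bit/9-bit tradeoff can be checked directly with a prefix-free code: give one favored byte the codeword `0`, and send every other byte as `1` followed by its 8 raw bits. The payoff depends entirely on how often the favored byte occurs. (The data here is made up for illustration.)

```python
# Prefix-free code from the example above: the favored byte value costs
# 1 bit ("0"); every other byte costs 9 bits ("1" + its 8 raw bits).
# The total only shrinks if the favored byte is frequent enough.
def encoded_bits(data: bytes, favored: int) -> int:
    return sum(1 if b == favored else 9 for b in data)

data = bytes([0] * 90 + list(range(1, 11)))  # 100 bytes, 90% zeros
assert encoded_bits(data, favored=0) == 90 * 1 + 10 * 9  # 180 bits
assert len(data) * 8 == 800                              # vs. 800 bits raw
```

On uniformly random data the same code is a net loss: almost every byte pays the 9-bit penalty, which is the pigeonhole principle at work.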

To be able to reference words from an English dictionary, you will have to dedicate some sequences of bits to them in the compressed stream.

If you use your best and shortest sequences, you're wasting them on picking from an inflexible fixed dictionary, instead of representing data in some more sophisticated way that is more frequently useful (which decoders already do by building adaptive dictionaries on the fly and other dynamic techniques).

If you try to avoid hurting normal compression and assign less valuable longer sequences of bits to the dictionary words instead, these sequences will likely end up being longer than the words themselves.


Compression algorithms like Brotli already do this:

https://www.rfc-editor.org/rfc/rfc7932#page-28


Brotli has a built-in dictionary.
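The same mechanism is available in Python's standard library via zlib's preset-dictionary support (`zdict`), which makes the effect easy to demonstrate: strings both sides already share can be back-referenced instead of transmitted. This is an illustration with made-up sample text, not Brotli's actual dictionary.

```python
import zlib

# zlib's preset dictionary (zdict) works like Brotli's built-in one:
# the compressor may reference substrings of `preset` without sending them.
preset = b"the quick brown fox jumps over the lazy dog"
msg = b"the quick brown fox jumps over the lazy dog again and again"

co = zlib.compressobj(zdict=preset)
compressed = co.compress(msg) + co.flush()

do = zlib.decompressobj(zdict=preset)
assert do.decompress(compressed) == msg

# The same message without the shared dictionary compresses worse,
# since the long common prefix must be sent literally.
assert len(compressed) < len(zlib.compress(msg))
```

Both sides must agree on the dictionary out of band, which is exactly what Brotli does by baking one into the format itself.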


QA is what SEs will be doing: testing, followed by feedback to LLMs. Why can't product folks just do this eventually, without SEs?


Product folks often don't know what they want, and don't know what's possible.


Agreed. I used to say that documenting a program precisely and comprehensively ends up being code. We either need a DSL that can specify behavior at a higher level or domain-specific LLMs.


A long time ago, I had the same problem while ray tracing with Monte Carlo techniques. Switching to the Mersenne Twister fixed the clustering and grid-like patterns in the random samples. https://en.m.wikipedia.org/wiki/Mersenne_Twister
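Python's `random` module is itself a Mersenne Twister, so a minimal Monte Carlo run is easy to sketch. This is a generic illustration (estimating pi from points in the unit square), not the ray tracer described above:

```python
import random

# Python's `random` module uses the Mersenne Twister internally.
# Monte Carlo estimate of pi: sample points in the unit square and
# count the fraction landing inside the quarter circle x^2 + y^2 <= 1.
rng = random.Random(42)  # seeded for reproducibility
n = 100_000
inside = sum(1 for _ in range(n)
             if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
pi_est = 4 * inside / n
assert abs(pi_est - 3.14159) < 0.05  # well within Monte Carlo error at this n
```

With a poor generator, visible lattice structure in successive samples biases exactly this kind of estimate; a long-period generator like the Mersenne Twister avoids it.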

