Hacker News — vitaelabitur's comments

Shameless plug for my structured LLM outputs handbook which is written in a similar spirit: https://nanonets.com/cookbooks/structured-llm-outputs/


Thanks for sharing, appreciate it.

Would you say it is convenient for writing, say, 500 words in one go?


If you’re good at handwriting, there is no difference between it and a real paper notebook. There are also many notebook templates.


> And really, really boring and slow a lot of the time

If you only watched the story-driven scenes in Lawrence of Arabia, and skipped the prolonged shots of the desert, you would miss out on feeling the same vastness and heat Lawrence is feeling.

There is a limit to how much a film can make you think or feel. Films that reach the highest limits need "boring" voids in-between the primary scenes. These voids are not to ingest more, but to help digest what has been ingested in previous scenes, with subliminal scenes and silence that let the right thoughts and feelings grow.


The word you are looking for, if you are looking for one, is declinism.


I wasn't but I'm glad you told me, thanks!


We traded books for films, and now films for short videos, always moving towards what is easier to enjoy.

Quite a while ago, books became a taste that needs to be patiently acquired. Someone starting to read today is more likely to develop the taste by gradually easing into books that demand more and more. Say, maybe, Huxley -> Camus -> Wilde -> Dostoevsky.

Now that short clips are here, the same has happened to films. The uninitiated need to sit through Scorsese, Hitchcock, Wilder, Kubrick, Altman before attempting Fellini, Antonioni, Tarkovsky, Ozu, Resnais.

And by the way, someone who is naturally inclined to love films (or books) won't be affected, even today. Am I wrong? The way they are described here, I would crush these film students.


I think TV series are bigger than films now. They have established characters and storylines spanning several shorter episodes, with cliffhangers, recaps, etc. Once you get into a series, you follow it for several seasons. It's now the preferred way to tell stories.

I usually prefer films over TV series because I find exactly these tropes tiring. TV series have quite inefficient storytelling and spend most of their time trying to get me hooked enough to watch the next episode.


It helped that books were all we had. I probably would have preferred little snippets of dopamine, too.

I'm kinda glad I walked across campus glued to a book. But it came from the same low tolerance for boredom that people show today.


This is my all-time favorite mathematics book. It approaches calculus fundamentals through a series of fictional conversations between a teacher and a student.

It is easy to read, yet I remember it gave me an odd visceral sense of what calculus is (or maybe I just thought it did).

I am reading it again.


Reinforcement learning for humans. I like it!


You are absolutely right — Let me know if you want to read my personal anecdote on "Dead Internet Theory"...

Yeah, I especially hate how paranoid everyone is (but rightly so). I am constantly suspicious of others' perfectly original work being AI, and others are constantly suspicious of my work being AI.


We used Docusaurus.


One of the authors here. I've read the paper. Brilliant work, especially the slicing implementation for denser token masks.


> Increasing context length by complaining about schema errors is almost always worse from an end quality perspective than just retrying till the schema passes.

Another way to do this is to use a hybrid approach. You perform unconstrained generation first, and then constrained generation on the failures.
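A minimal sketch of that hybrid flow, assuming a hypothetical `hybrid_generate` wrapper around illustrative `generate_unconstrained` / `generate_constrained` callables and a schema validator (all names are made up for illustration, not from any specific library); the stubs stand in for real LLM calls so the sketch runs on its own:

```python
import json

def hybrid_generate(prompt, generate_unconstrained, generate_constrained, validate):
    """Try cheap unconstrained decoding first; fall back to
    constrained decoding only when the output fails validation."""
    raw = generate_unconstrained(prompt)
    if validate(raw):
        return raw
    # Second (and final) call: constrained decoding is assumed to
    # always yield schema-valid output.
    return generate_constrained(prompt)

# --- toy stand-ins so the sketch is runnable without an LLM ---
def validate(text):
    try:
        obj = json.loads(text)
        return isinstance(obj, dict) and "name" in obj
    except json.JSONDecodeError:
        return False

def fake_unconstrained(prompt):
    return "not valid json"  # simulate a schema failure

def fake_constrained(prompt):
    return '{"name": "constrained"}'

result = hybrid_generate("p", fake_unconstrained, fake_constrained, validate)
print(result)  # the unconstrained attempt fails, so the fallback answer wins
```

The win over pure retrying is that the fallback terminates in one extra call instead of looping until the schema happens to pass.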


There's no difference in the output distribution between always doing constrained generation and only doing it on the failures though. What's the advantage?


There's no advantage wrt output quality, but it can be more economical in some high-error regimes, with fewer LLM calls used in resampling (max 2 for most errors).


My point is that if you're capable of doing constrained generation and want to try once and then constrain on failure, since that has the same output distribution as doing constrained generation in the first place, you'd be better off just doing constrained generation always (max of 1 LLM call for the class of errors fixed by this).

There's only a different distribution with 2+ initial attempts before falling back to constrained, at least if I haven't screwed up any math.
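The call-count comparison above can be made concrete. With an independent failure probability p per unconstrained attempt, always-constrained costs a flat 1 call, one unconstrained try plus fallback costs 1 + p in expectation, and each extra unconstrained attempt adds a further p^k term. A sketch with an illustrative p (the function name and failure rate are assumptions, not from the thread):

```python
def expected_calls(p_fail, unconstrained_attempts):
    """Expected LLM calls for `unconstrained_attempts` tries (each
    failing independently with probability p_fail) followed by one
    guaranteed-valid constrained call if all of them failed."""
    total = 0.0
    for k in range(1, unconstrained_attempts + 1):
        # the k-th unconstrained call happens only if all prior ones failed
        total += p_fail ** (k - 1)
    # constrained fallback fires only if every unconstrained attempt failed
    total += p_fail ** unconstrained_attempts
    return total

p = 0.2  # illustrative per-attempt failure rate
print(expected_calls(p, 0))  # always constrained: 1.0
print(expected_calls(p, 1))  # try once, then constrain: 1.2
print(expected_calls(p, 2))  # two tries, then constrain: 1.24
```

So in expectation the hybrid is never cheaper than always-constrained; its appeal has to come from elsewhere, e.g. when constrained decoding itself is slower or pricier per call.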

