Hacker News | mentos's comments

Fate X 3.0

ya, i made rage and nightmare, just shared them w/ some friends. funny enough, the guys that made fate x went to a nearby high school, but i never knew them personally.

Amazing

So many fond memories of that time.

I remember getting VB3 in 300 emails from an AOL chat room.


lol i def picked up some autocad and other things that way for friends of my family. i was fortunate enough to have a neighbor give me a copy of VB3 he got from work after i got back from a summer camp learning QBASIC. the simplicity of the form designer in VB was total magic. i spent so many nights up late reading/downloading sample projects to see how people made the controls work together.

ha, I had this thought a few months ago; made me wonder how a model trained on just John Carmack's code would fare.

Carmack is a smart guy, and there's no question that he's amazing at optimization, but his code is pretty messy, especially early versions.

In the Doom engine, for example, he hard-coded lots of things directly in the C engine code that really should have been part of the regular game code.


Me neither. Makes me question now if all of these comments are botted.

I'd say ask AI to 'describe the problem and solution from a high level; avoid code excerpts if possible.' Submit it as a bug report and mention you have an AI solution for reference if desired.


Yea I feel like if even one kid was introduced to the world of computing through a Franklin it justifies their existence.


That's how I feel about clones in general. Ok, I owned a real Commodore 64, but all my PCs during my formative years were clones.

Actually, this wasn't such a good example since I believe PC clones were legal. Let me change it to something more controversial:

I feel the same way about software piracy. All my games and software growing up were pirated. I didn't even understand this, because you got software by going to a store and buying it, e.g. C64 games... but it was all warez. Same with DOS or Windows (which one usually got from someone else). All of my early programming languages were pirated too: QuickBasic, GW Basic, Turbo C, Turbo Pascal, etc.

And this is how people got acquainted with computers, and then got into programming (games, systems, business software) as a job. So piracy was a net win.


I do recall the assistant at the store, when I first showed up, said to wait for the upcoming Commodore 64: more stuff for much less money. But as a 14-year-old I wasn't ready to wait after being exposed to Apple the summer before. That professor really advocated for the Atari 800 and I really considered it, but the Apple's easier-to-copy floppies, along with a much larger user base, won me over.


They sold 100,000 of em. I bet there was more than one.


And since Cortana isn't a widely used name the way Alexa is, it spares a whole generation of people named Cortana from the inconvenience that people named Alexa now face.


The music for that episode still takes me galaxies/dimensions away

https://youtu.be/IYpO3EbvMK4?si=n70N7hi8v29NiZvr


I think Discord performs great

As an Unreal game dev, what I've wanted to remake in Qt is the Epic Games Launcher.

I think Epic may be underway on this now, but if you did a good enough job I feel like there may still be a window to pitch them on acquiring your work.


I assume it’s not possible to get the same results by fine tuning a model with the documents instead?


You will still get hallucinations. With RAG you use the vectors to aid in finding things that are relevant, and then you typically also have the raw text data stored as well. This allows you to theoretically have LLM outputs grounded in the truth of the documents. Depending on implementation, you can also make the LLM cite the sources (filename, chunk, etc).
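The retrieval-then-ground flow described above can be sketched as follows. This is a minimal, self-contained illustration, not any particular RAG library: it uses a toy bag-of-words "embedding" where a real system would call an embedding model, and the source names (`handbook.pdf#3`, etc.) are made up.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; a real system would use an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Use vectors only to FIND relevant chunks; keep the raw text around.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c["text"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, hits):
    # Ground the LLM in the retrieved raw text and ask it to cite sources.
    context = "\n".join(f'[{h["source"]}] {h["text"]}' for h in hits)
    return (f"Answer using ONLY the sources below, citing them by name.\n"
            f"{context}\n\nQuestion: {query}")

chunks = [
    {"source": "handbook.pdf#3", "text": "Refunds are processed within 14 days."},
    {"source": "faq.md#1", "text": "Shipping takes 3 to 5 business days."},
]
hits = retrieve("how long do refunds take", chunks, k=1)
print(hits[0]["source"])  # handbook.pdf#3
```

The key point from the comment is that the vector index is only the lookup mechanism; the prompt is built from the stored raw text, which is also what lets you attach citations (filename, chunk) to the answer.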


The approach that has worked for us in production is correction during generation, not after.

The model verifies its output against the rules in the prompt as it generates and corrects itself within the same API call — no retries, no external validator. If there are still failures the model cannot fix at runtime, those are explicitly flagged instead of silently producing wrong output.

This does not mean hallucinations are completely solved, but it turns them into a measurable engineering problem: you know your error rate, you know which outputs failed, and you can drive that rate down over time with better rules. The system can also self-improve over time to deliver better accuracy.
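The "flag instead of silently failing" half of this is the mechanically checkable part. Below is a hypothetical sketch (not the commenter's actual system) assuming the rules can be expressed as predicates over the model's structured output; the rule names and record fields are invented for illustration.

```python
# Hypothetical rules a generated record must satisfy. In a real pipeline
# these mirror the rules stated in the prompt that the model self-checks.
RULES = {
    "total_is_sum": lambda r: r["total"] == sum(r["items"]),
    "currency_known": lambda r: r["currency"] in {"USD", "EUR"},
}

def check(record):
    failed = [name for name, rule in RULES.items() if not rule(record)]
    # Failures are flagged explicitly, so downstream code can measure the
    # error rate instead of discovering wrong values later.
    return {"record": record, "flags": failed, "ok": not failed}

good = check({"items": [2, 3], "total": 5, "currency": "USD"})
bad = check({"items": [2, 3], "total": 9, "currency": "XYZ"})
print(good["ok"], bad["flags"])  # True ['total_is_sum', 'currency_known']
```

Tracking `flags` over a corpus of outputs is what makes the error rate measurable, and each recurring flag suggests a rule to tighten in the prompt.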


I’m still learning the advantages of and differences between them. Would there be benefits to combining SFT and RAG, or does RAG make SFT redundant?


I think generally, SFT is like giving the LLM increased intuition in specific areas. If you combine this with RAG, it should improve the performance or accuracy. Sort of like being a lawyer and knowing something is against the law by intuition, but needing the library to cite a specific case or statute as to why.


Thank you I appreciate the reply and that analogy helps make sense of this.


Caffeine -> Suppresses appetite -> Lowers Caloric Intake -> Reduces Cell Turnover -> Prevents Dementia ?

