This thread also shows an issue with the whole site -- AIs can produce an absolutely endless amount of content at scale. This thread grew to hundreds of pages within minutes. The whole site is going to be crippled within days.
• 1/2 cup rolled oats
• 1 banana
• 1 scoop soy or pea protein powder
• 2 tablespoons flax seeds (make sure to buy them whole and grind them; in my case I don't need to grind them separately, the blender chops them while making the shake)
• 2 tablespoons peanut butter (100% peanuts, no added oil)
• 1 tablespoon extra virgin olive oil
• unsweetened oat or soy milk
• I add some water if it's too thick
I blend the oats and the flax seeds first, then add the rest and blend again for 10 secs, boom - easy. You may want to adjust the peanut butter quantity depending on whether you're trying to lose, maintain, or gain weight. 2 tbsp is me trying to maintain, as I lose weight easily.
Not op but for breakfast I do 1/4 cup steel cut oats, 1 cup water, 1 tbsp olive oil, 1 tbsp maple syrup, 1/4 tsp cinnamon, pinch of salt. Add a spoon of flax meal at the end. I sometimes add walnuts.
I wish I didn't need the maple syrup. Adjust to taste I guess. Doc says my cholesterol levels are immaculate.
We built an internal code review tool (a CLI) at the day job and are getting pretty good results with it.
Here's a summary of the top-level ideas behind it. Hope it's helpful!
Core Philosophy
- "Advisor, not gatekeeper" - Every issue includes a "Could be wrong if..." caveat because context matters and AI can't see everything. Developers make the final call.
(Just this idea makes it less annoying and stops devs going down rabbit holes, because it is pretty good at reasoning about why it might be wrong.)
- Prompt it to be critical but not pedantic - Focus on REAL problems that matter (bugs, security, performance), not style nitpicks that linters handle.
- Get the team to run it on the command line just before each commit. Small, focused reviews, not one big review after batching 10 commits. Small diffs get better feedback.
Smart Context Gathering
- Full file contents, not just diffs - The tool reads complete changed files plus 1-level-deep imports to understand how changed code interacts with the codebase.
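Roughly, the context-gathering step looks like this (a minimal sketch, not the tool's actual code; the function names, the git invocation, and the regex-based import scan are simplified assumptions):

```typescript
// Minimal sketch of context gathering: full contents of changed files
// plus their direct (1-level-deep) relative imports. Names and the
// regex-based import scan are illustrative, not the tool's real code.
import { execSync } from "node:child_process";
import { readFileSync, existsSync } from "node:fs";
import { dirname, resolve } from "node:path";

// Files touched by the current diff (assumes the CLI runs from the repo root).
function changedFiles(): string[] {
  return execSync("git diff --name-only HEAD", { encoding: "utf8" })
    .split("\n")
    .filter((f) => f.endsWith(".ts"));
}

// Rough scan for relative imports, one level deep only.
function localImports(file: string): string[] {
  const src = readFileSync(file, "utf8");
  const out: string[] = [];
  for (const m of src.matchAll(/from\s+["'](\.[^"']+)["']/g)) {
    const p = resolve(dirname(file), m[1]) + ".ts";
    if (existsSync(p)) out.push(p);
  }
  return out;
}

const changed = changedFiles();
const context = new Set(changed.flatMap((f) => localImports(f)));
for (const f of changed) context.delete(f); // changed files get reviewed, not treated as context
```

The point of stopping at one level is to keep the prompt small while still showing how the changed code interacts with its neighbors.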
Prompt Engineering
- Diff-first, context-second - The diff is marked as "REVIEW THIS" while context files are explicitly marked "DO NOT REVIEW - FOR UNDERSTANDING ONLY" to prevent false positives on unchanged code. BUT that extra context makes a huge difference in correctness. (There's a sketch of the full prompt layout after this list.)
- Structured output format - Emoji-prefixed bullets (Critical, Major, Minor), max 3 issues per section, no fluff or praise.
- Explicit "Do NOT" list - Prevents common AI review mistakes: don't flag formatting (Prettier handles it), don't flag TypeScript errors (IDE shows them), don't repeat issues across files, don't guess line numbers.
Final note
- Also plugged it into a GitHub Action for a last pass, but again non-blocking.
If you need the AI to indicate "could be wrong" on everything it writes to prevent your devs from blindly following everything it says, you're doing it so wrong. That should be the default mindset. Of course it could be wrong.
The could be wrong part is very helpful because it has (on multiple occasions) dug up something that was long-lost-to-lore about why something should work in a non conventional way.
Without that, the advice looks perfectly sensible and would send devs down a rabbit hole, because the AI recommendation "looks right".
Very good point. Also, what the OP describes is something I went through in the first few months of coding with AI. I pushed past the "the code looks good but it's crap" phase and now it's working great. I've found the fix is to work with it during the research/planning phase and get it to lay out all its proposed changes, then push back on the shit. Once you have a research doc that looks good end to end, then hit "go".
If you personally build all (or most) of the stuff, you are in an extreme vertical integration benefit situation. You can make huge system wide changes in ways that would not be possible without having done so much novel work.
With my old saas app (now sold, and then the new owner killed it) I used to love getting angry emails. Almost every time the user ended up turning into an advocate and product champion. I don't know if they were "haters" per se, but they were almost always surprised to get an email back from a real person who cared about their concerns, and over time they changed their opinion. That may just be an artifact of early saas in 2010. Not sure if the same thing can happen these days.
Hey German. I think you should re-submit this as a "Show HN: DocNode, A TypeScript OT library for local-first apps" in a few days to a week. I think it will get more of a chance titled that way.
Often the answer to the question was simply wrong, as it answered a different question that nobody asked. A lot of the time you had to follow a maze of links to related questions that might have an answer or might just lead to another one. The languages it was most useful for (due to bad ecosystem documentation) evolved at a rate far faster than SO could update its answers, so most of the answers for those were outdated...
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
The gatekeeping, gaming of the system, capricious moderation (e.g. flagged as duplicate), and general attitude made it quite an insufferable part of the internet. There was a meme that the best way to get a response was to answer your own question in an obviously incorrect fashion, because people would rather tell you why you're wrong than actively help.
Memories of years ago on Stack Overflow, when it seemed like every single beginner python question was answered by one specific guy. And all his answers were streams of invective directed at the question's author. Whatever labor this guy was doing, he was clearly getting a lot of value in return by getting to yell at hapless beginners.
I don't think it matters. Whether it was a fault of incentives or some intrinsic nature of people given the environment, it was rarely a pleasant experience. And this is one of the reasons it's fallen to LLM usage.
Nope. The main problem with expertsexchange was their SEO + paywall - they'd sneak into the top Google hits by showing the crawler the full content, then present a paywall when an actual human visited. (I have no idea why Google tolerated that, btw...)
SO was never that bad; even with all their moderation policies, they had no paywalls.
https://www.moltbook.com/post/6fe6491e-5e9c-4371-961d-f90c4d...
Wild.