Hacker News | nolanl's comments

The concept of "AI will review AI-authored PRs" seems completely wrong to me. Why didn't the AI write the correct code in the first place?

If it takes 17 rounds of review from 5 different models/harnesses – I don't care. Just spit out the right code the first time. Otherwise I'm wasting my time clicking "review this" over and over until the PR is worth actually having a human look at.


Because the code generated is only as good as the initial description of what you want. It's not too different from "standard" coding where you have a first go at solving it and then iterate and polish as you go along.

I've had multiple situations where things "just worked", and at other times you just have to steer it in the right direction a few times. Having another agent do the review works really well (with the right guardrails); it's like having someone with no other intent or bias review your code.

Unless you're talking about "vibe coding", in which case "correct" doesn't really matter since you're not even looking at the output – you just let it go back and forth until something that works comes out. I haven't had much success with that, or enjoyed working that way as much. It took me a couple of months to find the sweet spot (my sweet spot – I think it'll be different for everyone).


This is 100% my experience. Like, reading this thread right now, I notice the ringing in my ears. But otherwise I go months without thinking about it.

I've had tinnitus for almost a decade now. It gets better.

The first six months were hell because I kept focusing on how awful it was. Eventually you stop noticing it. It just becomes part of the background – like how the sky is blue, grass is green, etc.

I strongly recommend reading "Living with Tinnitus" by Laura Cole. Tinnitus is a very poorly-understood condition, but hearing about the experiences of others helps a lot. I hope you feel better soon. https://www.goodreads.com/book/show/37707931-living-with-tin...


I've found Bugbot to be shockingly effective at finding bugs in my PRs. Even when it's wrong, it's usually worth adding a comment, since it's the kind of mistake a human reviewer would make.


Right, but you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?" or "will the author be compromised in a supply-chain attack, or do a deliberate protestware attack?" etc. As for performance, a lot of npm packages don't have proper tree-shaking, so you might be taking on extra bloat (or installation cost). Your point is well-taken, though.


You can avoid all those worries by vendoring the code anyway. You only 'need' to update it if you're pulling it in as a separate dependency.


> you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?"

This is not a worry with npm. You can just pin a specific version of a dependency in your package.json, and it'll never be updated.

I have noticed for years that the JS community is obsessed with updating every package to the latest version no matter what. It's maddening. If it ain't broke, don't fix it!
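As a concrete sketch (lodash here is just an example package, not from the comment): pinning an exact version – no `^` or `~` range prefix – means `npm install` keeps resolving the same release every time:

```json
{
  "dependencies": {
    "lodash": "4.17.21"
  }
}
```

With a caret range like `"^4.17.21"` instead, npm is free to install any compatible 4.x release, which is where the surprise weekly updates come from.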


As someone who has actually worked on JavaScript frameworks, I think Marko is criminally underrated. The compile-time optimizations are extremely impressive: https://markojs.com/docs/explanation/fine-grained-bundling

I was not surprised, for example, that Marko came out very well in this performance comparison: https://www.lorenstew.art/blog/10-kanban-boards


I remain convinced that RSC and the SSR craze were the result of some people needing a raise and their friends wanting to start a company selling abstract compute. Statically hydrated, minimal React was pretty great when served over good CDN infrastructure; then I watched the bundle sizes and lock-in balloon. That second article is a dragon slayer – it really lays out the problem with React. In marrying itself to Next.js and embracing the server, React has betrayed the platform. Meanwhile, the platform itself has matured. React practically built my career, and I just don't have a reason to choose it anymore.


SSR isn’t a craze. Web applications have been served that way for literal decades now.


Read it in context. There's nothing wrong with SSR.


I agree, if there is a death of React it will be killed by Next/Vercel.

I probably shouldn’t care. I’m just not looking forward to the chaos of another full “turn” in JavaScript, akin to jQuery->Backbone or Backbone->React.

Maybe I shouldn’t fear it. I’ve just yet to see an idea that feels valuable enough to move an entire ecosystem. Svelte, HTMX, etc… where is the “disruptive” idea that could compel everyone to leave React?


That’s interesting. I’ve always held SvelteKit in high regard for greenfield projects because it balances capability, developer experience, and performance, but I’ll have to give Marko a look. I’d love to see a similar deep dive into Electron style desktop frameworks since that space still feels underexplored compared to mobile. I honestly wouldn’t know where to start for a video game interface, and that bothers me.


Author here. I actually did some research on CSS selector performance: https://nolanlawson.com/2023/01/17/my-talk-on-css-runtime-pe...

The TL;DW is: yes, class selectors are slightly more performant than attribute selectors, mostly because only the attribute _names_ are indexed, not the values. But 99% of the time, it's not a big enough deal to justify the premature optimization. I'd recommend measuring your selector performance first: https://developer.chrome.com/docs/devtools/performance/selec...
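To illustrate the difference (my own hypothetical example, not from the talk): both of these rules can match the same button, but an engine can use its class-name index directly for the first, while for the second it only knows "some element has a data-kind attribute" and must still check each candidate's value:

```css
/* Class selector: elements are indexed by class name,
   so the engine jumps straight to matching candidates. */
.primary-button {
  color: blue;
}

/* Attribute selector: only the attribute *name* is indexed.
   Matching still requires reading each [data-kind] element's
   value and comparing it to "primary". */
[data-kind="primary"] {
  color: blue;
}
```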


I use shadow DOM every day, but yes, it is often the part of WCs that baffles people – probably because they don't need it.

Alternative approaches that may work for your use case:

- "HTML web components" [1] - light DOM only, SSR-first, good as a replacement for "jQuery sprinkles"

- "Shadow gristle" [2] - use as little shadow DOM as possible. If you need styling or composition, put it in the light DOM!

[1]: https://adactio.com/journal/20618

[2]: https://glazkov.com/2023/03/02/shadow-gristle/
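A minimal sketch of the "HTML web components" pattern (my own illustrative example – the element name and attribute are made up, and the fallback base class is only there so the file also loads outside a browser): the markup ships from the server in the light DOM, and the custom element just sprinkles behavior on top.

```javascript
// Outside a browser there is no HTMLElement; fall back to an empty class
// so this sketch can be loaded anywhere.
const Base = globalThis.HTMLElement ?? class {};

// <copy-button text="hello"><button>Copy</button></copy-button>
// The <button> is server-rendered light DOM; no shadow root, no template.
class CopyButton extends Base {
  connectedCallback() {
    // Enhance the existing markup, jQuery-sprinkles style.
    this.querySelector('button')?.addEventListener('click', () => {
      navigator.clipboard.writeText(this.getAttribute('text') ?? '');
    });
  }
}

// customElements only exists in the browser; no-op elsewhere.
globalThis.customElements?.define('copy-button', CopyButton);
```

Because there's no shadow root, the page's regular stylesheets apply to the button, and the content is visible even before the script runs.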


I cover this in another post [1], but broadly:

- Not every web app is perf-sensitive to every extra kB (eCommerce is, productivity tools typically aren't)

- Plenty of frameworks have tiny runtimes, e.g. Svelte is 2.7kB [2]

- I wouldn't advocate for 100 different frameworks on the page, but let's say 5-6 would be fine IMO

No one is arguing that this is ideal, but sometimes this model can help, e.g. for gradual migrations or micro-frontends.

BTW React 17 actually introduced a feature where you could do exactly this: have multiple versions of React on the same page [3].

[1]: https://nolanlawson.com/2021/08/01/why-its-okay-for-web-comp...

[2]: https://bundlephobia.com/package/svelte@4.2.19

[3]: https://legacy.reactjs.org/blog/2020/10/20/react-v17.html
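For what it's worth, one common way to get two React versions installed side by side is an npm package alias (the alias name `react-17` is my own choice here, not something React prescribes – see the React 17 post above for how the two trees are then mounted):

```json
{
  "dependencies": {
    "react": "^18.0.0",
    "react-17": "npm:react@^17.0.0"
  }
}
```

Code in the legacy part of the app then imports from `react-17`, while new code imports `react` as usual.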


Author here. I cover this in another post [1], but basically the interop benefits you get on the client just aren't there (yet) on the server. My north star:

> Maybe in the future, when you can render 3 different web component frameworks on the server, and they compose together and hydrate nicely, then I’ll consider this solved.

[1]: https://nolanlawson.com/2023/08/23/use-web-components-for-wh...

