
Unfortunately the demo doesn't work on Firefox for me due to the WebGPU requirement. However, the creator does have a video that shows off the technique nicely:

https://www.youtube.com/watch?v=SELiz9VRg5Q


> Agents, by making it easiest to write code, means there will be a lot more software.

He's saying that agents make code much cheaper, therefore there will be a large increase in demand for code. This appears to be exactly what you're describing.


Is data about how long links are on the front page available publicly anywhere?


There are a number of third party sites that do this, such as hnrankings.info:

https://hnrankings.info/46339600/

https://hnrankings.info/46004118/


I find Zed has some really frustrating UX choices. I’ll run an operation and it will either fail quietly, or be running in the background for a while with no indication that it is doing so.


I think Google Docs does this automatically.


> this supposed ability of determining whether or not content is AI-generated doesn't exist.

It seems like you’re just wrong here? Em dashes aside, the ‘style’ of LLM-generated text is pretty distinct, and is something many people are able to distinguish.


No, I'm not wrong. Someone could easily write in the default output style of ChatGPT by hand (which will probably become increasingly common the longer that style remains in place), and someone could easily collaborate with ChatGPT on writing that looks nothing like what you're thinking.

If organizations like schools are going to rely on tools that claim to detect AI-generated text with a useful level of reliability, they'd better have zero false positives. But of course they can't, because unless the tool involves time travel that isn't possible. At best, such tools can detect non-ASCII punctuation marks and overly clichéd/formulaic writing, neither of which is academic dishonesty.
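To illustrate how shallow that kind of detection is, here's a minimal sketch of such a "detector" that only counts surface features (non-ASCII punctuation and stock phrases). Everything here is hypothetical: the character set and phrase list are made up for the example, and a high score proves nothing about who or what wrote the text.

```python
# Naive "AI text detector" sketch: flags surface features only.
# None of these features are evidence of machine authorship.
SUSPECT_CHARS = {"\u2014", "\u2018", "\u2019", "\u201c", "\u201d"}  # em dash, curly quotes
STOCK_PHRASES = ["delve into", "it's important to note"]  # illustrative list

def naive_ai_score(text: str) -> int:
    """Count occurrences of 'suspicious' characters and phrases."""
    score = sum(text.count(c) for c in SUSPECT_CHARS)
    lowered = text.lower()
    score += sum(lowered.count(p) for p in STOCK_PHRASES)
    return score

print(naive_ai_score("Let's delve into this \u2014 topic"))  # prints 2
```

A human who happens to like em dashes and formulaic transitions scores just as high as ChatGPT output, which is exactly the false-positive problem described above.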


Okay, you’re right that the LLM writing style isn’t singularly producible by LLMs. However, I’m not sure why this writing style would become increasingly common? I don’t see why people would mimic text that is seen as low quality or associated with academic dishonesty.

Additionally, I do think it is valuable to determine if a piece of text is valuable, or more precisely, what I’m looking for. As others have said, if I want info from a LLM about a subject, it is trivial for me to get that. Oftentimes I am looking for text written by people though.


> However, I’m not sure why this writing style would become increasingly common?

I was basing that on a few factors, off the top of my head:

1. Someone might pick up mannerisms while using LLMs to help learn a new language, similarly to how an old friend of mine from Germany spoke English with an Australian accent because of where she learned English.

2. Lonely or asocial people who spend too much time with LLMs might subconsciously pick up habits from them.

3. Generation Beta will never have known a world without LLMs. It's not that difficult to imagine that ChatGPT will be a major formative influence on many of them.

> As others have said, if I want info from a LLM about a subject, it is trivial for me to get that.

Sure, it's trivial for anyone to look up a simple fact. It's not so trivial for you to spend an hour deep-diving into a subject with an LLM and manually fact-checking information it provides before eventually landing on an LLM-generated blurb that provides exactly the information you were looking for. It's also not trivial for you to reproduce the list of detailed hand-written bullet points that someone might have provided as source material for an LLM to generate a first draft.


This is all future concerns; if it happens, then people can change their heuristics. There's no point trying to predict all possible futures in everything that you do.


The comment you're replying to isn't related to the topic of heuristics. The first part is explicitly an answer to a question concerning a future prediction.


I think you may be confused. This is an upgrade for the Framework 16, not the Framework 12, which is the 2-in-1.


Oh, true :( Then consider the above comment a wish for a similar upgrade.


The only real way to do this “for free” that I can think of would be to self-host on an old laptop. Unless you meant free as in open source.


If the promise is: when using the AT Protocol you have control over your own data, then this is self-guaranteeing, since it is part of the spec that you can self-host a PDS.

The promise that Bluesky will always be compliant with the spec, or that the spec won’t ever change to disallow this, isn’t self-guaranteeing, but you could say something similar about any of these self-guaranteeing promises. For example, the promise that Obsidian will always use markdown isn’t self-guaranteeing.


> The promise that Obsidian will always use markdown isn’t self-guaranteeing.

True, but Obsidian doesn't make that promise. The promise is "file over app": you control the files you create. In this way the promise is irrevocable and self-verifiable.

"...will always use markdown" is not something any app can guarantee. At best an open source app can guarantee it for a specific version (assuming it doesn't require a connection, or the user can self-host the server).


In the US I was taught you don’t need to signal at roundabouts. Am I doing something terribly wrong?


>Signal when you change lanes or exit the roundabout.

California Driver Handbook

https://www.dmv.ca.gov/web/eng_pdf/dl600.pdf


The UK version:

https://www.highwaycodeuk.co.uk/roundabouts.html

I think the main difference is the expectation that you start signalling, if appropriate, before joining the roundabout. My main complaint (OK one of my main complaints) is that drivers turning right start signalling right and then forget to signal left as they leave...


A roundabout is a one-way road, so you don't need to signal while driving on it. But you do need to indicate when you are leaving the roundabout.


Might be a regionalism, but here in Oregon, we don't signal going in, but signal right before we intend to exit. That way the next incoming driver can enter the roundabout and keep traffic flowing. We have a LOT of roundabouts though, like dozens upon dozens, and many of them are over saturated. It may be a local response to the traffic patterns here, not sure.


The most important thing is for everyone to speak the same protocol, provided that the protocol meets some minimum standard of fitness-for-purpose. But… yeah, I think you're doing it wrong.


