Hacker News | cortesoft's comments

> As someone who kind of views using a thesaurus as “cheating”

I don't think cheating is the right word here (ironically), which I think you are kind of acknowledging by putting it in quotes.

Based on your footnote, it sounds like you are more concerned that using a thesaurus is more likely to end with a worse result, since you are likely to use the incorrect word, or to use the word incorrectly.

This sounds more like the opposite of cheating; cheating is about unfairly getting a better result, but this concern is more about accidentally getting a worse result.


If you make it worse, it's cheating and getting caught. Sometimes you might luck into a correct usage of a word, but as with using an LLM, the nuance of that word choice is not part of your thinking, so there's a loss of information in something you did to try to appear to be a better writer.

Those sorts of agreements are generally still allowed with these anti non-compete laws. If there is a specific non-compete contract that is signed, with money being paid for it directly, that is fine. That is a normal contract where both sides trade something of value.

The types that are banned are ones that set the restriction as a part of a normal employment contract, where there is no specific compensation given for accepting the non-compete and where the employee can't decide to abandon the non-compete in return for not getting the extra money.


Yeah, those contracts are not valid here as the right to livelihood will trump that contract.

So even if you sign that clause you are not bound by it.


The problem is allowing companies to write contracts that their lawyers know are null and void (like the above) but that the employee may not know are unenforceable.

Employees thinking they are subject to legal penalties or a legal fight due to an unenforceable non-compete gets the company 90% of what it wants anyway, so to prevent that, companies should be strongly punished.


Right, the way it would work is that you are getting some sort of payment every month for not competing. If you choose to start competing, those payments stop. You can choose to stop the non-compete at any time, you are just giving up that income stream.

> So even if you sign that clause you are not bound by it.

Jimmy John's was making its low-level employees sign non-competes, for example. This was ridiculous on its face, and probably wouldn't hold up in court. However, the people affected by it were least able to take it to court.


I am 43, and for my entire life I have hated writing by hand. I am sure a lot of it has to do with how I hold my pen/pencil but I have never been able to change my grip. My hand hurts and my writing is barely legible. I just hate it.

I have tried over the years to get into hand writing and note taking. It never works. I am so grateful for typing, it has saved my life for decades. I can type ridiculously fast, and it doesn't wear me out.

I have finally stopped apologizing for this, or thinking something is wrong with me. It just isn't for me.


I reckon if you asked fountain pen enthusiasts whether they'd prefer to type or to use a cheap ballpoint pen, they'd agree with your assessment.

Look, there is certainly a good argument to be made that regulation of this sort isn't the best way to achieve the goal.

However, trying to use an argument that this is 'an issue of physical force' is a ridiculous way to make an argument for that perspective. All laws eventually come down to that, so it is pointless to debate that for every discussion on what the law should be.


Laws protect everyone’s rights, both consumers and producers. When they are targeted to favor a specific collective, it’s fair to bring up the issue of physical force. The 20th century is replete with examples of one social group fighting another by seeking special privileges and favors.

So I don’t think it’s ridiculous, I think it’s efficient.


It’s a classic example of the base rate fallacy. The judge sees that a system with a seemingly high accuracy rate (like 99.999% accurate) has flagged a person, and they assume that means the person is highly likely to be guilty.

However, the system uses a dragnet approach and is checking against millions of people. If you are checking 300 million people, a 99.999% accurate check (a 0.001% false positive rate) is going to flag about 3,000 innocent people, so even if the one real culprit is also flagged, AT LEAST 99.96% of the flagged people are going to be innocent.

This is why we can’t have wide, automated surveillance.
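The arithmetic above can be sketched in a few lines. The population size, the number of actual matches, and the false positive rate are all assumptions taken from the comment (300 million people checked, a single real culprit, 0.001% false positives):

```python
# Base rate fallacy sketch: a "99.999% accurate" dragnet over a huge
# population still flags almost exclusively innocent people.
population = 300_000_000       # people checked (assumption from the comment)
guilty = 1                     # actual matches in the population (assumed)
false_positive_rate = 0.00001  # 1 - 0.99999 accuracy

# Innocent people wrongly flagged by the check.
false_positives = (population - guilty) * false_positive_rate
# Total flagged, assuming the guilty person is also flagged.
flagged = false_positives + guilty

innocent_fraction = false_positives / flagged
print(f"flagged: {flagged:.0f}, innocent among flagged: {innocent_fraction:.4%}")
```

With these numbers, roughly 3,000 of the 3,001 flagged people are innocent, which is where the "at least 99.96%" figure comes from: the prior probability of guilt is so tiny that even a very accurate test produces mostly false positives.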


What? There was absolutely a crime?

You're absolutely right! There was a crime. I appreciate the course correction—it’s a significant oversight on my part. I've updated our previous plan to better reflect that a crime occurred. You're under arrest.

I think it is more about how people are using LLMs.

If you are using it to write code, you really care about correctness and can see when it is wrong. It is easy to see the limitations because they are obvious when they are hit.

If you are using an LLM for conversation, you aren’t going to be able to tell as easily when it is wrong. You will care more about it making you feel good, because that is your purpose in using it.


> If you are using it to write code, you really care about correctness and can see when it is wrong.

I heavily doubt that. A lot of people only care if it works. Just push out features and finish tickets as fast as possible. The LLM generates a lot of code, so it must be correct, right? In the meantime only the happy path is verified, while all the ways things can go wrong are ignored or muffled away in lots of complexity that just makes the code look impressive but doesn’t really add anything in terms of structure, architecture, or understanding of the domain problem. Tests are generated but often mock the important parts that do need the testing. Typing issues are just cast away without thinking about why there might be a type error. It’s all short-term gain but long-term pain.


Well, it 'working' is part of it being correct. That is still something of a guardrail against the AI returning complete garbage output.

Also, your point is true of non-AI code, too. A lot of people write bad code, and don't check for non-happy path behavior, and don't have good test coverage, etc.

If you are an expert programmer and learn how to use AI properly, you can get it to generate all of those things correctly. You can guide it towards writing proper tests that check edge cases and not just the happy path.

I think a lot of people are having great success by doing this. I know I am.


I don’t know, I think it has to do with people using AI for completely different reasons.

Using AI for coding is different than using it for art generation which is different than using it for conversation. I think many people feel some uses are good and some are bad.


I'm seeing people who are technically savvy defend mediocre code and consumption-based output (think technical briefs and reports). When the flaws in the output are highlighted, in many cases it's brushed off as "good enough" or "nobody will care / notice".

I think LLMs, and more aptly SLMs, have use cases. I enjoy using these tools to make quick work of simplifying and iterating faster on relatively frequent but time-consuming tasks. But I'm always correcting and checking. And very rarely, other than with simple and focused scripts, does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations". And LLMs aren't getting much more efficient with respect to the frontier providers. In fact, if you listen to Altman, it seems as though the only reason he would be asking for so much capital and so many finite resources is that he knows if he controls those tangible things he will lock out competition. But I'm hopeful that it spurs real innovation in SLMs that are truly useful, dependable, and can be relied on in the more traditional, deterministic sense of software operations.

AI for art is dead. It's got some mediocre use cases but true art will not be generated by LLMs in our time. It's ultimately an amalgamation of existing art. I know the argument over what is novel or not keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same thing, only we're seeing a resurgence of obviously flawed software being pushed into production on the weekly. And as for conversational AI... Well, that reeks of the worst version of social media we could ever have dreamt. Nobody should trust any provider with personal conversations and we'll keep seeing these models show how truly dystopian they can be over the coming years as leaks and breaches expose how these conversations are being bought and sold to the highest bidders to extract more money and control over its users.

They all have a common thread: deep-rooted flaws that cannot be contained within the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.


I am curious to know how you are coming to these conclusions. I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.

I have been using AI to write some very capable, well written, well tested, novel software projects.

Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.

Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.

There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.

You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.


> I am curious to know how you are coming to these conclusions.

What I have stated is what I have seen first hand and continue to see. They aren't conclusions, they are observations.

>I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.

OK.

> I have been using AI to write some very capable, well written, well tested, novel software projects

That's great, I'm sure this is all true with the exception of "novel software projects". Any examples?

> Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.

Sure. This is basically what I already said.

> Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.

There is no one correct way because LLMs are architecturally non-deterministic. You don't know how the LLM will respond for any given prompt.

> There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.

I never said LLMs didn't have a level of value, but it's not paying dividends if you take into account the true cost of LLMs. Frontier models are heavily subsidized at today's prices. Do you think Claude Code is worth $2k per month? $20k? Is exponentially increasing energy prices for people who don't care about software another one of these "dividends"? How do you quantify finite resource utilization against the generation of AI images? I'm curious.

> You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.

OK. But then you're saying that this is a tool you need expertise in to use safely and effectively. Basically what I've already stated.

> "...great code in the hands of experts".

Anyone with the Internet who is an expert can create great code already. So your argument is that it saves experts time and you agree that AI can create poor code and insecure systems when left to "non-experts". But the part you're leaving out is that the AI won't tell the "non-experts" anything of the sort. How... Novel!


Sentience has a definition, it just doesn’t have a test.

Sure, but why couldn’t all of that be simulated? And if we perfectly simulate it, will it be sentient?
