
I think one of the more prominent issues folks take with mass training on OSS is that the companies doing it are now profiting from having done it.

In his follow-up post he talks about open sourcing his old games as a gift, and says he doesn't much care how people receive that gift, just that they do.

He doesn't acknowledge that Anthropic, OpenAI, etc, are profiting while the original authors are not.

Most of the time, the original authors didn't write the software for profit. But that doesn't mean they don't care when other people profit from their work.

It's odd to me that he doesn't acknowledge this.



I'm no Carmack, but everything I've released as open source is a gift with no strings (unless it was to a project with a restrictive license). A gift with strings isn't exactly a gift.

If you take my gift and profit, it doesn't hurt me; there were no strings. Your users presumably benefit from the software I wrote, unless you're using it for evil, but I don't have enough clout to use an "only IBM may use it for evil" license. You benefit from the software I wrote. I've made the world a better place and I didn't have to market or support my software; win-win.

I've done plenty of software for hire too. I've used plenty of open source software for work. Occasionally, I've been able to contribute to open source while working for hire, which is always awesome. It's great to be paid to find and fix problems my employer is having and be able to contribute upstream to fix them for lots more people.


I'm the same; I've seen some of my stuff pop up in the weirdest places and I was ok with it. But I understand and respect that people who published code under restrictive licenses may have a problem. The GPL is absolutely a "not-a-free-gift" license, in both wording and spirit.

If someone published something as MIT and doesn't like it being used for LLM training, yeah that person can only blame themselves.

For GPL, it all depends if you consider a LLM "derivative software" of the GPL code it was trained on. It's fair to have an opinion on that either way, but I don't think it's fair to treat that opinion as the obvious truth. The same applies to art, a lot of it is visible on the Internet but that doesn't make it "a gift".


To clarify, the GPL is not free as in "free gift", but it is free as in "freedom".

The giving back part is strongly related to the "freedom", not related to whether you profit from it or not.


> To clarify, the GPL is not free as in "free gift", but it is free as in "freedom"

To clarify further: "freedom" for the end user, and not the developer leveraging GPL code in their software product.


Absolutely not. GPL is freedom for the authors. The end users have conditions they must meet to use the software. Those conditions are restrictions. That is precisely the opposite of freedom for end users.

To anticipate objections: yes, the conditions keep the software "free for everyone", which is true. But that's still explicitly freedom for the authors. The conditions preemptively exclude end users who would otherwise find the software valuable, precisely because it is not freedom for end users.


MIT license requires credit.


So does the BSD license: the copyright notice must be reproduced.


Most licenses do.


0BSD doesn't.


Ahhhh yes that's one that lawyers might have fun with. MIT says:

> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

My personal thought on that: it's going to be almost guaranteed that, if an LLM is producing stuff it clearly derived from a certain piece of code XYZ, it will also be capable of producing the correct answer to the question "what's the license for XYZ?" And lawyers will successfully argue that this counts as "included".


The point was that separating MIT and GPL was wrong.

> My personal thought on that: it's going to be almost guaranteed that, if an LLM is producing stuff it clearly derived from a certain piece of code XYZ, it will also be capable of producing the correct answer to the question "what's the license for XYZ?" And lawyers will successfully argue that this counts as "included".

The MIT license terms are not "name the license if asked." They are: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."

And I think that argument would be improbable, for many reasons.
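For concreteness: "included in all copies" refers to the notice text itself traveling with every copy of the code, not merely the ability to name the license on request. A minimal sketch of what compliant redistribution looks like in practice (file names and notice text are hypothetical):

```shell
# Hypothetical sketch: redistributing an MIT-licensed file means the
# copyright + permission notice is included verbatim alongside the copy.
mkdir -p src dist
printf 'Copyright (c) 2016 Example Author\n\nPermission is hereby granted, free of charge, to any person obtaining a copy...\n' > src/LICENSE
printf 'int answer(void) { return 42; }\n' > src/mylib.c

cp src/mylib.c dist/                # distributing a copy of the Software...
cp src/LICENSE dist/LICENSE.mylib   # ...means shipping the notice with it
```

The open question in the thread is whether anything analogous to that second `cp` happens when code passes through model training; merely being able to answer "what's the license for XYZ?" is a much weaker property.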


Presumably you are licensing your code as MIT or a similar license.

Not all code is licensed that way. Some open-source code has strings attached, but AI launders the code and makes them moot.


If you want to attach strings which involve restricting access, open source is not the way to go.


You're right - the reality of the world today is that open-sourced code is slurped up by AI companies, all questions of legality/ethics aside. But this was not the reality of the world that existed when the code was licensed and released. That is why it is easy to empathize with code authors who did not expect their code to be used in this manner.


Nah I neither agree nor empathize. Anyone with a reasonable understanding of how the internet works knows that putting something on it means that thing can be used in a myriad of ways, many of them unanticipated. That's something one implicitly signs up for when posting content of their own free will. If the gift isn't to be wholly given, don't give it at all; put it behind a wall so it's clear that even though it's "available", it isn't a gift.


By far the most popular strings involve restricting restricting access. That is, viral licenses which require derived works to also be open source.


> restricting restricting access

And that's exactly the point. The machinery of copyright is explicitly used against itself, which makes it a legitimate string.


No one cares. Copyright in general is done, and we are all stronger now. Don't fight AI, fight for open models.


Great! So I assume it is now Completely Fine to rip Netflix / Hulu / Disney+ / whatever and share it with everyone I know?

Copyright isn't "done", copyright has just been restricted to the rich and powerful. AI has essentially made it legal to steal from anyone who isn't rich enough to sue you - which in the case of the main AI companies means everyone except a handful of giants.


TIL I'm "rich and powerful." It doesn't feel any different, I've got to say.


The thing is, copyright is not done. The legal framework still exists and is enforced so I am not sure how to read your reply as anything other than a strongly worded opinion. Just ask Disney.

I use AI every day in my dev workflows, yet I am still easily able to empathize with those who did not intend for their code to be laundered through AI to remove their attribution (or whatever other caveats applied in their licensing.)


The thing is, nobody in China gives a rat's patoot about copyright. If we do, they win.

A compromise might have been possible, based on treaties engineered by the people who brought us the TPP, but nobody in the current US government is capable of negotiating anything like that or inclined to try. And it wouldn't exactly leave the rest of us better off if they did.

As a result, copyright is a zero-sum game from a US perspective, which matters because that's where the majority of leading research happens on the majority of available compute. Every inch of ground gained by Big IP comes at America's expense.

So they must lose, decisively and soon. Yes, the GPL will be lost as collateral damage. I'm OK with that. You will be, too.


> Just ask Disney.

Disney saw which way the wind is blowing and invested over a billion into OpenAI


If they saw the wind they wouldn't have chosen OpenAI


I know tech normally breaks the rules/laws and has been able to just force through its desired outcome (to the detriment of society), but I don't think it is going to be able to just ignore copyright. If anything, those who depend on copyright have seen how ruthlessly, and in what poor faith, tech has treated previous industries, and basically anyone else, once it has the leverage.

Tech is becoming universally hated, whereas before it was adored and treated optimistically and preferentially.


there are no open models. none. zero.

there are binary files that some companies are allowing you to download, for now. it was called shareware in the old days.

one day the tap will close and we'll see then what open models really means


Not true; e.g. https://allenai.org/open-models .

For my own purposes, open weights are 95% as good, to be honest. I understand that not everyone will agree with that. As long as training takes hundreds of millions of dollars' worth of somebody else's compute, we're always going to be at the big companies' mercy to some extent.

At some point they will start to restrict access, as you suggest, and that's the point where the righteous indignation displayed by the neo-Luddites will be necessary and helpful. What I advocate is simply to save up enough outrage for that battle. Don't waste your passion defending legacy copyright interests.


> and that's the point where the righteous indignation displayed by the neo-Luddites will be necessary and helpful

At that point it will be far, far, faaaaar too late.

> Don't waste your passion defending legacy copyright interests

The companies training big models are actively respecting copyright from anyone big enough to actually fight back, and soaking everyone else.

They are actively furthering the entrenchment of Big IP Law.


> They are actively furthering the entrenchment of Big IP Law.

China: lol


From a political perspective there's no closing that tap, only opening it further. As long as China exists there will be constant pressure to try to stay ahead, or at least match Chinese models. And China is gleefully increasing that pressure over time, just waiting for the slip that causes a serious migration to their models.


> If you take my gift and profit, it doesn't hurt me

My opinion is that it actually hurts everyone when the open source commons are looted for private profits


Carmack is wealthy, and will do OK even if every single software-related job is terminated and human-mediated code generation is relegated to hobby status. Other people's mileages vary.

My motivations are very different: the projects I authored and maintained were deliberately all GPL-licensed, my contributions to other OSS are motivated by the desire to help other people - not to an amorphous "world."


Correct. And certainly not to people and companies who'd like to use my work to deny end users the rights to control their computing.

That's the whole point of the GPL to me. The code I release is not an unconditional gift. It definitely has strings attached on purpose.

LLMs completely break this. I'm helping very rich people build the systems they impose on the world, systems that have awful externalities and that help others build proprietary software. I can't say I'm too happy about this.


So, definitely not just for corporations to make insanely massive profits off?


How much do you think people would pay for this patch?

https://github.com/openssl/openssl/pull/1320

If you had to pay for it separately, would you include it in anything?

And yet, including it everywhere helps people with clients that can't be upgraded. Maybe it matters less now; rsa_dhe isn't deployed so much anymore, and hopefully neither is Windows 8.


I'm not sure that's true. You may not see it that way, but you're still participating in a capitalist society. Not that there's necessarily something wrong with that, but you have to acknowledge that and act accordingly.

Most people wouldn't work for free. Yet companies like OpenAI, Anthropic and Google exploit OSS maintainers like that. They're winning and we're losing. And if they have their way, millions of programmers will lose their livelihood.


It's interesting that the "natural reaction" to releasing an open source project, having it be successful, and having some Amazon "steal" it (leave the argument aside; that's how people will feel when a big company makes money using the gift) is somehow worse than working for Big Company, being paid, and having them later use your code to make billions.


Seems pretty understandable to me. In the former, you work on something hoping that real people will find it useful. In the latter, you're explicitly doing work for a paycheck.


Yeah, it rhymes with people getting mad about pharmacos charging outrageous prices for life-saving drugs they developed in order to charge outrageous prices. In both cases (drugs and OSS) it's an ugly process that produces great and greatly uneven value for humanity, but the alternatives are less value overall, even for those on the losing side of the uneven value.


>it's an ugly process that produces great and greatly uneven value to humanity

That'd be far more believable if it weren't for the fact that a vast majority of the research behind those drug companies' products is publicly funded. They have no issues selling their drugs for less money in other markets while still turning a profit. And there's absolutely no indication they'd cease to exist on merely outrageous profits, as opposed to "crippling entire economies" level profits.


The cheapest part of the research is publicly funded. The extreme costs come from taking the outputs of public research and trialing and developing it into a viable drug.

Pharma profits also aren't particularly noteworthy. Their revenues are, because of the ubiquity of their need, but profit margins for pharma are pretty middle of the road compared to other industries.


So I agree with you in that it's ugly, and they do take the lion's share of benefit from public research. That said, the public research doesn't run human trials, scale up, or QC production. Still ugly, still valuable.


Most open source licenses have strings attached, the terms of the licence say what those “strings” are. Like requiring attribution.


That sounds fun. I am trying to find potential employers who need me to write or fix code, and ideally contribute upstream along with it. Any ideas where to start? I am thinking something "chill". I am trying to avoid large corporations.


One of the changes I have made in recent years is to move to the Unlicense. I am ok with people using my code. I'm not ok with people saying that other people shouldn't be allowed to use my code.


>I think one of the more prominent issues folks take with mass training on OSS is that the companies doing it are now profiting for having done it.

What makes this more objectionable than profiting off open source projects by using them directly? e.g. tech giants using Linux as a server OS, rather than having to pay Microsoft thousands per server for a Windows Server license? With the original GPL, they don't even have to contribute back any patches.


>What makes this more objectionable than profiting off open source projects by using it directly?

i can brag if netflix is using my X or facebook runs all their stuff with my Y. that can help me land consulting gigs, solicit donations, etc.


This is an edge case in OSS. Even among software packages used by Netflix and Amazon, few of them were attributable to a single maintainer or small group of individuals. They've long since become community developed projects.


Netflix and Amazon use many packages of all sizes. And contributions to projects with many contributors helped people get jobs.


How would you even know that Netflix or Amazon uses your package?


Their open source software depended on or derived from your package. They included your copyright notice with software they distributed. Someone contributed code. Someone reported a bug. Someone requested a feature. Someone mentioned it at a conference. I could continue.


The more people use Linux, the more recognition Linux itself gets, which directly or indirectly brings in more donations, developers, etc.

With AI, the link is not clear at all. It's just pure consumption. There is no recognition.


> There is no recognition

I've never written or contributed to open source code with this being the goal. I never even considered this is why people do it.


it has never been my explicit goal. but i have certainly enjoyed the rewards of recognition (e.g. i was able to lean on a successful project of mine to help land a nice consulting gig) and it would be silly to ignore that.

(edit: the comment i replied to was edited to be more a statement about themselves rather than a question about other developers, so my comment probably makes less sense now)


I don't dispute your own personal motives, but if it's never been a goal for most people, then CC0 would be more popular than the BSD or MIT license - it's simpler and much more legally straightforward to apply.


I've worked on several open source projects, both voluntarily and for work. The recognition doesn't really need to be financial. If people out there are using what you are building, contributing back, appreciating it -- it gives you motivation to continue working. It's human nature, and one of the reasons there are so many abandoned projects out there.


Competition. Using my open source projects directly doesn't kill my employment. AI companies explicitly say they want to put me out of work, using my code against me.


There is a major difference between open-sourcing a completed product versus being an open source maintainer, and I'm disappointed that Carmack is drawing a false equivalence here.


Plus unless I'm wrong he's talking about products that were released several years ago and milked for money already.


You were not wrong.


Isn't that the case, and even the point, of all open source, even before AI?

What's the point of a gift if the receiver isn't allowed to benefit/profit from it?

For instance, do you think Linus is upset that ~90% of all internet servers are running his OS, for profit, without paying him?

Of course he isn't, that was the point of the whole thing!

Are you upset Netflix, Google, and heck, even Microsoft are raking in millions from services running on Linux? No? Of course you aren't. The original author never expected to be paid. He gave the gift of open source, and what a gift it is!


Linus T explicitly licensed Linux under a license that allows anyone to run it but requires people who make modifications to share those modifications.


> but requires people who make modifications to share those modifications.

Not exactly. You can modify Linux and run it yourself all you want without obligation to share your changes. The sharing requirements are more limited and involve distribution.


Correct! This is the exact reason anyone who wants to use the OS itself as a moat uses FreeBSD as a base instead, adding proprietary modifications to it. FreeBSD is also an open source gift, one that does not have the requirements that Linux does.

Prominent examples include Sony PlayStation, and Apple OSX.


You don't know what the GPL is?

It's not an unconditional gift, it's got strings attached.

AI training on GPL works is basically IP laundering, you're taking the product without paying the asking prices.


IP as a concept has always been equal parts dystopian and farcical, and efforts to enforce it have become increasingly strained over time. Property requires scarcity. Ideas aren't scarce. My consumption of an idea isn't affected by your consumption of it.

AI has simply increased the intensity of this friction between IP and reality to a degree that it can’t be ignored or patched over any longer.


I do know what it is, I've even read the licence in full!

What specific paragraph in the GPL prohibits training of AI on it? I guess it might be a matter of interpretation, but by my reading, it is allowed.

Ps. In the future, try to refrain from using demeaning rhetorical questions like the one this comment starts with; it only serves to foster toxicity. Please and thank you.


> What specific paragraph in the GPL prohibits training of AI on it? I guess it might be a matter of interpretation, but by my reading, it is allowed.

It's not a matter of interpretation - any derivative product is also GPL, and if you don't want the derivative product to be GPL, then don't use the original product.


Is reading source code using it? Can you restrict people from doing that? What actually makes a derivative work?

Can I put up a sign with a fact on it, and can people who see the sign not use the fact unless they agree with my terms and conditions? That would certainly be the case under some senses of "derived".

The law needs specifics for a reason, if it were down to what each individual felt it means in the moment it would be useless.

The most recent legal findings have said that training on legally acquired data does not violate copyright.


Facts aren't subject to copyright.

As for what constitutes a derivative work, this is a matter of law. In the US,

A "derivative work" is a work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a "derivative work".

(17 U.S.C. § 101)


Good point about facts. It applies similarly to properties that provide functionality. One tends to lose track of the fact that much of software shouldn't be copyrightable in the first place, it's just a pretence that has evolved due to how much people like money.

It's a stretch to say that training a model falls under that definition of derivative work. It'd be like saying that building a house after reading a book on how to build a house makes the house a derivative work. I can just imagine cookbooks introducing limited licences on who you can feed with their recipes.


So the pixel editor I made using AI that was trained on, among other things, the Linux kernel, is to be considered derivative of the Linux kernel?

And that's not an interpretation?


This is just the divide between capital and labor though, isn't it? See also: everything is a remix; great artists steal.

I'm on both sides. I've contributed to open source. I use AI both in my personal projects now and to make money for my employer.

I'm still not sure how I feel about any of it, but to me the bigger problem is the division between capital and labor and the growing wealth inequality divide.


> great artists steal.

That quote is about inspiration, not just using others' work or style.

T. S. Eliot's version from 1920 put it best imho:

> Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion.


Are you suggesting that authors didn't know or understand that commercial exploitation of their OSS contributions was possible? If so, that is a complete misrepresentation of history. There have always been open-source licenses that disallowed commercial use. Authors have chosen not to use them, and instead chose licenses, such as MIT/GPL, that allowed commercial use.

And there has always been commercial use of OSS. Big companies, small companies, tech companies, oil and gas companies, weapons manufacturers, banks, hardware companies, etc. They all use OSS and they all make a profit from it, without giving anything back to the people who originally wrote it. That's not an edge case or an unexpected consequence; it's a fundamental tenet of free (as in freedom) software: you do not get to choose who uses it, or how they use it.


> There have always been open-source licenses that disallowed commercial use.

Those were source-available licenses, not open source: both the Free Software Definition and the Open Source Definition say a license must allow any use, including commercial use.


> But that doesn't mean they don't care if other people profit from their work

This doesn't make sense. You make something and put out there, for free, of your own will. Why do you care if someone takes it and makes a profit? Shouldn't you have taken that profit route yourself before if that's what you wanted?


Getting the credit and the modifications is the profit.

You basically are looking at a contract and saying you aren't going to agree to the terms but you're taking the product anyway.


What seems stranger to me is not acknowledging that most popular OSS explicitly permitted for-profit use. It's essentially what made it popular.

Obviously LLMs are new and nobody knew that they would happen. But the part where most popular OSS willfully committed to broad for-profit use is not new.


> He doesn't acknowledge that Anthropic, OpenAI, etc, are profiting while the original authors are not.

How is this different than any company that uses the open source software?

I find this argument hard to swallow. If open source contributors want to profit from their code being used and prevent big companies from using it or learning from it, open sourcing it would be an irrational choice.


>How is this different than any company that uses the open source software?

recognition for the authors, which can lead to all sorts of opportunities. "netflix uses my X for their Y, worldwide" opens doors.


Can you cite an actual example of a FAANG company using X for Y that is also primarily attributable to a single developer? That is, someone who can say "uses my X"?

Not a community-developed project with a lot of contributors, but a software that would realistically qualify as being mostly attributable to one person?

Redis is an easy example, but the author of that doesn't need to say "Netflix uses my X" because the software is popular by itself. AI being trained on Redis code hasn't done anything to diminish that, as far as I can tell.


>Can you cite an actual example of a FAANG company using [...]

FAANG specifically? no, i am not familiar with their entire tech stacks.

but i have leaned on my single-developer projects (being used in other, not owned by me, software) to help land consulting gigs.


> I think one of the more prominent issues folks take with mass training on OSS is that the companies doing it are now profiting for having done it.

He says it's a gift, and if people do whatever, he doesn't care; he already gave it away.

I think it's interesting that nobody would cry that Fabien should shovel cash from his book sales towards Carmack, nor should those who learned how to code by reading source owe something to the authors beyond gratitude and maybe a note here and there.

Even things like Apple's new implementation of SMB, which is a clean-room rewrite with respect to GPLv3 Samba, but likely still leans on the years and years of experience and documentation about the SMB protocol.


> He says it's a gift, and if people do whatever, he doesn't care; he already gave it away.

That's his choice and I assume he licensed his code accordingly. That doesn't mean that the choices of others who used different licenses are invalid.


It's also odd to release software under a license allowing commercial use if the authors didn't want that.


It has never been the case that publishing a work entitles you to a share of all profits that are downstream of your work. Copyright law protects your ability to receive profits that result from the distribution of the work itself, but that's quite limited.

If you publish a cookbook, you should get a portion of the sales of the cookbook itself, and no one should be allowed to distribute copies of it for free to undermine your sales.

What you don't get is a portion of the revenues of restaurants that use your recipes!


It's not even the profit, but that there is often no new code being contributed.

AI provides an offramp for people to disengage from social coding. People don't see the point because they still don't understand the difference between barely getting something to work and meaningfully improving that thing with new ideas.


if no code is contributed back then why is there an ongoing problem with massive amounts of PRs?


I didn't say slop. I said code.

The whole point of contributing to open source is to make decisions and the code is the medium.


> profiting for having done it.

Isn't that permitted by some of the more popular licences? If you care about others profiting from your work you'd choose an appropriate licence. And then you'd temper your expectations and hope for the best because you know there will be less than perfect compliance. It's like lending money to family or friends. You can hope they pay you back, but better to consider it a gift because there's a good chance they won't.

Is it worse because it's AI for some reason? I'm having trouble pinning down exactly what the gripe is. Is it license compliance? Is it AI specific? Is it some notion about uncool behavior in what some people see as a community?


A lot of this use of open source code has directly breached the terms under which that code was shared, and the AI companies are now monetising the sale of this code.


Yeah the main difference seems to be that he open sourced the games after he got very wealthy from them not before. So of course at that point you can easily feel magnanimous about bestowing gifts.

Open sourcing something from the start and essentially giving up any ability to profit from the use of your work when companies are often making huge profits from it seems less easy in comparison.


> But that doesn't mean they don't care if other people profit from their work.

He clearly states his opinions. He doesn't care if other people profit from his code.

>> GPL would prevent outright exploitation by our competitors, but those were to allay fears of my partners to allow me to make the gift

He believes other members of the OSS community should have this mindset. Of course it might not be fair, especially for members who aren't as financially fortunate as he is. His point is clear nevertheless.


I haven't given a cent to openai or anthropic but they have given me many thousands of tokens for free.


> why is someone making money out of it a bad thing, if they're not preventing other people from using the code?

That's the point. I agree, and roughly it's one of two cases:

A: you made this as a free gift to anyone, including OpenAI
B: you made this to profit yourself in some way

The argument he makes is: if you did the second one, don't do open source.

It does kill a ton of open source companies though, and the truth is that model of operating is not going to work in this new age.

It's also sad because it means the whole system will collapse. The process that made him famous can no longer be followed: your open source code will be used by countless people, and they will never know your name.

It's not called a disruptive tech for nothing. You can't un-open-source all that code without lobotomizing every AI model.


If folks don't want LLMs scanning their codebases we should just make some new OSS licenses. Basically, "GPL/BSD/MIT + You pinky promise not to scan this for machine learning".

Either it works and the AI makers stop slurping up OSS, or it doesn't hold up in court and shrinkwrap licenses are deemed bullshit. A win/win scenario if you ask me.


It's a lot less odd when you remember that he's running an AI company himself.


I see your comment's been downvoted; I'd like to hear from those who did as to why. Doesn't his current venture with his AGI startup, Keen Technologies, deserve being called out as a potential conflict of interest here?


Because whether there is a conflict of interest or not, the argument can and should be examined on its own merits.


Ah... so the old “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”


Yes, but likely in the exact inverse of what is implied here. Carmack has generational wealth; he is likely fine financially regardless of how AI pans out. The many individuals who feel they should be financially compensated for code they open sourced are likely far more financially invested in that particular outcome.


>I think one of the more prominent issues folks take with mass training on OSS is that the companies doing it are now profiting for having done it.

I've noticed this thing where people who have decided they are strongly "anti-AI" will just parrot talking points without really thinking them through, and this is a common one.

Someone made this argument to me recently, but when probed, they were also against open weights models training on OSS as well, because they simply don't want LLMs to exist as a going concern. It seems like the profit "reason" is just a convenient bullet point that resonates with people that dislike corporations or the current capitalist structure.

Similarly, plenty of folks driving big gas guzzling vehicles and generally not terribly climate-focused will spread misinformation about AI water usage. It's frankly kind of maddening. I wish people would just give their actual reasons, which are largely (actually) motivated by perceived economic vulnerability.


I am anti-AI art and will never fund anything created from AI art. It lacks emotion. It can only copy and attempt to duplicate existing art.

The time taken to make art is therapeutic to the artist and is expressed in their end product. It helps them keep balance in their lives, stay calm, and fight depression.

Everything I have seen from AI art is deformed from reality. AI art will enhance body dysmorphia in the younger generation the more real it looks.

I am 100% for laws like Norway's, where it must be labeled that a photo has been edited. AI art should need to be labeled to help prevent body dysmorphia. Body dysmorphia leads to eating disorders, depression, and suicidal thoughts and actions.


>I am anti-AI art and will never fund anything created from AI art. It lacks emotion. It can only copy and attempt to duplicate existing art.

Sure, but there's a difference between being anti-AI in X use case, and anti-AI across the board. I see you didn't mention LLMs here, which are the biggest AI use case right now.

That said, a competent artist can produce cool collaborative works with AI image models. Folks have won art competitions using these tools. As AI image models like Nano Banana get more adept at manipulating images, it's likely to become yet another tool like Photoshop for human expression. That said, I don't think people one-shotting fully synthetic images is really artistic expression, so I agree with that much.

>Everything I have seen from AI art is deformed from reality. AI art will enhance body dysmorphia in the younger generation the more real it looks.

Is this...new? The advertising industry mastered this long before AI. We probably needed regulation back then, too. I'm not sure why AI is special here.


I recommend _Gödel, Escher, Bach_ by Douglas Hofstadter. [0] There is not a single reason but multiple reasons why large problems exist, bad things happen, or people reject ideas. By trying to reduce to a single idea, you are rejecting Gödel and accepting the idea that a universal mathematics can exist.

Please do not apply whataboutism to the labeling of edited images. [1] Labeling should also apply to manually edited content. The difference between manual editing and AI editing is talent. Few people know how to edit manually. AI allows anyone to auto-edit content, which lets even the least skilled modify images and be fooled by the edits.

I gave a reason why I will not spend a penny on AI-created art: games, movies, music, ... pictures. A person engaging with a prompt has no worldly knowledge of mediums. Working with a medium is a trained talent. [2] Typing into a prompt requires no artistic talent with mediums. That is part of it, because adding to it expands the complexity.

Anti-AI sentiment can easily be seen just by reading news and company statements. It is being socially engineered by companies that gave / give AI as the reason for firing workers, by CEOs trying to pump their stock by saying humans are no longer needed, and by news articles about jobs being taken over and replaced. These paint a bleak future and help prop up the ultra wealthy.

LLMs can easily be summed up, pun intended. I had a non-computer / tech-illiterate person state why they like AI: they don't have to read the report, it can summarize it for them; they don't want to spend time writing an email, AI can form it for them. Both have the same long-term effect: a lack of true understanding of the subject matter. The person who uses LLMs does not retain the content long-term, unlike the person who reads the full report. The person who takes time to write the email will become better at doing so, where the LLM user will not.

I have not seen any value in LLM summaries. One may provide a true answer or a false "hallucination". If I want to learn, I want to read the core content, not some summary. This assists me long-term with better understanding than those just seeking a simple _yes_ / _no_ answer. Understanding allows the content to be applied in both the yes and no cases, based on context.

AI (Artificial Intelligence) is a marketing farce. It is ML (Machine Learning). No one has yet conceived of AI, because it cannot learn by engaging with reality. ML only regurgitates what it is trained on, without evolution of knowledge or real-world experience. Like all applications: garbage in = garbage out.

ML is good at only one thing: assisting with removing inherent bias. Something the movie _Moneyball_ examines and shows as proof of concept. That movie should really be called _Remove Inherent Bias_, but that title does not market or sell. Analyzing CAT or PET scans is a good example of where ML can assist. A person's emotional state affects their ability to apply logic. _Thinking, Fast and Slow_ talks about how humans change their bias because of hunger. [3] It is also exemplified in how charisma affects logic. People who met Adolf Hitler did not see him as a bad guy. [4] Those who did not converse with Adolf Hitler had a better understanding of his character. This is the same reason judges will release the bad guy, who commits more crime, and keep the guy with good character in jail.

I can go even longer but will leave it at this. I left out the increased cost of computer components, electricity, and water. And the suppression of wages. People falsely imprisoned because of AI. And the black box of the content it has been trained on. AI psychosis... I don't want to add to the weighted value..., another pun.

P.S. I forbid LLMs and any ML or AI from using this content. If any AI / ML / LLMs utilize this content, you owe me no less than $1,000,000 in content usage fees per token analyzed.

[0] https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

[1] https://en.wikipedia.org/wiki/Whataboutism

[2] https://en.wikipedia.org/wiki/Spielberg_(film)

[3] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

[4] https://en.wikipedia.org/wiki/Talking_to_Strangers


so what?


Carmack is the same person who is comfortable delaying talk of the ethical treatment of a digital being, or of what even constitutes one, until in his eyes "they demonstrate the capabilities of a two year old". By that point, with the scale we distribute these models at and the dependence on them we're pushing the world to adopt, we'll be well into the "implicit atrocity zone", and so far down the sunk-cost trail that everyone will just decide to skip the ethics talk altogether if we wait that long. This is in spite of his being a family man, which raises serious questions for me about how he must treat them. It does not surprise me at all that the man has blind spots I could fit a semi-truck through.



