Have We Reached Peak AI? (wheresyoured.at)
53 points by minimaxir on March 18, 2024 | 40 comments


This is the second time in as many weeks I have seen someone ask if a process which essentially cannot go in reverse has “peaked.” It makes as much sense as asking if this is peak FLOPS-per-dollar. The only thing I can think of that could cause us to reach peak AI would be a meteor.


Though it is essentially monotonic, I feel like these people are trying to imply that the higher-order derivatives are turning negative when they say that it has peaked. That's probably also a cold take, but at least it's not a nonsensical one.


> This interview is important for a few reasons, but let's start with the most obvious: the Chief Technology Officer of OpenAI either can't or won't explain what materials its video-creating generative AI was trained on.

I wonder what’s left of “Open” in OpenAI.


Their wallets, as they rake the profits in :)


It's all public, stolen, scraped, pirated data. Everybody knows this. Any data that was available online was used for training. I don't know why we pretend this wasn't the case. As an industry we need to be more truthful and honest.


Still... I don't think it's bad as such. We all benefit from it. I view it in a similar way to Google indexing. Or human learning. We all benefit from society's knowledge as a whole. Especially for models that are open source (well, public weights) I think it's a good thing.

Of course not everyone agrees, but only in the rarest of instances will it return source data verbatim.


Even if that's true we don't all benefit from it equally, or in the same ways. An artist whose work was used without compensation for training data and who now can't find paid work benefits how? Just in the same vague general sense that we all do? That hardly seems honest.

But hey, if it is true, then we can nationalize it. If it's built on our collective labor and applied to our collective benefit, then there is no justification for letting the profits accrue to a few companies and their investors.


What is stolen?


Pictures, images, fanfic. All pirated copies.


But the original owners of those pictures, images, and fanfic all still have them, so "stolen" seems like the wrong word.


Give the semantics a rest. I would love to hear a more cogent argument for why Microsoft and its shareholders should get to profit from unlicensed data just because they found a way to repackage it into a helpful, harmless assistant.


So you're saying that any data, regardless of origin, should be licensed. So should we be paying dividends to Einstein for E=mc²? Or should he have paid dividends to the mathematicians and physicists before him whose work he used to further humanity? I mean, since it's just semantics, he clearly "stole" the E, the =, the m, the c, and the 2.


Math isn't covered by copyright law.

Really, my opinion is that copyright should last 7 years from publication date, but whatever it is, I would like it to be applied equally.


I must be in as much of a bubble as the author is because I genuinely don't understand how people believe this.

This piece is centered on the media around AI, not AI itself. Tons of businesspeople and journalists are quoted, but no people who already use it in their day-to-day lives (as I and many people here probably do), or as part of their jobs, where it undeniably creates very real value. Nor researchers, who would be happy to answer the question of exactly how AI can create value if the businesspeople don't want to.


The scaling laws don't show an inflection point yet [0], DeepMind and OpenAI are producing rudimentary agents, and inference keeps getting cheaper.

It's certainly possible that we passed the capability inflection point months ago and we'll only see a slowing of progress in the future, but even if so, there's no evidence that AGI will be significantly less capable than humans as we approach the asymptote of the S-curve we're on. Models that can achieve median human performance will be far cheaper to operate than the median human salary. All the SOTA benchmarks are against experts, not median humans (as they should be). Currently, even the benchmarks that may be plateauing [0] are above human performance.

I've always been less interested by the individual SOTA results themselves than by the curves they're following.

[0] https://contextual.ai/plotting-progress-in-ai/
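
To make the S-curve talk concrete, here's a rough sketch (made-up numbers, not real benchmark data) of what "finding the inflection point" means: fit a logistic curve to scores over time and see where its midpoint lands.

    # Sketch only: the scores below are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, ceiling, steepness, midpoint):
        # Classic S-curve: grows fastest at t == midpoint, flattens toward ceiling.
        return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

    years = np.array([2018, 2019, 2020, 2021, 2022, 2023, 2024], dtype=float)
    scores = np.array([22.0, 30.0, 45.0, 62.0, 75.0, 83.0, 87.0])  # hypothetical

    params, _ = curve_fit(logistic, years, scores, p0=[100.0, 1.0, 2021.0])
    ceiling, steepness, midpoint = params
    print(f"fitted ceiling {ceiling:.1f}, inflection year {midpoint:.1f}")

If the fitted midpoint is already behind us, growth is slowing toward the ceiling; that's all "approaching the asymptote" claims.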


The author is oddly (suspiciously?) hostile towards AI. I get skepticism, but this piece is awfully reminiscent of the "internet has peaked" naysayers of the early '90s.


People who write for a living seem to often write about how AI is bad.

Reminds me of the Upton Sinclair quote about not expecting someone whose job depends on not understanding a thing to understand it.


Writers can hugely benefit from AI. Not by having the work done for them, but for rephrasing and the like.


I wish people could understand that creatives actually enjoy the process. Artists enjoy creating art. Writers enjoy writing. Having a computer rewrite your sentences for you reduces the art of writing to a merely mechanical process, which is not only dehumanizing but which strips the work of the author's unique signature and tone. You could train an LLM on everything Stephen King ever wrote, but the end result is only ever going to come across as a parody of what he's already written.

Yes, it's more efficient, and more productive, but not every job is like software where the only relevant measure of quality is how quickly you can get a minimum viable product to market. Some things are worth the effort and craft.


I'm not saying to take the result as-is, but you could ask the AI for suggestions. A lot of authors tend to get hung up on the same words over and over, which becomes tedious as a reader.

It's really not so different from a synonym dictionary. Just a lot easier, and with better context.

Also, many writers don't enjoy it as such. Sometimes it's just production. Think of journalists asked at the last moment to cut their page-long piece to 800 characters. Instead of spending 20 minutes cursing the editor and their mother and then settling into the work, they can now whip it into the LLM and check it. Or marketers having to rewrite a bunch of corporate drivel for a different target audience.
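
As a sketch of that last workflow: assuming the openai Python package, an API key in the environment, and an illustrative model name, the "whip it into the LLM and check it" loop is just a rewrite request plus a deterministic length check.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def cut_to_length(text: str, limit: int = 800, attempts: int = 3) -> str:
        # Ask the model to shorten the text, then verify the budget ourselves.
        prompt = f"Rewrite this in at most {limit} characters, keeping the key facts:\n\n{text}"
        draft = text
        for _ in range(attempts):
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo",  # illustrative; any chat model works
                messages=[{"role": "user", "content": prompt}],
            )
            draft = reply.choices[0].message.content
            if len(draft) <= limit:
                return draft  # within budget; a human still checks the content
            prompt = f"That was {len(draft)} characters. Cut it to at most {limit}:\n\n{draft}"
        return draft  # still over budget after retries; back to cursing the editor

The model does the rewriting, but the character count is checked in plain code, so you never have to trust the LLM about the one thing that's trivial to verify.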


I think it's at least in part because they are completely out of the loop: "[I] realized that in the last year, I have met exactly one person who has [used generative AI] — a writer that used it for synonyms."


clearly they don't know any coders


The only programmers I "know" using it for work are commenters on HN. I've seen coworkers try it, and tried it myself, but so far haven't seen regular, effective use of it beyond what people describe here.


I neither put code in nor use its output directly. I treat it as a conversational oracle over Stack Overflow / all the documentation ever written :D


Future already here, not evenly distributed etc


My brother-in-law is a manager at a ~15,000-person software company that is a heavy Microsoft shop. He said they received a license discount if they added GitHub Copilot for all of their (many thousands of) engineers. Essentially, it would have cost more not to enable Copilot.

We thought the reason this happened was so MS could goose their "paid" user accounts for Copilot to make the revenue look better, with the hope that his company would eventually see the utility of Copilot and then pay for it voluntarily once the next license renewal comes up.

He doesn't hate Copilot, but the consensus is that they won't be willing to pay for it at the next renewal; it isn't really that useful (so far).


Haha, we did the same. Every one of us with a GitHub account was forced to "request" a Copilot license, whether we wanted one or not. I didn't, because I mainly use GitHub as a repository for safekeeping scripts. I was wondering why. Now I know. Thanks.


Peak AI? No. Peak GenAI? Almost.

People keep equating GenAI with AI.


Just call them NNs, because that's what they are. Though I realize that's a lost cause by now, unfortunately.


Multilayer perceptrons.


Not sure how you think we're almost at peak GenAI when image and text models are still making HUGE gains. We're still in the low-hanging-fruit phase, my friend.


I think the author simply assumes that Murati doesn't answer because she doesn't know. She's not answering for legal and competitive reasons.

The author is clearly unfamiliar with ML from a technical perspective, as well as with the state of the industry, which accounts for the things not detailed or not precisely estimated in the interview.


They don't assume that:

> This interview is important for a few reasons, but let's start with the most obvious: the Chief Technology Officer of OpenAI either can't or won't explain what materials its video-creating generative AI was trained on.


Exactly. As a question, it amounts to "say something that'll score you a billion-dollar lawsuit."


I just laugh and roll my eyes when people claim that AI produces very little value.

Chat AI is a useful product with many individual users who pay because they get value from it.

Maybe it can’t autonomously execute business processes yet, but it’s an absolute superpower for learning anything new.

Having an AI is like having 24/7 access to an expert on any subject. This expert isn’t always right, and is a bit of an idiot sometimes, but having immediate access to their knowledge is powerful. How do people not see this?


I see it more as 24/7 access to a bright high school teacher. ChatGPT is far from expert at anything. Mile wide and about a foot and a half deep. (Deepening all the time, though.)


Peak AI *hype*, hopefully.


Betteridge's Law of Headlines [0] remains undefeated.

[0] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines


yes


"I see little evidence that OpenAI has any plans to help creators other than trying to convince them to use its tools." -- Is she expecting them to send monthly tithe checks to creators? The tone is suggestive, and to me that seems insane. Better tools is exactly how I would expect OpenAI to help content creators.

"Chang weakly ripostes by laughing about her kids using ChatGPT to write papers, to which Hoffman retorts with his own version of "extending creativity," saying that the hope would be that the interaction with AI will teach students to "create much more interesting papers," a point at which Chang should have asked him what that actually fucking means." -- This is purposefully obtuse. Less time spent researching minutiae and trying to figure out how to pack your material into the format means more time available to come up with an interesting thesis and coming up with clever points or turns of phrase for GPT to incorporate.

"Stern fails to push back here on numerous fronts — that "larval reasoning" is a completely meaningless term, and that, in general, Altman has failed to actually explain what he means." -- This one isn't hard to figure out if you spend a minute using these tools, and the fact that the author didn't shows that they didn't do proper due diligence to write an article worth considering. For example, "Hey ChatGPT, I have an idea but I'm not sure how to go about proving/disproving it, and I am missing some background information, what books and authors are a good source of information, and experiments could I do to improve my understanding?"

I could go on, but at this point I can't be bothered to read a cold take from someone who's obviously got an anti-AI agenda (probably due to connections to people in art/journalism who are 100% going to lose their jobs).



