

Idk, I'm skeptical. Is there any proof that these multi-agent orchestrators with fancy names actually do anything other than consume more tokens?


Reasoning density.

I have a specific use case (financial analysis) that is at the edge of what is possible with these models (accuracy-wise).

Gemini 2 was the beginning: you could see this technology could be helpful in this specific analysis, but there were plenty of errors (not unlike a junior analyst). Gemini 2.5 Flash was great, actually usable, and the errors it made were consistent.

This is where it gets interesting: I could add additional points to my system prompt, and yes, it would fix those errors, but it would degrade the answer elsewhere. Often the result wouldn't be incorrect but merely much simpler, less nuanced and less clever.

This is where multi-agents helped: it meant the prompt could be broken down so that answers remain "clever". There is a big con to this: it is slow, slow to the point that I chose to stick with a single prompt (the requests didn't work well operating in parallel, as one prompt surfaced factors for the other to consider).
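
Roughly, the decomposition had this shape. A minimal sketch (call_llm is a hypothetical stand-in for whatever model client you use, and the sub-prompt wording is made up), showing why the steps couldn't run in parallel:

    # Hypothetical helper: stands in for whatever model API is used
    # (Gemini, etc.); it just returns the model's text answer.
    def call_llm(system_prompt: str, user_input: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def analyse(filing_text: str) -> str:
        # Step 1: a narrow sub-prompt that only surfaces factors to consider.
        factors = call_llm(
            "You are a financial analyst. List the key risk factors and "
            "red flags in the text. Be exhaustive; draw no conclusions.",
            filing_text,
        )
        # Step 2: the main analysis prompt, fed step 1's output.
        # Step 2 depends on step 1, which is why running them in parallel didn't fit.
        return call_llm(
            "You are a financial analyst. Write a nuanced assessment, "
            "explicitly addressing each factor listed below.",
            f"Factors to consider:\n{factors}\n\nSource text:\n{filing_text}",
        )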

However, Gemini 3 Flash is now smart enough that I'd consider my financial analysis solved. All with one prompt.


It's hard to measure accurately, but one advantage the multi-agent approach seems to have is speed. I routinely see Sisyphus launching up to 4 sub-agents to read/analyse a file and/or to do things in parallel.
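
Something like this pattern, as a sketch (run_subagent is a hypothetical stand-in for whatever the orchestrator actually invokes, and the file list is made up); the wall-clock win is simply that the slow LLM calls overlap:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical stand-in for a sub-agent call: each sub-agent gets one
    # focused task and returns a text summary.
    def run_subagent(task: str, path: str) -> str:
        raise NotImplementedError("plug in your agent/LLM client here")

    files = ["src/parser.py", "src/lexer.py", "src/ast.py", "tests/test_parser.py"]

    # Fan out up to 4 read/analyse tasks at once instead of one by one.
    with ThreadPoolExecutor(max_workers=4) as pool:
        summaries = list(pool.map(
            lambda p: run_subagent("Summarise this file and list its public API", p),
            files,
        ))

    # The orchestrator then hands the combined summaries to the main agent.
    combined = "\n\n".join(summaries)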

The quality of the output depends more on the underlying LLM. GLM 4.7 isn't going to beat Opus but Opus with an orchestra seems to be faster and perhaps marginally better than with a more linear approach.

Of course this burns a lot of tokens, but with a cheap subscription like z.ai or with a corporate budget, does it really matter?


Soo instead of solving the problem that the university supposedly doesn't have the money to have normal oral exams, they enshittified and techbrosified the entire process?

Thank god I had a chance to study in pre-AI times.


What do you mean by "made-up quote"? The author didn't make it up, see https://en.wikipedia.org/wiki/A_journey_of_a_thousand_miles_...

This level of nit-picking is disheartening...


I think they meant:

> "If you can't explain it simply, you don't understand it well enough." - Albert Einstein

Which is listed at https://en.wikiquote.org/wiki/Albert_Einstein#Misattributed


The reference is most likely to:

> "If you can't explain it simply, you don't understand it well enough."

Which TFA (and others) misattribute to Einstein


> Enabling the feature in Workspace says that “you agree to let Google Workspace use your Workspace content and activity to personalize your experience across Workspace,” according to the settings page, but according to Google, that does not mean handing over the content of your emails to use for AI training.

Google be like: "trust me bro"


User-facing software is full of language like that these days and I find it really frustrating, because it never helps answer the questions attentive people actually have, like will that mean my emails get dumped into the next Gemini training run?


Maybe my brain has rotted from paying a bit more attention to privacy language than the average person, but IMO in this case it's fairly clear to me that using your activity “to personalize your experience across Workspace” does not mean using it for training Gemini. The “personalize” means it's for training the recommendation/categorization systems for you, like which emails get marked as "important" or not (the settings at https://mail.google.com/mail/u/0/#settings/inbox).

(In some very broad sense I guess critics could call this “training AI” as there's an ML system somewhere whose parameters associated with your account get updated, but I think we can all agree this is not what we think of as “training AI”, i.e. going into a cross-user dataset for training Gemini or whatever.)

(I guess what Google should do, and should have done years/decades ago, is create a fixed set of categories of how your data can be used (aggregate statistics, training Gemini, personalization…) and use the same language across products, legal, everything.)


I interpret it that way too, but my bigger problem is in explaining it to other people. I can't credibly point nervous users at that language and say "what this actually means is X", because there's too much vagueness and wiggle room built into the language that the companies publish.


If you are using Google Workspace you decided to trust them a while ago.


Maybe circumstances have changed? I certainly trusted 2008 Google a lot more than Google in 2025. It's really amazing to see a company just throw trust and goodwill out the window, even worse to see that it pays.


It sounds like you have some serious knowledge gaps about AI. It's perfectly normal to use AI on a dataset without incorporating that dataset into training a new AI model. If you download a free model and run it offline on your data, your data doesn't get magically incorporated into the model on the site you originally downloaded it from.
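
For example, a minimal sketch with the Hugging Face transformers library (assuming the model weights are already downloaded or cached locally, so your text never leaves your machine):

    from transformers import pipeline

    # Load a publicly distributed model and run it locally.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    # Inference only reads the weights; nothing here updates the model or
    # sends your text back to whoever published it.
    print(classifier("my private email text"))

Training is a separate step you'd have to run deliberately; it doesn't happen as a side effect of inference.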

This is the same thing, and it's backed by a contract and the threat of lawsuits from the many businesses using Google Workspace.


> If this post makes it to the top of HN, you could well be outed. You will be eventually.

At least it will be funny then...

> Is it colorful? What's it aimed at?

It's just a single "dick" in Polish inserted in a debug logging statement


Now let's wait until the US invests $50B in Anthropic!


Look at that! Where did this money come from?


If you have some spare time, please consider contributing! The community is really nice, I have sent a few PRs myself :)


Correct, it's ignored. I find it strange that the EU isn't looking into it.


Anna's archive has already fulfilled G's needs (training Gemini) so now it's time to pretend it never existed ;)


Did Anna's Archive also organize much of the world's information and make it universally accessible, for some time?


Actually yes. And we're talking about high-quality information, not random comments.


They’re… yes. Yes, that’s exactly what they have done and continue to do. Are you familiar with it?


That phrase is Google's mission statement.


I thought their mission statement was "Don't be evil", until they shortened it for practicality to just "Be evil". It's certainly how they've been behaving in recent years.


It's now "Don't be evil*"

* Subject to terms and conditions, lack of evil may not be available in all regions.


That wasn't ever a mission statement, and fwiw it was in the employee handbook still in 2023 when I got laid off.


They changed it more than ten years ago to "Do the right thing".


I like "Don't be evil" better. It inherently acknowledges their position of power in a way that "do the right thing" hides


Motto, not mission statement.


I think the comment is saying Google was also doing that.


Anna's archive doesn't engage in privacy-eroding antitrust/monopolistic activities (yet), so there's that I suppose...


They're doing it for one site less now


It's not delisted. Anna's Archive is huge. The fact that Google participates in an entirely voluntary transparency log that gives you this information should illustrate to you where they stand on the issue of their needing to be compliant to the DMCA. It isn't clear to me why online communities constantly invent fan fiction of evil enemies when organizations merely comply with a reasonable interpretation of the law of the land they are incorporated in.


Apparently the corpo doesn't hesitate to remove it when it benefits the consumer, because "we just follow the law, citizen!" But when it benefits the corpo, it takes decades of suing and multi-billion fines to make a change.

Totally not evil, just business, comrade, amirite?


100%. Here in Germany it's invisibly deleted, and the process is handled by a private company.


No one, and I mean no one, has to invent the history of evil corporations doing evil things. Climate change? Cigarettes? Shit, let's go modern. CZ? SBF?

If it's not clear to you, may I suggest with the utmost respect that you read Surveillance Capitalism by Zuboff (a successor to Manufacturing Consent, in my humble opinion).

I guess my question is: where do you get the confidence or belief that these companies are doing anything BUT evil? How many of America's biggest companies' workers need food aid from the govt? Look up what % of army grunts are food insecure. In the heart of empire.

Where on earth do you get this faith in companies from?


Publicly traded corporations are machines whose only lawful purpose is to make money. They are legally obligated to be sociopathic systems. They aren't evil like an axe murderer, they're evil like a gasoline fire. They may be useful when properly controlled, but they're certainly never worth defending in the way you seem to feel the need to


>Publicly traded corporations are machines whose only lawful purpose is to make money.

Hey, so this isn't the case at all, publicly traded companies are under no lawful obligation to focus only on making money. Fiduciary duty does not mean this in any way. It's a common misconception whose perpetuation is harmful. Let's stop doing it.


> publicly traded companies are under no lawful obligation to focus only on making money

You changed the word "purpose" to "obligation".

I think there is a big difference between the two.

I would suggest a correction to both of these statements: the only purpose isn't to make money but rather to grow valuation (though that's the same thing most of the time).

They'd rather lose out on profits, or even burn the profits, if it meant their valuation could somehow grow faster.

But sooner or later the profits will catch up to the valuation (I hope), and in an efficient economy only profitable companies should have valuations built on top of that.

Publicly traded corporations get money from people indirectly via retirement funds or directly via people investing in them. The whole idea becomes that what matters to a person retiring is not the profits of the company but rather its valuation. Of course, they aren't under a legal obligation to profit itself, but I would consider them to be almost under a legal obligation to valuation, otherwise they would be removed from being publicly traded or from things like the S&P 500, etc.

As an example, in my limited knowledge, take Costco: some rich guy would tell them to raise the price of their hot dog from $1.50 to $3-4 for insanely more profit. Yet they have their own philosophy, and that philosophy is partially the reason for their valuation as well.

When the rumour spread that Costco was raising the price of their hot dogs, one might have expected the stock price to increase, considering more "profit" in the future, but instead the stock price dropped, by a huge margin if I remember correctly.

Most companies are investing in AI simply because it's driving their valuations up like crazy.

I don't think it's an overstatement to say that companies are willing to do anything for their valuations.

Facebook would try to detect whether girls are insecure about their bodies and show them targeted advertisements. This is, in my opinion, predatory behaviour by the corporation. For what purpose? For the valuation.


Potato, potahto. While you're right that the law doesn't state it, it's also true that it is the only goal they have, so there's that.


The purpose of a system is what it does.


It's not "a system". Each company is run by different people, is under different pressures, and makes different decisions. Treating them as a monolith is silly.



Each company is a system, though. And they exhibit certain behaviors common to their type.

