“We’re building an age-prediction system to estimate age based on how people use ChatGPT.” Is there something wrong with simply asking the user when they register? (An age is volatile; a DOB isn’t.)
For a company that sees itself as the undisputed leader and that wants to raise $7 trillion to build fabs, it deserves some of the heaviest scrutiny in the world.
If OpenAI's investment prospectus relies on them reaching AGI before the tech becomes commoditized, everyone is going to look for that weakness.
I was on an airplane and there was high-speed Internet on the airplane. That's the newest thing that I know exists. And I'm sitting on the plane and they go, open up your laptop, you can go on the Internet.
And it's fast, and I'm watching YouTube clips. It's amazing. I'm on an airplane! And then it breaks down. And they apologize, the Internet's not working. And the guy next to me goes, 'This is bullshit.' I mean, how quickly does the world owe him something that he knew existed only 10 seconds ago?"
Soon, all the middle class jobs will be converted to profits for the capital/data center owners, so they have to spend while they can before the economy crashes due to lack of spending.
Not invariably. Some of those people are the ones who want to draw 7 red lines all perpendicular, some with green ink, some with transparent and one that looks like a kitten.
No, people who say "it's bullshit" and then do something to fix the bullshit are the ones that push technology forward. Most people who say "it's bullshit" instantly when something isn't perfect for exactly what they want right now are just whingers and will never contribute anything except unconstructive criticism.
There's someone with this comment in every thread. Meanwhile, no one answers it because they are getting value. Please take the time to learn; it will give you value.
I’m a consultant. Having looked at several enterprises, there’s a lot of work being done to make a lot of things that don’t really work.
The bigger the ambition, the harder they’re failing. Some well designed, isolated use cases are OK. Mostly tools that transcribe or summarize text to aid humans.
I have yet to see a successful application that is generating good content. IMO replacing the first draft of content creation and having experts review and fix it is, like, the stupidest strategy you can pursue. The people you replace are the people at the bottom of the pyramid who are supposed to do this work to upskill and become domain experts, so they can later review stuff. If they’re no longer needed, you’re going to one day lose your reviewers, and with them, the ability to assess your generated drafts. It’s a footgun.
I mean, no, not generally. But the success rate of other tools is much higher.
A lot of companies are trying to build these general purpose bots that just magically know everything about the company and have these big knowledge bases, but they just don’t work.
I'm someone who generally was a "doubter", but I've dramatically softened my stance on this topic.
Two things:
I was casually watching Andreas Kling's streams on Ladybird development (where he was developing a JIT compiler for JS) and was blown away at the accuracy of completions (and the frequency of those completions)
Prior to this, I'd only ever copy-pasted code from ChatGPT output on occasion.
I started adopting the IDE/Editor extensions and prototyping small projects.
There are now small tools and utilities I've written that I'd not have written otherwise, or that would have taken twice the time invested had I not used these tools.
With that said, they'd be of no use without oversight, but as a productivity enhancement, the benefits are enormous.
For my mental health I’ve stopped replying to comments where it’s clear the author has no intention of having a discussion and instead wants to share their opinion and have it reinforced by others.
No, we don’t have AGI or anything close to it. Yes, AI has come a long way in the past decade and many people find it useful in their day-to-day lives.
It’s difficult to know where AI will be in 10 years, but the current rate of improvement is staggering.
> Meanwhile, no one answers this because they are getting value.
You're literally doing the same thing you're accusing others of. Every HN thread is full of AI boosters claiming AI to be the future with no backing evidence.
Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?
> Please take the time to learn, it will give you value.
Yeah, yeah, just prompt engineer harder. That'll make the stochastic parrot useful. Anyone who has criticism just does so because they're dumb and you're smart. Same as it always was. Everyone opposed to the metaverse just didn't get it bro. You didn't get NFTs bro. You didn't get blockchain bro.
None of these previous bubbles had money in it (beyond scamming idiots), if AI wants to prove it's not another empty tech bubble, pay up. Show me the money. Should be easy, if it's automating so many expensive man-hours of labour. People would be lining up to pay OpenAI.
> Riddle me this. If all these people are "getting value", why are all these companies losing horrendous amounts of money? Why has nobody figured out how to be profitable?
While I agree that LLMs are not currently working great for most envisioned use cases; this premise here is not a good argument. Large LLM providers are not trying to be profitable at the moment. They’re trying to grow and that’s pretty sensible.
Uber was the poster child of this, and for all the mockery, Uber is now an unquestionably profitable company.
I'm not sure I would call it sensible to incinerate $11B a year, to the point where one of the biggest raises ever doesn't even buy you a year of runway.
Think of all the search engines (AllTheWeb, Yahoo, AltaVista, ...) where so much money got poured in, and in the end there was just one winner taking it all. That's the race OpenAI is trying to win now. The competition is fierce, and we get to play with all kinds of models for free, yet do nothing but complain.
> Why has nobody figured out how to be profitable?
From what I've seen claimed about OpenAI finances, this is easy: It's a Red Queen's race — "it takes all the running you can do, to keep in the same place".
If their financial position were as simple as "we run this API, we charge X, the running cost is Y", then they're already at X > Y.
But if that was all OpenAI were actually doing, they'd have stopped developing new versions or making the existing models more efficient some time back, while the rest of the industry kept improving their models and lowering their prices, and they'd be irrelevant.
> People would be lining up to pay OpenAI.
They are.
Not that this is either sufficient or necessary to actually guarantee anything about real value. For lack of sufficiency: people collectively paid a lot for cryptocurrencies and NFTs, too (and before then, and outside tech, homeopathic tinctures and sub-prime mortgages). For lack of necessity: there are plenty of free-to-download models.
I get a huge benefit even just from the free chat models. I could afford to pay for better models, but why bother when free is so good? Every time a new model comes out, the old paid option becomes the new free option. I use them to:
• Build toys that would otherwise require me to learn new APIs (I can read python, but it's not my day job)
• Learn new things like OpenSCAD
• Improve my German
• Learn about the world by allowing me to take photos of things in this world that I don't understand and ask them a question about the content, e.g. why random trees have bands or rectangles of white paint on them
• Help me shop, by letting me take a photo of the supermarket I happen to be in and ask where I should look for some item I can't find
• Help with meal prep, by allowing me to get a recipe based on what food and constraints I've got at hand rather than the traditional method of "if you want x, buy y ingredients"
Even if they're just an offline version of Wikipedia or Google, they're already a more useful interface for the same actual content.
That's what puzzles me now. Everyone with a semblance of expertise in engineering knows that if you start with a tool and try to find a problem it could solve, you are doing it wrong. The right way is the opposite: you start with a problem and find the best tool to solve it, and if it's the new shiny tool, so be it, but most of the time it's not.
Except the whole tech world starting with the CEOs seems to do it the "wrong" way with LLMs. People and whole companies are encouraged to find what these things might be actually useful for.
Yeah, if you're specific about it and know what to expect it's usually workable. In any case, this blog post is an indicator of what's about to come next.
It’s the best we’ve got for achieving actually meaningful privacy and anonymity. It has a huge body of research behind it that is regularly ignored by those coming up with sexy or off-the-cuff alternatives.
It’s the most popular so it gets the most attention: from academics, criminals, law enforcement, journalists, …
Why not just have a greater number of relays per circuit by default? Internet bandwidth tends to increase over time, and the odds of this correlation attack are roughly proportional to the attacker's share of relays raised to the power of the number of relays used.
So, latency issues permitting, you would expect the default number of relays to increase over time to accommodate increases in attacker sophistication. I don't think many would mind waiting a minute for a page to load if it increased privacy 100x or 1000x.
If you’re advocating for a bigger network… we need more relay operators. Can’t wave a magic wand. There’s like 8000 relays. Haven’t looked in a while.
Or if you were arguing for increasing the number of relays in a circuit: that doesn’t increase security. One of the OG Tor research papers settled on 3. The bad guy just needs the first and the last; the middle is irrelevant.
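A quick back-of-the-envelope simulation illustrates the point. This is a sketch under simplifying assumptions (relay positions chosen uniformly at random, ignoring Tor's actual bandwidth weighting and guard pinning): an end-to-end correlation attack only needs the adversary at the first and last position, so adding middle hops doesn't lower the compromise rate.

```python
import random

def compromise_probability(adversary_share: float, hops: int,
                           trials: int = 100_000) -> float:
    """Estimate the chance a circuit is deanonymizable by end-to-end
    correlation: the adversary must control the FIRST and LAST relay;
    middle hops are irrelevant to this attack."""
    hits = 0
    for _ in range(trials):
        # Each hop is adversarial independently with probability adversary_share.
        circuit = [random.random() < adversary_share for _ in range(hops)]
        if circuit[0] and circuit[-1]:
            hits += 1
    return hits / trials

# With a 10% adversary share, both a 3-hop and a 10-hop circuit come out
# near 0.1 * 0.1 = 1% compromised: longer circuits buy you nothing here.
```

Under this toy model the answer is share squared regardless of circuit length, which is why the literature focuses on who runs the endpoints rather than on path length.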
> we need more relay operators. Can’t wave a magic wand. There’s like 8000 relays. Haven’t looked in a while.
The reason that there are so few relays and exit nodes is that everyone that runs an exit node believes, for very good reason, that they'll be opening themselves up to subpoenas and arrest for operating one. You know who never has to worry about getting arrested? Surveillance agencies tasked with running exit nodes.
Consider the two classes of relay and exit operators:
1. People who operate relays and exit nodes long term, spending money to do so with no possibility or expectation of receiving money in return, and opening themselves up to legal liability for doing so, whose only tangible benefit comes from the gratification of contributing to an anonymous online network
2. Government agencies who operate relays and exit nodes long term, spending government allocated money to operate servers, with no material risk to the agencies and whose tangible benefit comes from deanonymizing anonymous users. Crucially, the agencies are specifically tasked with deanonymizing these users.
Now, I guess the question is whether or not you think the people in group 1 have more members and more material resources than the agencies in group 2. Do you believe that there are more people willing to spend money, run the risk of having equipment seized, and face arrest, for no gain other than philosophical gratification, than there are government computers running cost- and risk-free, deanonymizing traffic (which is their job)?
>Or if you were arguing for increasing the number of relays in a circuit, that doesn’t increase security. It’s like one of the OG tor research papers deciding on 3. Bad guy just needs the first and the last. Middle irrelevant.
Because of timing attacks? There are ways to mitigate timing attacks if you are patient (but I think clearnet webservers are not very patient and may drop your connection).
And yeah, mitigation gets you into a huge body of research that’s inconclusive on practical usability. E.g. so much overhead that it’s too slow, and 10 people can use a 1000-relay network and still get just 1 Mbps goodput each. Contrived example.
People need to actually be able to use the network, and the more people the better for the individual.
There’s minor things tor does, but more should somehow be done. Somehow…
> It’s the best we’ve got for achieving actually meaningful privacy and anonymity
...while being practical.
One could argue that there is i2p. But i2p is slow, a little bit harder to use, and from what I can remember, doesn't allow you to easily browse the clearnet (regular websites).
These sorts of “Tor evangelism” comments are so tiring, frankly. There are quite a few like it in this thread, in response to… not people pooh-poohing Tor, or throwing the baby out with the bathwater, but rather making quite level-headed and reasonable claims about the shortcomings and limitations of the network / protocol / service / whatever.
One should be able to make these quite reasonable determinations about how easy it’d be to capture and identify Tor traffic without a bunch of whataboutism and “it’s still really good though, ok!” replies that seek to unjustifiably minimise valid concerns, because one feels the need to… go to bat for the project one feels some association with, or something.
The self-congratulatory cultiness of it only makes me quite suspicious of those making these comments, and if anything further dissuades me from ever committing any time or resources to the project.
The issue is that the people making 'level headed' claims have read none of the literature and their mathematical ability seems to end at multiplying numbers together.
It sounds reasonable to anyone who hasn't read the papers; to anyone who has, these comments are so wrong that you can't even start explaining what's wrong without a paper's worth of explanation that these people won't read.
I see the post got flagged, sad to see but if that's the verdict this'll be the last comment on this post.
The title is not "I caught a Russian developer doing bad things during war!!"; the title says "a cautionary tale", which from my point of view is a PSA: sharing observations and my interpretation, with some speculation, to inform readers of possible avenues that may affect them, if not through this repository then through another.
To close my writing I'll include the content of a comment in response to a different user, which should define my intent:
"My point would mainly be to spread awareness and share an experience and my interpretation of it, not "slander" and paint a target on my back by namecalling and divulging more information which doesn't serve a purpose beyond wanting clout under the assumption that the war does not affect myself and others around me."