Hacker News | Ucalegon's comments

Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and thus, rather than perform self-reflection, they decide the problem exists elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications, both words are used correctly relative to their definitions.

>As a former R&D scientist there is no way I’d inject any peptide that hasn’t at least gone through a phase 1 safety study in humans. Otherwise you have no idea what it could be doing to your body.

A lot of people do not understand the trial system or the value of Phase 0/1 tests when it comes to the substances they put into their bodies. And thanks to the influencer/grifter/biohacker ecosystem that exists, more people will put their trust in anecdotal evidence from people whose incentive is to make money off of them, while complaining that the pharmaceutical industry operates on a profit motive.



The problem with this argument is that forcing people to use technology, without proper training and against their will, introduces them to risks as well. Anyone with older parents or family can tell you that the harms that come with phishing and other fraud cost more, at both the micro and macro level, than accommodating people who do not use technology. When there are relatively simple fixes to the problem, they seem better than insulting people and bullying them into technology adoption, increasing their risk exposure for no reason other than 'I believe that people who don't use technology are somehow lesser'.

The worst thing about this entire discourse is that the root of the "just print this one guy his tickets on-demand" argument assumes, at its base, that once you hit a certain age you immediately become a moron incapable of learning anything new or adjusting your day-to-day life at all.

An 80-year-old person is just as smart as a 20-year-old. He's perfectly capable of learning how to use a $50 smartphone to access his $5-200k/yr season tickets; he just doesn't want to. It sounds like he was told years and years ago that they were moving in this direction, they've been printing him tickets as an exception, and they've decided to stop the exception. He's had 20 years to get a smartphone and learn how to use it. The fact that he now has to choose is a prison of his own making.


I don't think the discourse is about just this one guy; it's about an entire class of people for whom swiping around a smartphone is a bewildering experience that they managed to live their whole lives so far without. If you're not adept at it, it makes you feel stupid. Maybe you haven't had that experience, but there's more to being a Luddite than stubbornness.

If I can get along with the rest of my life on a flip phone, it seems pretty unreasonable to buy a device just to buy sports tickets.


> If I can get along with the rest of my life on a flip phone, it seems pretty unreasonable to buy a device just to buy sports tickets.

I would agree. It also seems unreasonable to expect the organization to make an exception to a completely legitimate anti-scalping measure for one person.


>for one person

For everybody. Nobody should be forced to use a proprietary phone app.


Why not? Going to a Dodgers game is not a constitutional right. If the business wants to make it harder for people to give them money, that might be stupid, but it's their right.

Do you know how much old people in the United States lose to scams each year because they are using technology they were never trained on, but assume they have to use in order to function, with minimal practical gain relative to the costs? It's around 12.5 billion dollars in 2024, up from 10 billion in 2023 [1]. Why is introducing someone to that risk worth it to watch a baseball game?

Telling individuals to 'get smart' doesn't actually solve the underlying harms, and if it were that simple, we would not be seeing the upward trend in fraud against the elderly that we are seeing.

[1] https://www.aarp.org/money/scams-fraud/older-adults-ftc-frau...

edit: fixed the years


The numbers you mention are total fraud losses. Most fraud has nothing to do with phones; it is fraudulent money transfers and card charges.

Where is the initial point of engagement when it comes to most scams targeting the elderly? It is via phones, email, and messaging services.

80 year old people do not have the same neuroplasticity as 20 year olds. It is not reasonable to expect them to quickly learn new things that are constantly changing.

In particular, it's very reasonable to be 80 and decide "I don't want to deal with learning how to use a smartphone and getting one".


> It is not reasonable to expect them to quickly learn new things that are constantly changing.

Of course it is. Maybe if we didn't normalize people refusing to learn things for no other reason than "I don't wanna" they'd have better neuroplasticity.

> it's very reasonable to be 80 and decide "I don't want to deal with learning how to use a smartphone and getting one".

I agree with you 100% on this, but it doesn't follow that you get to make the Dodgers' Will Call clerk print your ticket for every game, even though you've been told for multiple years that season tickets are going paperless as an anti-scalping measure.


Then it's reasonable to expect ticket sellers to use modern technology to implement zero-knowledge proofs, physical RFID tokens, or similar measures that prevent scalping.

The technology does exist, but it might take more effort than a lazy smartphone app, which probably isn't effective against scalping anyway. Can't a phone app / QR code be forged?
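For illustration, the anti-forgery idea behind rotating ticket barcodes can be sketched with a TOTP-style HMAC: the displayed code is a signature over the ticket id and the current time window, so a screenshot goes stale within seconds. This is a minimal sketch, not any vendor's actual scheme; the function names and 30-second window are assumptions:

```python
import hmac
import hashlib

WINDOW = 30  # seconds a displayed code stays valid (assumed, TOTP-style)

def ticket_code(secret: bytes, ticket_id: str, now: float) -> str:
    # Sign the ticket id together with the current time window, so a
    # copied/screenshotted code stops working once the window rolls over.
    window = int(now // WINDOW)
    msg = f"{ticket_id}:{window}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:16]

def verify(secret: bytes, ticket_id: str, code: str, now: float) -> bool:
    # Accept the current and the immediately previous window
    # to tolerate small clock skew between device and gate.
    for drift in (0, 1):
        window = int(now // WINDOW) - drift
        msg = f"{ticket_id}:{window}".encode()
        expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()[:16]
        if hmac.compare_digest(expected, code):
            return True
    return False
```

The secret stays with the issuer and the holder's device (or an RFID token, with no smartphone involved), so a scalper who copies the visible code only holds something that expires within the window.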


I'm going to be harsh, sorry.

In this case nobody is forcing them to buy a Dodgers ticket. It's a completely optional and absurdly expensive luxury good that is purely for leisure. They can simply not buy a ticket if they don't want to accept the conditions of sale.


Yeah... I mean, who says I should have to put in wheelchair ramps for my ballpark that seats tens of thousands? I mean, so few people use/need them, I should just be able to refuse service to those people. Right?

/sarc


I don't want to blow your mind but choosing not to have a smartphone and being in a wheelchair are not remotely comparable.

So, you want to force people to give money to specific, monopolistic corporations? How am I expected to use a smartphone when I am blind, exactly?

Because quality of life doesn't have value in and of itself, apparently. The elderly especially should be excluded from enjoying the end of their lives simply because no one wants to think of a solution to the problem that doesn't require them to introduce massive amounts of risk into their lives, which also negatively impacts their quality of life.

If you work in an industry that is based solely on customer delight, stories like these are what you are looking to avoid due to brand damage. It is going to cost more time/energy to deal with the backlash than just coming up with a simple solution in the first place.



The devil is in the details. For example, OAI does not have regional processing for AU [0], and their ZDR does not cover files [1]. Anthropic's ZDR [2] also does not cover files, so as a patient/consumer you really need to ensure that your health data, or other sensitive data, being processed by SaaS frontier models is not contained in files. That is asking a lot of medical providers, who won't know how their systems work, which is why I will never opt in.

[0] https://developers.openai.com/api/docs/guides/your-data#whic...

[1] https://developers.openai.com/api/docs/guides/your-data#stor...

[2] https://platform.claude.com/docs/en/build-with-claude/zero-d...


Azure OpenAI is not the same as paying OpenAI directly. While you may not be able to pay OpenAI for them to run models in Australia, you can pay Azure: https://azure.microsoft.com/en-au/pricing/details/azure-open...

The models are licensed to Microsoft, and you pay them for the inference.


There is no way to upload files as part of the context with Azure deployments; you have to use the OAI API [0]. Without an architecture diagram of the solution, I am not going to trust it, given the known native limitations of Azure's OAI implementation.

[0] https://github.com/openai/openai-python/issues/2300


Marketing is marketing; it was never about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing in the AI space as it stands now, not to mention the use of AGI within the space currently.


Sure, but there are plenty of cases where a deceptive name has been considered enough to at least warrant an investigation: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

I'm not saying anything is going to happen; Arm Holdings has a lot more money and lawyers than Long Blockchain did. I'm just saying that it's not weird to think a deceptive name could be considered false advertising.


That would not hold up, considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean they stand for the same thing. Marketing being cheeky is not a crime.


It's not "being cheeky". They know that the holy grail for AI is AGI. They know that people are going to see the acronym AGI and assume Artificial General Intelligence. They know that people aren't going to read the full article.

This isn't just a crass joke or a pun, it's outright deception. I'm not a lawyer, maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.


Of course they did it on purpose, but that's not illegal. They are not at fault for individuals not reading what the acronym stands for; the intent they place within the press release is very, very clear. They are not obligated or liable for others' lack of due diligence.


They may not be criminally liable but they are at fault for sure.

The AGI in "Arm AGI CPU" isn't an acronym and there is no coincidence.

Leaders in the email security space have been seeing this for a while now [0]; this is not new. The problem is that protecting consumer mailboxes outside of Gmail isn't cost-effective, since most people do not actually pay for their consumer mailbox and compromised accounts do not actually impact the providers. It is going to be interesting to see how this plays out in the consumer space as the complexity of the problem continues to grow while the technology used to stop it stays in the early 2010s.

[0] https://siliconangle.com/2023/12/19/new-report-warns-rise-ai...


With various websites planning to introduce micro-transactions to read their contents, maybe the end-users should start charging for email deliveries.

You want to send me an email? Please give me $1 first, and if I don't like your content I can, without notice, change that number to $50 per email.


I agree, and I think the answer is that what used to be free, and is now infected with all sorts of enshittification, will be paid-for to be useful.

I pay for email via Fastmail, don't really have a spam problem. I think this addresses your point above, that to have an effective spam filter takes money, and free email doesn't generate money.

I pay for search via Kagi, don't see all those crappy Google Ads and actually get useful search.

I can see the other services (socials, messaging) moving to a paid model to solve the same issues.


The problem is that the cat is already out of the bag on the technology. Anyone can go over to Huggingface, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that, or stop other organizations from releasing full open-weight/open-training-data models on permissive licenses, which give individuals the ability to modify those models as they see fit. Sam wishes he had control over that, but he doesn't, nor will he ever.

[0] https://huggingface.co/docs/transformers/index


I'm thinking mainly of whether they manage to get some kind of regulation that makes open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do with them what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close, I don't know for sure, and how long will Chinese companies keep giving out their open-source models? Lots of unknowns.


I know someone who just spent 10 days of GPU time on an RTX 3060 building a DSLM [0] that runs on sub-$500 consumer hardware and outperforms existing, VC-backed (including by Sam himself) frontier-model wrappers, producing 100% accurate work product, which those wrappers cannot. The fact that a two-man team in a backwater flyover town can do this speaks to how badly out of the bag the tech is. The money isn't going to be in building the biggest models possible with all of the data; it's going to be in building models that solve specific problems and run affordably within enterprise environments, trained on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] https://www.gartner.com/en/articles/domain-specific-language...

