
As always, Matt Levine has the best take on this:

https://www.bloomberg.com/opinion/newsletters/2026-02-24/ai-...

"I am sorry. But if you go to Jump Trading and Jane Street and say “hello, I have an unregulated poorly designed mechanism that could lead to $50 billion of market value collapsing overnight, would you like to trade with me,” they are going to say yes, but their eyes are going to light up, you know? If at Time 0 you give them an extremely gameable system that can produce billions of dollars of profit, at Time 10 your system is going to be a smoking wreckage and they are going to have billions of dollars of profit. That’s their whole job, you know? I couldn’t tell you in advance what all the intermediate steps will be, and in fact in hindsight I cannot tell you what the intermediate steps actually were, how Jump and Jane Street made money off the collapse of Terra. But as a heuristic, I mean, come on. Terra was like “hello we have a balloon full of money, here is a pin, dooooooon’t pop the balloon.” Guess what!"


I don't really follow this space, but is evaporating $50bn not a concern to anybody?

The $50bn is what you get when you multiply the number of coins times the price of one coin. This is a kind of fun number to think about but there are a lot of ways it doesn’t match up with reality.

Like, if I make a company with ten billion shares, then put shares up for sale at $5 apiece, and you buy one, my company would also have a $50bn valuation, by the same logic that gave Terra / Luna a $50bn valuation.
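The arithmetic behind that headline number can be sketched in a few lines (the function name and figures are just for illustration): multiplying total supply by the last traded price produces a huge valuation even if only a single $5 trade ever happened.

```python
# Toy illustration of the "market cap" arithmetic: supply times the most
# recent trade price yields a headline number, regardless of how little
# of the supply has actually traded at that price.

def market_cap(total_supply: int, last_trade_price: float) -> float:
    """Headline valuation: total supply times the last traded price."""
    return total_supply * last_trade_price

# One share sold at $5 out of ten billion issued:
cap = market_cap(10_000_000_000, 5.00)
print(f"${cap:,.0f}")  # a $50,000,000,000 "valuation" from a single $5 trade
```

The point, as the comment says, is that the $50bn was never money anyone could withdraw; it's just a multiplication.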


The reality is that the $50bn was already gone, the collapse just revealed it.

It's like with Madoff; the billions weren't lost when it collapsed, the billions were already gone (or never existed).


There was never a real $50bn to be evaporated. It's like saying that $1.2tn evaporated in the Bitcoin market drawdown since October - it doesn't mean value has been destroyed, it means the market's estimate of how much value existed was wrong.

It was enough of a concern to cause a worldwide manhunt for the guy responsible who is now in prison.

you might consider me an ass, but I think every dollar you put into crypto should be considered dead money. it is not protected by anything. you are gambling on nothing

How is US currency any different?

The government with a monopoly on violence and lawmaking uses it to pay for things and to collect taxes.

The concept of addiction seems to be quite diluted at this point. Does it really make sense to say that, because you're trying to make a product that people like, you're addicting them (intentionally or otherwise) to your product?

Food should not taste good? Books should not be entertaining? Don't try to make your video game fun, or some people may become addicted.


Good thing there are entire fields of medical experts working to understand the exact mechanisms and harm, and we're not leaving it up to you.

Not to mention how often we keep catching these companies with explicit policies to make people never want to leave the app.


> Good thing there are entire fields of medical experts working to understand the exact mechanisms and harm, and we're not leaving it up to you.

No, that doesn't work. Harm is a normative concept, not an empirical one, so there's no role for "expertise" to play in defining it. Medical experts can describe mechanisms of causality, and their associated effects, but deciding whether those effects constitute harm is something that actually is up to each individual to decide, since it is an inherently subjective evaluation.

> Not to mention how often we keep catching these companies with explicit policies to make people never want to leave the app.

Yes, and attesting one thing while doing another is certainly something they can be held accountable for -- perhaps even legally, in some cases. But this attempt at treating social media as equivalent to physically addictive chemicals is pure equivocation, and making claims like this actually undercuts the credibility of otherwise valid critiques of social media.

At the end of the day, this is a cultural issue, not a medical one, and needs to be solved via cultural norms, not via political intervention based on contrived pretenses.


Just to make sure I wasn't misunderstanding you, I double checked the meaning of "normative." This is the first result from google:

"establishing, relating to, or deriving from a standard or norm, especially of behavior."

And other sources have something similar. I'm interpreting your comment as saying "(psychological) harm is subjective, and because it can not be measured empirically, it's not possible to have expertise on this topic."

Fortunately, there are real world consequences that can be measured. If I take an action that makes many people say "ow!" and we acknowledge that expression as an indicator of pain, even though I can't measure the exact level of pain each person is experiencing, I can measure how many people are saying "ow!" I can measure the relationship between the intensity of my action, and the number of people that respond negatively. There's plenty of room for empiricism here, and a whole field of mathematics (statistics) that supports handling "normative" experiences. They even have a distribution for it!
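The measurement idea in the paragraph above can be sketched concretely (all data and names here are invented for illustration): we can't observe each person's subjective pain, but we can count negative responses at each intensity level and look at the trend.

```python
# Hypothetical dose-response tally: count how often subjects respond
# negatively ("ow!") at each intensity of an action, without ever
# measuring the subjective pain itself.

from statistics import mean

# (intensity, said_ow) observations -- invented data for illustration
observations = [
    (1, 0), (1, 0), (1, 1),
    (2, 0), (2, 1), (2, 1),
    (3, 1), (3, 1), (3, 1),
]

def response_rate(obs, intensity):
    """Fraction of subjects at a given intensity who responded negatively."""
    outcomes = [ow for i, ow in obs if i == intensity]
    return mean(outcomes)

for level in (1, 2, 3):
    print(level, response_rate(observations, level))
```

With enough observations, the rising rate across levels is exactly the kind of empirical relationship the comment describes, even though each individual's experience stays unmeasurable.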

The foundation of law is not scientific exactness or scientific empiricism. It is the mechanism by which a state establishes norms. A law against murder does not stop murder, but it does tell you that society does not appreciate it.


They are saying that judgements of what qualifies as harm are something like judgements of what is good, or what is right or wrong. That's not the same thing as evaluating whether something causes pain. You can measure whether something caused pain, sure. (Well, the sort of limitations you mentioned in measuring pain exist, but as you said, they are not a major issue.)

“Harm” isn’t the same thing as “pain”.

I would say that when I bite my finger to make a point, I experience pain, but this doesn’t cause me any suffering nor any harm. If something broke my arm, I claim that this is harm to me. While this (“if my arm were broken, that would be harm to me”) might seem like an obvious statement, and I do claim that it is a fact, not just an opinion, I think I agree that it is a normative claim. It is a claim about what counts as good or bad for me.

I don’t think normative claims (such as “It is immoral to murder someone.”) are empirical claims? (Though I do claim that they at least often have truth values.)


I'd go beyond that and even say that one might consider something harmful, but be willing to endure a certain level of harm in pursuit of something of higher value.

For example, I once asked a smoker why she smoked, and the response was "because I love it" -- when I asked if the enjoyment was worth the health risks, she said "yes; I never planned to live forever". She was making a conscious decision to seek short-term pleasure at the cost of potential longer-term damage to her health. At that point, there wasn't really anything remaining to debate about.


I didn’t mean to imply that the harmful effects of something can’t be worth it for the beneficial effects of that thing. Yeah, if someone is trapped, doing something that frees them and also breaks their arm, may well be an appropriate action for them to take.

> The foundation of law is not scientific exactness or scientific empiricism. It is the mechanism by which a state establishes norms.

Exactly. So it sounds like you're agreeing with me that qualification of a particular effect as "harm" is not a matter of "medical expertise", but is rather a question of subjective norms that is in fact on the opposite side of the is-ought gap from the side at which expertise is applicable.

> A law against murder does not stop murder, but it does tell you that society does not appreciate it.

Well, not exactly. This presumes that "society" in the abstract (a) actually has a general consensus on the question, and that (b) the rules imposed by the legal system reflect that broad consensus, rather than reflecting the values or intentions of the people administering the legal system, without necessarily aligning with those of the general public.

There are a lot of questions that do have broad consensus across society, but also a lot of subjective questions that different people answer very differently. And I think that the level of consensus that actually exists in terms of considering things causing physical injury or pain as "harm" is far, far greater than the level of consensus on treating anything that causes emotional stress as "harm".

I don't think that the "negative response" criteria that you're articulating is sufficient to reveal an underlying normative consensus: I would not presume that most people would equate harm with any kind of negative reaction. For example, I would personally not consider something harmful merely on account of being annoying, insulting, or even morally questionable (though there's often overlap in the last case).


I have to point out that your original post is technically correct because you specified "medical expertise" as the focus of your argument and psychologists aren't MDs. The field has some questionable aspects (and outcomes) to be sure, but I don't think it's completely without merit, and as a consequence, I feel the spirit of your argument is still wrong. You said:

> At the end of the day, this is a cultural issue, not a medical one, and needs to be solved via cultural norms, not via political intervention based on contrived pretenses

It is possible to consider people's subjective experiences in tandem with the consequences of those experiences and make an empirical judgement. The consequences can be quantified, even though the subjective experience itself can't.

If we found that people began committing suicide after using social media, would you suggest this can't be studied, and that a government wouldn't have good reason to want to legislate against social media in these circumstances?

This is really all I'm trying to get at. Replace suicide with depression, reduced quality of life, addiction. Whatever you like. If it holds in the suicide case, it holds in all of them.


> I have to point out that your original post is technically correct because you specified "medical expertise" as the focus of your argument and psychologists aren't MDs.

It's also correct because "harm" is a normative concept, which expertise per se doesn't apply to.

> It is possible to consider people's subjective experiences in tandem with the consequences of those experiences and make an empirical judgement.

Well, no, not really. First, you have to be aware of their subjective experiences, and not just speculating or projecting your own assumptions on to them, then you have to know what criteria to apply to the evaluation of the consequences of those experiences, which can only come from the particular values that they subscribe to, irrespective of your own. And "empirical judgment" is a dubious concept, since, again, judgment is inherently normative.

> If we found that people began committing suicide after using social media, would you suggest this can't be studied,

Anything can be studied, but the extent to which the conclusions of study can be validated for something like this is quite limited. First, you'd be studying something that is a drastic outlier -- only a tiny proportion of the population even attempts suicide for any reason at all.

Second, you're dealing with something with complex causality, much of which can't be directly observed or measured except by the subject themselves, so there's no way to eliminate confounding factors or construct control groups.

Finally, with so many ideological and pecuniary interests attached to a topic like this, it would be difficult to conduct such a study in an institutional setting without it being potentially skewed by bias, and the aforementioned difficulty in setting up controlled experiments would make it difficult for replication to factor out bias.

So I don't think I'd rely on formal studies for this sort of thing, especially when the motivation is to rationalize normative conclusions rather than understand the world as it is.

> and that a government wouldn't have good reason to want to legislate against social media in these circumstances?

No, I don't think that would be a sufficient reason. Even if it were happening, not everything is the government's responsibility, and not every social problem has a political solution.

> Replace suicide with depression, reduced quality of life, addiction. Whatever you like. If it holds in the suicide case, it holds in all of them.

I don't think it holds in any of them.


According to Wikipedia

> Addiction is ... a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences

Immediate psychological reward = dopamine hits from likes and shares

Harm and other negative consequences = anxiety, depression, low self-esteem, FOMO, less connection with friends and family, etc...

Food is not as easy to make addictive because the psychological reward diminishes as you get full. The exception to this is people with an eating disorder, who use eating as a way to cope with or avoid difficult feelings.


High sugar food is addictive as you don't feel full fast enough consuming empty calories.

And yet somewhere around that 6th donut it will hit and you will stop.

These companies all hired psychologists to help design systems that maximize dopamine release and introduce loops that drive compulsive behavior.

Besides, they aren’t making great products and haven’t for some time. Is anyone happy with Facebook as a product? Does anyone who used Instagram before it became a shittier TikTok / ultimate ad medium think it’s a better product today?


>These companies all hired psychologists to help design systems that maximize dopamine release and introduce loops that drive compulsive behavior.

This seems like the important bit: these systems weren't designed just for enjoyment. They hired experts in habit formation.

I talked to a friend recently about this and she described it as feeling hollow. When she stayed up all night playing a game she really liked, she enjoyed herself and might have had regrets about giving up some sleep, but didn't necessarily regret the time spent. She found it nourishing in some way. Similar to feeling compelled to keep reading a great book, or even eating an extra bit of a particularly great dessert.

But at the same time, she would describe staying up until 3-4am regularly scrolling TikTok and would just feel awful the next day. She didn't want to be up doing it, it wasn't actually really fun or enjoyable, but she just...did it anyway.

I'll also note that there are games that are designed for maximum addictiveness that probably also leave you feeling "hollow" in the way that TikTok does, too, so this isn't necessarily to say that games are universally different. But it's clear that there's a psychological mechanism that some companies use in their design that is intended to hijack, rather than just provide "fun" or entertainment.

I don't know what we do about that, or how/if it should be regulated in some way, but it's pretty clear that there is a real difference.


You can see how regulatory requirements drive corporate behaviors. Instagram and TikTok in particular behave much differently in Europe or Asia vs the US.

TikTok is very different. Instagram runs an algorithm that delivers consistently better content from my POV.


There are people with unhealthy relationships with both food and video games, and I'm comfortable saying they suffer from addiction.

So then do you punish the chefs for making their food too appealing?

If the monopolist chef is deliberately adding addictive ingredients that cause health problems, then yes, I think they're the ones to punish or address the problem with.

Facebook does not have a monopoly on social media. (He says, writing on a competing social media site.)

> addictive ingredients that cause health problems

Like sugar? Are we going to make candy illegal now? Through the court system, retroactively, with no legislative mandate?


We may require high-sugar food to be labeled like cigarettes, cap the maximum portion size available (largest drink at 500ml), put more tax on it, advertise against it, ban it in schools, and ban advertisements in children's programs/movies.

The law takes intent into consideration; candy makers are not intending to make someone addicted to their product. This lawsuit is showing that the intent behind certain user experience features was to addict users, not just make it a sweet and nice place to be.

You think candy companies aren't doing everything they can to get repeat customers?

There's no law saying social media has to be a "sweet and nice place to be", and that was never the goal. They want to make it an interesting place to be so you keep coming back, and there's no law against that. Trying to create one ex post facto via the court system is a really dumb idea.



Well, think of it this way. You could make a meal out of healthy, fresh, whole foods cooked expertly. Or you could give someone a bag of Doritos. Nobody on "My 600lb Life" got there because they were eating great food. They were eating a lot of bad food that doesn't fire satiety signals in their head.

Addictive and Good are not exactly the same thing -- something can be objectively good and not addictive, and vice versa.


this feels like a false equivalence and slippery slope fallacy.

Clearly things like cigarettes and hard drugs are bad and need very heavy regulations if not outright banned. There are lots of gray areas, for sure, but that doesn't mean we shouldn't take things on a case-by-case basis and impose reasonable restrictions on things that produce measurable harm.

Whether or not social media does produce that measurable harm is not my area of expertise, but that doesn't mean we can't study it and figure it out.


Oddly the countries that don’t do this have far better outcomes.

Imagine being allowed to have a beer outside, or after 2 am, oh the humanity. Surely such a society would devolve immediately into chaos.

What if the government wasn’t meant to be a strange parent that let you kill your kids but felt having a beer outside was too much freedom. It might just lead to being the happiest country on earth.


> Imagine being allowed to have a beer outside, or after 2 am, oh the humanity.

Where do you live that this is not possible?

(I know you’re speaking loosely, i.e. you mean “where I live, bars have to stop serving alcohol at 2 AM,” but it’s so loose that there’s no argument made here. I figured I’d touch on another aspect of it; other replies cover the rest. E.g., the 2 AM law isn’t about you, it’s about neighborhoods with bars.)


It’s illegal to drink in public in Washington state [1]. I believe this is the case in most places in the United States. Las Vegas is a notable exception.

[1]: https://app.leg.wa.gov/RCW/default.aspx?cite=66.44.100


Can't tell if you're being earnest or pedantic (if earnest, I grew up in a poorer neighborhood than HN so maybe I'm just more familiar with the solution. The Wire has a scene that'll explain it better than I: https://www.youtube.com/watch?v=GV9MamysCfQ)

I don’t see how either of those are relevant. The question is where you can legally drink in public. The answer is very few places in the United States. People break laws all the time.

Illinois sells liquor in grocery stores but not after 2am. Or maybe it was a local ordinance. The town next to me was 1am then you couldn’t buy liquor at the 24 hour grocer. So not just bars.

The other person said smoking and hard drugs, and you said a beer outside after 2am. Those aren't the same thing!

> Oddly the countries that don’t do this have far better outcomes

Go on



For example, smoking tobacco in Japan… wait a minute

> this feels like a false equivalence and slippery slope fallacy.

The slippery slope fallacy is purely a logical fallacy, meaning that it's fallacious to argue that any movement in one direction logically entails further movements in the same direction. Arguing that a slippery slope empirically exists -- i.e. that observable forces in the world are affecting things such that movement in one direction does manifestly make further movement in that direction more likely -- is absolutely not an instance of the slippery slope fallacy.

A concrete instance of the metaphor itself makes this clear: if you grease up an inclined plane, then an object dropped at the top of it will slide to the bottom. Similarly, if you put in place legal precedents and establish the enforcement apparatus for a novel state intervention, then you are making further interventions in that direction more likely. This is especially true in a political climate where factional interests that are pushing for more extreme forms of intervention manifestly are operating. Political slippery slopes are a very observable phenomenon, and it is not a fallacy to point them out.

> Whether or not social media does produce that measurable harm is not my area of expertise, but that doesn't mean we can't study it and figure it out.

It's true that the fact that it isn't your area of expertise doesn't mean we can't study it and figure it out.

Rather, the thing that does mean we can't study it and figure it out is that what constitutes "harm" is a normative question, not an empirical one, and the extent to which there is widespread consensus on that question is bounded -- the more distant we get from evaluating physical, quantifiable impacts, and the more we progress into the intangible and subjective, the less agreement there is.

And where there is agreement in modern American society, it tends in the opposite direction of what you're implying here: apart from very narrow categories, most people would not consider mere exposure to information or non-physical social interactions to be things that can inflict harm, at least not to a level sufficient to justify preemptive intervention.


okay, it's not a slippery slope, but it's something similar (that's why I said "feels like"). He's trying to establish a continuum of things that have a variety of addictive properties in an attempt to discredit the whole idea of addiction ("Don't try to make your video game fun, or some people may become addicted").

> apart from very narrow categories, most people would not consider mere exposure to information or non-physical social interactions to be things that can inflict harm

That's an extremely disingenuous interpretation of social media. Huge straw man. We're talking about infinite-scrolling A/B tested apps that are engineered to keep eyeballs on the screen at the first and foremost priority for the primary benefit of the company, not the user.


As far as I can tell, even in US, the most litigious nation in the world, you can't SUCCESSFULLY sue e.g. a cigarette maker or alcohol maker for making you addicted.

(I emphasize successfully because of course you can sue anyone for anything. The question is what lawsuits are winnable based on empirical data of what lawsuits were won).

If you could, that would be the end of those businesses. The addiction is beyond dispute, and if every alcoholic could win a lawsuit against a winemaker, there would be no winemakers left.

In that context it seems patently absurd that you could sue Facebook for making you addicted.

It would be absurd to create a law that makes it possible without first making such laws for alcohol and cigarettes.

It's also patently absurd that we (where "we" here is leftist politicians) are allowing open drug dealing in populated areas of San Francisco and yet this is what we discuss today and not politician's systemic failure to fix easily fixable problems for which we already have laws making them illegal.


Those companies are required to publicise the addictive nature of their products, and required to advertise services to aid those addicted.

Facebook consistently argues they are not a source of harm, and do none of that.

If the consumer isn't proactively being informed, then no, litigation isn't patently absurd.

"Informed consent" is what you're missing, here.


Since we're being condescending here: what you're missing is absence of laws making a given activity illegal.

As far as I know there's no law that you could use to claim that Facebook did something illegal based on some notion of making addictive products.

Just like there are no laws you could use to claim a winemaker did something illegal based on some notion of making addictive products.

And I think it would be absurd to make what Facebook does illegal before we make what winemaker does illegal.

And we tried with winemakers. Educate yourself on the dark times of Prohibition. (You opened the condescension doors.)

> Those companies are required to publicise the addictive nature of their products, and required to advertise services to aid those addicted

I've never seen such advertising so I suspect you pulled that factoid out of your ass. Easy for you to correct me: laws have numbers, cite one.

If there is such law for say, alcohol, I wouldn't be opposed to such requirements for Facebook.

I mean, it obviously would end up as ineffective annoyance that doesn't deter or fix anything, like cookie banners, but have at it.

So yeah, it's still patently absurd to sue Facebook claiming addiction.


> I've never seen such advertising so I suspect you pulled that factoid out of your ass. Easy for you to correct me: laws have numbers, cite one.

More than one, but how about we have the FDA do the informing here, as I've apparently pissed you off:

https://www.fda.gov/tobacco-products/labeling-and-warning-st...


> okay it's not a slippery slope, but it's something similar (that's why I said "feels like"). He's trying to establish a continuum of things that have a variety of addictive properties in an attempt to discredit the whole idea of addiction ("Don't try to make your video game fun, or some people may become addicted")..

But he actually is correct. Using the same term to describe the effects of ingesting biologically active chemicals and the effects of emotionally engaging activity -- which in this case mostly consists of exposure to information -- absolutely is disingenuous equivocation. People in this very thread are comparing Instagram with ingestion of alcohol or tobacco products, and that absolutely is a prevarication.

It's not unreasonable to observe the course of these debates, and suspect that the people invoking the language of addiction are doing so as a pretext for treating what is actually a cultural issue instead as a medical one, so as to falsely appeal to empirical certainty to answer questions that actually demand normative debate.


Diluted only if one doesn’t know the definition of addiction

Food? Some products sold as food are most certainly addictive.

Video games? As just one example, Candy Crush is a vacuous waste of anyone's time and money, with plenty of tales of addiction.

Books? People used to think novels were addictive and bad news: https://archive.is/WDDCH


> Does it really make sense to say that, because you're trying to make a product that people like, that this means you're addicting them (intentionally or otherwise) to your product?

That's not what these companies did though. Their goal has been maximizing engagement and stickiness. Not enjoyment or usefulness. A company operating in good faith delivering a valuable product that serves the consumer should not be lumped in with Meta et al who have been shown on multiple occasions to be abusing psychological techniques to the benefit of their wallets and to the detriment of their users' mental health.


But the intent is to make as much as money as possible with zero care for the users well being.

I worked at Tinder, for example, and you would think that company, in an ethical world, would be thinking about how to make dating better, how to get people more matches while spending less time on the app. Nope, we literally had projects called "Whale," and the focus was selling absolutely useless and even harmful features that generated money.


I can't speak for others' definition of addiction but Facebook has been pretty bad about artificially inflating users' activities. Outright fake notifications, even spamming people's 2FA phone numbers

I am addicted to Hacker News. Who can I sue?

Indeed. As a wise man once said:

"Who is to say what's right these days, what with all our modern ideas and products?"


So I think two things:

1. It's ok to want certain outcomes as a society. Like maybe this is a little conservative or whatever, but we can't just like stand by and be like, well everyone's dumb, no one's having sex, people are dying, healthcare costs are spiking, there goes our economy. Like I wish we would legalize smoking again, but I understand why we don't.

2. I think one could make an argument that over-optimization is immoral. This Paula Deen video really made me sort of understand the excess that leads to the obesity epidemic. She takes what used to be a dessert, wraps it in like three other desserts, fries it, and then that's now one dessert with twice the calories:

https://www.youtube.com/watch?v=HYbpWcw6MfA

But like, companies are trying to architect food to fit more fat and sugar in. Instagram doesn't go to people and ask them what they want, they study behavioral psychology to get people to use their products more. At some point, letting giant multinational corporations do whatever they want to hack people's brains is a kind of nihilism and absence of free choice that you're trying to avoid.

Monopolies are bad. Overoptimization is bad. It should be ok for us as a culture to reject micro-transactions. It's ok for us to have a shared morality, even if that means Epic Games makes a little less money on Fortnite.

I think one measure should be: how much do people wish they did a thing less?

https://fortune.com/well/article/nearly-half-of-gen-zers-wis...

I used to watch like 6 hours of TV a day. Loved every minute of it. Same thing with video games. Same thing with my favorite restaurant, don't feel the same way about smoking or like the M&Ms I buy in the checkout aisle of the grocery store.


AI is already taking over content generation


Eventually created and consumed by AI, fewer and fewer humans will consume it.


I will consume less of it, and have actively blocked or unsubscribed from orgs that promote it, but the generation behind us won't have these scruples.


This is a bit thin to be drawing any conclusions. Only what one person claims. Has this happened to anyone else? Might there be another reason (that they're not telling us) that this happened?


Sucks that we can't trust our own Government to tell us what is going on/what they are doing. WTF happened to the USA being an open society?



My son, an 18-year-old college student with no legal issues ever (except a speeding ticket), had his Global Entry revoked last year, for no apparent reason. We filed an appeal and are waiting for a response. From everything I've read on it, it seems it could take upwards of 18 months to get a response. But per the article you linked, it seems that less than half are able to get it reinstated. So I'm not really hopeful.


In these modern times of ours, the word literally has taken on a new meaning, which is "not literally but with emphasis." This seems like the most likely explanation.


Even if that's the intended meaning of literally, it is still a reckless exaggeration. I'm pretty sure that Stephenson's endings are no more abrupt than some of Shakespeare's (check out Hamlet and Macbeth) or some of Frank Herbert's (see Dune and Children of Dune), and I never hear anyone go out of their way to describe either of them as being unable to write endings.


Everything from Stephenson after Anathem is an unremitting slog. He needs an editor who won't back down from telling him he needs to cut a third of his pages.


Reamde and Fall are quite readable. But what does this have to do with endings?


Reamde was snappy but Fall went on forever.


> some of Frank Herbert's (see Dune and Children of Dune),

I mean, Dune does in fact end mid-story, which is probably worse.


No, no it doesn't. Are you talking about the recent movies that split the first novel into two movies? The novel Dune ends after Paul defeats his enemies and becomes emperor.


The Dune series has six novels. The final one is Chapterhouse: Dune, which does in fact end mid-story.

I know this because I read them in the 90s and didn't realise for quite some time after reading Chapterhouse that Frank Herbert was dead.


I know that, I've read them too. But in the OP, and in this thread, we're discussing endings to novels. No one is complaining about a series that isn't finished due to the author's death.


Hence my comment, "which is probably worse".


I interpret the sense of "literally" here in the opposite way, i.e. without it the sentence may be taken to mean that the books metaphorically stop mid-sentence, but with it, they're saying that it's non-metaphorical and they really do. It would be bizarre wording otherwise.


These modern times that literally began in 1769. Oxford English Dictionary, “literally (adv.), sense I.1.c,” June 2025, https://doi.org/10.1093/OED/9189024563.


The use of the word "literally" as an intensifier started in the 1700s, and people have been complaining about it since at least 1909.

https://en.wikipedia.org/wiki/Literally#As_an_intensifier


Hard to believe this when it's such a cut-and-dried claim about text. What would exaggeration even mean in that context?


“Literally” is commonly used as emphasis, but not as hyperbole. So it’s still a misleading misrepresentation just the same.


literally


That doesn't work as well, since you want targets with crypto wallets you can steal. People applying to a blockchain company are far more likely to have one.


It's likely to work. It's the same dudes.

Scroll back through any AI evangelist's twitter (if they are still on Twitter, and they are) and it is better odds than a coin toss that you find they were an evangelist for either NFTs or crypto.

I mean the CEO of OpenAI is also the CEO of a shitcoin-for-your-iris-scans company, for one.

(Prosaically: these things are usually spear-phishing of some kind anyway, are they not?)


Amnesty has long since squandered any credibility they once had


Why?


I second that. Why?


At least in my country, Amnesty only acts against left-wing politicians and governments. You can literally compare similar cases, or even worse ones, where they intentionally issue no judgment, report, or anything whatsoever.


That's weird, I somehow have the opposite experience. I'm from Europe, and Amnesty here usually agitates for foreign political prisoners or against local injustice. Some of the more crooked politicians of course hate them, for obvious reasons, but overall they are absolutely a good actor.

I'm just wondering, would you mind sharing which part of the world you're from, for some context?


Here's Nvidia's CPUs, which are increasingly a required part of their data center offerings:

https://www.nvidia.com/en-us/data-center/grace-cpu/


Required in that Nvidia would like to sell them to you. But customers seem to be hesitant and prefer x86-based DGX and similar systems. At least from what I've heard and seen.


Big tech has generally not loved this because they know that adding friction like id checks massively reduces attach rates. This is watered down enough that it's likely seen as a lesser evil.


A new account on Facebook, Instagram or Google/YouTube will usually instantly get restricted and triggers either ID or Phone verification anyways.


And yet, if you tried it, you might find this often not as true as you might think.

