Against the Storm (an excellent roguelite city-builder) does this in a really cool way. Pausing is a core mechanic of the game, and you frequently pause while placing buildings and the like - and all the visual animations stop (fire, rain, trees swaying, etc).
But when you find a broken ancient seal in the forest, the giant creepy eyeball moving around in it keeps moving even when you pause the game, which helps emphasise how other-worldly it is.
> The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money.
The "boots" item feels less true now, because expensive doesn't seem to be as correlated with "good quality" as it used to be. But the general statement still very much stands.
Things like financial products that charge higher interest rates to poorer people, or services that offer discounts for paying annually rather than monthly, are great examples of this. So are less direct things, like being able to drive to cheaper shops and buy in bulk, or being able to do preventative maintenance so a cheap fix doesn't turn into an expensive one.
It can still apply to individual items, as long as you're careful about what you buy and do your research to make sure you're actually buying high quality boots, and not just cheap ones with an expensive logo on the side.
It's also just broadly true about whole categories - home ownership, for example. Most poor people rent, which means having a place to live costs them money, but they get no asset in return for that money; they just have to keep paying forever.
Utilities too. In my country, people who aren't trusted to pay for electricity, gas, or even water (which you need to live!) in arrears have to pay for it up front. Say I use 500 kWh of electricity at an agreed 20p per kWh: at the end of the month I get a bill for £100 and settle it a few days later (if I don't, I eventually get angry letters and then a court summons). That's electricity I used two weeks ago, and I won't even pay for it until May. But if I were poor, I might find my best option is pre-paying £10 for 40 kWh of electricity. At that rate the same 500 kWh would cost £125, I'd have to buy it before using it, and if at any time I forgot or couldn't pay, the lights would go off the moment my credit ran out.
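The poverty premium works out like this (a quick sketch; the 25p/kWh prepay rate is implied by £10 buying 40 kWh):

```javascript
// Comparing the two tariffs described above (rates taken from the figures given).
const usageKwh = 500;

// Credit meter: billed in arrears at an agreed rate, paid weeks after use.
const creditRate = 0.20; // £/kWh
const creditCost = usageKwh * creditRate; // £100

// Prepayment meter: £10 buys 40 kWh, i.e. 25p/kWh, paid *before* use.
const prepayRate = 10 / 40; // £/kWh
const prepayCost = usageKwh * prepayRate; // £125

console.log(creditCost, prepayCost); // 100 125
```

So the prepay customer pays 25% more for the same electricity, and loses supply instantly when the credit runs out.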
>Things like financial products that charge higher interest rates to poorer people, or services that offer discounts for paying annually rather than monthly are great examples of this.
Exchanges of future cash flows aren't comparable to a one-time exchange of goods or services, because of the risk of default.
> And less direct things, like being able to drive to cheaper shops and buy in bulk, or being able to do preventative maintenance to avoid a cheap fix turning into an expensive one.
This is a good example, but the best one I can think of is having sufficient cash flow to purchase a home in a higher socioeconomic neighborhood. If you have kids, you're effectively paying almost nothing for a higher-quality education, since a lot of it comes back to you in the form of equity and your child's increased chances of financial stability.
Expensive doesn't guarantee high quality, but very cheap almost always means low quality. A £200 pair of boots might be great and last for a decade, or might be overpriced and fall apart after six months. But a £5 pair are definitely going to be crap.
Which is why it makes sense to buy the £5 shoes 40 times, as long as they last at least 3 months each. For running shoes, I just get the Costco ones for $20 to $30 and toss them after 6 to 12 months.
But then you're constantly either breaking in a new pair, or dealing with a pair that's falling apart, and you're lucky to get 1 month of good comfortable wear out of those cheap shoes. And you have to go buy them every three months, and shoe models change constantly so you have to find the current cheap pair that actually fits you.
I am lucky I have a wide range that I find comfortable, because the $30 Costco shoes and the $180 On Clouds are all the same to me. I also don't buy them every 3 months, maybe 6 months at most frequent. Last time was probably almost a year ago, and I got 2 pairs, one to keep nice so they're presentable, and the other for literally anything else, and they look terrible, but still aren't coming apart.
Plus it gives the ransomware gangs a whole new angle they can use.
So, remember how you illegally paid us a ransom a few months ago? Unless you want to go to prison, you'd better...
We're already seeing this against companies who pay ransoms and fail to report the breaches when they're legally required to - but it would be much worse if it's against individuals who are criminally liable.
It's one of those ideas that sounds nice in theory, but doesn't survive contact with the real world. In the same way that many people would say that you shouldn't negotiate with terrorists or kidnappers; but if it's their loved one who's being held and tortured they'll very quickly change their mind.
Getting to a world where no one pays ransoms and the ransomware groups give up and go away would be the ideal, and we'd all love to get there. But outlawing ransom payments basically means sacrificing everyone who gets ransomwared in the meantime, for the greater good, until we reach that state.
And when companies get hit, they'll try hard to find ways around the ban, because the alternative may well be shutting down the business. But if something like a hospital gets hit, are governments really going to be able to stand behind a "you can't pay a ransom" policy when that could directly lead to deaths?
Financial penalties won't solve the problem for companies, because they're hard to enforce. You'd be weighing up the cost of dealing with the fallout of getting hacked against the cost of paying the ransom plus the chance that you get caught and fined. If the former cost is existential for the business, then it'd always be worth paying and taking the risk.
The only real way around that would be personal consequences for the owners/directors of the company - "get caught paying a ransom and the whole board goes to jail" would certainly discourage people. It would also provide a wonderful opportunity for blackmail when people did.
Not to mention all the problems of fining public sector organisations, and how counter-productive that usually is.
Right, make the penalty for paying a ransom catastrophic. Very few employees will risk a criminal conviction and years in federal prison just to protect their employer.
It's all fun and games until it's your livelihood at stake, and then it makes a lot more sense to acquiesce, lick your wounds, and keep your business alive.
Getting hacked is no fun, but companies don't deserve to die because something in their tech stack was vulnerable.
I respectfully disagree - I do agree that the natural financial death of a company probably shouldn't result in bailouts, but if I as a company get breached because my fully-updated, follows-best-practices Windows Domain got hacked because of a vulnerability in Microsoft's stuff? That's hardly fair.
Shouldn't I be able to sue Microsoft for financial relief?
That is an acceptable outcome. Life isn't fair. Companies fail all the time for a variety of unfair reasons. This will force customers to demand that Microsoft and other software vendors improve their own security practices and/or indemnify customers for damages from breaches. You can sue Microsoft for financial relief if they breach your contract.
I've also seen roads that have these kind of signs, but they only apply during busy hours.
However, as with any traffic controls, they're useless if they're not actually enforced. Which is a shame, because it'd be absolutely trivial to automate that detection with cameras.
Let dashcam footage be used as evidence of traffic violations, and behold how quickly drivers themselves will send every such piece of footage to the police.
This kind of thing always makes me nervous, because you end up with a mix of methods where you can (supposedly) pass arbitrary user input to them and they'll safely handle it, and methods where you can't do that without introducing vulnerabilities - but it's not at all clear which is which from the names. Ideally you design that in from the start, so any dangerous functions are very clearly dangerous from the name. But you can't easily do that down the line.
I'm also rather sceptical of things that "sanitise" HTML, both because there's a long history of them having holes, and because it's not immediately clear what that means, and what exactly is considered "safe".
You are right that the concept of "safe" is nebulous, but the goal here is specifically to be XSS-safe [1]. Elements or properties that could allow scripts to execute are removed. This functionality lives in the user agent and prevents adding unsafe elements to the DOM itself, so it should be easier to get correct than a string-to-string sanitizer. The logic of "is the element currently being added to the DOM a <script>" is fundamentally easier to get right than "does this HTML string include a script tag".
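A browser-only sketch of the difference (the Sanitizer API's setHTML() only exists in browsers that implement it, so this won't run elsewhere; the exact serialized output may vary by implementation):

```javascript
// Runs only in a browser with the Sanitizer API.
const el = document.createElement("div");
const untrusted = '<em>hi</em><img src=x onerror="alert(1)">';

// el.innerHTML = untrusted;  // parses everything; the img's onerror would fire

// setHTML() strips script-capable elements and attributes as nodes are
// added to the DOM, so only the inert markup survives.
el.setHTML(untrusted);
console.log(el.innerHTML); // roughly: <em>hi</em><img src="x"> - onerror removed
```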
It's certainly an improvement over people trying to homebrew their own sanitisers. But that distinction of being XSS-safe is a potentially subtle one, and could end up being dangerous if people don't carefully consider whether XSS-safe is good enough when they're handling arbitrary user input like that.
It's also made me nervous for years that there's been no schema against which one can validate HTML. "You want to validate? Paste your URL into the online validation tool."
But for html snippets you can pretty much just check that tags follow a couple simple rules between <> and that they're closed or not closed correctly.
> it's not at all clear which is which from the names. Ideally you design that in from the [start]
It was, and there is: setting elementNode.textContent is safe for untrusted input, and setting elementNode.innerHTML is unsafe for untrusted input. The former will escape everything, and the latter won't escape anything.
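What textContent effectively does can be sketched as entity escaping (a hand-rolled illustration of the idea only - use textContent itself rather than a function like this):

```javascript
// Illustration: treating a string "like text" amounts to escaping
// HTML's special characters so the parser can't see markup in it.
function escapeHTML(s) {
  return s
    .replace(/&/g, "&amp;") // must come first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

const untrusted = '<img src=x onerror="alert(1)">';
console.log(escapeHTML(untrusted));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

That's why textContent is safe regardless of input: nothing in the string can ever be interpreted as an element or attribute.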
You are right that these "sanitizers" are fundamentally confused:
> "HTML sanitization" is never going to be solved because it's not solvable.
> There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement.
The Web platform folks who are responsible for getting fundamental APIs standardized and implemented natively are in a position to know better, and they should know better. This API should not have made it past proposal stage and should not have been added to browsers.
> There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement.
It is not a hard requirement that untrusted input is "treated like text". And this API lets you customize exactly what tags/attributes are allowed in the untrusted input. That's way better than telling everyone to write their own; it's not trivial.
> It is not a hard requirement that untrusted input is "treated like text".
It's also not a hard requirement that I defend the position that there's a hard requirement for untrusted input to be treated like text. That isn't my position, and it's not what I wrote.
Given that it is not a hard requirement that untrusted input be treated like text, it wouldn't make sense for anyone to claim that it is - and therefore it doesn't make sense for someone, presented with what I did write, to strenuously argue with me that such a tortured, implausible, uncharitable, nonsensical interpretation of what I wrote was something that I have to account for (versus the interpretation that does match what I wrote and is actually true and makes sense).
You are, willfully or not, misconstruing what I have written.
> That's way better than telling everyone to write their own; it's not trivial.
Right, it's not trivial. It's so far the opposite of trivial that it's (as I said the first time—and again, just now) not solvable.
No one should be writing their own.
No one should be trying to write their own.
No one should be using this API at all.
And no one should have pushed for its implementation.
Briefly, though: if you have an untrusted string, then you need to either treat it like text or sanitize it. I don't see any other options.
So if people shouldn't use this sanitizer or write their own, then the only option left is treating the string as text. But you're vehemently arguing that's not what you said.
What's the other way to use an untrusted string? Other than "don't", but that means not taking input and only works for toy apps.
I don't see how I differed from what you said? You divided strings going into HTML into two categories, where one category uses textContent and the other category uses innerHTML. My point is to disagree with those categories, not whatever subtle thing you're taking issue with.
I'd say I'm interested in hearing how you reason that knowing whether you need to pay at least $1000 in unpaid taxes to the IRS doesn't put you in one bucket or another, but I'm not.
The IRS thing indirectly has categories but it doesn't say what to do with them, and what to do with them is what I disagreed with your original post on. I didn't say all input is untrusted or whatever analogizes to your tax thing.
Anyway, I see you edited your previous post after I wrote my reply.
If you weren't trying to divide things into two categories, you wrote it very confusingly. When you say how to handle trusted strings, then say how to handle untrusted strings, then say "There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement." it really sounds like that's supposed to cover all cases.
Me thinking you were using two categories is an honest mistake, not malicious misquoting.
And reading your original post that way is the interpretation that makes it stronger. If there are more categories then SetHTML is no longer "fundamentally confused". Your argument against it falls apart.
Guess how interested I am in pretending that a debate with you—about this or anything else—is worthwhile (or anything, really, other than an even bigger waste of time than it already has been).
The idea is you wouldn't mix innerHTML and setHTML, you would eliminate all usage of innerHTML and use the new setHTMLUnsafe if you needed the old functionality.
Right. Like how any potential reader is familiar with the risks of sql injection which is why nothing has ever been hacked that way.
Or how any potential driver is familiar with seat belts which is why everybody wears them and nobody’s been thrown from a car since they were invented.
So we shouldn’t mark anything as unsafe then? And give no indication whatsoever?
The issue isn’t that the word “safe” doesn’t appear in safe variants, it’s that “unsafe” makes your intentions clear: “I know this is unsafe, but it’s fine because of X and Y”.
If you want to adopt this in your project, you can add a linter that explicitly bans innerHTML (and then go fix the issues it finds). Obviously Mozilla cannot magically fix the code of every website on the web but the tools exist for _your_ website.
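A minimal sketch of such a lint rule, using ESLint's built-in no-restricted-properties rule in a flat config (the message text is mine; adjust to your project's setup):

```javascript
// eslint.config.js - bans innerHTML on any object.
// Note: no-restricted-properties flags reads as well as assignments,
// which may be stricter than you want.
export default [
  {
    rules: {
      "no-restricted-properties": [
        "error",
        {
          property: "innerHTML",
          message:
            "innerHTML is unsafe with untrusted input; use textContent, or setHTML() for sanitized markup.",
        },
      ],
    },
  },
];
```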
I kinda like the way JS evolved into a modern language, where essentially ~everyone uses a linter that e.g. prevents the use of `var`. Sure, it's technically still in the language, but it's almost never used anymore.
(Assuming transpilers have stopped outputting it, which I'm not confident about.)
Depending on the transpiler and mode of operation, `var` is sometimes emitted.
For example, esbuild will emit var when targeting ESM, for performance and minification reasons. Because ESM has its own inherent scope barrier, this is fine, but it won't apply the same optimizations when targeting (e.g.) IIFE, because it's not fine in that context.
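The scoping difference that makes `var` worth linting away (and that ESM's module scope papers over), in miniature:

```javascript
// var is function/module-scoped: the loop counter leaks out of the loop.
for (var i = 0; i < 3; i++) {}
console.log(i); // 3

// let is block-scoped: j simply doesn't exist after the loop.
for (let j = 0; j < 3; j++) {}
console.log(typeof j); // "undefined"
```

Inside an ESM module (or an IIFE), `var` can't leak into global scope, which is why a minifier can safely emit it there but not in other output formats.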
Yeah, using a kilowatt GPU for string replacement is going to be the killer feature. I probably shouldn't even be joking - people are using it like this already.
This one is literally matching "innerHTML = X" and emitting "setHTML(X)" instead. Not some complex data format transformation.
But I can see what you mean; even then, it would still be better for it to print the code that does what you want (uses a few Wh) than to do the actual transformation itself (prone to mistakes and injection attacks, and costs however many tokens your input data is).
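A toy version of that transformation as plain code, no GPU required (regex-based for illustration; a real codemod should work on an AST, since regexes will mangle edge cases like multi-line assignments):

```javascript
// Rewrite "x.innerHTML = expr;" into "x.setHTML(expr);" in a source string.
function rewriteInnerHTML(src) {
  return src.replace(/\.innerHTML\s*=\s*(.+);/g, ".setHTML($1);");
}

console.log(rewriteInnerHTML("el.innerHTML = userComment;"));
// el.setHTML(userComment);
```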
My experience is that they somehow produce quite modern code, despite things like ES6 being too new to be standard knowledge even for me - and I'm not even middle-aged yet.
Maybe the last 10 years saw so much more code written than the previous 40+ years of coding combined, so modern code is statistically more likely to be output? Or maybe they assign higher weights to more recent commits/sources during training? Not sure, but they seem to be good at picking this up. And you can always feed the info into the context window until then.
This is not my experience. Claude has been happily generating code over the past week that is full of implicit any and using code that's been deprecated for at least 2 years.
>> Maybe the last 10 years saw so much more modern code than the last cumulative 40+ years of coding and so modern code is statistically more likely to be output?
The rate of change has made defining "modern" even more difficult and the timeframe brief, plus all that new code is based on old code, so it's more like a leaning tower than some sort of solid foundation.
Exactly. I learned JS before 2015 (it was my first language, picked up during what is probably called middle school in English). I haven't had to relearn it from scratch, so I need to go out of my way to find out whether there's a better way to do something I can already do fine - it's not automatic knowledge. Yet the LLM seems to have no trouble with it, so I'm pointing out that they don't seem to have problems upgrading. The grandparent comment suggested one would need to be trained anew to use this new method. Given how much old (non-ES6) JS there is, they apparently pick up ES6 quite easily, so any update that includes some amount of this new code will probably do just fine.
Theoretically, if you could train your own and remove all references to the deprecated code from the training data, it wouldn't be able to emit deprecated code. Realistically, that ability is out of reach at the hobbyist level, so it will have to remain theoretical for at least a few more iterations of Moore's law.
Ideally you should be able to set a global property somewhere (as a web developer) that disallows outdated APIs like `innerHTML`, with the big caveat that your website will not work on browsers older than X. But maybe there are web standards for that already - fallback content if a browser is considered outdated.
It's not an "outdated API". It's still good for what it was always meant for: parsing trusted, application-generated markup and atomically inserting it into the content tree as a replacement for a given element's existing children.
> set a global property somewhere (as a web developer) that disallows[…] `innerHTML`
> I'm also rather sceptical of things that "sanitise" HTML, both because there's a long history of them having holes, and because it's not immediately clear what that means, and what exactly is considered "safe".
What is safe depends on where the sanitized HTML is going, on what you're doing with it.
It isn't possible to "sanitize HTML" after collecting it so that, when you use it in the future, it will be safe. "Safe" is defined by the use.
But it is possible to sanitize it before using it, when you know what the use will be.
Edit: I don't mean this flippantly. If I want to render, say, my blog entry on your site, will I need to select every markup element from a dropdown list of custom elements that only accept text a la Wordpress?
Yeah, someone telling me something has been made "safe" is nice, but unless I know exactly what that means... It's easy to say "safe" when you're not the one who has to deal with the bad corner case when it happens.
Oh and it’s safe… in this browser… not that one, so this idea of safety is kinda dead to me for now.
That's why I only allow user input of alphanumeric ascii characters. No need to worry about sanitation then, and you can just remove all the characters that don't match.
(It's a joke, but it is also 100% XSS, SQL injection, etc. safe and future proof)
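The joke as a one-liner (a throwaway sketch; it really is XSS- and SQL-injection-proof, and it really does destroy legitimate input):

```javascript
// Keep only ASCII letters, digits, and spaces; drop everything else.
const stripToAlnum = (s) => s.replace(/[^A-Za-z0-9 ]/g, "");

console.log(stripToAlnum('<script>alert("hi")</script>'));
// scriptalerthiscript

console.log(stripToAlnum("Robert'); DROP TABLE Students;--"));
// Robert DROP TABLE Students
```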
BTW, HTML allows inline SVG with an XML-flavored syntax that interprets <script/> and <title> differently. It's a goldmine for sanitizer escapes. There are completely bonkers syntax switching and error recovery rules that interact with parsing modes (there's even an edge case where a particular attribute value switches between HTML and XML-ish parsing rules).
Don't even try to allow inline <svg> from untrusted sources! (and then you still must sanitise any svg files you host)
It may be using some of the same deserialization machinery, but "parsing" is a broad term that includes things that the sanitizer is doing and that the browser's ordinary content-processing → rendering path does not.
Even with this being a native API, there are still two parsers that need to be maintained. What a native API achieves is to shift the onus for maintaining synchronicity between the two onto the browser makers. That's not nothing, but it's also not the sort of free lunch that some people naively believe it is.
If that'd been the design from the start, then sure. But it's not at all obvious that setHTML is safe with arbitrary user input (for a given value of "safe") and innerHTML is dangerous.
At this point that API has been around for decades and is probably impossible to deprecate without breaking fairly large amounts of the web. The only option is to introduce a new and better API, and maybe eventually have the browser throw out console warnings if a page still uses the old innerHTML API. I doubt any browser vendor will be gung ho enough to actually remove it for a very long time.
Ah, yes, British constitutional law. In a country where no parliament can bind its successors, there is no constitution, and constitutional law is a polite fiction held together poorly by tradition and precedent.
All systems have weaknesses, but the utter criminal farce the US system has been revealed to be yields a situation where zero Americans should be gloating about their constitution or values any more.
Oh look, Trump just declared a new, 10% global tariff because lol laws. Congress is busted. There are essentially zero real laws for the plutocrat class.
That was the fastest Supreme Court ruling in UK history though...
Similarly in the US, United States v. Nixon (the Watergate tapes case) took only 16 days from argument to decision, and Bush v. Gore (the contested election) took about a month from election day to a Supreme Court judgement.
It takes a long time for something to get through all the appeals. Getting an injunction to put a stop to something during the appeals doesn't take that long.
The problem in this case is that Congress made such a mess of the law that the lower court judges didn't think the outcome obvious enough to grant the injunction.
As pointed out in other comments this process is entirely by choice of the court. In other cases where they just felt like ruling on something they have put things on their emergency docket and ruled on them immediately. Letting this situation ride for a year was a choice by the court.
Not doing something you could have done is frequently less of a choice and more of a lack of bandwidth to simultaneously consider everything which is happening at the same time. The vast majority of cases don't make it onto the emergency docket.
Many reasonable people would argue this was significant / enough of an emergency to justify devoting that bandwidth, even by the standards of the Supreme Court.
> The problem in this case is that Congress made such a mess of the law that the lower court judges didn't think the outcome obvious enough to grant the injunction.
"On Wednesday, the U.S. Court of International Trade dealt an early blow to that strategy. The bipartisan panel of judges, one of whom had been appointed by Mr. Trump, ruled that the law did not grant the president “unbounded authority” to impose tariffs on nearly every country, as Mr. Trump had sought. As a result, the president’s tariffs were declared illegal, and the court ordered a halt to their collection within the next 10 days."
"Just before she spoke, a federal judge in a separate case ordered another, temporary halt to many of Mr. Trump’s tariffs, ruling in favor of an educational toy company in Illinois, whose lawyers told the court it was harmed by Mr. Trump’s actions."
The appellate court decides whether to stay the injunction based on how likely they think you are to win more than which docket they think the Supreme Court is going to use. Cases going on the emergency docket are not common.
> If multiple appeals courts thought this case was a winner for the administration, we have an even bigger problem.
Do we? The law here was a mess. Prediction markets didn't have the outcome at anything like a certainty and the relevant stocks are up on the decision, implying it wasn't already priced in -- and both of those are with the benefit of the transcripts once the case was already at the Supreme Court to feel out how the Justices were leaning, which the intermediary appellate court wouldn't have had at the time.
> Sure. But some of them look clearly destined for it.
It's not a thing anyone should be banking on in any case. And if that was actually their expectation then they could just as easily have not stayed the injunction and just let the Supreme Court do it if they were inclined to.
That wouldn't explain the prediction markets thinking the administration had a double digit chance of winning. The sure things go 99:1.
> Hindsight is, as always, 20/20.
It's not a matter of knowing which docket would be used. Why stay the injunction at all if you think the Supreme Court is going to immediately reverse you?
"Though he normally aligns with Thomas and Alito, Gorsuch may be more likely to vote against Trump’s tariffs than Kavanaugh is, according to Prelogar. “It might actually be the chief, Barrett and Gorsuch who are in play,” she said."
"During the argument, several Justices expressed skepticism about the IEEPA expanding the President’s powers to encompass the ability to set tariffs."
This was the widespread conclusion back then; that the justices were clearly skeptical and that the government was struggling to figure out an effective argument.
They did not remove the injunctions. They stayed them.
Again, a stay does not necessarily mean “we think this is a winning case”. It can mean “the potential damage from this exceeds a threshold”. In fact, the appeals court affirmed the underlying ruling striking down the tariffs.
> The U.S. Court of Appeals for the Federal Circuit, in a 7‑4 decision on Aug. 29, 2025, struck down President Donald Trump's use of the International Emergency Economic Powers Act (IEEPA or the Act) to impose sweeping tariffs on nearly all imported goods from nearly all U.S. trading partners.
Although the Federal Circuit, in V.O.S. Selections, Inc. v. Trump, affirmed the U.S. Court of International Trade's (CIT) merits judgment, it nevertheless vacated the universal injunction issued by the CIT and remanded the case for further relief proceedings. The appellate court also stayed its decision until Oct. 14, 2025, allowing time for the government to appeal to the U.S. Supreme Court.
"On Friday, March 14, 2025, Trump signed presidential proclamation 10903, invoking the Alien Enemies Act and asserting that Tren de Aragua, a criminal organization from Venezuela, had invaded the United States. The White House did not announce that the proclamation had been signed until the afternoon of the next day."
"Very early on Saturday, March 15, the American Civil Liberties Union (ACLU) and Democracy Forward filed a class action suit in the District Court for the District of Columbia on behalf of five Venezuelan men held in immigration detention… The suit was assigned to judge James Boasberg. That morning, noting the exigent circumstances, he approved a temporary restraining order for the five plaintiffs, and he ordered a 5 p.m. hearing to determine whether he would certify the class in the class action."
"On March 28, 2025, the Trump administration filed an emergency appeal with the US Supreme Court, asking it to vacate Boasberg's temporary restraining orders and to immediately allow the administration to resume deportations under the Alien Enemies Act while it considered the request to vacate. On April 7, in a per curiam decision, the court vacated Boasberg's orders…"
TL;DR: Trump signs executive order on March 14. Judge puts it on hold on March 15. Admin appeals on March 28. SCOTUS intervenes by April 7.
That's a distinction entirely invented by the court, and under their control.
The emergency docket is whatever they want to treat as an emergency. The decision not to treat this as such - it's hard to imagine many clearer examples of "immediate irreparable harm" - was clearly partisan.