Hacker News | amluto's comments

> On x86 a spinlock release doesn't need a memory barrier (unless you do insane things) / lock prefix, but a futex based lock does (because you otherwise may not realize you need to futex wake).

Now you've gotten me wondering. This issue is, in some sense, artificial: the actual conceptual futex unlock operation does not require sequential consistency. What's needed is (roughly, anyway) a release operation that synchronizes with whoever subsequently acquires the lock (on x86, any non-WC store is sufficient) along with a promise that the kernel will get notified eventually (and preferably fairly quickly) if there was a non-spinning sleeper. But there is no requirement that the notification occur in any particular order wrt anything else except that the unlock must be visible by the time the notification occurs [0]; there isn't even a requirement that the notification not occur if there is no futex waiter.

I think that, in common cache coherence protocols, this is kind of straightforward -- the unlock is a store-release, and as long as the cache line ends up being written locally, the hardware or ucode or whatever simply [1] needs to check whether a needs-notification flag is set in the same cacheline. Alternatively, the futex-wait operation needs to do a super-heavyweight barrier to synchronize with the releasing thread, even though the releasing thread does not otherwise have any barrier that would do the job.

One nasty approach that might work is to use something like membarrier, but I'm guessing that membarrier is so outrageously expensive that this would be a huge performance loss.

But maybe there are sneaky tricks. I'm wondering whether CMPXCHG (no LOCK prefix) is secretly good enough for this. Imagine a lock word where bit 0 set means locked and bit 1 set means that there is a waiter. The wait operation observes (via plain MOV?) that bit 0 is set, then sets bit 1 (let's say this is done with LOCK CMPXCHG for simplicity), and then calls futex_wait(), so it thinks the lock word has the value 3. The unlock operation does plain CMPXCHG to release the lock. The failure case would be that it reports success, changing the value from 1 to 0, even though a waiter concurrently set bit 1, losing the wakeup. I don't know whether this can happen on Intel or AMD architectures.

I do expect that it would be nearly impossible to convince an x86 CPU vendor to commit to an answer either way.

(Do other architectures, e.g. the most recent ARM variants, have an RMW release operation that naturally does this? I've tried, and entirely failed AFAICT, to convince x86 HW designers to add lighter weight atomics.)

[0] Visible to the remote thread, but the kernel can easily mediate this, effectively for free.

[1] Famous last words. At least in ossified microarchitectures, nothing is simple.


> > On x86 a spinlock release doesn't need a memory barrier (unless you do insane things) / lock prefix, but a futex based lock does (because you otherwise may not realize you need to futex wake).

> Now you've gotten me wondering. This issue is, in some sense, artificial: the actual conceptual futex unlock operation does not require sequential consistency. What's needed is (roughly, anyway) a release operation that synchronizes with whoever subsequently acquires the lock (on x86, any non-WC store is sufficient) along with a promise that the kernel will get notified eventually (and preferably fairly quickly) if there was a non-spinning sleeper. But there is no requirement that the notification occur in any particular order wrt anything else except that the unlock must be visible by the time the notification occurs [0]; there isn't even a requirement that the notification not occur if there is no futex waiter.

Hah.

> ... > But maybe there are sneaky tricks. I'm wondering whether CMPXCHG (no LOCK prefix) is secretly good enough for this. Imagine a lock word where bit 0 set means locked and bit 1 set means that there is a waiter. The wait operation observes (via plain MOV?) that bit 0 is set, then sets bit 1 (let's say this is done with LOCK CMPXCHG for simplicity), and then calls futex_wait(), so it thinks the lock word has the value 3. The unlock operation does plain CMPXCHG to release the lock. The failure case would be that it reports success, changing the value from 1 to 0, even though a waiter concurrently set bit 1, losing the wakeup. I don't know whether this can happen on Intel or AMD architectures.

I suspect the problem isn't so much the lock prefix, but that the non-futex spinlock release is just a store, whereas a futex release has to be an RMW operation.

I'm talking out of my ass here, but my guess is that the reason for the performance gain of the plain-store-is-a-spinlock-release on x86 comes from being able to do the release via the store buffer, without having to wait for exclusive ownership of the cache line. Because a simple spinlock is somewhat contended, and often embedded in the same line as the to-be-protected data, it's common for the line not to be in modified ownership anymore at release.


> I suspect the problem isn't so much the lock prefix, but that the non-futex spinlock release is just a store, whereas a futex release has to be an RMW operation.

> I'm talking out of my ass here, but my guess is that the reason for the performance gain of the plain-store-is-a-spinlock-release on x86 comes from being able to do the release via the store buffer, without having to wait for exclusive ownership of the cache line.

I don’t think so. The CPU is pretty good about hiding that kind of latency — reading a contended cache line and doing a correctly predicted branch shouldn’t stall anything after it.

But LOCK and MFENCE are quite expensive.


It might be easier for DC transmission components to be standardized. Sure, anything with complex controls has a lot more opportunity to fail to interoperate, but DC gear can often be configured for different voltage ratios and can much more directly control how much current flows where.

Maybe the grid needs a multi-source agreement for equipment like the network industry has for optics.


It would be very strange for the government to sue someone for redistributing videos on the basis that they had been taken from an illegally flown drone.

Well, it's a good thing we aren't living under a very strange, corrupt, and incompetent government.

(/s if it wasn't obvious, and anyone who needed that should try changing the channel every once in a while)


Do check that your heater isn’t doing something ridiculous. A while back I helped someone debug a Mitsubishi Electric system on which the installer had set the fan speed control to high instead of auto (it’s an easily accessible setting on the thermostat). I forget exactly how much power was saved, but IIRC it was well over 30kWh/day.

I don’t know where all that energy was going. I expected some improvement but not anywhere near that much.


> Starlink seems to just print money.

Does it? Those satellites are individually dirt cheap compared to historical communication satellites, but Starlink requires a whole lot of them and they depreciate outrageously quickly.

Compare to my personal favorite communication medium, single-mode fiber. SMF from 20-30 years ago still works, is compatible with most current-generation wavelengths, and can carry extremely high bandwidth per strand if users are willing to put fancy optics and muxes at the ends, or lower speeds at transceiver prices that would have been almost unimaginably low 20 years ago.

Starlink satellites seem to have zero or even slightly negative value after five years.


Getting fiber to a house is relatively expensive, especially for houses in the more rural areas that are Starlink's main market. A Starlink satellite costs a lot more but can serve many customers.

Let's say a Starlink satellite costs $2 million all-in. (They launch about 25 at a time, the launch costs something like $25 million, add in another million for the satellite itself and operations.) They have about 10,000 satellites in orbit currently, and about 10 million customers. That's about 1,000 customers per satellite, so a five-year cost of $2,000 per customer. That's a fair bit less than it costs to run fiber to a rural house. And Starlink is pretty much a monopoly in their main markets (terrestrial telecoms is usually at least a duopoly) so they can charge more. I pay $85/month for symmetric gigabit fiber. Starlink charges $80/month for 200Mbps, or $120/month for "max." On top of that, they can charge enormous amounts for commercial users like airliners and cruise ships.

According to https://www.reuters.com/business/finance/spacex-generated-ab..., Starlink revenue last year was north of $8 billion. They'd need to launch 2,000 satellites per year to maintain the current fleet. If $2 million is an accurate price tag for them, then that's $4 billion/year. Pretty nice profit, and there's a lot of room for growth.
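The arithmetic above can be checked mechanically (all input figures are the comment's own rough assumptions, not verified data):

```go
package main

import "fmt"

func main() {
	// Back-of-envelope figures from the comment above; all are assumptions.
	const (
		launchCost    = 25_000_000.0 // per launch of ~25 satellites
		satsPerLaunch = 25.0
		satCost       = 1_000_000.0 // satellite build + operations
		fleet         = 10_000.0
		customers     = 10_000_000.0
		lifetimeYears = 5.0
		annualRevenue = 8_000_000_000.0 // per the Reuters figure
	)

	allIn := launchCost/satsPerLaunch + satCost // all-in cost per satellite
	perCustomer := allIn * fleet / customers    // five-year cost per customer
	replacement := fleet / lifetimeYears * allIn // annual fleet replacement

	fmt.Printf("$%.0f per satellite, $%.0f per customer over five years\n",
		allIn, perCustomer)
	fmt.Printf("$%.1fB/year replacement vs $%.1fB/year revenue\n",
		replacement/1e9, annualRevenue/1e9)
}
```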


This seems generally correct, but there are some things to note.

Once fiber is installed, it’s not particularly expensive to maintain, indefinitely. That $2k/customer needs to be paid again every five years, whereas for fiber it’s much closer to being a one-time cost. (To be fair, fiber still depreciates and gets damaged.) And fiber is not that expensive to install: Starlink clearly wins for truly rural areas, but for merely low-density suburban areas it’s not nearly so clear.

Starlink’s performance is not awesome compared to high-quality DOCSIS or fiber deployments, so they will struggle in areas that are well served by the latter, which cover quite a lot of the population by ability to pay, at least in developed markets. So there’s a limited total addressable market issue.

Of course, Starlink may have other valuable applications, especially military.


I don't think they're competitive anywhere a halfway decent terrestrial option is available. But there are enough places where those aren't available to get them 10 million customers and growing, which is enough.

If I were them, my big concern would be getting overtaken by the buildout of cellular connectivity. A good 5G connection could be competitive. But if their direct-to-cell stuff works out, we might see the opposite: rural cellular infrastructure stops being built out or even diminishes because it's cheaper to provide coverage by satellite.


One thing I find rather amazing about all of this is the degree to which the Bitcoin community has tried, for years, to claim that quantum computers will be anything other than a complete break.

Sure, it takes a pretty nice quantum computer or a pretty good algorithm or a degree of malice on the part of miners to break pay-to-script-hash if your wallet has the right properties, but that seems like a pretty weak excuse for the fact that the entire scheme is broken, completely, by QC.

Does there even exist a credible post-quantum protocol that could be used to “rescue” P2SH wallets?


The best proposal I have heard for rescuing P2SH wallets after cryptographically relevant quantum computers exist is to require vulnerable wallets to precommit to transactions a day ahead of time. The precommitment doesn't reveal the public key. When the public key must be exposed as part of the actual transaction, an attacker cannot redirect the transaction for at least one day because they don't have a valid precommitment to point to yet.

That’s kind of adorable. Would you need to pay to record a commitment? If so, how? If not, what stops someone from DoSing the whole scheme?

I don't think you're understanding how cryptography works. A commitment is basically a hash that is both binding and hiding. In this example it's probably easiest to think of it as a hash. So you hash your post-quantum public key (something like Falcon-512), then sign that hash with your actual bitcoin private key (ECDSA, discrete-log, not quantum safe), and then publish that message to the bitcoin network. Then quantum happens at some point and bitcoin needs to migrate, but where do the funds go? Well, you reveal the post-quantum public key and then you can prove that funds from the ECDSA key should go there. From a technical perspective, this is a complete and foolproof system. DoSing isn't really a concern if you publish to the actual bitcoin network, and it's impossible for someone to use up the key space (2^108 combinations at least).

The reason this is a dumb idea is coordination and timing. When does the cutover happen? Who decides which transactions no longer count because they were "broken" by quantum computing? The idea is broken, but not from technical fundamentals.


The DoS attack in this scenario is someone just submitting reasonable-looking but ultimately bad precommitments as fast as possible. The intuition is that precommitments must be hard to validate because, if there was an easy validation mechanism, you would have just used that mechanism as the transaction mechanism. And so all these junk random precommitments look potentially legitimate and end up being stored for later verification. So all you have to do to take down the system is fill up the available storage with junk, which (given the size of bot networks and the cost of storing something for a day) seems very doable.

If the question is storage, bitcoin itself provides a perfectly good mechanism. I don't know the exact costs, but it'd be in the range of ~$0.45 to store a commitment. That's cheap enough for legitimate users with small numbers of keys, but also expensive enough to prevent spam. It's kind of the whole point of blockchains.

As for verification being expensive, it sounds like you don't know the actual costs. It's basically a hash. Finding the pre-image of a hash is very expensive, to the point of being impossible. Verifying that a pre-image run through the hash function equals a given hash is extremely cheap. That's the whole point of one-way functions. Bitcoin itself is at ~1000 EH/s (exahashes per second).

Again, this isn't a technical problem. It's a coordination problem.


Commitment validation would indeed be trivial.

This whole scheme fails if an attacker can manage to delay a transaction for a day, and if the commitment also commits to a particular transaction fee, then the user trying to rescue their funds can’t easily offer a larger transaction fee if their transaction is delayed. But if the commitment does not commit to a transaction fee, then an attacker could force the transaction fee to increase arbitrarily.

Maybe the right strategy would be to commit, separately, to multiple different choices of transaction fee.


Yes, that would be a concern. You could require a proof of work to submit a precommitment, so that DoSing was at least expensive to do. You could have some sort of deposit mechanism, where a precommitment would lock down 0.1 bitcoins (from a quantum-secure wallet) until the precommitment was used. I admit I'm glad I don't have to figure out those details.

24-hour latency to make a payment? What is this, the 20th century?

This is for rescue, not for payment. Once you've moved the coins to quantum-secure wallet, the delay would no longer be needed.

...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.


> ...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.

I don't know why anyone f's around with crypto anymore. So many caveats, such a scammy ecosystem. It just doesn't seem worth the trouble to support a ransomware and money laundering tool.


> the Bitcoin community has tried, for years, to claim that quantum computers will be anything other than a complete break.

Who specifically is claiming this? Satoshi literally mentioned the need to upgrade if QC is viable on bitcointalk in 2010.


Call me crazy, but I think if bitcoin is ever broken they're more likely to move to a centralized ledger than a more secure decentralized ledger. Roughly nobody invested in bitcoin cares about the original mission, they just care about their asset prices.

And the asset prices (at least partially) depend on true believers in the mission.

On the bright side, at least we'll have a clear indicator for when quantum computers actually arrive.

If Bitcoin is broken then your bank encryption and everything else is broken also.

As far as I know, quantum computers still can't even honestly factor 7x3=21, so you are good. And even the 5x3=15 demonstration is iffy as to how honest it was.

https://news.ycombinator.com/item?id=45082587

Bitcoin uses 256-bit encryption, it's a universe away from 5x3=15.


You are assuming that progress on factoring will be smooth, but this is unlikely to be true. The scaling challenges of quantum computers are very front-loaded. I know this sounds crazy, but there is a sense in which the step from 15 to 21 is larger than the step from 21 to 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 (the RSA100 challenge number).

Consider the neutral atom proposal from TFA. They say they need tens of thousands of qubits to attack 256 bit keys. Existing machines have demonstrated six thousand atom qubits [1]. Since the size is ~halfway there, why haven't the existing machines broken 128 bit keys yet? Basically: because they need to improve gate fidelity and do system integration to combine together various pieces that have so far only been demonstrated separately and solve some other problems. These dense block codes have minimum sizes and minimum qubit qualities you must satisfy in order for the code to function. In that kind of situation, gradual improvement can take you surprisingly suddenly from "the dense code isn't working yet so I can't factor 21" to "the dense code is working great now, so I can factor RSA100". Probably things won't play out quite like that... but if your job is to be prepared for quantum attacks then you really need to worry about those kinds of scenarios.

[1]: https://www.nature.com/articles/s41586-025-09641-4


1) yes, everything is affected, but everything else is being migrated to PQC as we speak

2) "256-bit encryption" has different meanings in different contexts. "256-bit security" generally refers to cryptosystem for which an attack takes roughly 2^256 operations. this is true for AES-256 (symmetric encryption) assuming classical adversaries. this is not true for elliptic curve-based algorithms even though the standard curves are "256-bit curves", but that refers to the size of the group and consequently to the size of the private key. the best general attacks use Pollard's rho algorithm which takes roughly 2^128 operations, i.e., 256-bit curves have 128-bit security.

in the context of quantum attackers, AES-256 is still fine although theoretically QCs halve the security; however it's not that big of a deal in practice and ultimately AES-128 is still fine, because doing 2^64 "quantum operations" is presumed to be difficult to do in practice due to parallelization issues etc.

the elliptic curve signatures (used in Bitcoin) are attacked using Shor's algorithm where the big deal is that it is asymptotically polynomial (about O(n^3)) meaning that factoring a 256-bit number is only 256^3/4^3 = 262144x more difficult compared to factoring 15. this is a big difference from "standard" exponential complexity where the difficulty increases exponentially by factors of 2^n. (+ let's ignore that elliptic curve signatures don't rely on factoring but the problem is essentially the same because Shor does both because those are hidden subgroup problems)

the analysis is more complex but most of it is essentially in that paper and explains it nicely.


Your bank doesn’t depend only on cryptography. It would still take a lot of effort to simply make a transfer from a bank account. A quantum computer will not magically produce the password behind a hash you don’t have. And TLS is moving to post-quantum as we speak.

For cryptocurrency, you have all the data you need to break the whole system ready in your hands, since you will be able to derive private keys from the public keys of wallets. Cryptocurrency depends only on cryptography.


In Bitcoin's case, public keys are only revealed during a transaction.

And every transaction completely spends the source keypairs' funds.

So the only attack vector a quantum computer could use is:

1. Observing newly broadcast/unconfirmed transactions

2. Deriving the private key(s) from the public key(s)

3. Creating and broadcasting its own transaction using the stolen keypairs before the original transaction confirms (presumably with a higher fee to win the confirmation race).

Please correct me if I'm wrong.

EDIT: correction: every transaction completely spends any selected UTXO of an associated keypair, not all of the "source keypairs' funds". Thus the attack vector also includes being able to steal from any keypair that has ever made a transaction and also has UTXOs.


The newest transaction mechanism (taproot; P2TR) exposes the public key of the receiver as part of the transaction. If it becomes more commonly used, the supply of bitcoins with exposed public keys would start going up again. See figure 5 of https://arxiv.org/pdf/2603.28846#page=14 .

So everything basically.

If your bank’s encryption is broken in the future, then, to recover, you will need to change your password, and that’s all. Bitcoin does not have that luxury.

Also, your bank can switch to securing TLS with post-quantum key exchange algorithms with little difficulty and with no particular scalability or re-architecting challenges.

As for “256-bit”, the best known quantum attack against symmetric ciphers is Grover’s algorithm, and Grover’s algorithm will never break a targeted 256-bit symmetric key in the lifetime of the universe even if run by a hypothetical alien civilization with a Dyson sphere. (It might plausibly break one of many targeted keys in a multi-key attack run by advanced aliens, but this won’t steal your money and it could be easily mitigated by moving to 384 or 512 bits.)
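Rough numbers behind that claim (a sketch; the operation rate is a deliberately generous made-up assumption):

```go
package main

import "fmt"

func main() {
	// Grover needs on the order of 2^128 iterations against a 256-bit key,
	// and the iterations are inherently sequential within one search.
	const groverIterations = 3.4e38 // ~2^128

	// Assume an absurdly generous 10^12 quantum operations per second
	// (pure assumption; real machines are many orders of magnitude slower).
	const opsPerSecond = 1e12
	const secondsPerYear = 3.15e7

	years := groverIterations / opsPerSecond / secondsPerYear
	fmt.Printf("~%.1e years, vs ~1.4e10 years since the Big Bang\n", years)
}
```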


All serious financial businesses already have a quantum strategy and are actively working on transitioning their cryptography to post-quantum secure algorithms.

Bitcoin doesn't use 256 bit encryption, unless you mean 256-bit hashing. The cryptographic algorithms that are mostly under quantum threat are asymmetric, e.g. digital signatures.


> All serious financial businesses already have a quantum strategy and are actively working on transitioning their cryptography to post-quantum secure algorithms.

That’s hilarious and it’s not even April 1 anymore.

Lots of serious financial businesses still use FTP or use SFTP running some unbelievably bad server implementation on a Windows machine somewhere that uses such outdated cryptography that it doesn’t even interoperate with modern OpenSSH. Operations do not necessarily score highly on the ACID scale. It’s tied together with duct tape and baling wire.

On the other hand, the system works and is really remarkably resilient to various failure modes. You would be hard pressed to cause more than severe annoyance by compromising these crappy old systems.


I work in the sector, and I'm responsible in my own organisation for our quantum strategy. Most, if not all, of the serious players are doing this. This doesn't mean we have replaced all our old crypto; far from it.

NIST has defined a timeline for post-quantum readiness to be complete by 2035. Crypto migrations historically take a long time; you can't just replace your own stuff or upgrade a single server. All the clients that interact have to upgrade as well, or it all breaks.


> If Bitcoin is broken then your bank encryption and everything else is broken also.

It's a lot easier for your bank to change encryption methods than it is for bitcoin. Presumably you mean TLS here (where else do banks use encryption? Disk encryption?). People are already deploying experiments with quantum-proof TLS.

> As far as I know quantum computers still can't even honestly factor 7x3=21, so you are good. And the 5x3=15 is iffy about how honest that was either.

This is probably the wrong way to look at it. Once you start multiplying numbers together (for real, using error corrected qubits), you are already like 85% there. Like if this was a marathon, the multiplying thing is like a km from the finish line. By the time you start seeing people there the race would already be mostly over.


I still don’t really get the argument, like okay this extremely rich theoretical attacker can obtain the private key for the cert my service uses, and somehow they’re able to sniff my traffic and could then somehow extract creds. But that doesn’t give them my 2fa which is needed to book each transaction, and as soon as these attacks are in the wild anti fraud/surveillance systems will be in much harder mode.

I don’t see QC coming as meaning bank accounts will be emptied.

disclaimer: I work at a bank on such systems


My bank definitely doesn't require 2FA on every transaction. It only requires it to log in. I guess other people have more security-conscious banks than I do.

Even still, I think there is some benefit to attackers in being able to passively monitor connections: getting the information necessary to conduct some other type of fraud outside the system. Lots of frauds live or die on knowing enough about the victim's financial situation.

However it really doesn't matter, when it happens we will just switch to different encryption.


It’s turning into a bit of a grift now. So many crypto-agility “consultants” popping up with their slop graphics. Never mind the fact that even if a relevant quantum computer is built, it will still cost the user millions of dollars to break each RSA key pair…

I don't necessarily think it would cost millions per key pair. Hard to say with the technology so immature, but it seems like the sort of thing with huge upfront costs and low marginal costs. Once you have a QC, you don't have to build a new one for the next key pair.

The problem here is the word "will". Because they don't exist.

No, it’s completely wrong. It’s a very minor refinement of a terrible yet sadly common design that merely mitigates one specific way that the terrible design can fail.

See my other comment here. By the time you call the OP’s proposed verify API you have already screwed up as a precondition of calling the API.


You’re right, but I think the commenter you’re replying to is also right.

The OP is using unreadable hex strings in a way that obscures what’s actually going on. If you turn those strings into functionally equivalent text, then the signatures are computed over:

    (serialized object, “This is a TreeRoot”)
and the verifier calls the API:

    func Verify(key Key, sig []byte, obj VerifiableObjecter) error
(I assume they meant Object not Objector.)

This API is wrong, full stop. Do not use this design. Sure, it might catch one specific screwup, but it will not catch subtler errors like confusing a TreeRoot that the signer trusts with a TreeRoot that means something else entirely. And it requires canonical encodings, which serves no purpose here. And it forces the verifier to deserialize unverified data, which is a big mistake.

The right solution is to have the sender sign a message, where:

(a) At the time of verification, the message is just bytes, and

(b) The message is structured such that it contains all the information needed to interpret it correctly.

So the message might be a serialization of a union where one element is “I trust this TreeRoot” and another is “I revoke this key”, etc. and the verification API verifies bytes.

If you want to get fancy and make domain separation and forward-and-backward-compatibility easier, then build a mini deserializer into the verifier that deserializes tuples of bytes, or at most UUIDs or similar. So you could sign (UUID indicating protocol v1 message type Foo, serialization of a Foo). And you make that explicit to the caller. And the verifier (a) takes bytes as input and (b) does not even try to parse them into a tuple until after verifying the signature.

P.S. Any protocol that uses the OP’s design must be quite tortured. How exactly is there a sensible protocol where you receive a message, read enough of it to figure out what type (in the protobuf sense) it contains such that there is more than one possible choice, then verify the data of that type? Are they expecting that you have a message containing a oneof and you sign only the oneof instead of the entire message? Why?


I think in practice it doesn't work to deserialize only verified data. Snowpack has a mechanism for this but I found it impractical to require all use cases fit this form.

I'm not sure exactly what system you're describing, but I hope it doesn't involve hand-marshaling and unmarshaling of data structures. Your requirement (b) seems at odds with forward-compatibility. Lack of forward-compatibility complicates upgrades, especially in federated systems when you cannot expect all nodes to upgrade at once.

I might be biased, but it's been possible to write a sufficiently complex system (https://github.com/foks-proj/go-foks) without feeling "tortured." It's actually quite the opposite, the cleanest system I've programmed in for these use cases, and I've tried many over the last 25 years. Never am I guessing how to verify an object, I'm not sure how that follows.

I also think it's worth noting that the same mechanism works for MAC'ing, Encryption and prefixed hashing. Just today I came across this IDL code:

  struct ChunkNoncePayload @0xadba174b7e8dcc08 {
    id @0 : lib.FileID;
    offset @1 : lib.Offset;
    final @2 : Bool;
  }
And the following Go code for making a nonce for encrypting a chunk of a large file:

  pld := lcl.ChunkNoncePayload{
    Id:     fid,
    Offset: offset,
    Final:  isFinal,
  }
  hsh, err := core.PrefixedHash(&pld)
  if err != nil {
    return nil, err
  }
  var ret [24]byte
  copy(ret[:], (*hsh)[0:24])
  return &ret, nil
As in signing, it's nice to know this nonce won't conflict with some other nonce for a different data type, and the domain separation in the IDL guarantees that.

> Re: I'd be astonished if 1000s of exit strategies weren't deep planned, maybe a dozen best-outcomes planned, before a single plane bombed anything. The US knows how to exit this.

> Isn't this just wishfull thinking?

The administration could have asked their favorite LLM to plan 1000 exit strategies, kind of like how, if you asked an LLM to make up a reciprocal tariff formula, you would have gotten approximately the administration’s formula.

None of this means that the results are at all useful.


I find the whole field of radiology to be utterly baffling. There are doctors who specialize in, and hopefully understand, specific diseases and/or parts of the body. But we have radiologists who are supposed to be able to look at images, taken by quite a variety of technologies and parameters, of any part of the body, and are expected to accurately interpret the findings, possibly without any relevant context.

In my personal experience interacting with the medical system, it’s, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at and see something quite different. And it’s nearly always the case that a specialist, or a reasonably careful non-specialist who is willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.

So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.

[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.


Irrelevant to them. A radiologist is on the hook for missing a tiny possible tumor in a scan for a blood clot.

They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.


I'd take it further/slightly parallel direction. Medicine is at the same time a science and a weird "feel and experience" area.

On the one hand it's a science: controlled experiments, calculated dosages, all based on an understanding of low level biology, fancy imaging methods, measuring currents in people's bodies and so on.

On the other hand, there seems to be plenty of "he seems fine to me", "tests came back fine but something seems off to me so let's try another test", "doesn't seem to be responding to this drug, let's try the other one", "in my experience this drug works better than that one". It seems like a pretty big chunk of subjectivity is actually a part of the field.


> On the one hand it's a science: controlled experiments

Those experiments are so hilariously expensive these days, and the results are often not actually fully published, so good data is often unavailable.

> calculated dosages

Often calculated based, in large part, on researchers’ vibes and their vibes when designing experiments.

> all based on an understanding of low level biology

There are many, many drugs with partially or even almost fully unknown mechanisms.


Radiologists work best in consultation with the physicians ordering the studies. Sadly, this is less and less common as workloads increase in medicine. When I started 20 years ago there were whole teams that came through the radiology department every morning to review all of the cases on their patients. Now I go weeks without seeing another physician.
