
I don't get the bad angle. Remote kill-switches would be very useful for corporate devices.

A problem would arise if Intel could also surreptitiously do firmware changes via this mechanism. On the positive side, however, companies could push updates automagically.

As long as customers know what they're buying and can disable the feature, it's a great addition for people or companies that need extra security.



It's definitely there, though it would be mitigated if the buyer can disable it (but how would you know disabling worked until it didn't?):

If they can kill it, others can too. Be they script kiddies, bored geniuses, or businesses or governments that would like to see your computer die.

The trick is that if there is such a desirable off button, it will be discovered, reverse engineered, or leaked. There's no "if", only "when".


"If they can kill it, others can too."

Cryptographic science is mature enough to provide a robust solution to that problem.


It's mature enough that it's known there cannot be a "robust" system like you seem to be hoping for.

At some point down the chain, you trust someone else with your computer's off switch. If they give it away/sell it/have it stolen, a stronger system merely means you're more assuredly screwed because there's less they can do to prevent it from working as advertised.

And this is all aside from cryptographic weaknesses. Sure, there are strong / robust systems, but at some point the ultimate authority lies somewhere, somehow, and it can be taken. Even something like Bitcoin, a nigh-authority-free system, is vulnerable to this: if malicious miners control a majority of the network's hashing power, they can convince everyone that their transaction history is the correct one. Or a virus could do their work for them.
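That majority attack falls out of the consensus rule itself. A rough sketch of the "most cumulative work wins" rule (the names and structure here are illustrative, not Bitcoin's actual implementation):

```python
# Sketch of the longest-chain (most cumulative work) rule, illustrating why a
# majority of hash power can rewrite history. Hypothetical structures, not
# Bitcoin's real data model.

def best_chain(chains):
    """Nodes adopt whichever valid chain carries the most total work."""
    return max(chains, key=lambda chain: sum(block["work"] for block in chain))

# The honest network mined 3 blocks...
honest = [{"id": "a", "work": 1}, {"id": "b", "work": 1}, {"id": "c", "work": 1}]
# ...but an attacker with majority hash power secretly mined a longer fork.
attacker = [{"id": "a", "work": 1}, {"id": "x", "work": 1},
            {"id": "y", "work": 1}, {"id": "z", "work": 1}]

assert best_chain([honest, attacker]) is attacker  # the attacker's history wins
```

With more hash power than everyone else combined, the attacker's fork eventually accumulates more work than the honest chain, so every node switches to it.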


Yeah, not really.

1) you have to assume that the implementation is correct, so that only somebody with the key can perform the remote kill, and ensure that the attacker can't reset the key. If you get this even slightly wrong, somebody will figure it out. How confident are you that this will be implemented without error? Would you bet all your company's hardware on it?

2) If the attacker gets your key, you're screwed. And not just "somebody is signing shit with your key without your approval, time to revoke the key" kind of screwed, but rather "time to toss all your hardware into the dumpster" kind of screwed.
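Both points can be sketched in a few lines. This assumes a symmetric key baked into the hardware (a real design would presumably use asymmetric signatures; every name here is hypothetical):

```python
# Minimal sketch of an authenticated "remote kill" command with replay
# protection. Symmetric key for brevity; all names are hypothetical.
import hashlib
import hmac

DEVICE_KEY = b"baked-in-at-the-factory"  # whoever holds this owns the off switch

def kill_command(serial, counter, key):
    """Vendor side: build and authenticate a kill message for one device."""
    msg = b"KILL|%d|%d" % (serial, counter)
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def device_accepts(msg, tag, key, last_counter):
    """Device side: constant-time tag check plus a monotonic anti-replay counter."""
    counter = int(msg.split(b"|")[2])
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and counter > last_counter

msg, tag = kill_command(serial=42, counter=7, key=DEVICE_KEY)
assert device_accepts(msg, tag, DEVICE_KEY, last_counter=6)            # vendor kills it
assert device_accepts(msg, tag, DEVICE_KEY, last_counter=7) is False   # replay rejected
# Point 2 in one line: anyone who extracts DEVICE_KEY can forge the same tag,
# and there is no way to "revoke" a key fused into shipped silicon.
```

Get the counter check, the constant-time compare, or the key storage even slightly wrong and the whole fleet is attackable; lose the key and the only fix is new hardware.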


So how exactly does killing the CPU keep the hard drive contents from being read? I'm sure there's a piece of technology I'm missing here. Is the computer hardware somehow involved in the disk encryption (on encrypted hard drives)?
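For the kill switch to protect data, something like that would have to be true. One hypothetical scheme (not any specific Intel or TPM design): derive the disk key from both the passphrase and a secret only the live chip can read, so killing the chip also destroys access to the drive.

```python
# Sketch of a disk key bound to a hardware secret. Hypothetical scheme; the
# names and the key-derivation choice are assumptions for illustration.
import hashlib

HARDWARE_SECRET = b"fused-into-the-silicon"  # only readable by the live chip

def disk_key(passphrase, hw_secret):
    """Key = KDF(passphrase, hardware secret): both inputs are required."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, hw_secret, 100_000)

key = disk_key(b"hunter2", HARDWARE_SECRET)
# Pulling the drive and supplying the right passphrase on another machine
# still fails, because the other machine's secret differs:
assert disk_key(b"hunter2", b"some-other-chip") != key
```

Without hardware-bound encryption like this, the objection stands: killing the CPU does nothing to a drive that can simply be moved to another computer.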


Exactly. It seems to me like any problem this solves (hardware being stolen) could be solved even more effectively with insurance.


I've not seen anything saying it can be disabled. And given the recent way US companies have reacted to Wikileaks, say, could you imagine running Tor on a computer with a chip like this...?

Edit: Or, say, the RIAA automatically disabling anything exposing a torrent.


I don't see how this helps in a corporate environment. If it's a laptop, you can just remove the hard drive and you have access to all that juicy data. The real solution would be an encrypted hard drive; I don't see how the kill switch enhances security at all.

Personally given the choice I would buy the computer w/o the kill switch.



