Hacker News | MagicalTux's comments

So I tried feeding Claude all of MtGox's original (2011) source code, git history, and other relevant information, and asked for a report. Here's the result, which isn't news to me but covers some elements that weren't public knowledge until now.


Battering RAM has been demonstrated to work well against Intel's "Scalable SGX" (also known as SGX 2), which uses a static encryption key to allow SGX to use more of the system's memory.

For example, at VP.NET we're using SGX 1, which uses AES-CTR for memory encryption and is not susceptible to memory replay attacks, but comes with a limit of 512MB of RAM. It's a lot of pain working with such a small memory allocation (especially nowadays, when most machines come with 128GB+). batteringram.eu calls that "Client SGX" with a checkmark on "Read", but reading the actual paper, it only mentions being able to know which areas of memory were written to (see 7.1). There might be applications where the memory access pattern reveals details about the underlying work being performed, but this is likely coarse (encryption is likely per page) and unlikely to yield anything useful.

That said, we are also exploring other TEEs, including Intel TDX, and having a wider array of options will give us the ability to instantly disable any technology whose security we know has been compromised.


Intel TDX unfortunately suffers from the exact same vulnerability as Scalable SGX. The underlying root cause is the lack of randomized encryption; using a static-adversary encryption scheme (XTS) rather than a dynamic-adversary one. The result is that plaintext-ciphertext mappings are unchanged at a fixed memory address. While the choice of scheme might initially seem puzzling, it is due to a randomized encryption scheme requiring counters for each memory block, which has a prohibitive on-chip memory cost when scaling to hundreds of GBs of memory.
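The difference between the two adversary models can be sketched with a hash-based toy cipher (purely illustrative, not Intel's actual construction): a deterministic address-tweaked scheme like XTS always maps the same plaintext at the same address to the same ciphertext, while a counter-based scheme does not.

```python
import hashlib

def toy_xts_like(key: bytes, address: int, plaintext: bytes) -> bytes:
    # Keystream depends only on (key, address), like XTS's address tweak.
    stream = hashlib.sha256(key + address.to_bytes(8, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def toy_randomized(key: bytes, address: int, counter: int, plaintext: bytes) -> bytes:
    # A per-write counter makes every write to the same address look fresh,
    # at the cost of storing a counter per memory block.
    material = key + address.to_bytes(8, "big") + counter.to_bytes(8, "big")
    return bytes(p ^ s for p, s in zip(plaintext, hashlib.sha256(material).digest()))

key, addr, data = b"memory-encryption-key", 0x1000, b"SECRET BLOCK"
# Same plaintext at the same address -> identical ciphertext, so an
# attacker can detect repeats and replay captured ciphertexts:
assert toy_xts_like(key, addr, data) == toy_xts_like(key, addr, data)
# With a counter, a repeated write is indistinguishable from new data:
assert toy_randomized(key, addr, 1, data) != toy_randomized(key, addr, 2, data)
```

The per-block counter in the second variant is exactly the on-chip state that becomes prohibitively expensive at hundreds of GBs.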


The SGX certificate is signed by Intel and includes a certification of the hash of the code loaded in the secure enclave ("MRENCLAVE").

When the client connects to the server, the server presents a tls certificate that includes an attestation (with OID 1.3.6.1.4.1.311.105.1) which certifies a number of things:

- The TLS certificate's own public key (to make sure the connection is secure)

- The enclave hash

It is signed by Intel with a chain of trust going to Intel's root CA. It's not "just a magic number" but "a magic number certified by Intel". Of course, it's up to you to choose whether to trust Intel or not, but it goes a much longer way than any other VPN.
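A very rough sketch of what the client-side check amounts to (field names are hypothetical, and an HMAC with a stand-in key plays the role of Intel's signature; the real verification walks an X.509 chain to Intel's root CA):

```python
import hashlib, hmac

INTEL_KEY = b"stand-in-for-intel-signing-key"  # illustrative only

def sign(payload: bytes) -> bytes:
    return hmac.new(INTEL_KEY, payload, hashlib.sha256).digest()

def verify_attestation(att: dict, tls_pubkey: bytes, trusted: set) -> bool:
    payload = att["mrenclave"].encode() + att["tls_pubkey_hash"]
    if not hmac.compare_digest(att["signature"], sign(payload)):
        return False                        # signature doesn't chain to "Intel"
    if hashlib.sha256(tls_pubkey).digest() != att["tls_pubkey_hash"]:
        return False                        # attestation not bound to this TLS key
    return att["mrenclave"] in trusted      # running the expected enclave code

pubkey = b"TLS public key seen by the client"
mr = hashlib.sha256(b"enclave binary").hexdigest()
att = {"mrenclave": mr, "tls_pubkey_hash": hashlib.sha256(pubkey).digest()}
att["signature"] = sign(att["mrenclave"].encode() + att["tls_pubkey_hash"])

assert verify_attestation(att, pubkey, {mr})            # genuine connection passes
assert not verify_attestation(att, b"mitm key", {mr})   # swapped TLS key fails
```

The key point is the binding: the attestation covers both the enclave hash and the TLS public key, so a man-in-the-middle can't reuse a valid attestation with their own key.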


The comment was added before the implementation of the IPC buffer & shuffling and was left there, sorry about that.

In an older version packets were sent back in sequence to their original connection to the host, as it was faster.

We have since implemented a system with nproc (16+) buffers receiving packets at staggered intervals: while packets are processed "in order", the fact that this runs in multiple threads means that packets, even from the same client, will be put in queues that are flushed at different times.

We have performed many tests, and implementing a more straightforward randomized queue (allocating memory, handling an array of pointers to buffers, shuffling them, and sending them shuffled) did not make much of a difference in terms of randomization but resulted in a huge performance loss due to the limitations of the SGX environment.
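Roughly, the staggered-flush idea looks like this (a hypothetical sketch, not the actual enclave code):

```python
import random
from collections import deque

# Packets from all clients are spread across N queues; each queue
# flushes on its own schedule, so the output order no longer mirrors
# the input order even though each queue is processed "in order".
N_QUEUES = 4

def staggered_mix(packets, seed=0):
    rng = random.Random(seed)
    queues = [deque() for _ in range(N_QUEUES)]
    for pkt in packets:
        # Each receiving thread/queue picks up packets independently.
        queues[rng.randrange(N_QUEUES)].append(pkt)
    out = []
    while any(queues):
        # Rotate through queues, emulating their differing flush timings.
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

packets = [f"pkt{i}" for i in range(12)]
mixed = staggered_mix(packets)
assert sorted(mixed) == sorted(packets)  # nothing is lost or duplicated
print(mixed)  # interleaving varies with queue assignment
```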

As we implement other trusted execution environments (TEEs), we will be adding other strategies and obfuscation methods.


Similar to TLS, the attestation includes a signature and an X.509 certificate with a chain of trust to Intel's CA. The whole attestation is certified by Intel to be valid, and details such as the enclave fingerprint (MRENCLAVE) are generated by the CPU to be part of the attestation.

This whole process is already widely used in the financial and automotive sectors to ensure servers are indeed running what they claim to be running, and it is well documented.


Remember that this only works if the CPU can be trusted! The hardware still has to be secure.


That's very informative, thanks!


Intel SGX comes with an attestation process aimed at exactly that. The attestation contains a number of details, such as the hardware configuration (CPU microcode version, BIOS, etc.) and the hash of the enclave code. At system startup the CPU gets a certificate from Intel confirming the configuration is known to be safe, which the CPU in turn uses to certify that the enclave is indeed running code with a given fingerprint.
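The "fingerprint" can be sketched as a hash accumulated over the enclave's loaded pages (a simplification of the real MRENCLAVE computation, which digests each page-add and measurement operation as the enclave is built):

```python
import hashlib

# Simplified enclave measurement: hash both the content of each page
# and where it is placed, so identical code yields an identical
# fingerprint and any change produces a different one.
def measure(pages):
    h = hashlib.sha256()
    for offset, content in pages:
        h.update(offset.to_bytes(8, "little"))  # where the page is placed
        h.update(content)                       # what the page contains
    return h.hexdigest()

build_a = [(0x0000, b"enclave .text"), (0x1000, b"enclave .data")]
build_b = [(0x0000, b"enclave .text"), (0x1000, b"enclave .data")]
tampered = [(0x0000, b"enclave .text PATCHED"), (0x1000, b"enclave .data")]

assert measure(build_a) == measure(build_b)   # reproducible -> same fingerprint
assert measure(build_a) != measure(tampered)  # any code change alters it
```

This is why a reproducible build matters: anyone can rebuild the published source and check that their fingerprint matches the attested one.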

When the connection is established, we verify the whole certificate chain up to Intel, and we verify that the TLS connection itself is part of the attestation (its public key is attested).


This is the server-side part of things. It receives encrypted traffic from your device (and other customers' devices), and routes it to the Internet.

This guarantees that your traffic isn't linked to you, and that it is mixed with others' in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc.).


> This guarantees that your traffic isn't being linked to you, and is mixed up with others in a way that makes it difficult for someone to attribute it to you

What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?


Nothing, it's not technically possible to prevent that with their system.


No. The verifiable part receives an already-decrypted copy of your traffic and mixes it with everyone else's traffic. Source: https://vp.net/l/en-US/technical#:~:text=cryptographic%20dat...

I have not inspected whether the procedure suggested for verifying the enclave contents is correct. It's irrelevant anyway: you would need to prove that the decrypted traffic, while still associated with your identity, goes ONLY into the enclave and is not also sent to, let's say, the KGB via a separate channel.


(First off, duskwuff's attack is pretty epic. I do feel like there might be a way to ensure there is only exactly one giant server--not that that would scale well--but, it also sounds like you didn't deal with it ;P. The rest of my comment is going to assume that you only have a single instance.)

A packet goes into your server and a packet goes out of your server: the code managing the enclave can just track this (and someone not even on the same server can figure this out almost perfectly just by timing analysis). What are you, thereby, actually mixing up in the middle?

You can add some kind of probably-small (as otherwise TCP will start to collapse) delay, but that doesn't really help as people are sending a lot of packets from their one source to the same destination, so the delay you add is going to be over some distribution that I can statistics out.

You can add a ton of cover traffic to the server, but each interesting output packet is still going to be able to be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see lots of statistics showing you actually obfuscated something real.
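The correlation concern can be shown with a toy simulation (the 1-5ms delays and packet rate are assumptions, purely illustrative): an observer who matches each output packet to the input closest to (output time minus expected delay) recovers most of the mapping despite the added jitter.

```python
import random

rng = random.Random(42)
inputs = sorted(rng.uniform(0, 1000) for _ in range(50))       # arrival times (ms)
outputs = [(t_in, t_in + rng.uniform(1, 5)) for t_in in inputs]  # 1-5ms random delay

correct = 0
for t_in, t_out in outputs:
    # Attacker's guess: the input closest to (output time - mean delay).
    guess = min(inputs, key=lambda t: abs(t_out - 3 - t))
    correct += (guess == t_in)

print(f"correlated {correct} of {len(outputs)} packets")
assert correct > len(outputs) // 2  # most packets remain linkable
```

Mistakes only happen when two inputs arrive within a few milliseconds of each other, which is exactly why the delay distribution can be "statistics'd out" unless the delays are large relative to inter-packet gaps.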

The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which is also something that one could think might be of value even if you did have a separate copy of the server running for every user that connected, as it hides something from you)...

...but, SGX is, frankly, a dumb way to do that, as we have ways to do that that are actually cryptographically secure -- aka, blinded tokens (the mechanism used in Privacy Pass for IP reputation and Brave for its ad rewards) -- instead of relying on SGX (which not only is, at best, something we have to trust Intel on, but something which is routinely broken).
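For reference, the blinded-token idea works roughly like this (a textbook RSA blind signature with tiny, insecure demo parameters): the signer authorizes a token without ever seeing which token it signed.

```python
# Toy RSA keypair: p=61, q=53, n=3233; e*d = 1 mod lcm(60, 52) = 780.
p, q = 61, 53
n, e, d = p * q, 17, 2753

m = 1234                            # the user's token (m < n)
r = 71                              # blinding factor, gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n    # user blinds the token
s_blind = pow(blinded, d, n)        # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n   # user strips the blinding factor

assert pow(s, e, n) == m            # a valid signature on m...
assert blinded != m                 # ...yet the signer only ever saw the blinded value
```

Unblinding works because s_blind = (m·r^e)^d = m^d·r (mod n), so multiplying by r^-1 leaves m^d, the ordinary RSA signature on m. The security then rests on math rather than on Intel's silicon. (Requires Python 3.8+ for `pow(r, -1, n)`.)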


You're welcome to use cryptocurrencies (we have a page for that), and our system only links your identity at connection time to ensure you have a valid subscription. Your traffic isn't tied to your identity, and you can look at the code to verify that.


You cannot offer a service for money with user anonymity. Your legal team knows that.


Cryptocurrencies? AKA the least private form of transactions, where not only do the sender and receiver know about them, but the whole blockchain immutably stores them for everyone else to view?


> Cryptocurrencies? AKA the least private form of transactions, where not only do the sender and receiver know about them, but the whole blockchain immutably stores them for everyone else to view?

There are cryptocurrencies like ZCash, Monero, Zano, Freedom Dollar, etc. that are sufficiently private.


So what do you suggest instead?


Semi-serious: redeemable codes you can buy at a national retail chain, ostensibly using cash. It has the unfortunate side effect of training people to fall for scams, however. Bonus points if you can somehow make the codes worthless on the black market, I guess.


Some VPNs kind of offer that. I know at least one that sells physical redeemable cards you can buy - maybe physically in some countries, but in mine it's only available on Amazon. Even that option should be safe for keeping your identifying data from the VPN provider, even in the situation where they betray their promises on not holding onto your data. This is because Amazon can't know which exact code was sent out to you, and the provider in turn doesn't have any additional info to associate with that code, besides knowing if it's valid or not. The biggest downside is that now Amazon knows you paid for this service, even if they don't know the specifics.

There's also an option to just mail them cash, but some countries may seize all mailed cash if discovered.


cryptocurrency != bitcoin. monero has solved this issue for almost a decade.


they just got exploited a few days ago


they did not. the attack did not change anything about the privacy and anonymity. it isn't even an exploit; it's simply spending tens of millions of dollars in mining costs to attempt a 51% attack and spread FUD, which worked on you.

this isn't reddit. if you're going to act like a know-it-all, at least do basic research.


a 51% attack PoC was successfully executed and you call it FUD lmao.

crypto rubes are hilarious.


They have not reached 51%. The highest so far was 30%, which means at worst they can do selfish mining. Again, none of this is relevant to the topic of privacy and anonymity, even if they did reach 51%.


Intel audits the configuration at system launch and verifies it is running something they know to be safe. That involves the CPU, CPU microcode, BIOS version, and a few other things (SGX may not work if you don't have the right RAM, for example).

The final signature comes in the form of an X.509 certificate signed with ECDSA.

What's more important to me is that SGX still has a lot of security researchers attempting (and currently failing) to break it further.


Depends on your threat model. You cannot, under any circumstances, prove (mathematically) that a peer is the only controller of a private key.

Again, I would love to know if I'm wrong.

The fact that no publicly disclosed threat actor has been identified says nothing.


Proving a negative (that information has not been shared) has been a challenge for as long as information has existed.

Are you suggesting a solution for this situation?


In this case it's rather like trusting that a government issued private key has not been stored by the government for further use.


The US government might be able to pressure Intel into doing something with SGX, but there are way too many eyes on this for it to go unnoticed in my opinion, especially considering SGX has been around for so long and messed with by so many security researchers.

The US government also likely learned a lesson from early attempts at backdoors (RSA, etc.) that this kind of thing does not stay hidden and does not reflect well on them.

We've thought about this long and hard and are planning to mitigate it as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.

