Sad. Apple seemed interested, at one point anyway.
But damn, it does work amazingly well site-to-site. I've managed ~50 Mbps throughput using bbcp (four streams) between Tor .onion services via OnionCat. And ~190 Mbps total from one source transferring simultaneously to five target servers. Each peer had six .onion services.
With six OnionCat interfaces per peer, MPTCP's full-mesh mode can open up to 36 subflows (6 × 6) per TCP connection. So with bbcp running four streams, that's up to 144 tcp6 connections via Tor per bbcp transfer. And with five simultaneous transfers, the MPTCP kernel on the source VPS was managing up to 720 tcp6 connections. That's impressive!
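The back-of-envelope math above can be sketched as follows (the interface/stream/transfer counts are the ones from this setup; the full-mesh path manager pairs every local address with every remote address):

```python
# Count tcp6 connections for the setup described above:
# six OnionCat interfaces on each peer, MPTCP full-mesh mode,
# bbcp with four parallel streams, five simultaneous transfers.
ifaces_per_peer = 6
subflows_per_conn = ifaces_per_peer * ifaces_per_peer  # full mesh: 6 x 6 = 36
bbcp_streams = 4
conns_per_transfer = bbcp_streams * subflows_per_conn  # 4 x 36 = 144
transfers = 5
total_conns = transfers * conns_per_transfer           # 5 x 144 = 720

print(subflows_per_conn, conns_per_transfer, total_conns)  # 36 144 720
```

So "~150" and "~750" above are rounded from 144 and 720; the real count at any instant can be lower, since subflows come and go as Tor circuits fail and get rebuilt.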
Apple is still pushing it, which is great. I'm just not sure how well it's going to scale: MPTCP adds an extra layer of indirection, and extra locking even in the easy case where a single server handles an IP. In the load-balancing case, people are going to have to teach their load balancers a lot of new tricks to get the subflows routed to the same backend. And with anycast, a client using multiple networks and the same server address is likely to land on a different PoP per network; exposing a PoP-specific extra server IP seems like something people don't want to do, since exposing it may make it easier to target a single PoP.
I've been debugging an issue where, incidentally, I'm hitting 1 Gbps on a single TCP connection (server to server, with TLS), so I'm not sure why MPTCP is required? ;) But I guess if we had it, I'd probably hit 2 Gbps instead of being capped by the one NIC.
Where it gets useful at consumer level is when your phone can hit WiFi and 4G simultaneously, so you can aggregate. And it's even more useful when both WiFi and 4G are iffy, so you seamlessly use one, the other, or both.
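Part of what makes that seamless fallback cheap to adopt is that an MPTCP socket looks like an ordinary TCP socket to the application. A minimal sketch of that on Linux (assuming a kernel built with MPTCP; the fallback branch is what you'd hit on older kernels, and the constant 262 is Linux's `IPPROTO_MPTCP` number, used here in case the Python build predates the named constant):

```python
import socket

# Linux's IPPROTO_MPTCP is 262; fall back to the raw number if this
# Python build's socket module doesn't expose the named constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)


def open_stream():
    """Try to create an MPTCP socket; degrade to plain TCP if the
    kernel doesn't support it (pre-5.6, or MPTCP disabled)."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
        return s, "mptcp"
    except OSError:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        return s, "tcp"


sock, kind = open_stream()
print(kind)  # "mptcp" on an MPTCP-capable kernel, else "tcp"
sock.close()
```

From here the socket is used exactly like TCP (connect/send/recv); the kernel's path manager handles adding and dropping subflows as WiFi and 4G come and go.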
And yes, if both of your servers have two gigabit NICs, you can get 2 Gbps. But only if those uplinks aren't bottlenecked at 1 Gbps at the rack or data-center level.
https://ipfs.io/ipfs/QmUDV2KHrAgs84oUc7z9zQmZ3whx1NB6YDPv8ZR...