If GPU manufacturers could make a standalone eGPU that connects to a PC or laptop via Thunderbolt 3 (USB-C), that'd be pretty neat, especially if they could price it less than $100 above the desktop version of the same GPU. I can see a large portion of people who don't want to spend $300+ on an external GPU box just to be able to upgrade the $200 card inside it.
You've got to have a decently made chassis to accommodate and cool a full-sized card and a meaty power supply for the latest and greatest GPUs. And that's before you add any docking functionality for extra ports.
The cheapest thing I've seen to date is the Alienware Graphics Amplifier, which sells for around $150 but uses Alienware's proprietary port, which, IIRC, is a bit slower than TB3.
I haven't seen them tested side by side, but both TB3 and the Alienware Graphics Amplifier actually offer 4 lanes of PCIe gen3, so they should perform pretty much identically.
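To put a number on "4 lanes of PCIe gen3": each gen3 lane runs at 8 GT/s with 128b/130b line encoding, so a quick back-of-the-envelope calculation (ignoring further protocol overhead) shows why the two interfaces should be equivalent:

```python
# Rough usable bandwidth of a PCIe 3.0 x4 link, which is what both
# TB3 and the Graphics Amplifier expose to the GPU. Only the
# 128b/130b line-code overhead is accounted for here.
lanes = 4
gt_per_s = 8.0          # PCIe gen3 raw rate per lane (GT/s)
encoding = 128 / 130    # 128b/130b line-code efficiency

gbit_per_s = lanes * gt_per_s * encoding   # ~31.5 Gbit/s
gbyte_per_s = gbit_per_s / 8               # ~3.94 GB/s
print(f"{gbit_per_s:.1f} Gbit/s, {gbyte_per_s:.2f} GB/s")
```

So either way the card sees roughly 3.9 GB/s of link bandwidth; the bottleneck is the x4 link itself, not which box it's in.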
As for lower-end pricing, it's very feasible to fit a laptop-oriented MXM GPU module into a lower power envelope. The GTX 1060 mobile variant only draws 80 watts, and the GTX 1050 mobile sips power at 50 watts. They obviously still need active cooling, but not a LOT of active cooling.
Note that the 10-series GPUs are not cut-down versions like previous mobile generations. The 1050 Ti mobile is just as fast as the 1050 Ti desktop card at the same clocks; NVIDIA bins the chips so they can run at lower power.
It's very reasonable to imagine a sealed laptop dock (as opposed to a full enclosure) with a 50-watt 1050 Ti inside selling for $300. Comin' soon!
I don't think desktop solutions (like the Microsoft speech synthesizer) do a great job of producing "natural", life-like speech. Sure, they read the text, but that's about it.
To produce speech properly, the synthesizer needs to take the context of surrounding words into account to determine how to pronounce ambiguous words and how to pace the speech. What about the tone? How about making the speech sound more engaging? These are things that need a data-driven model to work well, and it's difficult to cram that into a desktop app. And why would any company do that when they can put the service online and charge for it (which is totally fair)?
I was thinking more about the other direction, because that's the hard part: we can understand synthesized voices better than computers can understand us.
Apple's seems far and away the best desktop solution. For other OSes, though, it's frustrating that they haven't improved on what computers of the early eighties could do. Now you've made me nostalgic, though; I'll never forget the first time I heard my Amiga 500 read my words.