It's basically imperative, GPU-accelerated NumPy with autograd and really nice libraries. You can be productive in it in a day (much less if you already know NumPy), and if something goes wrong, it tells you what went wrong. It's also easily twice as fast as TF on GPU, trivial to use on multi-GPU setups, has an easy-to-use dataset interface, and lets you run batches twice as large (which means faster training).
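To give a concrete sense of what "imperative NumPy with autograd" means in practice, here's a minimal sketch (assumes `torch` is installed; the specific tensor values are just for illustration):

```python
import torch

# Tensors work like NumPy arrays, but can track gradients.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Ordinary imperative code -- no graph-building or session boilerplate.
y = (x ** 2).sum()

# Autograd walks the recorded operations backward.
y.backward()

print(x.grad)  # dy/dx = 2x -> tensor([2., 4., 6.])
```

Errors surface at the exact line of Python that caused them, which is a big part of why debugging feels so much nicer than graph-mode TF.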
Google certainly has the _technical_ capacity to do what PyTorch does, but not the _organizational_ capacity to scrap TF and start over. So they're trying to half-ass it with TF 2.0. It still sucks though.