One underappreciated feature of S3 - one that let it excel in workloads like the Tables feature described in the article - is that it can function as the world's highest-throughput network filesystem. And you don't have to configure anything to get it (as the article points out). By storing data on S3, you get access to the full cross-sectional bandwidth of EC2, which is colossal; for effectively all workloads, you will max out your own network connection before S3's. This enables workloads that can't scale anywhere else - things like data pipelines generating unplanned hundred-terabit-per-second traffic spikes, with hotspots that would crash any filesystem cluster I've ever seen. And you don't have to pay a lot for it: once you're done using the bandwidth, you can archive the data elsewhere or delete it.
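For concreteness, the usual way clients tap into that aggregate front-end throughput is parallel ranged GETs: split one object into byte ranges and fetch them concurrently, since each request can land on a different S3 front-end host. A minimal sketch of the idea - `fetch_range` here is a stub standing in for a real ranged GET (e.g. boto3's `get_object` with a `Range` header), and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def byte_ranges(size: int, part_size: int):
    """Split an object of `size` bytes into inclusive (start, end) ranges."""
    return [(start, min(start + part_size, size) - 1)
            for start in range(0, size, part_size)]

def fetch_range(key: str, start: int, end: int) -> bytes:
    # Stub for a real ranged GET, e.g.:
    #   s3.get_object(Bucket=..., Key=key, Range=f"bytes={start}-{end}")
    return b"\x00" * (end - start + 1)

def parallel_get(key: str, size: int, part_size: int = 8 * 1024 * 1024,
                 workers: int = 32) -> bytes:
    """Fetch one object as many concurrent ranged GETs. Aggregate
    throughput scales with concurrency until the client NIC saturates,
    with no capacity planning on the S3 side."""
    ranges = byte_ranges(size, part_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: fetch_range(key, *r), ranges)
    return b"".join(parts)
```

The same pattern in reverse (multipart upload) is how writers saturate their NIC on the way in.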
You've totally hit the nail on the head. This is the real moat of S3: the gigantic buildout gives it so much front-end throughput that folks can take advantage of it without any capacity pre-planning.