Hacker News

Aren't schema changes actually a big problem when there is a huge amount of data in the table? Adding a column or index to a table that has 500M rows in it is usually the real pain. This is definitely a step in the right direction, but I would love to see it support this with the data itself.


That is exactly what we _do_ support. Our team includes the original developer of gh-ost for MySQL, which was built to execute exactly these kinds of massive changes at scale at GitHub. We've integrated it tightly with Vitess. Once your branch is ready to be merged into production, the change is executed in a completely non-blocking way.
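For readers unfamiliar with how tools like gh-ost avoid blocking: they build a shadow copy of the table with the new schema, backfill it in small primary-key chunks, and then atomically swap names. Here is a heavily simplified sketch of that pattern using Python's sqlite3 (illustration only; real gh-ost targets MySQL and also tails the binlog to replay writes that land during the copy, which this sketch omits):

```python
# Simplified sketch of the gh-ost "shadow table" online schema change
# pattern. SQLite stands in for MySQL purely for illustration; the
# function name and chunk size are made up for this example.
import sqlite3

def online_add_column(conn, table, new_column_ddl, chunk_size=100):
    cur = conn.cursor()
    ghost = f"_{table}_gho"  # shadow table (gh-ost's naming convention)

    # 1. Create the shadow table with the altered schema.
    cur.execute(f"CREATE TABLE {ghost} AS SELECT * FROM {table} WHERE 0")
    cur.execute(f"ALTER TABLE {ghost} ADD COLUMN {new_column_ddl}")

    # Original column list, so the backfill ignores the new column.
    cols = ", ".join(r[1] for r in cur.execute(f"PRAGMA table_info({table})"))

    # 2. Backfill in small primary-key ranges so no single statement
    #    holds a long lock over the whole table.
    last_id = 0
    while True:
        hi = cur.execute(
            f"SELECT max(id) FROM (SELECT id FROM {table} "
            f"WHERE id > ? ORDER BY id LIMIT ?)",
            (last_id, chunk_size)).fetchone()[0]
        if hi is None:
            break
        cur.execute(
            f"INSERT INTO {ghost} ({cols}) SELECT {cols} FROM {table} "
            f"WHERE id > ? AND id <= ?", (last_id, hi))
        last_id = hi

    # 3. Atomic cut-over: rename the tables (the only briefly
    #    blocking step in the real tool).
    cur.execute(f"ALTER TABLE {table} RENAME TO _{table}_del")
    cur.execute(f"ALTER TABLE {ghost} RENAME TO {table}")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",), ("e",)])
online_add_column(conn, "users", "email TEXT")
```

The missing piece, and the hard part gh-ost actually solves, is replaying the writes that hit the original table while the backfill runs; that is what the binlog tailing is for, and it is also why the final rename can be done with only a momentary lock.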


OK, but most of the time you'd like to test that change on a copy or an almost-up-to-date replica, to measure things like how long it takes. If you copied the data, with things like PII filtering, to the branch development database and let folks test it there, it would be even more amazing. Great start tho!


Give it a shot, because the current implementation specifically sidesteps the need for that full copy, and does let you test functionally against fully up-to-date production data.

You're right, though, it doesn't cover all of those use cases. Luckily, Vitess already offers most of that out of the box. We just need to expose it through the PlanetScale UI. :)




