Hacker News
Ask HN: Software testing usually sucks. Did you ever see it work at scale?
5 points by l1am0 on Feb 6, 2023 | 5 comments
Not once have I seen software testing actually work at scale.

You always find yourself in a hard place between:

- Tests for testing's sake: a bazillion unit tests that don't really ensure your application does what it should

- "Test push": you spend a few weeks or a quarter building a testing setup, which then slowly degrades over time as you have to focus on "getting real work done"

Have you ever seen testing at scale done right? What were your main takeaways?

Would be super curious to learn about processes and tools you used.



> Tests for testing's sake: a bazillion unit tests that don't really ensure your application does what it should

This is what happens when tests and code coverage are mandated.

Yes, testing does work, and is an important part of any large-scale project.

The key is to recognize and implement a few different kinds of testing:

1) Unit tests, to support the development process: they exercise any part of the code quickly, without external dependencies, and check for correctness.

2) Performance tests - performance is a feature, and it's important to catch regressions (or confirm improvements).

3) Integration tests, to see if the software actually works with its external dependencies.

4) Post-release checkouts and monitoring - after releasing the software, a method to check that the release itself (as opposed to the software) was successful, before any live users are exposed to it.

These are all important, and while there may be some overlap, often people don't see the distinction, try to make do with just one kind, and argue that the other kinds are "wrong", which is counterproductive.
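To make the unit/integration distinction concrete, here's a minimal pytest-style sketch. All names (`parse_price`, `PricingClient`) are hypothetical, and the "external service" is faked so the file stays self-contained; a real integration test would hit the live dependency.

```python
def parse_price(raw: str) -> int:
    """Parse a price string like '$12.50' into cents."""
    return round(float(raw.lstrip("$")) * 100)

# 1) Unit test: pure logic, no external dependencies, runs in milliseconds.
def test_parse_price_unit():
    assert parse_price("$12.50") == 1250

# 3) Integration test: exercises the code against its external dependency.
# Here the client is stubbed; in practice it would call the real service.
class PricingClient:
    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch(self, sku: str) -> str:
        # Stubbed response standing in for a real network call.
        return "$12.50"

def test_pricing_integration():
    client = PricingClient("https://pricing.internal.example")
    assert parse_price(client.fetch("SKU-1")) == 1250

if __name__ == "__main__":
    test_parse_price_unit()
    test_pricing_integration()
```

The point of the split: the unit test can run on every save, while the (slower, flakier) integration test runs in CI or on a schedule.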


> 4) Post-release checkouts and monitoring

How did you ensure that in the past? It sounds to me like it wouldn't really be possible to do this before live users are exposed to the system.


How effectively you can do this depends on how you direct traffic. But I'd do it by having the new version running on production servers where traffic is directed elsewhere. Or, sometimes you can do something similar with feature flags --- push the code everywhere, but only turn on the feature flags for test users at first. Doesn't help if your feature flag or other traffic direction is broken, of course.
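The feature-flag approach might look like this minimal sketch (all names are illustrative, not from any real flag system): the new code path is deployed everywhere but stays dark except for designated test accounts.

```python
# Hypothetical flag check: the new code ships to production, but only
# the listed test accounts are routed to it at first.
NEW_CHECKOUT_USERS = {"qa-bot-1", "qa-bot-2"}  # assumed test-account IDs

def flag_enabled(flag: str, user_id: str) -> bool:
    if flag == "new_checkout":
        return user_id in NEW_CHECKOUT_USERS
    return False

def old_checkout(user_id: str) -> str:
    return f"v1:{user_id}"

def new_checkout(user_id: str) -> str:
    return f"v2:{user_id}"

def checkout(user_id: str) -> str:
    # Deployed everywhere; dark for everyone except test users.
    if flag_enabled("new_checkout", user_id):
        return new_checkout(user_id)
    return old_checkout(user_id)
```

So a post-release checkout is just running the test accounts through the new path on real production servers before widening the flag.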

Also, if you use the same packaging for 'staging' and 'production', staging tests your packaging.


I've seen software testing work at scale when there's a company-wide mindset to quality, as opposed to 'quality is the job of one person/department':

- Shift-left testing: quality can be built in at every stage of the development process, from requirements gathering to deployment. Issues are easier and cheaper to fix the earlier they are discovered, and it doesn't necessarily take a software tester to discover them, although testers can advocate for a mindset of quality.

- Quality as a shared mindset/goal, not just one team's job: every team member quality-controls their own bit of work, which inevitably means developers doing a basic level of sanity testing on their code. Done right, this avoids the long feedback loop of a team completing a piece of work, handing it over to QA, and context switching to another piece of work before the issues come back.
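The "developer sanity check" idea above can be as small as a smoke test run locally before handing work to QA. A sketch, with illustrative names only:

```python
def apply_discount(total_cents: int, percent: int) -> int:
    """Apply a percentage discount to a price in cents."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return total_cents - total_cents * percent // 100

def smoke_test() -> bool:
    # A handful of fast checks a developer runs before handoff;
    # not a substitute for the QA pass, just a cheap early filter.
    checks = [
        apply_discount(1000, 10) == 900,   # basic case
        apply_discount(1000, 0) == 1000,   # no-op discount
        apply_discount(999, 100) == 0,     # full discount
    ]
    return all(checks)
```

Wiring something like this into a pre-commit hook or CI gate is one concrete way to "shift left" without a big testing initiative.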


After all these decades, nothing beats having some smart manual QA testers who know the UI and expected results inside and out.



