First thing I'd say is that there's no "one size fits all" approach. The right answer depends on the language, the project, the requirements, the amount of churn, the timeline, etc. So I 100% believe you when you say you've found success by moving the focus toward implementation tests. =)
But in my experience, I've had major issues with a reliance on integration tests. This is almost exclusively in Rails.
- They're slow, because there's no easy way to stub out the slow stuff
- If you stub out the slow stuff, you now have no coverage for the stubbed code, unless you also have unit tests covering the guts of the app
- When integration tests fail, it can take a lot of work to track down the cause, compared with granular unit tests
Our default approach is to mock and stub as little as possible, but of course there are things we just have to mock, like external service calls. However, we also have UI automation tests that run on a fully deployed and integrated system, which mostly covers the behavior we stub out elsewhere.
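To make the "things we just have to mock" point concrete, here's a minimal sketch using `Minitest::Mock` from Ruby's standard library. The `PaymentGateway` and `Checkout` classes are hypothetical stand-ins, not from any real app; the idea is that only the slow external call gets stubbed, while the rest of the code path runs for real:

```ruby
require "minitest/mock"  # stdlib; adds Object#stub

# Hypothetical client for a slow external payment service.
class PaymentGateway
  def charge(amount)
    raise "real network call -- too slow for the test suite"
  end
end

# Hypothetical app code under test; exercises real logic
# around the external call.
class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def purchase(amount)
    @gateway.charge(amount) == :ok ? "purchased" : "declined"
  end
end

gateway = PaymentGateway.new
result = nil

# Stub only the external call, for the duration of the block.
gateway.stub(:charge, :ok) do
  result = Checkout.new(gateway).purchase(100)
end

puts result  # => "purchased"
```

The stub is scoped to the block, so nothing leaks into other tests, and the `Checkout` logic itself still runs unmocked.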
Slowness is only a problem when it's a problem, and can sometimes be solved with better tooling. I'd feel more confident in a system with 1000 integration tests that take 2 hours to run over one with 10,000 unit tests that take 10 minutes to run.
On your last point, this cuts both ways. Yes, it can be harder to find the issue, but IME integration tests actually find the issues that will cause things to break in production, and it's easier to find out why a test is failing than why an error is being logged.