I was asked recently if the number of bugs found is the best measure of the value of testing. Honestly, I had to think about this for a bit. But after a few minutes I decided that counting the number of bugs found is actually a really poor way to measure the value of testing.
Consider low-level testing (unit and API testing). When your developers write tests, do you want them to log formal bug reports for every error they find? Probably not. This is a development activity; we want them to deliver a complete set of passing tests for each change they make, fixing the bugs they find along the way. How about functional testing on the integrated product? Are you happier if that testing finds 100 bugs in a new feature, or 0? Well, 100 might mean the feature was poorly tested before it was sent to QA, but what does 0 mean? Does it mean the testers did a bad job, or that the quality of the feature was high before it went to QA?
Measuring the value of testing by the number of bugs found is simply the wrong metric. The true value of testing is that it formalizes correct behavior in a repeatable way. When someone creates a set of tests with automatically checked expected results that can be run every time the code changes, they are contributing to the infrastructure of quality. A well-designed test will provide value for years and pay for itself many times over.
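To make that concrete, here is a minimal sketch in Python with pytest. The slugify() function and its tests are hypothetical, but the pattern is the point: each test pins down an expected result that is checked automatically, so the correct behavior is re-verified every time the code changes.

```python
# Hypothetical helper; in practice this is the code under test.
def slugify(title: str) -> str:
    """Convert a title into a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())


# Each test encodes an expected result that is checked automatically,
# formalizing the correct behavior in a repeatable way.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  Testing   adds  value ") == "testing-adds-value"
```

Run these in CI on every commit and they quietly keep doing their job, long after anyone remembers writing them.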
Every time a test fails it provides value: it prevents a bug from getting into the code base.