How to Avoid Losing Customer Trust by Improving Data Coverage in Successful Features

Today I am not afraid of the bugs I know about, because I can mitigate them. I am far more afraid of the bugs I don’t know about, and especially of bugs that can be introduced during further development of successful features with high code coverage but low data coverage.

It is generally known that successful features are supported for many years, generate the most profit, and have complex requirements. At some point, developers will be unable to keep all requirements in mind and may make unwanted code changes. Due to poor data coverage in automated tests, new critical data errors may go undetected and be delivered to the field.

Note that successful functionality changes more often than unsuccessful functionality, because it is what our customers pay us for.

If automated tests have low data coverage, successful functionality is the most at risk of acquiring a new data bug. And we all know that when a bug occurs in important functionality, customers may become angry and lose trust in us and our software.

Today’s applications with high code coverage are still not protected if their data coverage is low. During further development, new severe data bugs can be introduced and delivered to the field.

At some point in my career, I investigated a data bug that had been reported by a customer and had been introduced during further development of an existing feature. I asked myself: why did our automated tests, with 80% code coverage, not discover this “simple” data bug, which stored wrong data in the customer’s database for months and caused high expenses?
Did you know that code coverage only shows how much code has been executed? High code coverage prevents obvious run-time errors such as exceptions, but it does not protect the data generated by our application. Automated tests with poor data coverage fail to detect new data errors, which can corrupt customers’ production data.
Today, the data generated during test execution is usually only spot-checked. Developers verify data on a random basis because, until now, there have been no tools that allow a large number of values to be verified with reasonable effort. To verify data, developers have to write many code instructions, which must be maintained and incur high costs. As a result, in most applications the produced data is not protected against unwanted code changes that can corrupt our customers’ production data.
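To illustrate the effort involved, here is a minimal sketch of conventional, hand-written data verification. The record and field names are hypothetical; the point is that every column needs its own assertion, and unchecked columns are simply not covered.

```python
# Hypothetical order record produced by the application under test.
order = {
    "id": 42,
    "status": "SHIPPED",
    "net_amount": 100.0,
    "tax_amount": 19.0,
    "gross_amount": 119.0,
    # ...dozens of further columns that typically remain unchecked
}

# Each verified column costs one hand-written, hand-maintained assertion.
assert order["status"] == "SHIPPED"
assert order["net_amount"] == 100.0
assert order["gross_amount"] == order["net_amount"] + order["tax_amount"]
# Every column without such an assertion is a gap in data coverage.
```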
In my view, there is currently only one practical way to achieve high data coverage at a reasonable cost: snapshot technology with filtering, mapping, and sorting capabilities. Please contact us and let’s talk about your requirements.
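The idea behind the snapshot approach can be sketched in a few lines. This is not any particular tool’s API, just a hypothetical illustration: volatile columns are filtered out, rows are sorted for determinism, and then the entire remaining data set is compared against a stored snapshot in one step.

```python
import json

def normalize(rows, ignore=(), sort_key=None):
    """Drop volatile columns (e.g. timestamps) and sort rows so the
    snapshot comparison is deterministic."""
    cleaned = [{k: v for k, v in row.items() if k not in ignore}
               for row in rows]
    if sort_key:
        cleaned.sort(key=lambda r: r[sort_key])
    return cleaned

def snapshot(rows, **kwargs):
    """Serialize normalized data; stored on the first run, compared later."""
    return json.dumps(normalize(rows, **kwargs), sort_keys=True, indent=2)

rows = [
    {"id": 2, "status": "OPEN",    "updated_at": "2024-05-01T10:00:00"},
    {"id": 1, "status": "SHIPPED", "updated_at": "2024-05-01T09:00:00"},
]
current = snapshot(rows, ignore=("updated_at",), sort_key="id")
# In a real test, `expected` would be loaded from a stored snapshot file.
expected = snapshot(rows, ignore=("updated_at",), sort_key="id")
assert current == expected  # every remaining column is verified at once
```

One comparison covers every column that survives the filter, instead of one assertion per value.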
Your passionate tester, Andreas Hollmann

Data coverage shows how much of the theoretically possible data model has been verified.
For example, a database table has 100 columns, but the automated tests examine the data in only 30 of them. That means 30% column data coverage.
Another example: an XML schema defines 100 XPaths, but during test execution the data is validated at only 10 of them. The XPath data coverage of this XML data is therefore 10%.
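The metric from both examples above boils down to a simple ratio. The helper below is a hypothetical illustration, not part of any specific tool:

```python
def data_coverage(all_elements, verified_elements):
    """Percentage of data-model elements (columns, XPaths, ...)
    actually checked by the automated tests."""
    total = set(all_elements)
    return 100.0 * len(set(verified_elements) & total) / len(total)

all_cols = [f"col_{i}" for i in range(100)]  # table with 100 columns
checked = all_cols[:30]                      # tests verify only 30 of them
print(data_coverage(all_cols, checked))      # prints 30.0
```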
