Manufacturing Code: Unit & Integration Testing

What should we test and how important is it? Do we aim for 100% coverage? Do we test at all?

June 23, 2018

Testing code seems to be a simultaneously controversial and uncontroversial subject within software development. I find relatively few people who openly speak out against the practice, but a decent chunk who don't actually do it. Most of the latter claim they don't have time. A quick Google search will yield many sources saying that this is just an excuse and that testing actually improves both the quality and speed of development, along with plenty of people arguing against that conclusion.

At work it is a somewhat divisive subject. Our web developers do little, and in most cases no, automated testing. Our mobile app developers seem somewhat indifferent but still use it as a best practice. The Python/API developers (who I work with most) almost always have full testing suites integrated with CI and deployments. Perhaps most concerning is the QA team, which has a fairly negative attitude towards it. Even though automated testing fundamentally doesn't replace them and should make their job easier, they appear threatened by it and have dismissed the idea when I bring it up.

One of the core arguments people bring up is that they don't know what to test. When I first started writing tests in Java at my old job I had little guidance on this, so I leaned very heavily on what I learned in manufacturing, and I still try to maintain that philosophy.

"Quality" in manufacturing is a very statistical process. Every day at set intervals samples are gathered from production and taken to a laboratory to be tested. What those tests are vary between the product being manufactured. In textiles tensile strength is a fairly common measurement:

[Image: tensile_test.jpg, a tensile strength test]

Test results are then added to a database which, in most cases, is connected to a software platform that performs some level of statistical analysis on them. Managers and/or engineers are alerted to patterns that, in best practice, are defined by the Nelson rules. These rules rely on "run charts" or "control charts" to identify patterns of anomalous behavior within the process:

[Image: rule6.png, a control chart illustrating one of the Nelson rules]
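
To make this concrete, here is a minimal Python sketch of two of the Nelson rules (rule 1: a point more than three standard deviations from the mean; rule 2: nine or more points in a row on the same side of the mean). The readings are made up, and a real SPC platform would compute its limits from an established baseline rather than from the sample itself, but the idea is the same.

```python
from statistics import mean, stdev

def nelson_rule_1(samples):
    """Rule 1: flag any point more than 3 standard deviations from the mean."""
    m, s = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - m) > 3 * s]

def nelson_rule_2(samples, run_length=9):
    """Rule 2: flag runs of `run_length` or more points on the same side of the mean."""
    m = mean(samples)
    runs, run = [], []
    for x in samples:
        if run and (x > m) == (run[-1] > m):
            run.append(x)
        else:
            if len(run) >= run_length:
                runs.append(run)
            run = [x]
    if len(run) >= run_length:
        runs.append(run)
    return runs

# Hypothetical daily tensile-strength readings (MPa)
readings = [51.2, 50.8, 49.9, 50.1, 55.0, 50.3, 50.0, 49.7]
print(nelson_rule_1(readings))  # points violating rule 1, if any (here: none)
print(nelson_rule_2(readings))  # long same-side runs, if any (here: none)
```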

These charts also mark a business's "control" and "release" limits. Crossing a control limit indicates that a process has strayed outside of what is considered normal and some form of "control" should be applied to correct it. Release limits mark the most extreme values at which a product is still allowed to ship to a customer; variation outside of them will usually result in the product being discarded. These limits are set as business requirements to meet our product specifications - they certify that our product is of the quality we are selling it as.
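
As a sketch of how those two kinds of limits might be applied in code (the limit values and the `disposition` function here are invented for illustration, not real specifications):

```python
from enum import Enum

class Disposition(Enum):
    OK = "within control limits"
    OUT_OF_CONTROL = "outside control limits - corrective action needed"
    REJECT = "outside release limits - do not ship"

# Hypothetical limits for a tensile-strength spec (MPa)
CONTROL_LOW, CONTROL_HIGH = 48.0, 52.0
RELEASE_LOW, RELEASE_HIGH = 45.0, 55.0

def disposition(measurement: float) -> Disposition:
    """Classify a single measurement against control and release limits."""
    if not (RELEASE_LOW <= measurement <= RELEASE_HIGH):
        return Disposition.REJECT
    if not (CONTROL_LOW <= measurement <= CONTROL_HIGH):
        return Disposition.OUT_OF_CONTROL
    return Disposition.OK

print(disposition(50.5))  # Disposition.OK
print(disposition(53.1))  # Disposition.OUT_OF_CONTROL
print(disposition(56.0))  # Disposition.REJECT
```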

When we write code we also have business requirements. When I write tests for my code I typically treat them the same way I treated the chemical processes I worked on: I want my tests to cover the requirements that my application is being built to implement. A passing test certifies that a requirement is met. A failing test means that a requirement is not.
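
As a sketch of what that looks like in practice, here is a pytest-style example where each test certifies one requirement (the requirement, the `apply_discount` function, and its rules are all made up for illustration):

```python
import pytest

# Hypothetical application code under test
def apply_discount(order_total: float, discount_pct: float) -> float:
    """Apply a percentage discount, never letting the total go below zero."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return max(order_total * (1 - discount_pct / 100), 0.0)

# Requirement: "Orders may be discounted by 0-100%; anything else is rejected."
def test_discount_within_allowed_range_is_applied():
    assert apply_discount(100.0, 25.0) == 75.0

def test_discount_outside_allowed_range_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```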

I am still new to this field, but I no longer believe in even attempting to push for 100% code coverage, as I have found it to:

  1. be rarely achievable with any practical amount of work or time
  2. incentivize green check marks and covered lines of code, regardless of what those tests are actually certifying

I also think that testing beyond strict requirements can still be a good thing - it really depends on the application and the situation. However, when beginning work on a new feature I will almost always either ask for or write out the requirements, and then write test cases to meet them.
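
For example, the written requirements for a hypothetical feature might start life as skipped test stubs that get filled in as the feature is built (everything here is invented for illustration):

```python
import pytest

# Requirements for a hypothetical "password reset" feature:
#   1. A reset token is emailed to the address on file.
#   2. Tokens expire after 30 minutes.
#   3. A used token cannot be reused.

@pytest.mark.skip(reason="not implemented yet")
def test_reset_token_is_emailed_to_address_on_file():
    ...

@pytest.mark.skip(reason="not implemented yet")
def test_reset_token_expires_after_30_minutes():
    ...

@pytest.mark.skip(reason="not implemented yet")
def test_used_reset_token_cannot_be_reused():
    ...
```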