In DevOps, Whither Testing?

In the first part of my email interview with him last week, open source software engineer Stephen McDonald argued that a key to getting development and operations to work well together was to take a page out of the playbook of small organizations: “Wide responsibilities with multiple hats, and reduced layers of management.”

This week, the veteran Australia-based team leader tackles some more of our fave topics at ServiceVirtualization.com. We start with views from McDonald, co-creator of a Google Reader replacement called Kouio, on the importance of putting testing on the to-do list of development teams. 

What in your view is the best way to get software testing done so that the code runs well while preventing battles between the developers and the testers? 

The same principles and caveats that apply to the distinction between development and operations also apply to the distinction between development and testing. Testing as a separate role removes accountability from developers. It’s all too easy for animosity to build between these roles given the nature of their relationship, where each party rationalizes away responsibility for quality by shifting it onto the other. 

Let’s remove the role of the specialist tester who can find issues but not resolve them. Let’s remove the role of the developer who feels no need to take ownership of the quality of what they produce. Let’s get developers who are rock-solid testers working together in unison, each performing the same development and testing roles, back to back. 

What is your view of the use of automation and simulation in software testing? Is this a good thing or a bad thing, and why? 

It’s a great thing, but it’s no silver bullet. Automated unit tests should be implemented hand in hand with rigorous user acceptance testing, as part of an overall testing strategy. All too often, automated testing can provide a false sense of security, particularly in the context of continuous integration and deployments. 
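
To ground this in something concrete, here is a minimal sketch (my illustration, not McDonald’s) of the kind of automated unit test a continuous integration server might run on every commit; the slugify helper and its expected behaviour are hypothetical:

    import unittest


    def slugify(title):
        """Hypothetical helper: turn an article title into a URL slug."""
        return "-".join(title.lower().split())


    class SlugifyTests(unittest.TestCase):
        """Automated checks a CI server could run on every commit."""

        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_already_lowercase_is_unchanged(self):
            self.assertEqual(slugify("devops"), "devops")


    if __name__ == "__main__":
        unittest.main()

Green results from tests like these only prove what they assert; they say nothing about whether the feature feels right to a user, which is why McDonald pairs them with user acceptance testing rather than treating them as a safety net on their own.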

Another issue with automated testing arises when development teams treat testing as a religion and go entirely overboard with the amount of automated testing implemented, in a classic case of not seeing the forest for the trees. I’ve joined teams where the amount of testing implemented was so over the top that the majority of time spent on feature development ended up consumed by refactoring tests simply to match new or changed functionality. 

This religious approach to testing stems heavily from the dynamic language communities. Languages like Python and Ruby that have risen to popularity over the last decade can offer enormous productivity gains up front, thanks to their brevity and lack of type declarations. Those gains, however, can turn into technical debt, swept under the rug early on, that manifests itself as a project matures. A large class of the errors that demand this religious approach to unit testing are simply type errors that a compiler could detect automatically. I think the resurgence of statically typed languages such as Scala and Go, which use techniques like type inference to shed some of the verbosity typically found in their predecessors, will go a long way over the coming years in addressing this area specifically. 
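
As a rough illustration of that last point (my example, again in Python rather than McDonald’s own code): the type mistake below is invisible until the program runs, so a unit test is the only automatic safety net, whereas a compiler for a statically typed language such as Scala or Go would reject the equivalent call outright.

    def total_price(quantity, unit_price):
        """Hypothetical helper: quantity is intended to be an int."""
        return quantity * unit_price


    print(total_price(3, 2))    # 6, as intended
    print(total_price("3", 2))  # "33": a silently wrong result that only a
                                # runtime check or unit test will ever catch

A whole category of tests exists purely to catch slips like this one, and that is the category type inference in newer statically typed languages can make redundant without bringing back the boilerplate of older ones.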

Agile development and testing – is it realistic to do in large organizations? 

Absolutely, but to reiterate the earlier point: look to the models found within startups, with strong generalists wearing many hats and no hard distinction between development, operations and testing. Increased autonomy, accountability and ownership are key.