Is manual software testing going the way of the dinosaur?

Is manual software testing going the way of dinosaurs and dodo birds?

That’s the question that Gerie Owen ponders in a recent TechTarget piece. A business solutions analyst at Northeast Utilities in the Boston area, Owen argues that manual testing will always be around, despite the increasing use of automation.

Owen clearly knows her stuff, having overseen large testing projects. But while I agree in principle with her analysis, I think she misses a key point about testing automation.

I’ll get to that in a bit, after a brief synopsis of her argument.

Owen acknowledges that there are areas of the development process where automated testing works best:

  • Unit tests, in which the smallest parts of a software application that can be tested are put through the wringer to see if they’re working correctly, “should almost always be automated,” she says.
  • This, she says, is why test-driven development works so well. (Test-driven development means writing the unit tests before the code they exercise. The idea, according to TechTarget, is to get the thing working, and then work out the speed bumps later.)
  • Automation is also effective in regression testing, Owen says. Regression testing is a fancy term for re-testing a program after changes have been made, to ensure the old code still works with the new stuff.
  • Yet another area where automation is useful is in maintaining low testing debt in Agile methodologies, according to Owen. Testing debt, a form of technical debt, refers essentially to the testing work that piles up before a job can be considered complete. Agile, of course, is a development method that values simplicity in coding, frequent testing and delivering functional bits of software when they’re ready.
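To make the unit-testing point above concrete, here’s a minimal sketch in Python using the standard unittest module. The slugify function and its expected behavior are hypothetical, invented purely for illustration; the point is the shape of an automated unit test.

```python
import unittest

# Hypothetical function under test, invented for this illustration.
def slugify(title):
    """Lowercase a title and replace spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    # In test-driven development, tests like these are written first,
    # then the function is fleshed out until they pass.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

if __name__ == "__main__":
    unittest.main()
```

Because tests like these run automatically on every build, they also double as the regression suite Owen describes: any change that breaks old behavior fails immediately.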

Where humans must do the testing

All of that said, Owen notes, there are two key areas where manual testing is important.

One is when “a product is new and still undergoing significant change.” Essentially, if a software application is getting changed frequently, the cost of keeping the related automation testing software up to date “may outweigh its benefits,” she says.

The other area is “usability and human experience,” she writes. That’s because there’s no way to program the testing software to duplicate what a person will experience when using the application in question.

In a similar vein, she says, wearable technology and mobile devices require human testing, to ensure they work everywhere a person may go and to see if people from different demographic groups will cotton to them. 

What’s missing?

Fair enough. Artificial intelligence notwithstanding, computers can’t yet duplicate how people will think and feel about a software application. I get that.

What many people in the testing community miss about automation and simulation is that these approaches were never intended to completely replace humans in the testing process.

Rather, automation and simulation are simply there to take the nasty, mind-numbing jobs in testing off people’s hands, so they can focus on the value they truly bring to the table.

Might some testers see their jobs in jeopardy? Perhaps. But that’s why they need to work on staying current in the latest and greatest in software development and testing.

In that sense, the dilemma they face is no different from the age-old issue of automation replacing manual labor in fields ranging from auto manufacturing to sewing, glass-bottle blowing and mining. And yes, we journalists are feeling the effects of this trend as well.

Just as I have had to adapt to the impact of technology on my profession, so must testers improve themselves to adapt to what’s happening in their field.

The use of automation is here to stay. For all people, the question is whether we can improve our skills to remain useful in the Information Age.

Jeff Bounds writes about technology from Dallas.