When testing is automated, what happens to human testers?

A new research project being conducted in Texas holds the promise of automating and dramatically speeding up software development. Which raises the question: When all the testing is automated, what will happen to the testers? Don’t worry, human. The news for you is quite good.

Led by a team from Rice University, the $11 million effort centers on building a tool called PLINY, which a news release says will both “autocomplete” and “autocorrect” code for programmers, “much like the software that completes search queries and corrects spelling on today’s Web browsers and smartphones.”

The four-year project is funded by the Defense Advanced Research Projects Agency, or DARPA, and will feature more than two dozen computer scientists from Rice, the University of Texas at Austin, the University of Wisconsin-Madison, and a company called GrammaTech, the news release says.

The PLINY project is part of a DARPA program called Mining and Understanding Software Enclaves, or MUSE. That effort involves gathering “hundreds of billions” of lines of code from open source software and creating a searchable database where users can find stuff like vulnerabilities, the release says.

“We envision a system where a programmer writes a few lines of code, hits a button, and the rest of the code appears,” said Swarat Chaudhuri, an assistant professor of computer science at Rice who is co-principal investigator on the work. “And not only that, the rest of the code should work seamlessly with the code that’s already been written.”

The principal investigator, Rice computer science department chair Vivek Sarkar, said in the press release that the goal is something like “autocomplete for code, but in a much more sophisticated way.”

PLINY will use a data-mining tool that scans open-source code on an ongoing basis. From that, it will add to and refine the core database, which programmers can use when they need help finishing or debugging code, the release says. Like today’s autocorrect software, the engine will offer what it deems the most likely answer first, though programmers can browse other possibilities to find what they need.
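To make that “most likely answer first” idea concrete, here is a toy sketch of frequency-ranked completion over a mined corpus. Everything here is illustrative: the corpus, the function name, and the ranking-by-count approach are my assumptions, not PLINY’s actual design, which the release describes only at a high level.

```python
from collections import Counter

# Hypothetical stand-in for PLINY's database of mined open-source code.
# In the real project this would be hundreds of billions of lines.
mined_lines = [
    "for i in range(n):",
    "for i in range(n):",
    "for i in range(len(items)):",
    "for key in data:",
    "with open(path) as f:",
]

def suggest(prefix, corpus, k=3):
    """Return up to k corpus lines starting with `prefix`,
    most common first -- a toy analogue of ranked autocomplete."""
    matches = Counter(line for line in corpus if line.startswith(prefix))
    return [line for line, _ in matches.most_common(k)]

print(suggest("for i", mined_lines))
# The most frequently mined match is offered first; alternatives follow.
```

The key idea the sketch captures is the ranking: the engine surfaces its best guess immediately while keeping the runners-up a click away, just as the release describes.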

To be sure, the team has its work cut out. Chaudhuri said in the news release that PLINY will need to recognize and match similar patterns, regardless of differences in programming languages and code specifications. “The system will have to explore different ways of interweaving code retrieved through search into a programmer’s partially completed draft program and analyze the resulting code to make sure that it does not have bugs or security flaws,” the release says.

Along with Sarkar and Chaudhuri, other co-principal investigators from Rice include Chris Jermaine, associate professor of computer science, and Keith Cooper and Moshe Vardi.

What if it works?

To be sure, this all sounds great. Programmers currently have to write code one line at a time, the same way they’ve done it for decades. Who wouldn’t want to make that process faster and easier?

But if PLINY works as planned, it seemingly could have big implications for developers and testers alike.

For one thing, I wonder whether developers, now the kings of the software world, could see a few cracks in their crowns. Their role might shift from being the architects of software to simply pointing to what’s needed and letting a PLINY-type engine largely take it from there.

As has been pointed out elsewhere, the implication is that developers’ jobs might not take as much skill as they do now, and that companies might need fewer of those people to do the work.

I could also see testers having the same worries. A key proposed function of PLINY is automatically testing the software the engine produces for bugs. While I doubt any software engine is sophisticated enough to know how humans will react when they use it, it’s easy to foresee PLINY-type engines leading to more automation in testing, and less need for humans to do that work.

The upshot

Of course, all of this is pie-in-the-sky talk for now. The goals behind PLINY are huge, and the people working on it have a lot ahead of them. Obviously, there’s no guarantee they will be successful.

But with the rapid pace of advancement in modern technology, it’s reasonable to assume that even if this project doesn’t work as planned, somebody will create something like it that does. It seems unlikely that programmers 10 years from now will still be writing code line by line, or that testers’ jobs will look the same.

The lesson, I think, is that developers and testers alike must avoid resting on their technological laurels. Anybody in either field needs to keep pursuing the latest certifications and stay educated on the most cutting-edge work in their particular realm.

Engines like PLINY will continually handle more of the grunt work of development and testing. The trick for humans will be providing value beyond what those engines deliver, through creativity that machines can’t match.

The silver lining for humans is that automation will lead to better products, and it will free up both developers and testers to do work that is more creative — and more fun.