How service virtualization battles the enemy of software constraints

Battling software constraints

If it’s true that service virtualization is helping companies save money and produce better software, faster — and it is true — then why do some enterprises continue to do things the old, tired way? As we speculated a few weeks ago, one impediment could be a fundamental lack of understanding about why projects run late, run over budget and, ultimately, run off the rails.

So, for the benefit of upper-level executives trying to get their heads around the key issues, we thought it was time to go back to basics on service virtualization and answer some essential questions about what the technology is and how it works. We’re excerpting pieces of a great book on the topic, Service Virtualization: Reality Is Overrated. How to Win the Innovation Race by Virtualizing Everything, by John Michelsen and Jason English. The book was published in 2012, but it still holds up quite well as a place to start in understanding service virtualization.

In this installment, we’ll focus on the problem vexing all software developers: constraints. It’s a problem that only gets worse with each iteration of technology, as apps must increasingly “talk” to countless other apps.

Here’s an excerpt from Chapter 4:

Constraints: The enemy of agility

Constraints are any dependencies, outside the control of the team responsible for a task, that delay that task’s completion in the software development lifecycle. Constraints are the primary reason business software projects are delivered late, over budget, and with poor quality.

Ask Bob the development manager why his team missed another delivery deadline, and you will never hear Bob say, “It’s because we’re just not smart enough…” or “My team just isn’t motivated enough…” You will instead likely hear Bob rationalize the failure like this:

“We did everything we could do. Sally’s team didn’t finish in time for us.”
“We spent weeks waiting for mainframe access.”
“We spent more time doing data setup and reset than we did testing.”

Constraints kill agility. Constraints will keep you up at night. The only reassuring thing about constraints is that just about every enterprise IT development shop has them in spades, so you aren’t alone. Your definitions may vary, but here are four root constraints:

Unavailable systems and limited capacity

All large companies must deal with the environmental constraints of unavailable systems, such as mainframes, and of incomplete components. Teams need an appropriate system environment in place to develop new functionality and validate that the application is working correctly. Examples of unavailable systems include:

  • A key mainframe used for important customer transactions has only one shared test partition, which IT ops makes available to your development team for only a few hours a week.
  • The test environment for the system is 1/100th the scale and performance of production. You cannot sufficiently performance-test your application because of the downstream capacity limitations.
  • A critical web service that your app will call is still under development by another team — and not expected to be available until just days before your delivery deadline.
  • A third-party, SaaS-based transaction service provider allows only five test transactions per day on its system before it starts charging you for every transaction, not nearly enough to cover your scenarios.

What happens when the preceding situations occur? The project stops, and teams simply wait. There’s a reason you often find active foosball or ping-pong tables in the development areas of a company, but none in customer service.

Conflicting delivery schedules

While lack of availability is the most commonly identified constraint, it is certainly not the only reason software projects fail to meet expectations. There’s also the fact that developers are often coding blind. Unless the requirement your team is developing for is incredibly simple, the app under development is seldom self-contained. The code will eventually interact with components that are owned and managed by other teams, each of which may be on its own independent develop-test-release timeline.

Though you may try to split up the functionality so development teams can decouple and work in parallel, most business applications aren’t so easily compartmentalized. There is usually some need to synchronize with the changes of other teams.

Data management and volatility

As software becomes more complex and distributed, and handles more customers and transactions over time, the data it generates grows exponentially year after year. Some systems of record have become so large and unwieldy (petabytes, or even zettabytes) that they can barely be managed. You have dozens of data sources in a wide variety of storage containers, and the data problem is only getting worse. The term “big data” was coined to describe the massive amount of unstructured data being captured in consumer and transaction activity online. Data is a big, hairy constraint for every enterprise development effort.

Have your teams ever struggled to set up just the right scenarios across multiple systems, only to “burn” them all with a single test cycle? Have you seen issues with regulatory or privacy rules about exactly how customer data is used within development and testing cycles? Or found it difficult to re-create scenarios in test systems for those unique types of edge conditions that happen in production?

The most obvious solution is the conventional practice of Test Data Management (TDM): extracting a subset of production data directly from all the involved systems into a local TDM database, and then importing that data into the non-production systems.
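
To make the mechanics concrete, here is a minimal sketch of that extract-and-import flow in Python, assuming a SQL read replica of production and a local staging database. The connection targets, table, and column names are all hypothetical:

    import sqlite3

    # Hypothetical connections: a read replica of production and a local
    # TDM staging database (SQLite stands in for both here).
    prod = sqlite3.connect("prod_replica.db")
    tdm = sqlite3.connect("tdm_subset.db")

    # 1. Extract a small, representative subset of production records.
    rows = prod.execute(
        "SELECT id, name, balance FROM customers WHERE region = ? LIMIT 100",
        ("US-EAST",),
    ).fetchall()

    # 2. Import the subset into the non-production system, masking the
    #    personally identifiable field (the name) along the way.
    tdm.execute("CREATE TABLE IF NOT EXISTS customers (id, name, balance)")
    tdm.executemany(
        "INSERT INTO customers VALUES (?, ?, ?)",
        [(cid, f"customer_{cid}", bal) for cid, _name, bal in rows],
    )
    tdm.commit()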

But there are many reasons the traditional approach to TDM isn’t working:

  • Fragile data: Applications change often, requiring frequent, precisely timed extract, manipulation, and setup activities.
  • “Burned” data: Live transactions often “burn” a carefully constructed set of test data upon use (your previously zero-balance customer now has a balance!), making the data unusable for that purpose again and requiring either re-import or painstaking manual undoing of the changes (see the sketch after this list).
  • Complexity: Heterogeneous sources — SQL, IMS, VSAM, flat files, XML, third-party service interfaces — vary widely, whereas most TDM solutions handle only a subset of possible data sources. Moreover, big data brings non-relational data sources to the mix.
  • Security and regulations: Strict laws and industry standards govern the protection of private customer data (ID and bank account numbers, medical records, etc.) by development and test teams, as well as accountability standards for how that data is stored and shared.
  • Labor- and cost-intensive: Many development shops report that 60 percent or more of test cycle time is spent exclusively on manual data configuration and maintenance activities.
  • Difficult-to-reproduce scenarios: It’s hard to isolate and recreate specific input-and-response scenarios.
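
The “burned” data problem in particular is easy to see in a toy example. In this hypothetical sketch, the dictionary stands in for a shared test database that persists between test cycles, and the test mutates the very fixture it depends on:

    # Hypothetical fixture: a customer painstakingly set up with a zero
    # balance. test_data stands in for a shared, persistent test database.
    test_data = {"customer_42": {"balance": 0}}

    def test_first_purchase_on_zero_balance_account():
        account = test_data["customer_42"]
        assert account["balance"] == 0   # precondition the scenario requires
        account["balance"] += 50         # the live transaction under test
        assert account["balance"] == 50

    # The first run passes, but the fixture is now "burned": the balance
    # is no longer zero, so the same test fails on the next cycle until
    # someone re-imports or manually resets the data.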

Third-party costs and control

Not all companies suffer the constraint of data management equally, but third-party costs become a do-or-die aspect of application development as architectures move toward ever more composite and cloud-based designs.

Custom software development and management of applications can be incredibly expensive. Therefore, it makes a lot of sense for the enterprise to offload systems and functionality, whenever possible, to another company that specializes in providing that functionality via a service-based model. This third-party provider then charges the company a fee for any access or remote use of that SaaS offering, cloud service, or managed system resource.

Let’s look at a major airline with a critical customer ticketing application that is under constant development. It outsources the reservation management aspects of its business to a GDS (global distribution system) like Sabre or Galileo, the payment management to another company’s payment gateway, and so on, paying a fee each time its ticketing app submits a request to these third-party services. These fees are perfectly acceptable in production, where they are justified by the revenue opportunity the airline gets from selling the ticket. But in preproduction, that transaction fee is considered by the business to be a cost of development, not a cost of revenue. Think about the number of unique customer travel scenarios that must be validated, as well as the peak levels of customer traffic the airline must develop and tune its app to handle.
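
None of the following figures come from the book; they are invented purely to illustrate how quickly per-transaction fees compound before release:

    # Hypothetical, illustrative numbers only.
    fee_per_request = 0.10        # dollars the GDS charges per test call
    scenarios = 2_000             # unique travel scenarios to validate
    runs_per_scenario = 25        # regression runs over a release cycle
    load_test_requests = 500_000  # a single peak-traffic performance test

    functional_cost = fee_per_request * scenarios * runs_per_scenario
    load_cost = fee_per_request * load_test_requests
    print(f"Functional test fees: ${functional_cost:,.0f}")  # $5,000
    print(f"One load test:        ${load_cost:,.0f}")        # $50,000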

*****

As the authors go on to describe, the manual practice of stubs and mocks — creating your own static versions of dependencies to work around constraints — is not enough to overcome these problems. Because mocks and stubs are not dynamic, and because they take so long to create, you effectively end up just creating more constraints.
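
For context, a hand-rolled stub might look something like the following sketch (all names are hypothetical). It unblocks the team, but it is frozen in place: every caller gets the same canned answer regardless of input.

    # A hand-rolled static stub for a reservation service that is not yet
    # available to the team.
    class ReservationServiceStub:
        def find_flights(self, origin, destination, date):
            # Same canned response for every route, date, and seat count.
            return [{"flight": "XX123", "price": 199.00, "seats": 4}]

        def book(self, flight, passenger):
            # No sold-out flights, fare changes, latency, or error paths.
            return {"status": "CONFIRMED", "record_locator": "ABC123"}

Every realistic behavior has to be hand-coded case by case, and the stub must be rewritten whenever the real service’s contract changes; that is how static stubs quietly become constraints of their own.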

That’s why the inventors of service virtualization first went to work on the solution.

Because many of these issues are invisible to business executives, they often don’t appreciate the problem of software constraints on IT teams until they see the unfortunate end result: a software delay or serious malfunction that makes headlines when it affects customers. Service virtualization can erase that worry.