How Service Virtualization reduces your infrastructure footprint


Editor’s note: This blog is one in a series where we excerpt portions of Service Virtualization: Reality is Overrated, a must-have book from the guys who basically invented SV as a way to develop software better, cheaper and faster. The excerpt is republished with permission.

Solving the problem of available IT infrastructure for software development may not seem like the sexiest thing service virtualization can do, aside from its ties to “Green IT” initiatives for reducing energy consumption. However, the potential business value goes way beyond environmental impact, and the ROI it can generate sure is sexy once your company’s management understands it.

Every large company continuously accumulates additional infrastructure to support ongoing business and new service offerings. This includes buying more web servers and app servers, additional mainframe partitions, increased network capacity, more software licenses, exponentially larger databases, and additional transaction space on third-party and shared resources.

When conventional server virtualization emerged on the scene around the year 2000, businesses jumped on it with haste, and that consolidation created an immediate reduction in capital expense (CapEx) by cutting hardware and server room costs. And if we follow Moore’s law, we know that these commodities will keep getting faster, more compact, more efficient, and cheaper as technology advances.

So while the use of VMs and hypervisors hastened the reduction of “under-utilized” system resource costs and saved some power in the server room, it couldn’t touch the even costlier and faster growing infrastructure availability problems of “over-utilized” systems needed to support distributed enterprise applications.

Businesses that identify infrastructure availability as a growing problem are basically complaining about over-utilized system constraints in their software environments. These over-utilized resources cannot be easily replicated, controlled, or accessed by development teams and partners when needed. This results in endemic project delays and failures.

These constraints are very sore spots and should be easy to identify: you will find teams waiting around for access or for data setups to happen. As a general rule, we recommend conducting a formal or informal survey of development and product managers, then rolling out SV first wherever the most conflict and wait time occur. Here are some things to look for:

  • Core business applications that are handling critical daily transactions for customers; therefore, they “lock out” development teams due to necessity.
  • Enterprise back-end systems (SAP, Oracle apps, managed services, and mainframes) with too few test instances to support the number of distributed development and test teams that need them.
  • External SaaS applications or data services that charge per-use fees for pre-production traffic, have availability problems, or impose harsh “caps” and shut off access after a few non-customer requests.
  • IT operations and environment groups that are overwhelmed with software lab provisioning requests from multiple teams, often with little or no budget to improve their situation.
  • Performance labs that are seemingly stocked with technology, but suspiciously sitting idle most of the time due to access issues outside the performance lab.
  • Regulation or IT governance policies that prevent distributed teams, partners, and offshore resources from accessing the systems and data they need to work with to move forward.

Make a report card of all the preceding choke points you discover in your initial survey, and set a value to solving each of them in your environment. The value of eliminating a constraint would include the following:

  • Number of teams or resources waiting on access to the infrastructure constraint and potential wasted hours of labor.
  • Criticality or lost revenue of projects the constraint is delaying or derailing.
  • Cost of replicating another copy of the constraint or buying more test regions and partitions.
  • Amount of time spent manually configuring the resource, including importing and cleaning up datasets for different teams’ testing activities.
  • Reduced impact of development activities and changes on infrastructure that handles live customer transactions.

The preceding list need not be a mathematical calculation. For starters, simply rate the constraints on an estimated 1–5 scale of severity. One piece of infrastructure likely to make your most-wanted list for optimization will be the mainframe.
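To make the rating concrete, here is a minimal sketch of such a report card in Python. The constraint names and ratings are hypothetical examples, not data from any real survey; the scoring dimensions simply mirror the bullet list above, each rated on the suggested 1–5 severity scale.

```python
# A minimal sketch of the constraint "report card" described above.
# All constraint names and ratings below are hypothetical; the
# dimensions mirror the bullet list (waiting teams, project
# criticality, replication cost, manual setup time, production risk).

DIMENSIONS = ["teams_waiting", "criticality", "replication_cost",
              "setup_time", "production_risk"]

constraints = {
    "mainframe test region": {"teams_waiting": 5, "criticality": 5,
                              "replication_cost": 5, "setup_time": 4,
                              "production_risk": 4},
    "third-party SaaS API":  {"teams_waiting": 3, "criticality": 4,
                              "replication_cost": 2, "setup_time": 2,
                              "production_risk": 3},
    "performance lab":       {"teams_waiting": 2, "criticality": 3,
                              "replication_cost": 4, "setup_time": 3,
                              "production_risk": 1},
}

def severity(ratings):
    """Total severity: sum of the 1-5 ratings across all dimensions."""
    return sum(ratings[d] for d in DIMENSIONS)

# Rank constraints so the worst choke point is virtualized first.
ranked = sorted(constraints, key=lambda c: severity(constraints[c]),
                reverse=True)
for name in ranked:
    print(f"{severity(constraints[name]):>2}  {name}")
```

No weighting is applied here; in practice you might weight lost revenue more heavily than setup time, but even this crude sum is enough to order the choke points for an SV rollout.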


Yeah, this part is boring. It’s about old mainframe technology; there’s really not much development happening in there. It’s stodgy, monolithic stuff … right? Right?

Well, we now realize nothing could be further from the truth. Many enterprise IT leads we talk to say they still spend as much as 50–60 percent of their development and change integration time within mainframe environments. In fact, when you take a closer look at most mainframes, you find they are not monolithic at all. Mainframes encompass whole landscapes of service-oriented apps in and of themselves.

For most enterprises, business rides on the mainframe. Within these groups, you find all the same constraints as in the service-oriented world: different teams maintaining business logic for interconnected components across different regions, data sources, and so on. Mainframe development teams often find themselves constrained for access, waiting for critical data scenarios to be set up in other mainframe regions, and in conflict over resources.

IT operations teams don’t want to rock the boat for real customers by allowing developers and testers to play “under the hood,” yet new test region environments are extremely difficult and expensive to produce. SV should be practiced in a similar fashion within the mainframe, capturing and modeling dependencies between components: for instance, simulating the other half of CICS-to-CICS transactions, or gathering scenarios from an IMS region as it makes calls to the data layer.
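For readers who want the capture-and-replay idea in concrete terms, here is a minimal sketch in Python. It is not any real SV product or mainframe API: the “live backend” is just a function standing in for a constrained system (such as the far side of a CICS transaction), and the hypothetical `VirtualService` class records its responses so they can later be replayed without touching the real system.

```python
# A minimal capture-and-replay sketch of the SV idea described above.
# All names are illustrative; this is plain Python, not a real SV tool
# or mainframe interface.

class VirtualService:
    def __init__(self, live_backend):
        self.live_backend = live_backend
        self.recordings = {}        # request -> captured response
        self.simulating = False     # False = capture, True = replay

    def call(self, request):
        if self.simulating:
            # Replay the modeled behavior; the real system is untouched.
            return self.recordings[request]
        # Capture mode: pass through to the live system and record.
        response = self.live_backend(request)
        self.recordings[request] = response
        return response

# Hypothetical backend standing in for a constrained mainframe region.
def live_account_lookup(account_id):
    return {"account": account_id, "balance": 100}

vs = VirtualService(live_account_lookup)
vs.call("ACCT-1")            # recorded against the live system
vs.simulating = True         # the constrained region is now freed up
print(vs.call("ACCT-1"))     # served from the virtual model instead
```

Real SV tooling captures full message traffic and parameterizes it into data-driven models, but the pass-through, record, and replay lifecycle sketched here is the same.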

In short, don’t leave efficiencies on the table inside the mainframe. We must ensure that we get under the hood and liberate mainframe development from constraints with Service Virtualization, in addition to the upstream application layers.

Before, enterprises had only two choices when they needed to address constrained infrastructure: suffer the expense of delays, or write a huge check.

Without realistic infrastructure, our applications won’t successfully get to market — but the cost of building more of these complex environments through conventional means is becoming so high that it almost seems like a joke when you hear folks tell you what it takes. We know that VPs of development and IT directors are delivering unwelcome purchase requests like these to executives all over the world when asked, “What do you need to do this right?”

Try building an environment that is even just 25 percent of the size of production. That’s configuring every server and licensing every component — a massive effort and cost just to get a version that will still never be an adequate simulation of production.

It’s not like companies aspire to attain a big infrastructure — that’s just what happens when a company gets big. Take for instance a company like PayPal. When it was in startup mode in 1998, a small development team probably built the first prototype of their app in two to three months. But fast forward to PayPal today as part of a huge enterprise inside eBay. Now it functions more like a bank. There are more hooks to other systems and baggage to contend with for each successive release, more customers relying on promised support, bigger databases and more services and systems to talk to, each of which may be owned and managed by different groups.

The problem with infrastructure costs for development, test, and partner labs is that they create a very big hit to CapEx, in the form of big purchases and big-bang implementation projects. But that’s not all: each new infrastructure buy creates a very large and growing operational expense (OpEx) for maintaining and upgrading the lab environment constantly in order to keep up with configuration changes, increased data, etc. The more environment infrastructure you buy, the more that infrastructure becomes a job in and of itself, with its own dedicated maintenance and support team.

A leading firm we know wanted to ensure flawless partner integration and performance, so management demanded that IT build a certification environment representing 100 percent of production. The IT department came back with an estimate of $60 million for starters, plus at least $15 million/year maintenance to try to keep it current! That was just not going to happen!

Many companies don’t count “cost avoidance” as a hard-value result. But that $60 million outlay estimate wasn’t ridiculous given the complexity at hand. Whether the firm would have bitten the bullet or not, it was clear they couldn’t survive for long without a more complete environment.
Using SV, that company was able to replace most of that expected development infrastructure outlay with virtual models and virtual data management within two quarterly release cycles, at a fraction of the cost.