You have test cases for peak customer demand, for negative returns, and for compatibility with every customer platform under the sun. Maybe now you even need a test case for high humidity.
That’s right, according to a recent report from the performance monitoring firm Apteligent, apps run about 15% slower in the summer. Why? Because science:
“The explanation is due to the science behind the propagation of radio waves,” the company says. “Increases in water vapor cause attenuation of the waves, especially at higher frequency bands. This means that the humid summer months will cause degradation in signal strength, and slight delays in data delivered to the handsets of your customer base.”
The company’s data showed a pronounced difference in latency between winter and summer, on the order of 15 percent. Apteligent didn’t cite much data for its findings, though; the study was informal, based on performance data from its clients, which include big firms like Netflix.
Over at IEEE Spectrum, Amy Nordrum took a closer look at Apteligent’s humidity claim and found that, while it can’t be confirmed, it’s certainly plausible. She also wrote that the difference detected between winter and summer is relatively tiny — about 60 milliseconds. Most customers probably wouldn’t notice — I never would.
Then again, if you’re running an app highly dependent on low latency and minimal packet loss between Point A and Point B — think banking or commodity trading — 60 milliseconds might as well be hours or days. There are traders on Wall Street who would kill for a 60-millisecond advantage over other traders.
So, yes, it matters.
Nordrum quoted a senior software engineer who says even blink-of-an-eye latency is worth worrying about and that “the Apteligent report might mean that developers should start to make an effort to test their apps in humid conditions as well as on dry days.”
Said the engineer, Eric Richardson:
“Up until now, I don’t think weather has ever been on our minds … But now that it is, I guess it kind of brings in the perspective to do more realistic testing as opposed to just sitting in the office connected to Wi-Fi.”
Au contraire, Mr. Richardson. You actually can sit there in the office on your Wi-Fi and test any imaginable test case — even a 60 millisecond increase in latency due to high humidity — with Service Virtualization.
With SV, you have the on-demand ability to test a virtually unlimited set of test cases and scenarios.
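To make that concrete, here is a minimal sketch of the idea in Python. The names (`virtualize`, `quote_service`) are hypothetical, not part of any real SV product: a virtual service wraps the real handler and injects a configurable delay, so you can reproduce the roughly 60-millisecond "summer humidity" penalty from your desk, on Wi-Fi.

```python
import time

# Hypothetical virtual-service helper: wraps a handler and injects a
# configurable network delay before returning the real response.
def virtualize(handler, added_latency_ms=0):
    def stub(request):
        time.sleep(added_latency_ms / 1000.0)  # simulate propagation delay
        return handler(request)
    return stub

def quote_service(request):
    # Stand-in for the real backend the app under test would call.
    return {"symbol": request["symbol"], "price": 101.25}

# "Summer" version of the service: identical responses, ~60 ms slower.
summer_quotes = virtualize(quote_service, added_latency_ms=60)

start = time.perf_counter()
response = summer_quotes({"symbol": "ACME"})
elapsed_ms = (time.perf_counter() - start) * 1000
```

The same test suite can then run against the fast "winter" stub and the slow "summer" stub and assert that timeouts, retries, and user-facing spinners behave sensibly under both.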
“To really test a system, you’ve got to be able to recreate all sorts of unusual conditions. These may be in the form of specific error messages, or unusually large message payloads, or slow response times from remote method calls. But it’s not always easy to recreate these unusual conditions with traditional testing techniques, just like it’s not easy to get the moon to rise at a specific point on the horizon.
To me, this is perhaps the strongest selling point for Service Virtualization: the ability to recreate not just the 99 percent of ‘happy path’ test cases, but also the less than 1 percent of ‘unusual’ conditions on demand. It means the difference between you learning how your system fails and your customers learning how your system fails. You choose.”
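Those "unusual conditions" can be sketched the same way. This illustrative stub (the endpoint name and scenarios are invented for the example) lets a test select, per case, a happy-path response, a specific error, or an unusually large payload — exactly the kinds of conditions the quote above says are hard to recreate with traditional techniques.

```python
# Hypothetical scenario table for a virtual endpoint: each test case
# picks the condition it wants the "backend" to exhibit.
RESPONSES = {
    "happy": {"status": 200, "body": {"balance": 42.0}},
    "error": {"status": 503, "body": {"message": "Service Unavailable"}},
    "huge":  {"status": 200, "body": {"blob": "x" * 10_000_000}},  # ~10 MB payload
}

def virtual_endpoint(scenario):
    # Return the canned response for the requested scenario.
    return RESPONSES[scenario]
```

A test for the error path simply calls `virtual_endpoint("error")` and verifies the app degrades gracefully — no waiting around for the real backend to fail on its own.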