The ability of the Stuxnet worm to sabotage Iran’s nuclear program showed just how vulnerable critical infrastructure can be to a cyber attack. While we might cheer on whoever it was that attacked Iran (most likely agents working on behalf of the United States and Israel), such an attack could easily be pointed in our direction someday.
Testing that employs such methods as Service Virtualization could help to identify vulnerabilities in our critical utility networks, but to date little has been done toward that end. Meanwhile, utilities around the world continue to use vulnerable equipment.
We spoke with Eireann Leverett, senior security researcher at IOActive and a self-proclaimed “internet harm reductionist,” about some of the challenges in leveraging better testing to improve electrical network security and reliability.
SV: What are the biggest concerns with respect to utility security?
EL: There are plenty of vulnerabilities on these devices as much of the code was written more than 10 years ago. Any of the new attacks or things we have learned in computer science and vulnerability research can still affect those older systems.
There are several protocols used to speak to these devices that involve little authentication or cryptography. Originally these networks ran on copper, and it would have been difficult to gain access to them. But utilities have been switching to less expensive IP networks, while the control systems still don’t have security built in.
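To make the point concrete, here is a minimal Python sketch of a Modbus/TCP "read holding registers" request, one of the protocols commonly found on these networks. The framing shown is standard Modbus; what matters is what is absent: there is no field anywhere in the frame for a credential, signature, or session token.

```python
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP read-holding-registers request (function code 3).

    Note what is missing: no password, no signature, no session token.
    Anyone who can reach the device on TCP port 502 can send this.
    """
    # PDU: function code, starting address, register count (big-endian)
    pdu = struct.pack(">BHH", 3, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0),
    # remaining length (PDU plus the unit id byte), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

The point of the sketch is visual: every byte of the request is structural, and a listening device has no protocol-level way to tell an operator's console from an attacker's laptop.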
The real issue is one of manpower and testing. Engineers know what they need in terms of reliability, but when it comes to computer science and security, the engineering has not been as rigorous.
Some of the software has low-quality assurance levels, and when it comes to security it is exceptionally poor. This is a societal issue as much as a technical one. We want cheap electrical power, but we also want electricity that continues to flow.
SV: Are these just technical issues, or have there been real threats in the field?
EL: Studies in the literature and databases of industrial incidents have documented 200 to 400 incidents. But these are not all directed attacks: some were accidents, while others may have been targeted.
There are also cases of attacks that were not designed for industrial control systems but affected them anyway. For example, the Slammer worm infected the safety monitoring system of the Davis-Besse nuclear plant in Ohio in 2003, while the plant was offline. In another case, the Zotob worm knocked 13 of DaimlerChrysler’s plants offline for an hour in 2005. These incidents are relatively infrequent, but they are increasing.
SV: What are examples of power infrastructure vulnerabilities?
EL: I have been focusing on industrial Ethernet switches and have found some vulnerabilities, which could allow a hacker to access some of these older systems. Most of this is because the devices are outdated in security terms, even though they are brand new in engineering terms.
Adam Crain, an independent security researcher, has done research on improper input validation in the DNP3 protocol for remote terminal units.
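The class of bug Crain studied can be illustrated with a toy fuzzing helper. This is not his tool; it is a minimal sketch that corrupts the declared length octet of a DNP3-style link-layer frame, producing exactly the kind of malformed input that improper validation fails to reject.

```python
import random

DNP3_START = b"\x05\x64"  # DNP3 link-layer start octets

def mutate_length_field(frame: bytes, rng: random.Random) -> bytes:
    """Return a copy of a DNP3-style frame with a corrupted length octet.

    A receiver that trusts the declared length without checking it
    against the bytes actually received may over-read or crash --
    the 'improper input validation' class of bug described above.
    """
    mutated = bytearray(frame)
    mutated[2] = rng.randrange(256)  # length octet follows the start bytes
    return bytes(mutated)
```

A real fuzzer would mutate every field, track crashes, and replay minimized cases, but even this one-byte mutation is enough to exercise length-validation logic on a test bench.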
SV: What is the state of quality assurance for power infrastructure?
EL: QA for engineering has been challenging. It took the automotive industry a while to recognize that crash testing had value. The same thing now applies in the power industry, where destructive testing of logic and software can teach us how to keep systems up and running. But traditionally we don’t test live systems. We can examine some of them passively, but you don’t want to send invalid inputs to a live production system just to see whether it crashes. That testing has to be done in a separate environment.
A vendor that is serious about this will test the physical properties of the device, but might not test all the inputs and outputs. The people who really need to do this are the buyers. They have a tendency to assume that since they are spending $5 million on the equipment, it should work fine on their power networks. But in many cases integrators connect equipment from one vendor, programmable logic controllers (PLCs) from another manufacturer, and industrial controls from still others. Buyers need to be testing these combinations at the point of purchase.
If we can get vendors to do more in the security realm, things will improve. Right now, they are not financially motivated to do this testing. Interest in this kind of testing has risen, but it is not at the top of their agenda. They are selling control systems, but it is up to the utilities deploying them to make sure they are secure. This incurs a kind of security debt.
SV: What is security debt?
EL: Like any kind of debt, if you live with it for too long, you become comfortable paying down the interest rather than paying off the principal, like a consumer who settles into sending in the minimum monthly payment. With software that has been around for 20 years and might be around for another 20, your more immediate concern is keeping the systems up and running.
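The analogy can be made literal with a few lines of arithmetic. In this sketch (the figures are invented for illustration), a payment that only covers the interest leaves the principal, like an unaddressed vulnerability backlog, exactly where it started:

```python
def remaining_principal(principal: float, monthly_rate: float,
                        payment: float, months: int) -> float:
    """Track a debt where interest accrues each month, then a fixed
    payment is made. If the payment only covers the interest, the
    principal never shrinks."""
    for _ in range(months):
        principal = principal * (1 + monthly_rate) - payment
    return principal
```

At 1% monthly interest on $100,000, the interest-only payment is $1,000: after two years of such payments the balance is still $100,000, while any payment above that actually retires debt. The security-debt parallel is that effort spent only on firefighting keeps the backlog of vulnerable systems constant.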
We have a lot of unsafe devices out there that we would like to refresh, but as a nation we cannot afford to pull them all out. It is important to make sure there are people in the utilities focused on tracking vulnerabilities and getting them fixed as much as they can. Part of the issue is the budget to test. Maybe that is where legislation could come in handy; for example, legislators could offer utilities tax breaks for fixing existing vulnerabilities.
SV: What could be done to ensure a more secure software development lifecycle?
EL: The No. 1 thing is to test new systems thoroughly when they are put into place and to make sure developers are following secure coding practices. Utilities could ask the vendors selling them the systems to produce documentation, or to undergo audits, covering their practices for a secure software development lifecycle. The vendors would have to show that they use static analysis and have taken measures to protect against buffer overflows and other vulnerabilities. They would have to get into a dialogue about security.
Vendors have a tendency to focus on new features rather than examine vulnerabilities in their systems. They don’t tend to look at the code base for buffer overflows or hard-coded maintenance accounts. The vendor thinks it is enough to have a secure password. Unfortunately, reverse engineering the code that talks to these systems can sometimes turn up that user name and password. This has the potential to create vulnerabilities at the national level.
The drive is for new features rather than making the code we already have more robust.
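The reverse-engineering step Leverett describes often starts with nothing more sophisticated than extracting printable strings from a firmware image, the same first pass the Unix `strings` tool performs. A minimal sketch of that pass (the sample bytes in the usage note are invented):

```python
import re

def printable_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Extract runs of printable ASCII from a binary blob, the first
    thing a reverse engineer checks when hunting for hard-coded
    credentials in firmware."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # runs of printable bytes
    return [m.decode("ascii") for m in re.findall(pattern, blob)]
```

Run against a firmware image, a grep for words like "pass" or "admin" over the output is often all it takes to surface a hard-coded maintenance account, which is why relying on the secrecy of such credentials is no defense.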
SV: What challenges does integration pose for security?
EL: There are concerns not just about the integration of devices in a network, but also at a component level. One of our researchers, Reid Wightman, found vulnerabilities in the CoDeSys library used in over 100 PLCs. What often happens is that development managers reuse these libraries because it is cheaper than rewriting the code. But then you inherit the vulnerabilities in the library.
Wightman and I wrote a whitepaper for which we scanned IPv4 addresses and found over 600 devices that had not been patched a year after the discovery was reported. This is, in part, due to the patching problem in the utilities industry: whereas home computers might be patched once a week, patching is harder for utilities, where the equipment needs to be running constantly.
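A survey like the one in the whitepaper ends with a simple version comparison: which reachable devices still report firmware older than the fixed release? A minimal sketch of that last step, with invented addresses, version strings, and thresholds:

```python
def parse_version(text: str) -> tuple[int, ...]:
    """Turn a dotted version string like '2.3.1' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def unpatched(results: list[tuple[str, str]], fixed: str) -> list[str]:
    """Given (address, reported firmware version) pairs from a scan,
    return the addresses still running firmware older than the fix."""
    threshold = parse_version(fixed)
    return [addr for addr, version in results
            if parse_version(version) < threshold]
```

The hard part of such a survey is not this comparison but the scanning itself, done responsibly against devices one is authorized to probe; the comparison merely turns banners into the headline number.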
It is not practical to pull the equipment offline to patch. Utilities need to go to the vendors and ask for them to redesign these systems to make this easier. This would require systems with double redundancies so they don’t have to go offline during the process. This is a problem utilities cannot solve on their own, but as a society we can solve it.
On a network layer, you might have perfectly valid input and output characteristics on the devices, but as you put those pieces together you sometimes end up with a larger attack surface.
SV: What are the problems with simulating systems that combine logical and physical devices?
EL: Usually, the challenge is the volume of interaction. Also, if I build a simulation tool for GE systems, I need to find out how ABB systems work as well. When it comes to simulating the power grid as an entity, it is challenging from an engineering, mathematical, and computational-resource perspective.
It is tough to estimate the characteristics of these control systems and understand the implications of, say, opening a switch in one place that could affect other equipment. Thus, it is difficult to simulate the effect of a sophisticated attack on the grid.
You can simulate, in a business sense, the impact on the economy. You can simulate the spread of a virus from system to system. These models always involve a level of estimation. I have not seen one that is accurate in the real world.
There are good tools for analyzing the software and firmware on these systems. But even these basic practices are not widely adhered to in industrial control systems. A lot of older computer science techniques could be applied if vendors took them seriously, but this will only happen if the utilities start demanding these in their products.
SV: What are the limits of security initiatives like GridEx in addressing these problems?
EL: GridEx has helped, but it tests different security properties than those at the individual utility level. It is focused on the security and resilience of the nation against cyber attacks, with an emphasis on information sharing among utilities, regulators, and the military. This has helped the nation become more resilient, but it does not fix the security debt and vulnerabilities in existing products.
SV: What could improve tools for modeling and testing these systems?
EL: The IEEE has a reference model for state estimation with about 30-40 test cases, but it covers only a small network, not the network you have actually built. And it is just for engineering, not cyber-security.
Power engineering is a real and practical discipline, but the sheer number of variables in building and simulating a sophisticated model is prohibitive. If we had more computer scientists working on this, they could bring better tools for improving the speed of modeling, while the electrical engineers know what is happening on the ground. By combining those two disciplines, we could get more realistic results.