Security myths and architectural realities


Author: Bruce Byfield

From user accounts to viruses, security is one of the basic concerns in computing. Yet, although everybody talks about security, much of what average computer users believe about it is inaccurate, because their explanations refer primarily to symptoms of system behavior rather than to principles of system design. In this interview with Dan Razzell, a computer scientist with over 25 years of experience in system architecture and security, we discuss the differences between how average computer users and security professionals approach security.

Razzell was system manager at the Laboratory for Computational Intelligence at the University of British Columbia, as well as director of operations for WestGrid, Canada’s leading supercomputer facility. He is currently an independent security consultant. Razzell positions himself as a security architect, meaning that he approaches security as a process grounded in design and implementation.

Definitions of security

NF: In the media and popular imagination, security is usually defined as preventing crackers from accessing a system, either directly or by means of a virus. By describing yourself as a security architect, you seem to imply another definition. Can you explain?

DR: We need to begin with a theory of system security, and that leads us to think in terms of architecture. The architectural goal is to ensure that you have control over your information. While malicious attack is certainly one way to put information at risk, not all risks imply an adversary. Our theory has to cover them all. Most are quite unintended, caused either directly by human error independent of the system or indirectly by some consequence of system design or implementation.

In order to be secure in the foregoing sense, a system must not only do what you expect, but also nothing that you don’t expect. Although we have to give equal weight to both of these criteria, unfortunately we tend to focus mostly on one side of the relationship. That effect will in turn carry through from requirements to design, implementation, testing, deployment, and use. Time passes. Then one day, the system does something surprising, such as damaging a bunch of data or releasing it onto the net. But we never specified that it shouldn’t in our requirements. Of course, mistakes can be made at any stage in system development, but the point is that we can’t even consider them as mistakes except in reference to our stated requirements.

That may seem obvious in theory, but in practice it’s hard to think of every consequence. I’ll give an example. When organizations began to network Unix systems together back in the 1980s, it was attractive to have them share a common user namespace, so that users could access shared filesystems, distribute their computation across multiple systems, and so on, all secured using the same identity. Very nice. But sharing the user accounts implied sharing the root account, since root is an account too. So if you could get root on any system, say by booting off a diagnostic disk, you would then have root across the whole namespace. That combination of design requirements led to an unintended effect.
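As a rough illustration of the effect Razzell describes, the toy Python sketch below models a handful of hosts bound to one shared account map; the host and account names are invented. Because root is just another entry in the shared map, an identity that is root anywhere is root everywhere.

```python
# Toy model of a shared Unix account namespace (names are hypothetical).
# Because root (uid 0) is just another entry in the shared map, gaining
# root on any one host grants root on every host that shares the map.

SHARED_PASSWD = {          # one account map distributed to all hosts
    "root":  {"uid": 0},
    "alice": {"uid": 501},
    "bob":   {"uid": 502},
}

HOSTS = ["cpu01", "cpu02", "fileserver"]   # all bind to the shared map


def reachable_as(account: str) -> list[str]:
    """Hosts on which this account is valid: with a shared map, all of them."""
    return HOSTS if account in SHARED_PASSWD else []


# An attacker who becomes root on cpu01, say by booting a rescue disk,
# holds an identity that every other host in the namespace also trusts.
print(reachable_as("root"))   # ['cpu01', 'cpu02', 'fileserver']
```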

We have to look critically at requirements. But architecture and engineering also refer to a common body of knowledge, expressed somewhat imprecisely in terms of principles: ideas like simplicity, symmetry, modularity, composability, scalability. Though not strictly necessary, a set of principles is a powerful way of remembering what works. They help to guide us during the creative process of design and implementation. They can even help us to elicit clearer requirements.

There are some principles specific to security as well, for example containment, least privilege, and validation. Apart from being somewhat more specialized, they can be applied just like other principles to development and analysis, which suggests to me that security really does operate in the same manner as architecture and engineering.

NF: Most people think of security as something that they add to a system. They install certain types of software, such as a firewall or anti-virus program, or change a number of configuration settings, and their system is secure. From our past conversations, it seems that you disagree with this perception. Can you tell us why?

DR: More features inevitably mean more ways for things to go wrong. That violates the principle of simplicity, to which security happens to be very sensitive. To put it another way, if you can see a way to improve security by removing a feature, that’s always better than getting the same effect by adding one.

When you have practice in thinking in terms of security principles, it can sometimes seem pretty obvious when a certain feature would be unwelcome. A virus is only harmless data, isn’t it, unless your system is designed to run it on sight. That’s why most system architects know better than to have the system treat any old string of bits as executable content. You don’t need a set of requirements to know that such behavior is no feature at all but a classic vulnerability. And it’s likewise obvious that the same principle applies to email attachments, document macros, controls embedded in web pages, click to install, and so on.
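As a rough sketch of the distinction Razzell is drawing, the Python fragment below treats the same bytes first as inert data and then shows, without ever invoking it, what running content “on sight” would look like; the payload is invented for illustration.

```python
# Sketch of the difference between treating content as data and treating
# it as executable. The payload below is made up for illustration.

payload = b'import os; os.system("echo this could have been anything")'

# Treating the bytes as data: they are inspected, stored, displayed.
# Nothing runs, so the content cannot act on the system.
print(f"received {len(payload)} bytes")

# Treating the same bytes as executable "on sight" is the classic
# vulnerability: the content would act with the privileges of the user.
# (Deliberately defined but never called here.)
def run_on_sight(blob: bytes) -> None:
    exec(blob.decode())        # never do this with untrusted input
```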

But suppose that you went ahead and designed a system which heavily depended on these features. Now you’re really in trouble. You can’t just disable the feature, and you would have a lot of redesigning to do if you wanted to get that feature out of the system. It might seem more expedient to add some new feature which tries to hide the symptoms. But security isn’t about symptoms. The essential design vulnerability never really goes away. It just becomes disguised.

I don’t mean to imply that every new security feature is a bad idea. Of course not. On balance, maybe a firewall is a good idea. A tool that automates configuration management might be a very good idea. The point I’m making is, how do you go about making that kind of assessment? That’s why security is a process, not a product.

Security considerations and principles

NF: What is the first step that users should take in securing their systems?

DR: I’d say that the first step is to build a survivable system. Survival in system terms means knowing exactly where you are, how you got there, and how to go forward. It has a lot to do with modularity and configuration. In practical terms, it requires a clean separation between operating system, application software, and data, so that you can replace one of these elements without disturbing the others. It also means separating the process of installing the system from the process of managing its configuration. Configuration management is the primary means we have of building and reasoning about secure systems, so we want our configuration choices to survive even when the system itself doesn’t. That’s a really effective insight into system management as well.
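As a minimal illustration of the separation Razzell describes, the hypothetical Python sketch below declares a system’s configuration as data that can be reapplied to a freshly installed machine; the file paths and settings are invented examples, not a substitute for the dedicated tools mentioned next.

```python
# Minimal sketch: configuration is declared separately from the installed
# system, so the same choices can be reapplied after a rebuild.
# Paths and settings here are hypothetical examples.

from pathlib import Path

DESIRED_CONFIG = {
    "/etc/ssh/sshd_config.d/50-local.conf": "PermitRootLogin no\n",
    "/etc/motd": "Managed system: changes outside the config repo are lost.\n",
}

def apply_config(root: str = "/") -> None:
    """Write each declared file under the given root, creating parent dirs."""
    for relpath, content in DESIRED_CONFIG.items():
        target = Path(root) / relpath.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

if __name__ == "__main__":
    # Apply into a scratch directory rather than the live system.
    apply_config(root="./staging")
```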

I can’t offer detailed advice here, since the process of system configuration differs from one operating system to another, even from one distribution to another. But just as a point of reference, take a look at Red Hat Kickstart, SuSE AutoYaST, and Solaris JumpStart.

That would be my first step, to set up for configuration management and system recovery. Your system may not yet be secure at this point, but it will allow you to recover cleanly from a security incident. Now you can start hardening it properly.

NF: Many types of software, such as anti-virus programs or Tripwire, detect intrusions after they occur. How useful are these after-the-fact protections?

DR: They are never as valuable as having an intrinsically secure system in the first place. But, then, how can you be sure that you have one? So I think it makes sense to consider detection as an additional layer of defense.

That said, the only accurate form of detection uses a known point of reference: your system, for example. That’s how Tripwire works. Virus detection, on the other hand, is always tracking a moving target, with no guarantee of success.
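As a sketch of detection against a known point of reference, in the spirit of what Razzell describes rather than Tripwire’s actual implementation, the following Python fragment records a baseline of file hashes and later reports anything that has drifted from it.

```python
# Sketch of detection against a known reference: record a baseline of
# file hashes on a system known to be good, then compare the current
# state against that baseline later.

import hashlib
from pathlib import Path

def snapshot(paths: list[str]) -> dict[str, str]:
    """Map each file to the SHA-256 digest of its current contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed(baseline: dict[str, str]) -> list[str]:
    """Files whose contents no longer match the recorded baseline."""
    return [p for p, digest in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

# Usage: take the baseline while the system is trusted, store it somewhere
# the system itself cannot modify, and re-run the comparison later.
# baseline = snapshot(["/etc/passwd", "/etc/ssh/sshd_config"])
# print(changed(baseline))
```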

Detection is good, but prevention is better. If you have the choice, use the system which is least prone to viruses.

NF: In the last few years, firewalls have entered the general public’s consciousness. Most operating systems now routinely offer a firewall, although it is not always turned on by default. To a lay person, this seems like a positive trend. Is it?

DR: I’d say so. Firewalls offer a manageable layer of defense without significant complexity. If I look at the history of a system and see a strong firewall appearing early in its design, that’s also a useful litmus test. It tells me that the designers understood the principle of defense in depth and took it seriously.
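Much of a firewall’s value as a simple, manageable layer of defense comes from its default-deny stance. The toy Python sketch below illustrates the idea with invented rules: a packet is dropped unless a rule explicitly permits it.

```python
# Toy default-deny packet filter: traffic is dropped unless a rule
# explicitly permits it. The rules and ports are hypothetical.

RULES = [
    {"proto": "tcp", "dport": 22, "action": "accept"},   # ssh
    {"proto": "tcp", "dport": 80, "action": "accept"},   # http
]

def decide(proto: str, dport: int) -> str:
    """Return the action for a packet: first matching rule wins, else drop."""
    for rule in RULES:
        if rule["proto"] == proto and rule["dport"] == dport:
            return rule["action"]
    return "drop"                       # the default-deny stance

print(decide("tcp", 22))    # accept
print(decide("udp", 53))    # drop: nothing explicitly allowed it
```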

As you say, not all firewalls are configured securely by default, so take nothing for granted.

NF: I regularly run across GNU/Linux users who do routine work while logged in as root. They argue that, since they are behind a firewall, running as root poses no danger to them. What is wrong with this position from a security architect’s viewpoint?

DR: Nobody plans to have an accident, but they happen anyway. It has nothing to do with firewalls. Someone who offers that argument is really not thinking very hard. The system provides a separation of privilege so that accidental damage can be contained in one part of the system. It seems only sensible to use it.

I say this as someone who has done his share of massive damage to production systems while running as root. Sometimes you have to run as root, but if you do it casually, you’re just asking for trouble.
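The habit Razzell is pointing at is the principle of least privilege: hold root only for the steps that genuinely need it. The hedged Python sketch below shows a program permanently dropping from root to an ordinary account once its privileged setup is done; the account name is an assumption for illustration.

```python
# Sketch of least privilege: a program that must start as root drops to an
# ordinary account (the name "daemon" is assumed here) as soon as its
# privileged setup is finished, so later mistakes are contained.

import os
import pwd

def drop_privileges(username: str = "daemon") -> None:
    """Permanently switch from root to the given unprivileged account."""
    if os.getuid() != 0:
        return                              # already unprivileged
    account = pwd.getpwnam(username)
    os.setgroups([])                        # shed supplementary groups first
    os.setgid(account.pw_gid)               # then the group id
    os.setuid(account.pw_uid)               # finally the user id; irreversible

# ... perform the work that genuinely needs root here ...
drop_privileges()
# ... everything after this point runs without root's power to do damage.
```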

NF: How do the principles of secure architecture apply to setting up access to a system?

DR: Architecture is a way of understanding how to shape environments in relation to their use. You have a given set of requirements, which as I mentioned come in two forms, both equally important. Don’t forget to consider both. Then you have the configuration options which a given system allows. Then you have your own principled judgment.

So, off you go. Based on the options available to you, how can the system meet your stated requirements? Which of the alternatives appears best according to principle, and why? What are the consequences of making the wrong choice? Now go ahead and choose the best one.

As you can see, there’s no mystery to it. At the same time, I think it can be as deep an inquiry as you’d care to make it. That’s why it should be called architecture. You have to apply a lot of judgment in order to do it well, but most of us have a fine sense of architectural judgment that we hardly ever get to use. I believe that the world would be a richer place if we cultivated it more fully.

NF: What challenges are posed by devices such as laptops and flash drives that may be constantly added to and removed from a system?

DR: Not many, to be honest. A laptop looks exactly like any other system on the network. If we don’t watch out, we’re inclined to include it in our network security model without thinking about exactly what its portability implies. One consequence is that systems are now expected to physically appear and disappear on the network. Well then, are these really the same systems from one day to the next? How can we be sure? And how can we know what happens to them when they’re not on the network? Networks have firewalls in order to prevent data from inadvertently flowing to or from the systems on the network. But if a laptop physically moves from one network to another, the firewall is no longer where we think it is in our supposedly secure network topology.

Removable media are a relatively minor concern architecturally. The reason is that they’re passive devices, not agents of computation. Obviously, they provide a distinctive pathway by which data can get on and off your system, but that’s fundamentally no different than any other data pathway such as the network. It may be worth remembering that before the network era, viruses commonly used to propagate over removable media, but that was due to system vulnerabilities, not because removability itself was a problem.

The advantages of openness

NF: You’re an advocate of open design and implementation. Why is openness important for security? To many users, the idea seems counter-intuitive. Their first reaction is that security lies in secrecy.

DR: There are direct advantages to security that come from opening the design and implementation of a system. The original insight goes back to Auguste Kerckhoffs in 1883. He saw that in a cryptographic system, it’s one thing to depend on a secret key, and quite another to depend on a secret mechanism.

If your key is captured, you can always tear it up and make another one. Distributing the new key may be painful, but you can do it. But if the secret of the mechanism ever gets out, too bad. So don’t make it a secret in the first place! In fact, share it widely with your colleagues, and see what flaws they can find in it before you begin tooling up for production. That’s exactly how science makes progress. It may seem extravagant, but it’s really a very conservative strategy for validating any new concept.
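A small Python example makes Kerckhoffs’ distinction concrete: the mechanism here, HMAC with SHA-256, is openly published, and only the key is secret, so a compromised key can simply be replaced. The key and message below are placeholders.

```python
# Kerckhoffs' principle in miniature: the mechanism (HMAC-SHA256) is
# public; only the key is secret. If the key leaks, rotate it; the
# mechanism never needs to change. Key and message are placeholders.

import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)            # the only secret in the system
message = b"transfer 100 credits to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)                               # authenticates the message under this key

# Recovering from a suspected compromise is cheap: discard the old key,
# issue a new one, and keep using exactly the same public mechanism.
key = secrets.token_bytes(32)
```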

An open design secures supply. For example, in an industrial setting, production would be at risk without an assured supply of parts. In common practice, that means being able to contract with several independent suppliers of a given part. If problems develop with one source of parts, you can switch to another.

It’s just the same for system security. What happens if the software reveals a security flaw? If its design is secret, there is likely no alternate implementation that can be substituted. If the design is open, on the other hand, competing implementations may already exist, and in any case could be developed according to need. Such a design is thus inherently more secure than if it were closed.

If the implementation is open as well, it may be possible to repair the flaw directly. That works especially well for simple errors such as buffer overflows, missing input sanitization, and so on, which also happen to be the most common sorts of security errors in implementation.
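As an illustration of what a fix for this class of error looks like, the hypothetical Python sketch below validates input against an explicit, narrow pattern rather than trying to strip out known-bad content; the username rule is an invented example.

```python
# Sketch of input validation at a boundary: accept only input that matches
# an explicit, narrow pattern, instead of filtering out known-bad content.
# The username rule below is a hypothetical example.

import re

USERNAME = re.compile(r"[a-z][a-z0-9_-]{0,31}")

def validate_username(raw: str) -> str:
    """Return the input unchanged if it matches the allowed form, else fail."""
    if not USERNAME.fullmatch(raw):
        raise ValueError("rejected: input does not match the allowed form")
    return raw

print(validate_username("alice"))        # accepted
# validate_username("alice; rm -rf /")   # raises ValueError
```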

Standards, by the way, arise out of a common consensus around design or implementation. They signal a mature and stable industry. In order to get to the point of developing an open standard, the industry must already have gained experience using various open designs and implementations. In other words, open design and open implementation are critical precursors to standardization, as well as being valuable in their own right.

Understanding security

NF: Everyone pays lip-service to the importance of security. Yet companies are often reluctant to pay for security. At home, users often disregard the simplest precautions. Why is it so hard for people to grasp the basics of security?

DR: I know this experience all too well! I think the answer involves a couple of confusing properties of security itself.

As Bruce Schneier says, the problem with bad security is that it looks just like good security. What he really means is that you can’t depend on observable symptoms to understand security. Likewise, the return on investment for security is not directly observable. It’s a matter of risk analysis at best.

The next problem is that security, being pervasive and emergent, neither generalizes easily nor reduces to particulars; security basics really mean applying general principles under a given set of requirements. All this abstraction can seem a bit equivocal, but the fact is, you really have to think about security.

Finally, there is the problem of proving a negative. It’s one thing to prove that a system will behave in a certain way given a set of conditions. Just meet the conditions and observe what happens. But how do you prove that the system will never behave in some given way? Using a symptomatic approach, you would have to test an infinite variety of conditions. Or you would have to have an open system and a huge appetite for proofs of correctness.

NF: Security often seems at odds with convenience. Faced with a choice, many users will take convenience over security. Is this opposition inevitable? Where did it originate? To what extent can you strike a balance between security and convenience?

DR: Sometimes security and convenience are in tension, but not always. Either these two forces are legitimately opposed, in which case the only sensible response is to accept some kind of trade-off; or they’re not opposed, in which case the apparent trade-off is just exposing some kind of bad design. For example, our present inconvenience due to spam didn’t come as a trade-off for greater security, quite the contrary.

With computing we have, even in an artificially restricted configuration space, the equivalent of a car with a thousand knobs on the dashboard. They interact in complex ways, many of which have some effect on security. It’s really more like an aircraft than a car. There is a lot to think about. Is the ability to dump fuel in flight a good or bad thing for security? What about an ejector seat, or on-board oxygen? And how do these factors relate to convenience?

The answer to these questions, as for all artifacts, is that the trade-off depends on context. But where there is a clear trade-off, system architects have the ideal opportunity to apply the principle of security by default, which requires the user to make a deliberate choice to make the system less secure. That much we can say. It’s such a great principle, it ought to be a law.
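The principle of security by default can be made concrete with a small, hypothetical Python sketch: the less secure behavior is still available, but only if the user asks for it explicitly; the option names are invented.

```python
# Sketch of "secure by default": the insecure option exists, but the user
# must make a deliberate choice to enable it. Option names are hypothetical.

from dataclasses import dataclass

@dataclass
class ServiceConfig:
    listen_address: str = "127.0.0.1"   # default: local access only
    allow_remote: bool = False          # weakening this is a deliberate act
    require_tls: bool = True

def effective_listen(cfg: ServiceConfig) -> str:
    """Expose the service to the network only if explicitly requested."""
    return "0.0.0.0" if cfg.allow_remote else cfg.listen_address

print(effective_listen(ServiceConfig()))                     # 127.0.0.1
print(effective_listen(ServiceConfig(allow_remote=True)))    # 0.0.0.0
```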

NF: You’ve said to me that security is currently not an exact science, but more a set of overlapping principles. How can understanding these principles help average users?

DR: You know the saying about how you can give a man a fish, or you can teach him to fish? Security principles try to capture the essence of fishing, so to speak, so that you can teach yourself to fish in your part of the world.

So, you may say, that’s very nice, but I think it would be better to hire a security expert than to try to learn all this myself. Fair enough. But let me make an observation. A lot of people are suspicious of auto mechanics. Car repairs cost a lot of money, and they don’t always fix the problem. Yet few of these people really know enough to judge whether or not they’re being treated fairly. So, not surprisingly, their relationship with the mechanic tends to be more adversarial than collaborative.

Wouldn’t a better approach be to learn enough about the subject to have an informed dialog with the mechanic? You may not be inclined to do the work yourself, but at least you would know what to look for. And when the mechanic offers advice, you will have a reasonable basis for assessing his competence. That encourages honest business, it tends to save you money, and you end up learning something useful into the bargain. What’s not to like?
