Our goal in this video is to show you a bit about how policies and procedures interact with programs. First of all, though, a couple of general ideas. Insecurity is cumulative: if you have non-secure components and you connect them together, what you get will be non-secure. In particular, the connections are part of the problem too. So even if you connect secure components, the resulting system or service may not be secure.

Let's focus for now on composing non-secure modules. How do you handle this? Well, what you can do is put what are called shims at the interface between the two modules. The shim will check to be sure that what the modules are sending to one another is what you expect and not something else. But the shims themselves pose a problem. What happens if they're not secure, or not written well, or not robust, or misinstalled, or if they can be bypassed? So again, when you compose components that don't meet the security requirements you need, you have to be very sure of two things: first, that there is some intermediary that ensures the traffic between the components satisfies the security requirements and is robust; and second, that the intermediary cannot be bypassed.

Now, I've made some comments about relying on libraries and such. When you refactor code, reuse code, or use external libraries, modules, or services, you inherit all their bugs and assumptions. As an example, consider the 1999 buffer overflow bug in the RSAREF2 library. RSAREF2 implements the RSA cryptosystem. It was written by professionals, so the cryptography is excellent, and it was very widely used in commercial software. Well, it turned out it had a buffer overflow. Normally that's kind of a "who cares?" The problem was that it was used in many implementations of SSH and SSL. (Well, TLS wasn't around then, but SSL was.)
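To make the shim idea concrete, here is a minimal sketch in Python. The message format (a dict with `user` and `count` fields) and the function names are invented for illustration; a real shim would check whatever the downstream module's interface actually requires.

```python
def shim(message, forward):
    """Check that a message is what the next module expects
    before forwarding it; reject anything else.
    (Hypothetical message format, for illustration only.)"""
    if not isinstance(message, dict):
        raise ValueError("message must be a dict")
    if set(message) != {"user", "count"}:
        raise ValueError("unexpected or missing fields")
    if not isinstance(message["user"], str):
        raise ValueError("user must be a string")
    if not isinstance(message["count"], int) or message["count"] < 0:
        raise ValueError("count must be a non-negative integer")
    return forward(message)  # only well-formed traffic reaches the module
```

Note that this only helps if every path between the modules goes through the shim; if it can be bypassed, it adds nothing.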
It was used to provide security, and it ran at fairly high privileges, or on a system that an attacker normally wouldn't have access to. If I could exploit the buffer overflow, I would get access to that system or gain the extra privileges. This is an example of a library that was crafted to do one thing, and it did it very, very well. But it had a flaw, a vulnerability, that allowed people to escalate privileges. Needless to say, this was fixed long ago.

The other thing is this: even if you are aware of those assumptions, when you move your program into another environment, those assumptions may no longer hold. It's imperative that you document those assumptions. Then, when people run your program in their environments, they can see what could go wrong: "Ah, they made assumption A. They assumed it was a standalone system, and it's not. How do network interactions affect the way my program will run?" That sort of thing.

Now, I mentioned that policies and procedures really define and enhance security. The policy defines what is allowed; the procedures are the mechanisms for ensuring the policy is carried out. They affect both security and robustness. You have two choices here. If you know the target environment where your software will be used, you can take that into consideration, so that the software you write is secure with respect to the given policy, assuming the procedures are followed. You should also build in handling for the cases when the procedures are not followed. Can you detect that? Can you do something intelligent with the error, and so forth? But if you don't know where your software is going to be used, then you really don't know the policy or the procedures; it's going to be used in a wide variety of environments. In this case, it's best not to assume any particular policy, except the very basics, like no escalation of privileges without authorization, that sort of thing. But it's best to assume that the environment may pose problems.
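Since the basic rule of "no escalation of privileges without authorization" came up: here is one hedged sketch, in Python on a POSIX system, of how a privileged program can give up privileges it no longer needs. The function name is mine; the important part is the order of the calls (group before user) and the verification at the end.

```python
import os

def drop_privileges():
    """Permanently drop elevated privileges back to the real user.
    Drop the group ID first: once the user ID is dropped, the
    process no longer has the privilege to change its group."""
    if os.getegid() != os.getgid():
        os.setgid(os.getgid())
    if os.geteuid() != os.getuid():
        os.setuid(os.getuid())
    # Verify rather than assume the drop succeeded.
    if os.geteuid() != os.getuid() or os.getegid() != os.getgid():
        raise OSError("failed to drop privileges")
```

Run by an unprivileged user, the calls are no-ops; run setuid, they discard the elevated IDs. The explicit check afterward reflects the advice above: detect when a procedure was not followed instead of assuming it was.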
As you write your program, you build in code to check for those potential problems and handle them intelligently.

This brings up, again, something that I want to emphasize throughout: when we say we have a vulnerability, what we're really saying is that we have a vulnerability with respect to a security policy. The policy says what a system is allowed and not allowed to do. For example, a policy may say user Bishop can access directory XYZ, or user Bishop may not access directory XYZ. It says allow or disallow. When the policy is created, it has to take into account a number of non-technical factors, such as laws, regulations, environment, and customs, but also the limitations of people. If you ask a system administrator, or a user who is running a program that grants privileges, to do five or six things all at the same time, you're going to have a security problem, because no one can do five or six things all at the same time. So the policy and the procedures have to be created in such a way that they are actually doable.

Now, the programs we're going to talk about, which I call interesting programs on the next slide, all change privileges or run with high privileges; the privileges are the key here. There are a number of different types of flaws that we'll cover, but the first of the two biggies is trusting the environment. Programs assume they're loaded onto the system and run exactly as the output of the compiler and the linker, with the libraries. Many programs assume that input will be well-formed. Bad assumption. Many programs assume that when they start up, the environment is pristine; that is, the only open files are the input, the output, and the error. They assume that when someone sends an interrupt, the program, or the operating system, will handle that interrupt well. Those assumptions are often invalid. We'll talk about specific examples in another course.
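The pristine-environment assumption about open files is one you can check instead of trusting. A minimal sketch in Python on a POSIX system (the helper name is mine): if any of the standard descriptors 0, 1, or 2 is closed at startup, attach it to /dev/null, so that a file the program opens later cannot silently land on a standard descriptor and become its "standard output."

```python
import os

def ensure_standard_fds():
    """Make sure descriptors 0, 1, 2 (stdin, stdout, stderr) are open.
    If one is closed, point it at /dev/null so later open() calls
    cannot silently take over a standard descriptor."""
    for fd in (0, 1, 2):
        try:
            os.fstat(fd)              # raises OSError if fd is closed
        except OSError:
            null = os.open(os.devnull, os.O_RDWR)
            if null != fd:            # open() returns the lowest free fd,
                os.dup2(null, fd)     # so normally null == fd already
                os.close(null)
```

A privileged program that skips this check and later opens, say, a password file can find that file sitting on descriptor 1, where ordinary "print" output will corrupt it.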
But very often, users will type input that is not expected. As an example, when I was learning to program, when your program asked for a number, one of the things the teaching assistants loved to do was type "I don't know" and see how your program reacted. It usually crashed, because it was expecting an integer, and "I don't know" is not an integer. You need to watch out for these assumptions.

The classic assumption here is that of indivisibility, where two actions take place, for example two actions that follow one another immediately in the code, and it's assumed that nothing can happen after the first but before the second that would affect the operation they're carrying out. This is called a race condition and, to put it bluntly, the indivisibility assumption is usually not true. We'll talk a lot about those as well.
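Against the teaching assistants' "I don't know" test, the defense is to validate before converting. A small sketch in Python; `read_count` is a hypothetical helper for a program that expects a non-negative integer.

```python
def read_count(text):
    """Parse input that is supposed to be a non-negative integer.
    Return the integer, or None if the input is malformed,
    instead of crashing on input like "I don't know"."""
    text = text.strip()
    if not text.isdecimal():  # rejects empty strings, signs, and words
        return None
    return int(text)
```

The caller then decides what to do intelligently with the error, such as reprompting, rather than letting the conversion crash the program.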