So, in this video, what we're going to talk about is, in general, what do you look for? What should you look at? What should you study to try to find these flaws, or potential flaws? Later on, we'll go into extreme detail about what to look for, but the following are some general guidelines to keep in mind as the course, or the sequence, moves along.

The first thing is assumptions. Always check your assumptions. Also, don't be afraid to think sideways. This is often called out-of-the-box thinking, but as someone pointed out to me, "What happens if there is no box?" There shouldn't be a box.

The example on the next slide really illustrates this. Long ago, there was a series of break-ins at a very high-security installation, and the police couldn't stop the attacks. They managed to trace the attacks, and it turned out they were coming from the Netherlands; in fact, from a high school student who would come home from school and do his thing. This was so long ago that there was no law in the Netherlands against breaking into computers. So, when they called the Dutch police, the Dutch police said, "Okay, let's check him out. Not a spy. We're not interested. Go away." So, how did the police stop the student? United States law doesn't apply in the Netherlands, and every time they tried to block him, they couldn't keep him out. The way they stopped him, according to the story I heard, came from a very junior police officer who had just joined the force. He suggested calling the student's mother and asking her to tell him to stop it. That's exactly what they did, and the attacks stopped the next day and never occurred again. It's a non-technical solution to a technical problem. People often overlook that sometimes the easiest way to solve a technical problem is through non-technical means.

That points to the basic rule: if you know what your assumptions are, you're 95 percent of the way to a more secure system or program. Many of the assumptions are about what you trust. So, as you write your program, ask: what am I assuming here? What am I trusting? What happens if my assumption is wrong or my trust is misplaced? If the program does something wrong, should I continue or should I not continue? The key point is to think like someone looking for the unexpected, the things you don't expect will ever happen.

Someone once drew a very good comparison between security and efficiency. With efficiency, you focus on making the entire program faster. If there are one or two outlier cases that will almost never arise, you typically don't worry about them until you've handled the average case. With security, it's exactly the opposite. You secure the average case, but the outliers are really the ones you worry about, because those are the ones that attackers look for.

So, where do you find these assumptions? Well, when I was learning to do this stuff, a wonderful mentor, a gentleman by the name of Robert Abbott, taught me to look in manuals. He was right. When you read the manuals, they'll often say things like "can," "must," "should," "will," "ought," "have to," and so on. Try not doing it. Or if they say "no more than," "can't," "shouldn't," "limit," "maximum," "minimum," and so forth, try going beyond that. Try doing it. Try giving a number bigger than the maximum or smaller than the minimum. If it says an integer, give it a floating-point number, or type "I don't know," and see what happens.
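As a concrete illustration, here is a minimal sketch of that kind of probing. The endpoint, field names, and limits are all hypothetical; substitute whatever promises the manual for your own system actually makes.

```python
# A minimal sketch of probing documented limits. The endpoint, field names,
# and limits here are hypothetical; substitute whatever the manual promises.
import requests

TARGET = "http://localhost:8080/register"  # hypothetical endpoint

# Suppose the manual says the username is at most 32 characters and "age"
# must be an integer between 0 and 150. Try violating each claim.
probes = [
    {"username": "A" * 33,   "age": 30},     # one past the stated maximum length
    {"username": "A" * 4096, "age": 30},     # far past it
    {"username": "alice",    "age": -1},     # below the stated minimum
    {"username": "alice",    "age": 151},    # above the stated maximum
    {"username": "alice",    "age": 2**63},  # absurdly large integer
    {"username": "alice",    "age": 3.5},    # floating point where an integer is promised
    {"username": "alice",    "age": "ten"},  # wrong type entirely
]

for probe in probes:
    response = requests.post(TARGET, json=probe, timeout=5)
    # Anything other than a clean, consistent rejection deserves a closer look.
    print(probe, "->", response.status_code, response.text[:80])
```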
More often than not, this kind of probing will lead you to security problems. Also, look in the manual for ambiguities. As you will see later on, there are cases where a manual says, for example: if this condition occurs, here is how the program will respond. Then a little bit later it says: if that condition occurs, the program will respond differently, and it turns out the two conditions are the same. That contradiction raises the question of how the programmers actually handled it. In many ways, good, accurate manuals tell you the assumptions that are made, sometimes explicitly, but usually implicitly. This, by the way, is why it's so interesting to me that some people want to hide any information that could lead you to finding vulnerabilities, because that would basically mean eliminating all the manuals. I don't think people really want that.

Some general thoughts: look at interactions. The program has to interact with the system in order for a vulnerability, or an exploit, to have any effect. So, look for anything involving internal or external components, user I/O, network interactions (like looking things up in DNS or going to a web server), and anything involving dependencies. What does your program depend on?

Then there's cryptography. There's an awful lot of cryptographic code out there that's written very well, and even more that's written badly. Also, check what the code is supposed to do. In one well-known case, a vendor implemented their own version of SSL. Their version of SSL provided confidentiality, so no one could read the traffic going over the network. The problem was the context in which it would be used. The vendor knew that context but, I guess, didn't make the connection: the data being sent over the connection would be published within 10 minutes anyway. What was really critical was that the data not be changed between the starting point and the ending point. Integrity. The vendor didn't check that; in fact, they didn't even implement it. So, a type of attack called a man-in-the-middle attack would have allowed an attacker to change anything in the connection. It's a classic example of a vendor writing their own cryptography and getting both the purpose and the implementation wrong.

Another one is cleaning up. If your program asks for a password and later crashes and dumps core, then unless you've wiped out that password, it will appear in the core file. This used to be a way to get information to break into systems: you would connect to a server that asked for a password, cause the server to crash, then reconnect, retrieve the core file, and find authentication information that you could test and possibly use to get into the system.

The last thing is being too helpful, and this sounds contradictory, because everyone's taught that error messages should be fully informative. It depends on the context. Consider logging into a system. I type a login name, I type a password, and the system prompts back and says "no such user." Now I don't have to worry about the password; I know the username I typed doesn't exist. So I can do it again and keep guessing until I get a username. Once it says "bad password" instead, I know I've got the name of a user on the system. So, in that context, you still need to tell the user that something went wrong, but what you should say is "invalid login," or "incorrect name or password," or something like that, so they can't identify which one is wrong.
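Going back to the cleanup point for a moment, here is a minimal sketch of scrubbing a password before it can surface in a core dump. The stored hash and the check are hypothetical stand-ins, and note the caveat: CPython may keep internal copies of data, so in C you would reach for explicit_bzero() or memset_s() instead. This just illustrates the discipline.

```python
# A minimal sketch of scrubbing a password so it can't surface in a core
# dump. Caveat: CPython may keep internal copies of data; in C you would
# use explicit_bzero() or memset_s(). This just illustrates the discipline.
import hashlib
import hmac
import sys

# Hypothetical stand-in for a stored credential.
STORED_HASH = hashlib.sha256(b"letmein").hexdigest()

def read_password(prompt="Password: "):
    # Read into a mutable bytearray rather than an immutable str, so the
    # bytes can actually be overwritten later.
    sys.stdout.write(prompt)
    sys.stdout.flush()
    return bytearray(sys.stdin.buffer.readline().rstrip(b"\n"))

def check_password(buf):
    # hashlib accepts any bytes-like object, so no extra copy of the
    # password needs to be made here.
    return hmac.compare_digest(hashlib.sha256(buf).hexdigest(), STORED_HASH)

password = read_password()
try:
    print("authenticated" if check_password(password) else "rejected")
finally:
    # Zero the buffer even if something above raises, so the cleartext
    # password isn't sitting in memory when a core file gets written.
    for i in range(len(password)):
        password[i] = 0
```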
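And for the login example just above, here is a minimal sketch of the generic error message. The user table and the toy hashing are hypothetical (a real system would use a salted password hash such as bcrypt or Argon2); the point is simply that both failure paths return exactly the same string.

```python
# A minimal sketch of the generic login error. The user table and the toy
# hashing are hypothetical; a real system would use a salted password hash
# such as bcrypt or Argon2.
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def login(username, password):
    stored = USERS.get(username)
    # Hash the supplied password whether or not the user exists, so the
    # two failure paths take roughly the same amount of work.
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is not None and hmac.compare_digest(stored, supplied):
        return "welcome"
    # Never "no such user" or "bad password": the attacker learns nothing
    # about which half of the credential was wrong.
    return "invalid username or password"

print(login("mallory", "guess"))  # invalid username or password
print(login("alice", "guess"))    # invalid username or password (same message)
```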
Yes, this is a little bit more painful for the user, but on the other hand, it protects the security of your system, and it's not that onerous.

On the next slide: try to figure out what the problem is. What is the program intended to do? It does something, but is that what it was intended to do? The old newspaper reporter's motto of who, what, where, when, why, and how, the five Ws and the H, is very useful here. Who should be running the program? Where should they be running it? Are there restrictions on when they should run it? What is it supposed to do? Why is it doing that, and is there some reason that affects the assumptions? How is it doing it? Because how describes the interactions.

Also, understand the limitations of the system. For example, in traditional Linux and Unix, without the augmentations that have been added over the past few years, you can't secure anything from root. Root can do anything it wants. So, you accept an assumption that the root user is non-malicious. Why? Because the access root can get, using whatever you write, is unlimited. By the way, encryption doesn't help much here, even if the data is encrypted on the system, because root can always add a backdoor or a Trojan horse to the encryption or decryption program. The only way to make sure the data is not read is to put it on the system encrypted and pull it off the system encrypted. But even then, root could delete it, so that becomes an issue of availability.

This is an iterative process; you won't find everything the first time you go through. Also, the environment may change. You never know. Let's go on to the next video.