Marcus Ranum recently published his six ‘dumbest ideas in computer security’. Whilst some of what he says may seem like rational sense, a lot of it isn’t quite as practical as you might initially think. I’m not taking a personal pop at Ranum; I simply disagree with him in some areas. Here are my thoughts on the matter.
#1) Default Permit
Providing access unless it is explicitly denied, or ‘default permit’ as Ranum puts it, seems to fly in the face of all things secure. Indeed, surely it must make better sense to deny access until it is explicitly permitted? The concept is most familiar to users of firewalls and TCP wrappers. In the good old days, attackers would find a new service to target, such as RPC services or Windows file sharing, and the response was to block traffic destined for the attack flavour of the month. Of course, these days any decent firewall administrator ensures that a ‘drop all’ rule is in place in most rulesets.
You may have noticed that I’ve said ‘most’ rather than ‘all’. That’s because default permit is actually an important part of meeting functional requirements. If the default action is to deny, you’re going to spend a lot of time determining what to permit. That’s fine if you have something small or important to protect, but a downright pain if what you’re protecting (e.g. two separate internal systems occupying the same trust domain) uses random ports – as is often the case with older backup systems (Veritas Backup Exec was particularly notorious for this).
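To make the contrast between the two policies concrete, here’s a minimal Python sketch. The rule format, field names and port choices are entirely illustrative – this is not any real firewall’s syntax:

    # Illustrative only: default deny passes traffic matching an explicit
    # allow rule and drops everything else; default permit inverts this.
    ALLOW_RULES = {("tcp", 22), ("tcp", 443)}  # hypothetical SSH management + HTTPS

    def default_deny(proto, dport):
        """Permit only explicitly listed (protocol, port) pairs."""
        return (proto, dport) in ALLOW_RULES

    def default_permit(proto, dport, deny_rules=frozenset()):
        """Permit everything not explicitly denied -- the 'dumb' default."""
        return (proto, dport) not in deny_rules

    print(default_deny("tcp", 443))     # True: explicitly allowed
    print(default_deny("udp", 5353))    # False: caught by the implicit 'drop all'
    print(default_permit("udp", 5353))  # True: nothing blocks it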
Another example Ranum cites is that of the spyware blocker. The idea that the user should be required to permit access to programs on a case-by-case basis may make sense at first. After all, how many programs do you use? Word processors? Art/photo packages? MP3 players? A browser or two? Ranum states that he uses around 30 programs. Unfortunately, the Windows world is a little more complex. Do you really want to say yea or nay to ‘lsass.exe’ without really understanding what it does? Do you really want to have to understand what it does in order to make that decision? Me neither, and I do the security ‘thing’ for a living. On top of this, there are other things that run on your system that aren’t processes, such as libraries and Browser Helper Objects (also known as BHOs), covered later. How can these be managed? Can a user really be trusted to make these decisions?
There is a ‘third’ – or at least ‘second and a half’ – way. Using technologies designed for the mobile market, a correctly managed mechanism for running ‘signed’ code could be the way forward. An open, independent code-signing network would enable users to run trusted code without having to make awkward and often uninformed decisions. The decision to run unsigned code can be left with the user. It is of course much, much more complicated to implement than that, but I don’t really want to go into the details here.
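Purely to give a flavour of the core check (not a design), here’s what the verification step might look like in Python using the third-party cryptography package. The choice of Ed25519, the file layout and the key-distribution model are my own assumptions for the sketch:

    # Sketch: run a binary only if a locally trusted publisher signed it.
    # How trusted keys are distributed and revoked is the hard part, omitted here.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def may_run(binary: bytes, signature: bytes, trusted_keys) -> bool:
        """Return True if any trusted publisher key verifies this binary."""
        for raw_key in trusted_keys:
            try:
                Ed25519PublicKey.from_public_bytes(raw_key).verify(signature, binary)
                return True
            except InvalidSignature:
                continue
        return False  # unsigned or unknown signer: fall back to asking the user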
To leave you with another example of where a straight ‘default permit’ system wouldn’t work, but a ‘flexible permit’ system would, consider the e-commerce site. The default permit method allows anyone from anywhere to access any part of a given website. Using a default deny mechanism, you have to explicitly permit URLs on a case-by-case basis. That’s fine for small sites, but practically useless for the enterprise – are you going to update a static ruleset to take into account every possible dynamic URL your portal can generate? A flexible mechanism would provide some leeway, perhaps using regular expressions for part or all of each URL. I’d imagine this would work better as an access list using a format such as:
match_all (/condition 1/, /condition 2/, …) permit
match_one (/condition 1/, /condition 2/, …) permit
This would allow rules to be formulated for variables as opposed to whole URLs. Even so, this would still get complicated for larger sites and may prove too difficult to manage. And if used as a substitute for input validation, it could lull people into a false sense of security as they go about their business blissfully unaware of the hole in just one field.
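As a toy illustration, here’s how that format might be interpreted in Python – the rule entries themselves are invented for the example:

    import re

    # Each rule: (mode, regex conditions, action). First match wins,
    # with an explicit default deny once the flexible rules are exhausted.
    RULES = [
        ("match_all", [r"^/shop/", r"item_id=\d+$"], "permit"),
        ("match_one", [r"^/help", r"^/about"], "permit"),
    ]

    def check(url):
        for mode, conditions, action in RULES:
            hits = [re.search(c, url) is not None for c in conditions]
            if (mode == "match_all" and all(hits)) or \
               (mode == "match_one" and any(hits)):
                return action
        return "deny"

    print(check("/shop/view?item_id=42"))  # permit
    print(check("/admin/config"))          # deny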
#2) Enumerating Badness
“So security practitioners got into the habit of ‘Enumerating Badness’ – listing all the bad things that we know about. Once you list all the badness, then you can put things in place to detect it, or block it.”
Further on, Ranum elaborates on this with the following:
Examine a typical antivirus package and you’ll see it knows about 75,000+ viruses that might infect your machine. Compare that to the legitimate 30 or so apps that I’ve installed on my machine, and you can see it’s rather dumb to try to track 75,000 pieces of Badness when even a simpleton could track 30 pieces of Goodness. In fact, if I were to simply track the 30 pieces of Goodness on my machine, and allow nothing else to run, I would have simultaneously solved the following problems:
Remote Control Trojans
Exploits that involve executing pre-installed code that you don’t use regularly
Unfortunately, it’s not as simple as that. Whilst it is true that monitoring 30-odd apps for issues would be simple enough, the fact is that the user needs to monitor more than that: the normal behaviour of those applications, so that anomalies can be detected. Think I’m getting ahead of myself here? OK, what about Browser Helper Objects? Browser Helper Objects, or BHOs, are components (not even programs) that run when IE starts. For all you Firefox fans, consider the Greasemonkey scripts you run the closest thing to an equivalent. BHOs don’t show up in Task Manager, and for the most part they go around helping people do their jobs. BHOs are also a major source of spyware. Now, monitoring the behaviour of the world’s most-used browser isn’t difficult, because there’s a clear market for it. Monitoring Winamp (whose skins offer significant amounts of functionality) or Macromedia Flash animations (which have an insanely scary amount of functionality) is more difficult. Whilst I agree that enumerating badness isn’t the answer, enumerating goodness is much more complex than people think.
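For the curious, here’s a rough, read-only Python sketch of how you might at least enumerate the BHOs installed on a Windows box – IE loads them from a well-known registry key (resolving each CLSID to a friendly name is left as an exercise):

    import winreg

    BHO_KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects"

    def list_bhos():
        """Return the CLSIDs of every registered Browser Helper Object."""
        clsids = []
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BHO_KEY) as key:
                index = 0
                while True:
                    try:
                        clsids.append(winreg.EnumKey(key, index))
                        index += 1
                    except OSError:  # no more subkeys
                        break
        except FileNotFoundError:
            pass  # no BHOs registered at all
        return clsids

    for clsid in list_bhos():
        print(clsid)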
So what’s the solution? To be perfectly fair, I don’t know. Personally, I try to go for a mixture of both, depending upon the size of the system. A small web site running on a single Linux system can enumerate goodness to its heart’s content; a major SAP deployment, on the other hand, doesn’t stand a chance. It all boils down to common sense, something sadly in short supply these days.
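To illustrate why the small-system case is tractable, here’s a minimal ‘enumerating goodness’ sketch using the third-party psutil package. The allow-list is obviously illustrative – a real one would key on full paths and hashes, not bare names:

    import psutil

    KNOWN_GOOD = {"explorer.exe", "winword.exe", "firefox.exe", "winamp.exe"}

    # Flag anything running that isn't on the goodness list.
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name and name not in KNOWN_GOOD:
            print(f"not on the goodness list: {name} (pid {proc.pid})")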
#3) Penetrate and Patch
This is one of my favourites. For years I lived in a world where applying the latest patches made a lot of sense. Prior to that, I lived in a world where you only patched when you had to, systems more or less ran themselves, and you spent more time playing Wolfenstein than worrying about security. I left that world when I entered the larger enterprise market. Larger enterprises share some fairly common challenges. A few regular ones I see are:
- Patches at least 1/3/6/9/12/18/24 months out of date
- Use of unencrypted/unauthenticated protocols for management
- Ridiculous reliance upon out-of-date/insecure versions of protocols (e.g. SNMPv1, NIS+, ‘r’ services)
- Lack of, or insufficient segregation (of network, duties, trust domains etc.)
- Everything, and I mean every possible service switched on
- Straight-up crack-induced insane practices
- Open access to critical data (file shares, passwordless databases etc.)
Of the items on that list, which I see in every corporate, patches are high on the list of worries but don’t seem to add much in terms of remedial value. By contrast, if people spent time turning off unnecessary services, they’d reduce their requirement to patch by an order of magnitude. Of course, vulnerability scanners usually count unnecessary services as one issue each, whereas patch revision issues are almost always counted per instance.
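A first step in that direction is simply finding out what’s listening. Here’s a small Python sketch using the third-party psutil package (run it with sufficient privileges to see other users’ processes):

    import psutil

    # List every listening TCP socket and the process behind it,
    # then ask which of them are actually required.
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN:
            owner = psutil.Process(conn.pid).name() if conn.pid else "?"
            print(f"{conn.laddr.ip}:{conn.laddr.port}  {owner}")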
Having said that, what do you do when you’ve hardened your host but still have to run NetBIOS, or need to use some whacked-out old protocol to talk to Robby the Robot running TOPS-10 in the corner? You patch. If patching is your only hope, you’re bound to run into trouble sooner or later. Like having a firewall between you and the Internet, or double-checking the parachute before you jump out of a plane, patching is something you do out of necessity, not out of want.
The penetrate and patch concept is not perfect. It’s there to detect and fix operational problems that arise from bad practice on the part of the designers, the implementers or the coders. Segregation, proper access control and sufficient hardening reduce the likelihood of threat and exposure, but not the impact of a successful attack, nor the motivation of an attacker.
#4) Hacking is cool
Ranum tells us that hacking isn’t cool, and that learning to hack is really dumb as it makes us reliant upon the ‘Penetrate and Patch’ idea. I disagree. Hacking is cool. The Matrix, Hackers and Swordfish aren’t real. Thompson, Ritchie, Draper, Woz and Stephens are. Why are these crazy hackers cool? Because they looked under the hood and changed the things they saw to suit them. They ceased to be consumers and became part of the solution.
One of the best ways a developer can learn to design and implement secure applications is to learn how the attacks work. By understanding them, developers can apply that knowledge to fixing the issues. Application security is a bizarre fruit, in that you teach developers who already understand how things work; they just need to see the world from a different angle. Many years ago, ethical hacking courses used to offer the same thing. These days the course material appears to have mostly stood still, pausing only to drop the ethical elements while the rest of us moved on.
Putting it into perspective, I recently spent a few days with a client running a workshop on secure testing. The client, concerned that their outsourcing partner was pulling the wool over their eyes, wanted to be able to tell builds apart. The workshop was a success, and whilst they learnt a little about the techniques hackers use, their needs were better met by a greater understanding of OS and service fingerprinting.
Had they gone on a ‘two-day accelerated hacking camp’, they would’ve seen whizzy demos of Ettercap, Ethereal, Metasploit and other fun things, but learnt nothing about what they needed to know to do the job. This is what happens when commercial pressures commoditise a service. It’s already happening to network testing and vulnerability assessment.
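To give a flavour of the fingerprinting the workshop actually centred on, here’s a trivial banner-grabbing sketch in Python – the host and port are placeholders, and real fingerprinting goes far deeper than banners:

    import socket

    def grab_banner(host, port, timeout=3.0):
        """Grab whatever banner a service volunteers on connect."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""

    # Comparing banners across builds is a crude but effective differencer.
    print(grab_banner("192.0.2.10", 21))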
But back to the counterpoint, which is that hacking (as opposed to cracking) is actually pretty damned cool. Hacking brought you Xbox Media Center, BitTorrent, the World Wide Web (or at least its number one web server, Apache) and refillable ink cartridges.
In a somewhat bizarre twist of fate, hacking is now being criminalised. People are being arrested for theoretical breaches of copyright, despite the potential for fair use. People are also being shut out of functionality if they change a product to suit their needs (case in point: anyone who mod-chips an Xbox to use Xbox Media Center can’t have an active mod-chip and play on Xbox Live). But I digress. Believe me, hacking is cool, and it’s about to get a whole lot cooler…
#5) Educating Users
Ranum’s argument centres around the idea that if ‘it [user education] was going to work, it would’ve worked by now’. The issue is that in many cases, education has worked. For example, one organisation I was doing some work for had a worm outbreak. Users had already been educated about the anti-virus systems running on their desktops, laptops, mail servers and the Internet proxy, and told that if a virus or worm outbreak sent the network to mud, it’s best to unplug your laptop when the networks team rings up and asks you to, rather than whinge and moan about not being able to do your job. Sound crazy? Maybe, but this firm managed to avoid the network saturation and re-infections that occurred when worms such as Welchia hit.
Educating users works best when it’s used as a tool to communicate policy. One of my current clients has recently finished developing a training programme to inform users what to do in an emergency, who to call, when, and so on. The programme just missed the July 7th London bombings, but interest in it shot up shortly afterwards. Consequently, the user-facing elements of the Disaster Recovery and Business Continuity policies have been communicated to business users in a highly effective way, and you can more or less guarantee that should any more bombs go off, everyone will know which number to call.
Of course there’s more to education than continuity awareness, and training is more than communicating password policy. Injecting security elements into a wider awareness programme can be highly useful. As a case in point, supplying home broadband connections to business users for remote working provides an excellent opportunity to remind users of the corporate acceptable use policy for Internet and e-mail access, which should then extend to use of the broadband connection. Not all training needs to be physical, either. Some of the best training I’ve seen has been delivered as CBT using tools such as Moodle. Gone are the days of ‘click next to continue’ Flash animations and death by PowerPoint; these days online training can be an experience rich in interactivity.
#6) Action is better than inaction
The view that Ranum puts across makes sense at first: stepping politely out of the way of a bandwagon, as opposed to jumping directly onto it, sounds like a great idea. In fact, I really like this quote:
“hold off on outsourcing your security for a year or two and then get recommendations and opinions from the bloody, battered survivors – if there are any.”
Fantastic stuff, and all very valid. Unfortunately, for many larger businesses, jumping on the bandwagon simply doesn’t happen, as it takes such a long time to make a decision. Having said that, I’ve lost count of how many people bought IDS when IPS was just around the corner, and I know for a fact that many people got burned when PKI first appeared.
On the action vs inaction debate, it’s important to consider external pressures that may force businesses onto bandwagons. Just think of how much money has been spent on ‘Sarbanes-Oxley compliant solutions’, syslog servers that are ‘HIPAA compliant’, and magic beans that… wait a minute, the magic beans thing is another post.
If you’re in an environment where the regulatory pressure to do something about a given issue is high, then you’re going to have to jump. The same goes for anything else where there are pressures, and believe me – there are pressures everywhere. To make matters worse, it’s often the case that the guy signing the cheque has no idea how the solution being pitched works, how (or if) it will help the business achieve its goals, or even whether the money spent is worthwhile.
From a security angle, maybe it’s better to debate reaction vs. pro-action. Indeed, pragmatism appears to be the way forward. I often see organisations go over the top with password policies, only to find them circumvented by people removing authentication for ease of use or sticking Post-its everywhere. I’ve also seen companies remove disk drives from all their devices, only to find USB ports sticking out of the back of each system.
A pragmatic approach means being pro-active (i.e. pro-action) where appropriate, and re-active (i.e. inaction) where the cost of an incident is likely to be less than the cost of implementing a countermeasure, or where other factors come into play.
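A back-of-the-envelope version of that cost test, with invented figures (single loss expectancy × annual rate of occurrence is the standard annualised loss expectancy calculation):

    # Invented numbers, purely illustrative of the trade-off above.
    def annualised_loss(single_loss, annual_rate):
        """ALE = SLE x ARO: expected cost of the incident per year."""
        return single_loss * annual_rate

    incident_cost = annualised_loss(single_loss=20_000, annual_rate=0.5)  # 10,000/yr
    control_cost = 15_000  # yearly cost of the pro-active countermeasure

    print("be pro-active" if incident_cost > control_cost else "react if it happens")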
In summary, although some of what MJR wrote made a lot of sense (and there was certainly some nodding as I read his essay), I kept getting the impression that whilst his points were valid, eradicating the underlying issues is harder than he allows. I hate penetrate and patch with a vengeance, but it is ultimately a necessary evil; shoring up defences elsewhere buys you time, but little else. With other areas, such as his points on user education, I simply don’t agree – but as a friend once told me, “For every expert, there is an equal and opposite expert”. Maybe I just found mine.