Security Through Obscurity
If a security consultant explains to you (or to your system administration staff) that one or two holes do exist but that they are extremely unlikely to be exploited, consider his explanation carefully. Interrogate him as to what "extremely unlikely" means and why he believes the risk is so remote.
If his explanation is that the level of technical expertise required to exploit the hole is highly advanced, that is still not a valid reason to let the hole slide, particularly if no solution to the problem is currently known. If there are options, take them. Never assume (or allow a consultant to assume) that because a hole is obscure or difficult to exploit, it is acceptable to allow that hole to exist.
Only a few months ago, it was theorized that a Java applet could not access a client's hard disk drive. That has since been proven false. The argument initially supporting the "impossibility" of the task was that the programming skill required was beyond the level typically attained by most crackers. That assumption was patently incorrect. Crackers spend many hours trying to discover new holes (or new ways of exploiting old ones). With the introduction of new technologies such as Java and ActiveX, there is no telling how far a cracker can take a given technique.
Security through obscurity was once a sound philosophy. Many years ago, when the average computer user had little knowledge of his own operating system (let alone knowledge of multiple operating systems), the security-through-obscurity approach tended to work out. Things were more or less managed on a need-to-know basis. The problem with security through obscurity, however, becomes more obvious on closer examination. It breaks down to matters of trust.
In the old days, when security through obscurity was practiced religiously, it required that certain users know information about the system; for example, where passwords were located and what special characters had to be typed at the prompt. It was common for a machine, upon connection, to issue a rather cryptic prompt. (Perhaps this can be likened to the prompt a Delphi user might have received just a few years ago.) This prompt expected a series of commands specifying the carrier service, the terminal emulation, and so on. Until these variables were entered correctly (with one of many valid responses), nothing would happen; if the wrong string was entered, a simple ? would appear. A hacker coming across such a system would naturally be intrigued, but he could spend many hours (if not weeks) typing in commands that would fail. (Although the command HELP seems to be a fairly universal way to get information on almost any system.)
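To make the mechanism concrete, here is a minimal sketch of such a cryptic prompt. It is my own illustration, not drawn from any real system; the expected carrier and terminal values are hypothetical, and the machines of the period accepted many valid combinations rather than one.

# A minimal, assumed illustration of a cryptic connection prompt.
# The only feedback a caller ever receives for a wrong entry is "?".

EXPECTED = ["X.25", "VT100"]    # hypothetical carrier service and terminal type

def cryptic_prompt():
    entered = 0
    while entered < len(EXPECTED):
        reply = input("> ").strip()
        if reply == EXPECTED[entered]:
            entered += 1        # silently accept and wait for the next item
        else:
            print("?")          # no hint about what was expected
    print("CONNECTED")          # the session proceeds only when every item matches

if __name__ == "__main__":
    cryptic_prompt()

An outsider who does not already know the expected sequence can do little more than guess; an insider who does know it sails straight through. That asymmetry of knowledge is the whole of the protection.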
Things changed when more experienced users began distributing information about systems. As more and more information leaked out, more sophisticated methods of breaching security were developed. For example, it was shortly after the first release of internal procedures for CBI (the Equifax credit-reporting system) that commercial-grade software packages were developed to facilitate breaking and entering into that famous computerized consumer credit bureau. These efforts finally culminated in the introduction of a tool called CBIHACK, which automated most of the effort behind cracking Equifax.
Today, it is common for users to know several operating systems, at least in passing. More importantly, information about systems security has been so widely disseminated that even those just starting their careers in cracking know where password files are located, how authentication is accomplished, and so forth. As such, security through obscurity is no longer a valid stance, nor should it be, largely because of one insidious element: for it to work at all, human beings must be trusted with information. Even when this philosophy had some value, one or more individuals with a genuine need to know could later become liabilities; disgruntled employees are the historically well-known example. As insiders, they would typically know things about a system (procedures, logins, passwords, and so forth), and that knowledge made its security inherently flawed from the start.
It is for these reasons that many authentication procedures are now automated, so that the human being plays no part in them. Unfortunately, as you will learn in Chapter 28, "Spoofing Attacks," even these automated procedures are now suspect.
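As an illustration (again my own, and assumed rather than taken from the book), consider an automated trust check in the spirit of rhosts-style host trust: a connection is admitted based solely on its claimed source address. The address list and function below are hypothetical. It is exactly this kind of unattended decision that a spoofing attack subverts, because forged packets can make an outsider's traffic appear to originate from a trusted address.

TRUSTED_HOSTS = {"10.0.0.5", "10.0.0.6"}    # hypothetical list of trusted machines

def is_trusted(source_address):
    """Admit or reject a peer purely on its claimed network address."""
    return source_address in TRUSTED_HOSTS

print(is_trusted("10.0.0.5"))     # True  -- a legitimate insider
print(is_trusted("203.0.113.9"))  # False -- an outsider
# A spoofer forges packets so they appear to come from 10.0.0.5; the automated
# check has no human judgment (and no further evidence) to fall back on.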
In any event, view with suspicion any proposal that a security hole (small though it may be) should be left alone.
Choosing a Consultant
There are many considerations in choosing a security consultant. First, it is not necessary to contract one of the Big Ten firms (for example, Coopers & Lybrand) to secure your network. If you are a small business, doing so is likely cost-prohibitive; it is also overkill. These firms typically take big contracts for networks that harbor hundreds (or, in WANs, thousands) of machines.
If you are a small firm and cannot afford to invest a lot of money in security, you may have to choose more carefully. However, your consultant should meet at least the following requirements:
He should be local.
He should have at least four years' experience as a system administrator (or apprentice administrator) on your platform. (If some of that experience was gained at a university, that is just fine.)
He should have a solid reputation.
Generally, he should not have a criminal record.
He should have verifiable references.
Why Local?
Your consultant should be local because you will need to have him available on a regular basis. Also, as I've noted, remote administration of a network is just not a wise thing.
Experience
You will notice that I say university experience will suffice, so long as it does not comprise the totality of the consultant's security education. Why does it count at all? Because the academic community is probably the closest to the cutting edge of security. If you thumb through this book and examine the references, you will notice that the majority of serious security papers were authored by those in the academic community. In fact, many of the so-called commercial white papers cited within this book were also authored by students--students who graduated and started security firms.
Reputation
I suggest that your consultant should have a solid reputation, but I want to qualify that. There are two points to be made here, one of which I made at the beginning of this book. Just because a consultant's former clients have not experienced security breaches does not necessarily mean that his reputation is solid. As I have said, many so-called security specialists conduct their "evaluation" knowing that they have left the system vulnerable. Such an individual knows a little something about security--just enough to leave his clients exposed while giving them a false sense of security. Technically, a totally unprotected network could survive unharmed for months on the Internet, so long as crackers don't stumble across it.
It would be good if you could verify that your potential consultant has been involved in monitoring, and perhaps plugging, an actual breach--for example, in an investigation of a criminal trespass or other network violation.
Equally, past experience working for an ISP is always a plus.