Secure code is software that doesn't create security vulnerabilities on your computer. Previous versions of IIS, for example, included a number of vulnerabilities that attackers could use to gain unauthorized access to your servers, and so could be considered insecure code.
So how can you tell if code is secure? There's no easy way, because modern software is very complex, and you as an administrator don't have much input into how it's created. In fact, one could argue that no code can ever be considered completely secure. However, you can decide which software authors, both individuals and companies, you trust to do a good job of writing secure code. As long as you can guarantee that your software comes from them with no alterations, you can trust the code to be as secure as possible.
Windows Server 2003 includes the ability to run signed code, which is software that carries a digital signature. The signature gives you two guarantees:
The software definitely comes from that particular author or manufacturer.
The software has not been altered since it was signed.
Code signing uses public key encryption to produce a digital signature (see Chapter 9 for more details on this process). The software author uses code signing software, along with a private encryption key issued for the purpose of signing code. The signing software examines the author's software code and produces a checksum (in practice, a cryptographic hash). The checksum is a reasonably unique number produced by a mathematical algorithm, and any given software code will always produce the same checksum. If so much as a single byte of the code is altered, the checksum will be significantly different.
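This checksum behavior is easy to demonstrate with any cryptographic hash function. The short sketch below uses Python's standard hashlib module, with SHA-256 standing in for whatever algorithm a real signing tool happens to use; the "code" is just an arbitrary stand-in byte string:

```python
import hashlib

# A stand-in for a software file's contents.
code = b"MOV EAX, 1\nRET\n"

# Alter a single byte: change the '1' operand to a '2'.
tampered = code.replace(b"1", b"2", 1)

original_checksum = hashlib.sha256(code).hexdigest()
tampered_checksum = hashlib.sha256(tampered).hexdigest()

# The same input always yields the same checksum...
assert hashlib.sha256(code).hexdigest() == original_checksum

# ...but a one-byte change produces a completely different one.
assert original_checksum != tampered_checksum
```

Run it and compare the two hex strings: even though only one byte differs between the inputs, the two checksums share no recognizable pattern, which is exactly the property the signing process relies on.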
The signing software encrypts the checksum using the author's private encryption key. The encrypted checksum, known as a digital signature, is attached as part of the software code file, making it available to anyone who receives the software. The signing software also attaches the digital certificate that contains the software publisher's public key.
When a user installs or downloads the software, Windows automatically retrieves the author's public encryption key from the certificate that is distributed in the file. This certificate is first validated to ensure that it is authentic and can chain to a trusted public root. Windows next uses the public key to decrypt the checksum. The computer then runs the exact same checksum algorithm on the files that are digitally signed and verifies that the resulting checksum matches the now-unencrypted checksum. If it does, the computer knows two facts:
The software hasn't changed since it was signed by the author. If the software has changed, the computer would have generated a different checksum than the one contained in the software's digital signature.
The software was, in fact, provided by that specific author. If it was signed by someone else, the author's public key would have been unable to decrypt the digital signature in the first place.
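The sign-then-verify exchange described above can be sketched with a deliberately tiny RSA key pair. The numbers below are toy values chosen purely for illustration; real code signing uses keys thousands of bits long, standard certificate formats, and vendor tooling rather than hand-rolled arithmetic:

```python
import hashlib

# Toy RSA key pair (illustration only; far too small for real use).
# n = p*q with p=61, q=53; e is the public exponent, d the private one.
n, e, d = 3233, 17, 2753

def checksum(code: bytes) -> int:
    # Hash the code, then reduce modulo n so it fits our tiny key.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code: bytes) -> int:
    # The author encrypts the checksum with the private key d.
    return pow(checksum(code), d, n)

def verify(code: bytes, signature: int) -> bool:
    # Anyone holding the public key e can decrypt the signature and
    # compare it against a freshly computed checksum of the code.
    return pow(signature, e, n) == checksum(code)

code = b"MOV EAX, 1\nRET\n"
sig = sign(code)

# Genuine, unaltered code verifies successfully.
assert verify(code, sig)

# A signature that wasn't produced with the matching private key
# (here, simply a corrupted one) fails verification.
assert not verify(code, (sig + 1) % n)
```

Note that the verifier never needs the private key: the public key alone is enough to confirm both facts listed above, which is why the publisher can safely distribute it inside the software's certificate.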
So how dangerous is unsigned code? Examining the protections that signed code provides, you can imagine the dangers that unsigned code can represent:
Software could be sent to you by attackers, yet made to seem as if it came from a reputable author. For example, an attacker could send malicious code and make it seem as if that code had come from Microsoft or another trusted software publisher. This is a common tactic used by attackers.
Internal file shares can be populated with malicious applications such as trojans and backdoors, or existing business application share points can be replaced with such undesired programs. Users may unknowingly install these applications and expose their computers to attack.
Legitimate software could be modified to include additional, malicious code. Without the verification provided by a digital signature, virus code could be added to software, compromising your network's security.
Internal software file shares could be compromised, and normally trusted code could be replaced with malicious code. In this type of attack, a single compromised file could affect many users throughout an organization.
Many applications have their own application language, such as macros that run within Microsoft Office. These macros can be dangerous and have caused considerable damage in the past. They can now be signed just like any other code to ensure their unaltered state and help provide some assurance against attack.
While not all unsigned software is inherently evil, unsigned software always presents a risk. Even if the software's author is trustworthy, unsigned code provides no assurance that the software wasn't modified after it left the author's hands, or even that the software really did come from that author. For example, a software vendor could email you a perfectly innocent update to one of your applications. Without a digital signature, though, you have no way of knowing whether someone intercepted the email and modified the software update for her own nefarious purposes. This is referred to as a man-in-the-middle attack, because someone between you and your trusted software vendor modified the code. You can assume that most malicious code is unsigned, but that's not an absolute: plenty of spyware and malware is signed with certificates that chain to well-known root CAs.
As an extension of these principles, any software produced by your company's own software developers, if you have any, should be signed. Signatures will ensure that the code is never modified to include a virus and that the software really did come from your company's internal software developers. As you'll learn in Chapter 9, you can even issue your own digital certificates for internal use, reducing the cost of signing software.
Medical Office Update Attack
An attack that occurred in 2002 could have been prevented by signed software, had the client computers verified the signature before execution. The names in this example have been omitted to prevent embarrassment.
A well-known software manufacturer has a large installed base for its medical office software. This software is expensive and requires frequent updates because of changing insurance regulations, privacy laws, and the like. Until recently, this manufacturer distributed software updates to customers who paid for maintenance by sending a CD in the mail. However, the rising cost of manufacturing and postage inspired a cost-cutting effort, and the manufacturer moved to distributing its software updates via email. It informed its customers by email and letter that they would receive their updates monthly in email. They simply needed to read their email on the computer running the medical office software and double-click the attachment to update their system. For protection, customers were told to verify that the message's From: field showed the software company's address.
This worked well, until a group of attackers decided to propagate a virus to the medical offices. The attack was almost too simple. They obtained a list of doctors' offices and their email addresses through a series of social engineering attacks against both the software company and the offices themselves. They then constructed an email body similar to the legitimate body. The attachment contained the virus and some simple code that created a dialog box, informing the user that the update was successful.
To distribute the virus, the attackers spoofed the email so that it appeared to be sent from the software manufacturer's technical support center. Because of the large number of insecure email servers available on the Internet, the attack was launched from several servers to help disguise its true source. The attack was quite successful; many doctors' offices lost all their records since their last backup, and in more than one case, there was no backup at all.
The flaw that allowed this exploit was the assumption by all parties that email attachments from trusted sources are always safe. This assumption has been exploited before, but rarely to this extent.
To counter this exploit, the software manufacturer could require all computers running its software to enforce software restriction policies. It could then distribute its code and updates, digitally signed, with confidence that the systems would remain secure. Although this might impact other applications running on the medical office computers until those applications were available with digital signatures, it is almost certainly worth the inconvenience to ensure the security and confidentiality of this information.
Reputable authors have no reason not to sign their code and provide you with those assurances. Signing code requires a software utility, which Microsoft and other vendors provide for free. A code signing encryption key pair costs between $300 and $1,000 per year, depending on the certificate vendor from which the key pair is purchased.
Once purchased, the key pair can be used to sign an unlimited number of software packages. Code signing is not an expensive proposition, and reputable authors can easily justify the expense. Companies can even sign their code with a self-produced key and provide you with the public portion of the key. This technique requires a bit more effort on your part, since you have to download the key, but it allows publishers to sign their code for practically no cost whatsoever.
Code signing applies to two types of software: device drivers and regular software applications. Device drivers are pieces of software that interface with hardware, such as a mouse or removable storage device. Signed device drivers are especially important, because device drivers execute in kernel mode, with special and powerful permissions under Windows. A maliciously written (or modified) device driver can cause an incredible amount of damage to a computer or network.
Regular software applications are the ones you and your users run on a day-to-day basis. These can do a great deal of damage, too, especially if executed by an administrator, since administrators have such wide-ranging capabilities on the network. In fact, the potential for a malicious application to use your administrator credentials to wreak havoc is a primary reason behind POLA, the principle of least access, as discussed in Chapter 2. If you use your administrator user account only when you actually need to perform administrative tasks, you'll reduce the likelihood that a malicious piece of software can use your credentials.