Experts increasingly advise that the only way to make sure business software and networks are secure is through trial by fire. That’s why there’s now a booming business in “penetration testing,” or the art of the “ethical hack.” Consultants find out where security is weak by trying to penetrate a business’s systems. If they succeed, they advise how they did it, so that the business has a better chance of preventing real hackers from breaking in.

Ethical hacking is becoming an especially valued tool among security-intensive technology companies, such as financial services ventures, entities that handle personal medical data, and retail companies that manage customer account information. And in federal agencies, under the Government Information Security Reform Act (44 U.S.C. § 3531 et seq.), enacted in 2000, computer systems are being subjected to penetration testing — tests that a number of agencies’ systems have recently failed, prompting calls for quick improvements in light of the terrorism of 2001.

Yet it’s important to do a careful legal check before putting ethical hackers to work. Intrusive security testing, if not properly planned and implemented, can result in lawsuits and liability.

DO YOU HAVE THE RIGHT TO HACK?

Like scheduled fire drills, pre-announced penetration tests may lose some of their effectiveness along with the element of surprise. Nevertheless, confirming that you have the right to conduct the test, even if it means tipping off the target, is a vital first step. Rule No. 1 — making sure you have the right to hack — applies not only to the consultant hired to do the hacking but also to the company for which ethical hacking is being performed. (This and all subsequent references to “the company” apply to any organization that uses computers and cares about information security.)

The company that retains a security consultant for penetration testing typically authorizes the consultant, by contract, to engage in certain acts.
These acts may range from looking for vulnerabilities in code to trying to persuade a company employee to reveal a password. The scope of the authorization defines the line between the consultant performing its contractual obligations and, potentially, committing a crime. For example, the Computer Fraud and Abuse Act of 1986 (18 U.S.C. § 1030(a)) prescribes criminal penalties not only for unauthorized access, but also for exceeding authorized access to certain computers. So the consultant must make sure that its agreement with the company clearly authorizes any intrusive acts in which the consultant expects to engage.

By analogy, consider what might happen if the Federal Aviation Administration retained a security tester to try to penetrate a checkpoint at Dulles Airport outside Washington, D.C., with an unloaded gun, and the tester instead brought a gun that was loaded — or tried to breach security at nearby BWI Airport.

A consulting firm that employs ethical hackers also needs to make sure that its employees stay within bounds. Likewise, the company needs to consider what its options will be if they don’t. Perhaps the ultimate security testing nightmare is that a security firm’s personnel — who may include at least a few (former) “unethical” hackers — will detect significant vulnerabilities but, rather than reporting them, will exploit them for illegal gain. As security expert John Bumgarner pointed out in a Security Management article on penetration testing (“Waive Goodbye to Liability,” Jan. 1, 2001), “an unscrupulous member of a penetration test team might steal and resell corporate secrets.”

What’s perhaps less obvious is that a company that wants to “hack itself” — no matter how ethical its hackers — also must make sure it has the right to do so. You might think that a company is always entitled to try to penetrate its own systems, just as a person is free to try to crack the lock on his own front door. But these situations are not quite the same.
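The scope-of-authorization point above can be made concrete in a testing tool itself. The sketch below is purely hypothetical — the hosts, the `AUTHORIZED_SCOPE` set, and the `probe_port` helper are invented for illustration — but it shows how a tester's script might refuse to touch anything outside the boundary the engagement contract draws:

```python
import socket

# Hypothetical scope list, drawn from the signed test plan. The addresses
# are from the TEST-NET documentation range and are invented for this sketch.
AUTHORIZED_SCOPE = {"192.0.2.10", "192.0.2.11"}

def probe_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; return True if the port accepts it."""
    if host not in AUTHORIZED_SCOPE:
        # Probing beyond the contract's scope risks exceeding authorized
        # access -- refuse rather than proceed.
        raise PermissionError(f"{host} is outside the authorized test scope")
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A real engagement would enforce scope far more rigorously (and in writing), but even a toy guard like this captures the contractual line the statute makes consequential.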
There are at least two circumstances in which the company may need authorization to hack itself. The first is when the company is a licensee rather than an owner of the software involved. The second is when the testing involves access to a third-party network. In either case, a company that tries to test its own security without obtaining the requisite permissions can face significant legal peril.

BEWARE LICENSE RESTRICTIONS

Much of the software a company uses is likely to be licensed to it by a software supplier that retains ownership of the copyright and other intellectual property rights. The company’s rights to use the software are governed by the license agreement. Most software license agreements expressly bar decompilation or disassembly, which is the process of deriving source code from object code. They also typically forbid “reverse engineering,” which conceivably extends to additional means of determining how the software works.

Intrusive tests of the security of software applications often involve attempted decompilation or other acts that potentially fit within a definition of reverse engineering. Moreover, even without decompilation or reverse engineering, such tests may involve acts that fall outside the license grant or that violate express restrictions on the scope of the license. For example, the license may limit use to certain authorized users who are employees of the licensee, thus excluding nonemployee consultants. Or it may state that the software may be used only for certain functional purposes for which it is designed. Use by a consultant for security testing may not meet these contractual requirements. It may even constitute a material breach, which a licensor could invoke as grounds to terminate the license.

As a result, a company that contemplates security testing of any software that it uses as a licensee needs to focus on more than just its agreement with the testing firm. It also must scrutinize its software license agreements.
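To see why disassembly worries licensors, it helps to see how little effort it takes. The miniature below uses Python's standard `dis` module on a trivial function standing in for a licensed program; the `checksum` function is invented for the illustration, and real-world disassembly of a vendor's binaries would use other tools, but the principle — recovering readable instructions from compiled code — is the same:

```python
import dis

# Hypothetical stand-in for licensed "object code": CPython compiles this
# function to bytecode when it is defined.
def checksum(data):
    return sum(data) % 256

# Disassembly renders the compiled form as human-readable instructions --
# the kind of derivation most license agreements expressly forbid when
# aimed at a vendor's product.
dis.dis(checksum)
```

Running this prints the function's bytecode instruction by instruction, which is exactly the sort of visibility into a program's internals that license restrictions on decompilation and disassembly are written to prevent.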
Unless the license agreements clearly permit the contemplated security testing, the licensee should obtain express written authorization from its licensors before allowing the testing to proceed.

The concern here is not merely hypothetical. Most licensors will have an intense interest in any activities that have the potential to expose security vulnerabilities in their products. At a minimum, they are likely to insist that the results be reported to them and kept confidential. If a licensee has tests performed without permission and the tests reveal significant weaknesses in the licensor’s product, the licensor may be highly motivated to take legal action against both the party that authorizes the tests and the party that conducts them.

A company should also make sure it has permission from any third-party network services providers before performing penetration testing of a computer network, even if the company is the sole or primary user of the network or seeks to access only portions of the network that carry its own data. Most providers of network services authorize access only for specific purposes and contractually require compliance with network security policies. It’s a rare security policy that doesn’t ban hacking.

Even a test aimed at a particular company’s own data may affect the network on a broader scale. For example, it may affect the overall functionality or performance of the network, not only for that company but also for other users of the network. Either the intrusion or responsive measures by the network provider can result in network downtime, for which the network provider is likely to be obligated to compensate all its customers under service level agreements. In the worst case, the situation may expose other users to breaches of security. Because of these risks, some network services providers flatly refuse to authorize penetration testing.
Perhaps even more than some software licensors, network services providers are likely to be extremely interested in any penetration attempts, whatever the purpose. They are not likely to look favorably on any attempts that occur without their consent.

If a network services provider forbids penetration testing and won’t budge from its position, consider asking to see the reports of security testing, if any, that the provider itself has performed or commissioned. If the provider has no test data to show, the security of the network may be problematic.

In this context, it’s worth noting how the Digital Millennium Copyright Act (17 U.S.C. § 1201 et seq.) does and does not protect security testing. With certain exceptions, § 1201(a)(1)(A) of the DMCA prohibits any “circumvention” of “a technological measure that effectively controls access to a work protected” by copyright. Section 1201(j) sets forth a qualified exception for “security testing,” which it defines as “accessing a computer, computer system, or computer network, solely for the purpose of good faith testing, investigating, or correcting, a security flaw or vulnerability, with the authorization of the owner or operator of such computer, computer system, or computer network.”

Because the exception applies only to security testing of a network “with the authorization of the owner or operator of such … network,” it remains essential to obtain consent from a third-party network provider before penetration testing. The exception reduces the risk that authorized testing will violate the DMCA, but does not address the more basic issue of gaining permission.

The need for authorization from a licensor or network provider can be addressed at the time security testing is performed, but if the agreements governing the company’s use of the software and the network are already in place, such authorization may be difficult to obtain.
For this reason, companies that anticipate needing to conduct or authorize intrusive testing should seek specific authorization when they originally negotiate software license and network services agreements. Companies with sensitive technologies or in business areas that depend on a high level of security are especially well-advised to anticipate the mechanics of security testing when they first build infrastructure and applications. In addition to authorization for security testing, such companies should negotiate for representations and warranties concerning security methods, and should require the licensor or network provider to notify them of security issues and provide fixes.

MAKE SURE YOU’RE PROTECTED

Companies with technology activities that require reliable security increasingly are finding that ethical hacking is a worthwhile, and perhaps essential, security investment. If a company makes this investment, though, the last thing it wants is to create legal problems or to compromise the very security that the testing is designed to help strengthen. Unfortunately, if you want the kernel you must crack the nut. By its very nature, penetration testing generates information that, in the wrong hands, can create an even greater security risk.

Each party must look for ways to protect itself. The consultant typically will seek to limit its liability, perhaps to all or some of the fees paid to it for its services. Although other providers of information technology services often indemnify customers for infringement or breach by the provider, ethical hacking consultants may resist granting such indemnification. In fact, the consultant may even seek indemnification from the company for which the testing is performed. As noted earlier, the consultant may be asked to try to penetrate software or networks that the company uses but does not own.
In such circumstances, the consultant may depend on the company to obtain authorization, and the consultant thus may expect to be indemnified if it incurs liabilities because of the company’s failure to do so. The company, on the other hand, is likely to want the consultant to remain liable at least to some extent, both to provide a degree of compensation in the event of problems and to give the consultant adequate incentives to be careful and accountable. For example, the company may insist that, at a minimum, the consultant be liable for damages caused by gross negligence or willful breach.

Of course, numerous other contractual provisions are important from the standpoint of each party to a security services agreement. Each party to such an agreement should treat it as one of its most sensitive contracts. Ethical hacking can make a company more secure, but it’s not without legal hazards. These hazards make careful legal planning an essential ingredient of a successful security effort.

Douglas E. Phillips is of counsel at D.C.’s Covington & Burling. He advises clients in e-finance and other technology-intensive fields on software and network services transactions, including consulting agreements for security testing. The views expressed in this article are solely his own.
