Creating Security Policies and Implementing Identity Management with Active Directory


Chapter 1 Architecting the Human Factor
Solutions in this chapter:
• Balancing Security and Usability
• Managing External Network Access
• Managing Partner and Vendor Networking
• Securing Sensitive Internal Networks
• Developing and Maintaining Organizational Awareness

Chapter 2 Creating Effective Corporate Security Policies
Solutions in this chapter:
• The Founding Principles of a Good Security Policy
• Safeguarding Against Future Attacks
• Avoiding Shelfware Policies
• Understanding Current Policy Standards
• Creating Corporate Security Policies
• Implementing and Enforcing Corporate Security Policies
• Reviewing Corporate Security Policies

Chapter 3 Planning and Implementing an Active Directory Infrastructure
Solutions in this chapter:
• Plan a strategy for placing global catalog servers.
• Evaluate network traffic considerations when placing global catalog servers.
• Evaluate the need to enable universal group caching.
• Implement an Active Directory directory service forest and domain structure.
• Create the forest root domain.
• Create a child domain.
• Create and configure Application Data Partitions.
• Install and configure an Active Directory domain controller.
• Set an Active Directory forest and domain functional level based on requirements.
• Establish trust relationships. Types of trust relationships might include external trusts, shortcut trusts, and cross-forest trusts.

Chapter 4 Managing and Maintaining an Active Directory Infrastructure
Solutions in this chapter:
• Manage an Active Directory forest and domain structure.
• Manage trust relationships.
• Manage schema modifications.
• Manage UPN suffixes.
• Add or remove a UPN suffix.
• Restore Active Directory directory services.
• Perform an authoritative restore operation.
• Perform a nonauthoritative restore operation.

Chapter 5 Managing User Identity and Authentication
Solutions in this chapter:
• Identity Management
• Identity Management with Microsoft’s Metadirectory
• MMS Architecture
• Password Policies
• User Authentication
• Single Sign-on
• Authentication Types
• Internet Authentication Service
• Creating a User Authorization Strategy
• Using Smart Cards
• Implementing Smart Cards
• Create a password policy for domain users




Chapter 1

Architecting the Human Factor

Solutions in this chapter:
• Balancing Security and Usability
• Managing External Network Access
• Managing Partner and Vendor Networking
• Securing Sensitive Internal Networks
• Developing and Maintaining Organizational Awareness

Introduction

Developing, implementing, and managing enterprise-wide security is a multiple-discipline project. As an organization continues to expand, management’s demand for usability and integration often takes precedence over security concerns. New networks are brought up as quickly as the physical layer is in place, and in the ongoing firefight that most administrators and information security staff endure every day, little time is left for well-organized efforts to tighten the “soft and chewy center” that so many corporate networks exhibit.

In working to secure and support systems, networks, software packages, disaster recovery planning, and the host of other activities that make up most of our days, it is often forgotten that all of this effort ultimately supports only one individual: the user. In any capacity you might serve within an IT organization, your tasks (however esoteric they may seem) are engineered to provide your users with safe, reliable access to the resources they require to do their jobs. Users are the drivers of corporate technology, but are rarely factored in when discussions of security come up. When new threats are exposed, there is a rush to seal the gates, ensuring that threats are halted outside of the organization’s center. It is this oversight that led to massive internal network disruptions during events as far back as the Melissa virus, and as recently as Nimda, Code Red, and the SQL null-password worm Spida.

In this chapter, I provide you with some of the things I’ve learned in assisting organizations with the aftermath of these events, the lessons learned in post-mortem, and the justification they provide for improved internal security. By exploring common security issues past and present and identifying common elements, I lay the foundation for instituting effective internal security, through both available technical means and organizational techniques.

Balancing Security and Usability

The term “security” as it is used in this book refers to the process of ensuring the privacy, integrity, ownership, and accessibility of the intangibles commonly referred to as data. Any failure to provide these four requirements will lead to a situation perceived as a security breach. Whether the incident involves disclosure of payroll records (privacy), the unauthorized alteration of a publicly


disseminated press release (integrity), misappropriation of software code or hardware designs (ownership), or a system failure that results in staff members being unable to conduct their daily business (accessibility), an organization’s security personnel will be among the first responders and will likely be called to task in the aftermath.

Hang around any group of security-minded individuals long enough and eventually you will overhear someone say, “Hey, well, they wanted it secured at all costs, so I unplugged it.” This flippant remark underscores the conflict between ensuring the privacy, integrity, and ownership of data while not impacting its accessibility. If it were not for the necessity of access, we could all simply hit the big red emergency power button in the data center and head for Maui, supremely confident that our data is secure.

As part of your role in securing your environment, you have undoubtedly seen security initiatives that have been criticized, scaled back, or eliminated altogether because they had an adverse impact on accessibility. Upon implementation of such initiatives, a roar often goes up across the user community, leading to a managerial decree that legitimate business justifications exist that outweigh the benefits of your project. What’s worse, these events can establish a precedent with both management and the user community, making it more difficult to implement future plans. When you mount your next security initiative and submit your project plan for management approval, those in charge of reviewing your proposal will look right past the benefits of your project and remember only the spin control they had to conduct the last time you implemented changes in the name of security. It is far too simple to become so wrapped up in implementing bulletproof security that you lose sight of the needs of the people you are responsible for supporting.

To avoid developing a reputation for causing problems rather than providing solutions, you need to make certain that you have looked at every potential security measure from all sides, including the perspectives of both upper management and the users who will be affected. It sounds simple, but this aspect is all too often overlooked, and if you fail to consider the impact your projects will have on the organization, you will find it increasingly difficult to implement new measures. In many cases, you need only relate the anticipated impact in your project plan, and perhaps prepare brief documentation to be distributed to the groups and individuals affected. Managers do not like to be surprised, and surprise is often met with frustration, distrust, and outrage. If properly documented ahead of time, the same changes that would cause an uproar and frustration may simply result in quiet acceptance. This planning and communication is the heart of balancing your security needs with your clients’ usability expectations.

With this balance in mind, let’s take a look at some of the factors that have influenced internal security practices over the past few years. These factors include the risks that personnel passively and actively introduce, the internal security model that a company follows, the role a security policy plays in user response to security measures, and the role that virus defense plays in the overall security strategy.

Personnel as a Security Risk

Think of an incident that you’ve responded to in the past. Trace back the sequence of events that triggered your involvement, and you will undoubtedly be


able to cite at least one critical juncture where human intervention contributed directly to the event, be it through ignorance, apathy, coercion, or malicious intent. Quite often these miscues are entirely forgivable, regardless of the havoc they wreak. The best example of user-initiated events comes from the immensely successful mail-borne viruses of the recent past, including Melissa, LoveLetter, and Kournikova. These viruses and their many imitators (LoveLetter and Kournikova were themselves imitations of the original Melissa virus) made their way into the record books by compromising the end user, the most trusted element of corporate infrastructure.

Personnel are the autonomous processing engines of an organization. Whether they are responsible for processing paperwork, managing projects, finessing public relations, establishing and shepherding corporate direction, or providing final product delivery, they all work as part of a massive system known collectively as the company. The practices and philosophies guiding this intricate system of cogs, spindles, drivers, and output have evolved over decades. Computers and networked systems were introduced to this system over the past thirty years, and systematic information security procedures have begun in earnest only over the past twenty.

Your job as a security administrator is to design and implement checkpoints, controls, and defenses that can be applied to the organizational machine without disrupting the processes already in place. You have probably heard of the principle of least privilege, an adage that states that for any task, the operator should have only the permissions necessary to complete the task. In the case of macro viruses, usability enhancements present in the workgroup application suite were hijacked to help the code spread, and in many instances a lack of permissions on large-scale distribution lists led to disastrous consequences. Small enhancements for usability were not counterbalanced with security measures, creating a pathway for hostile code.

Individuals can impact the organizational security posture in a variety of ways, both passive and active. Worms, Trojans, and viruses tend to exploit the user passively, and do so on a grand scale, which draws more attention to the issue. However, individuals can actively contribute to security issues as well, such as when a technically savvy user installs his own wireless access point. In the following case studies, you’ll see how both passive and active user involvement contributed to two different automated exploits.
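The principle of least privilege described above can be sketched in a few lines. This is a toy model, not any real directory or mail system: the role names and permission strings are illustrative assumptions. The point is the subset test — an operation proceeds only if the caller's role explicitly grants every permission the task requires.

```python
# Toy least-privilege check. Roles and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "mail_user": {"read_own_mailbox", "send_mail"},
    "list_admin": {"read_own_mailbox", "send_mail", "modify_distribution_list"},
}

def authorized(role, required):
    """Allow a task only when every required permission is granted to the role."""
    granted = ROLE_PERMISSIONS.get(role, set())
    return required <= granted  # subset test: no extra rights are ever assumed

# An ordinary mail user cannot alter a large distribution list --
# exactly the control whose absence helped macro viruses spread.
print(authorized("mail_user", {"modify_distribution_list"}))   # False
print(authorized("list_admin", {"modify_distribution_list"}))  # True
```

Under this model, a macro running as an ordinary user simply lacks the right to touch company-wide distribution lists, so a usability enhancement in the mail client cannot be hijacked into a mass-mailing channel.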

Case Studies: Autonomous Intruders

As security professionals, we have concerned ourselves with the unknown—the subtle, near-indecipherable surgical attacks that have almost no impact on normal business proceedings but can expose our most sensitive data to the world. We have great respect for the researcher who discovers a remotely exploitable buffer overflow in a prominent HTTP server, but we loathe the deplorable script-kiddie who develops a macro virus that collapses half our infrastructure overnight. Many people who work in security even dismiss virus incidents and defense as being more of a PC support issue.

However, viruses, worms, and Trojans have helped raise awareness about internal security, as we’ll see later in this chapter. In this section, you’ll get a look at two such applications that have had an impact on internal security, and see how users were taken advantage of to help the code spread. Although the progression of events in the case studies is based on factual accounts, the names and other circumstances have been changed to protect the innocent.


Study 1: Melissa

On March 26, 1999, a document began appearing on a number of sexually oriented Usenet newsgroups, carrying within it a list of pornographic Web sites and passwords. This document also contained one of the most potent Microsoft Word macro viruses to date; upon opening the document, hostile code would use well-documented hooks to create a new e-mail message, address it to the first 50 entries of the default address book, insert a compelling subject, attach the document, and deliver the e-mail.

Steve McGuinness had just logged into his system at a major financial institution in New York City. He was always an early riser, and was usually in the office long before anyone else. It was still dark; the sun had yet to inch its way over the artificial horizon imposed by Manhattan’s coastal skyline. As Outlook opened, Steve began reviewing the subjects of the messages in bold, those that had arrived since his departure the night before. Immediately Steve noticed that the messages were similar, and a quick review of the “From” addresses provided an additional hint that something was wrong. Steve hadn’t received so much as a friendly wave from Hank Strossen since the unfortunate Schaumsburg incident, yet here was a message from Hank with the subject “Important Message From Hank Strossen”. Steve also had “Important Messages” from Cheryl Fitzpatrick and Mario Andres to boot.

Steve knew instinctively something wasn’t right about this. Four messages with the same subject meant a prank—one of the IT guys had probably sent out these messages as a reminder to always shut down your workstation, or at least use a password-protected screensaver. Such pranks were not uncommon—Steve thought back to the morning he’d come into the office to find his laptop had been stolen, only to find that an IT manager had taken it hostage since it wasn’t locked down. Steve clicked the paperclip to open the attached document, and upon seeing the list of pornographic Web sites, immediately closed the word processor. He made a note to himself to contact IT when they got in (probably a couple of hours from now) and pulled up a spreadsheet he’d been working on.

While he worked, more and more of the messages popped up in his mailbox as Steve’s co-workers up and down the eastern seaboard began reviewing their e-mail. By 8:15 A.M., the corporate mail servers had become overwhelmed with Melissa instances, and the message stores began to fail. In order to stem the flood of messages and put a halt to the rampant spread of the virus, the mail servers were pulled from the network, and business operations ground to a halt.

Although it could be argued that since Steve (and each of his co-workers) had to open the message attachment to activate the virus, their involvement was active, Melissa was socially engineered to take advantage of normal user behavior. Since the body of the message didn’t contain any useful content, the user would open the attachment to see if there was anything meaningful within. When confronted with a document full of links to pornographic Web sites, the user would simply close the document and not mention it out of embarrassment.

Study 2: Sadmind/IIS Worm

In May of 2001, many Microsoft IIS Web site administrators began to find their Web sites defaced with an anti–United States government slogan and an e-mail address within the domain. It rapidly became clear that a new worm had entered the wild, and was having great success in attacking Microsoft Web servers.


Chris Noonan had just started as a junior-level Solaris administrator with a large consulting firm. After completing orientation, one of his first tasks was to build his Solaris Ultra-10 desktop to his liking. Chris was ecstatic; at a previous job he had deployed an entire Internet presence using Red Hat Linux, but by working with an old Sparc 5 workstation he’d purchased from a friend, he’d been able to get this new job working with Solaris systems. Chris spent much of the day downloading and compiling his favorite tools, and getting comfortable with his new surroundings. By midday, Chris had configured his favorite Web browser, shell, and terminal emulator on his desktop, and spent lunch browsing some security Web sites for new tools he might want to load on his system.

On one site, he found a post with source code for a Solaris buffer overflow against the Sun Solstice AdminSuite RPC program, sadmind. Curious, and looking to score points with his new employers, Chris downloaded and compiled the code, and ran it against his own machine. With a basic understanding of buffer overflows, Chris hoped the small program would provide him with a privileged shell, and that later in the afternoon he could demonstrate the hack to his supervisor. Instead, after announcing “buffer-overflow sent,” the tool simply exited. Disappointed, Chris deleted the application and source code, and continued working. Meanwhile, Chris’ system began making outbound connections on both TCP/80 and TCP/111 to random addresses both in and out of his corporate network. A new service had been started as well, a root-shell listener on TCP/600, and his .rhosts file had been appended with “+ +”, permitting the use of the r-tools from any host that could access the appropriate service port on Chris’ system.

Later in the afternoon, a senior Solaris administrator sounded the alarm that a worm was present on the network. A cron job on his workstation had alerted him via pager that his system had begun listening on port 600, and he quickly learned from the syslog that his sadmind task had crashed. He noticed many outbound connections on port 111, and the network engineers began sniffing the network segments for other systems making similar outbound connections. Altogether, three infected systems were identified and disconnected, among them Chris’ new workstation. Offline, the creation times of the alternate inetd configuration file were compared for each system, and Chris’ system was determined to be the first infected. The next day, the worm was found to have been responsible for two intranet Web server defacements, and two very irate network-abuse complaints had been filed by the ISP for their Internet segment.

This sequence of events represents the best-case scenario for a Sadmind/IIS worm infection. In most cases, the Solaris hosts infected were workhorse machines, not subject to the same sort of scrutiny as that of the administrator who found the new listening port. The exploit that the worm used to compromise Solaris systems was over two years old, so affected machines tended to be the neglected NTP server or fragile application servers whose admins were reluctant to keep up to date with patches. Had it not been for the worm’s noisy IIS server defacements, this worm might have been quite successful at propagating quietly and lying dormant, triggering at a certain time or by some sort of passive network activation, such as bringing down a host that the worm has been pinging at specific intervals.

In this case, Chris’ excitement and efforts to impress his new co-workers led to his willful introduction of a worm. Regardless of his intentions, Chris


actively obtained hostile code and executed it while on the corporate network, leading to a security incident.
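The check that caught the worm — a scheduled job comparing the host's listening ports against a known-good baseline — can be sketched as follows. This is an illustrative sketch, not the administrator's actual script; the baseline port set is an assumption, and only TCP/600 (the worm's root-shell listener) comes from the account above.

```python
import socket

# Ports this host is expected to listen on (illustrative baseline).
BASELINE = {22, 25, 111}

def listening_ports(candidates, host=""):
    """Probe local TCP ports and return the subset accepting connections."""
    open_ports = set()
    for port in candidates:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.add(port)
    return open_ports

def unexpected_ports(observed, baseline=BASELINE):
    """Anything listening outside the baseline is grounds for an alert."""
    return observed - baseline

# The worm's root-shell listener on TCP/600 surfaces immediately:
print(unexpected_ports({22, 25, 111, 600}))  # {600}
```

Run from cron every few minutes and wired to a pager or mail alert, even a check this simple turns a quiet backdoor into a loud one, which is exactly what limited the worm's spread in the story.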

The State of Internal Security

Despite the NIPC statistics indicating that the vast majority of losses incurred by information security incidents originate within the corporate network, security administrators at many organizations still follow the “exoskeleton” approach to information security, continuing to devote the majority of their time to fortifying the gates while paying little attention to the extensive web of sensitive systems distributed throughout their internal networks. This concept is reinforced with every virus and worm that is discovered “in the wild”—since the majority of security threats start outside of the organization, the thinking goes, the damage can be prevented by ensuring that they don’t get inside.

The exoskeleton security paradigm exists due to the evolution of the network. When networks were first deployed in commercial environments, hackers and viruses were more or less the stuff of science fiction. Before the Internet became a business requirement, a wide-area network (WAN) was actually a collection of point-to-point private links between trusted sites. The idea of an employee wreaking havoc on her own company’s digital resources was laughable. As the Internet grew and organizations began joining public networks to their previously independent systems, the media began to distribute stories of the “hacker”, the unshaven social-misfit cola addict whose technical genius was devoted entirely to ushering in an anarchic society by manipulating traffic on the information superhighway. Executive orders were issued, and walls were built to protect the organization from the inhabitants of the digital jungle that existed beyond the phone closet.

The end result of this transition was an isolationist approach. With a firewall defending the internal networks from intrusion by external interests, the organization was deemed secure. Additional security measures were limited to defining access rights on public servers and ensuring e-mail privacy. Internal users were not viewed as the same type of threat as the external influences beyond the corporate firewalls, so the same deterrents were not necessary to defend against them.

Thanks in large part to the wake-up call from the virus incidents of the past few years, many organizations have begun implementing programs and controls to bolster security from the inside. Some organizations have even begun to apply the exoskeleton approach to some of their more sensitive departments, using techniques that we will discuss in the section “Securing Sensitive Internal Networks.” But largely, the exoskeleton approach of “crunchy outside, chewy center” is still the norm.

The balance of security and usability generally follows a trend like a teeter-totter—at any given time, usability is increasing and its security implications are not countered, so the balance shifts in favor of usability. This makes sense, because usability follows the pace of business while security follows the pace of the threat. Periodically, a substantial new threat is discovered, and security countermeasures bring the scales closer to even. The threat of hackers compromising networks from the public Internet brought about the countermeasure of firewalls and exoskeleton security, and the threat of autonomous code brought about the introduction of anti-virus components


throughout the enterprise. Of course, adding to the security side of the balance can occasionally have an effect on usability, as you’ll see in the next section.

User Community Response

Users can be like children. If a toddler has never seen a particular toy, he is totally indifferent to it. However, if he encounters another child playing with a Tickle Me Elmo, he begins to express a desire for one of his own, in his unique fashion. Finally, once he’s gotten his own Tickle Me Elmo, he will not likely give it up without a severe tantrum ensuing. The same applies to end users and network access. Users quickly blur the line between privileges and permissions when they have access to something they enjoy.

During the flurry of mail-borne viruses in 1999 and 2000, some organizations made emergency policy changes to restrict access to Web-based mail services such as Hotmail to minimize the ingress of mail viruses through uncontrolled systems. At one company I worked with, this touched off a battle between users and gateway administrators as the new restrictions interrupted the normal course of business. Regardless of the fact that most users’ Web-mail accounts were of a purely personal nature, the introduction of filters caused multiple calls to the help desk. The user base was inflamed, and immediately people began seeking alternate paths of access. In one example, a user discovered that using the Babelfish translation service, set to translate Spanish to English on the Hotmail Web site, allowed access. Another discovered that Hotmail could be accessed through alternate domain names that hadn’t been blocked, and the discovery traveled by word of mouth. Over the course of the next week, administrators monitored Internet access logs and blocked more than 50 URLs that had not been on the original list.

This is an example of a case where user impact and response was not properly anticipated and addressed. As stated earlier, in many cases you can garner user support (or at least minimize active circumvention) for your initiatives simply by communicating more effectively. Well-crafted policy documents can help mitigate negative community response by providing guidelines and reference materials for managing it. This is discussed in depth in Chapter 2, “Creating Effective Corporate Security Policies,” in the section “Implementing and Enforcing Corporate Security Policies.”

Another example of a change that evoked a substantial user response is peer-to-peer file-sharing applications. In many companies, software like Napster had been given plenty of time to take root before efforts were made to stop the use of the software. When the “Wrapster” application made it possible to share more than just music files on the Napster service, file sharing became a more tangible threat. As organizations began blocking the Napster Web site and central servers, other file-sharing applications began to gain popularity. Users discovered that they could use a Gnutella variant, or later the Kazaa network or Audiogalaxy, and many of these new applications could share any file type, without the use of a plug-in like “Wrapster.”

With the help of the Internet, users are becoming more and more computer savvy. Installation guides and Web forums for chat programs or file-sharing applications often include detailed instructions on how to navigate corporate proxies and firewalls. Not long ago, there was little opportunity for a user to obtain new software to install, but now many free or shareware


applications are little more than a mouse click away. This new accessibility made virus defense more important than ever.

The Role of Virus Defense in Overall Security

I have always had a certain distaste for virus activity. In my initial foray into information security, I worked as a consultant for a major anti-virus software vendor, assisting with implementation and management of corporate virus-defense systems. Viruses to me represented a waste of talent; they were mindless destructive forces exploiting simplistic security flaws in an effort to do little more than create a fast-propagating chain letter. There was no elegance, no mystique, no art—they were little more than a nuisance.

Administrators, engineers, and technicians who consider themselves to be security-savvy frequently distance themselves from virus defense. In some organizations, the teams responsible for firewalls and gateway access have little to no interaction with the system administrators tasked with virus defense. After all, the reasoning goes, virus defense is very basic—simply get the anti-virus software loaded on all devices and ensure that it is updated frequently. This is a role for desktop support, not an experienced white-hat.

Frequently, innovative viruses are billed as a “proof of concept.” Their developers claim (be it from jail or an anonymous remailer) that they created the code simply to show what could be done due to the security flaws in certain applications or operating systems. Their motivations, they insist, were to bring serious security issues to light. This is akin to demonstrating that fire will burn skin by detonating a nuclear warhead.

However obnoxious, viruses have continually raised the bar in the security industry. Anti-virus software has set a precedent for network-wide defense mechanisms. Over the past three years, almost every organization I’ve worked with has had corporate guidelines dictating that all file servers, e-mail gateways, Internet proxies, and desktops run an approved anti-virus package. Many anti-virus vendors now provide corporate editions of their software that can be centrally managed. Anti-virus systems have blazed a trail from central servers down to the desktop, and are regarded as a critical part of the infrastructure. Can intrusion detection systems, personal firewalls, and vulnerability assessment tools be far behind?

Managing External Network Access

The Internet has been both a boon and a bane for productivity in the workplace. Although some users benefit greatly from the information services available on the Internet, other users will invariably waste hours on message boards, instant messaging, and less-family-friendly pursuits. Regardless of the potential abuses, the Internet has become a core resource for hundreds of disciplines, placing a wealth of reference materials a few short keystrokes away. In this section, you’ll explore how organizations manage access to resources beyond the network borders.

One of the first obstacles to external access management is the corporate network architecture and the Internet access method used. To minimize congestion over limited-bandwidth private frame-relay links or virtual private networking between various organizational offices, many companies have permitted each remote office to manage its own public Internet access, a method that provides multiple inbound access points that need


to be secured. Aside from the duplicated cost of hardware and software, multiple access points complicate policy enforcement as well. The technologies described in this section apply to both distributed and centralized Internet access schemas; however, you will quickly see how managing these processes for multiple access points justifies the cost of centralized external network access. If you are unsure of which method is in place in your organization, refer to Figure 1.1.

Figure 1.1 Distributed and Centralized External Network Access Schemas

Gaining Control: Proxying Services

In a rare reversal of form following function, one of the best security practices in the industry was born of the prohibitive costs of obtaining IP address space. For most organizations, the primary reason for establishing any sort of Internet presence was the advent of e-mail. E-mail and its underlying protocol, SMTP (Simple Mail Transfer Protocol), were not particularly well suited for desktop delivery, since direct delivery would require constant connectivity, and so common sense dictated that organizations implement an internal e-mail distribution system and then add an SMTP gateway to facilitate inbound and outbound messaging. Other protocols, however, did not immediately lend themselves to the store-and-forward technique of SMTP.

A short while later, protocols such as HTTP (HyperText Transfer Protocol) and FTP (File Transfer Protocol) began to find their way into IT group meetings. Slowly, the Web was advancing, and more and more organizations were beginning to find legitimate business uses for these protocols. But unlike the asynchronous person-to-person nature of SMTP, these protocols were designed to transfer data directly from a computer to the user in real time.


Initially, these obstacles were overcome by assigning a very select group of internal systems public addresses so that network users could access these resources. But as demand and justification grew, a new solution had to be found—thus, the first network access centralization began. Two techniques evolved to permit users on a private network to access external services: proxies and NAT (network address translation).

Network address translation predated proxies and was initially intended as a large-scale solution for dealing with the rapid depletion of the IPv4 address space (see RFC 1744, “Observations on the Management of the Internet Address Space,” and RFC 1631, “The IP Network Address Translator [NAT]”). There are two forms of NAT, referred to as static and dynamic. In static NAT, there is a one-to-one relationship between external and internal IP addresses, whereas dynamic NAT maintains a one-to-many relationship. With dynamic NAT, multiple internal systems can share the same external IP address. Internal hosts access external networks through a NAT-enabled gateway that tracks the port and protocol used in each transaction and ensures that inbound responses are directed to the correct internal host. NAT is completely unaware of the contents of the connections it maintains; it simply provides network-level IP address space sharing.

Proxies operate higher in the OSI model, at the session and presentation layers. Proxies are aware of the parameters of the services they support, and make requests on behalf of the client. This service awareness means that proxies are limited to a certain set of protocols that they can understand, and usually require the client to have facilities for negotiating proxied connections. In addition, proxies are capable of providing logging, authentication, and content filtering. There are two major categories of proxies: the multiprotocol SOCKS proxy and the more service-centric HTTP/FTP proxies.
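The port-tracking behavior of dynamic NAT can be sketched as a translation table. This is a deliberately simplified model — real NAT gateways also track protocol state, timeouts, and port reuse — and the addresses and port numbers are illustrative:

```python
import itertools

class DynamicNAT:
    """Toy dynamic NAT: many internal hosts share one external address,
    distinguished by the external port the gateway assigns to each flow."""

    def __init__(self, external_ip):
        self.external_ip = external_ip
        self._ports = itertools.count(40000)   # pool of external ports
        self.table = {}                        # ext_port -> (int_ip, int_port)

    def outbound(self, int_ip, int_port):
        """Record an outbound flow; return the external (ip, port) it uses."""
        ext_port = next(self._ports)
        self.table[ext_port] = (int_ip, int_port)
        return (self.external_ip, ext_port)

    def inbound(self, ext_port):
        """Direct an inbound response back to the right internal host."""
        return self.table.get(ext_port)

nat = DynamicNAT("")
mapping = nat.outbound("", 51515)
print(mapping)                  # ('', 40000)
print(nat.inbound(mapping[1]))  # ('', 51515)
```

Note that nothing in the table records what the connection carries — only addresses and ports — which is the sense in which NAT is "completely unaware of the contents" of the traffic, in contrast to a proxy.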

Managing Web Traffic: HTTP Proxying

Today, most organizations make use of HTTP proxies in some form or another. An HTTP proxy can be used to provide content filtering and document caching services, restrict access based on authentication credentials or source address, and provide accountability for Internet usage. Today, many personal broadband network providers (such as DSL and cable) provide caching proxies to reduce network traffic and increase the transfer rates for commonly accessed sites. Almost all HTTP proxies available today can also proxy FTP traffic as an added bonus. Transparent HTTP proxies are gaining ground as well. With a transparent HTTP proxy, a decision is made on a network level (often by a router or firewall) to direct TCP traffic destined for common HTTP ports (for example, 80 and 443) to a proxy device. This allows large organizations to implement proxies without worrying about how to deploy the proxy configuration information to thousands of clients. The difficulty with transparent proxies, however, occurs when a given Web site operates on a nonstandard port, such as TCP/81. You can identify these sites in your browser because the port designation is included at the end of the URL. Most transparent proxies would miss such a request, and if proper outbound firewalling is in effect, the request would fail. HTTP proxies provide other benefits, such as content caching and filtering. Caching serves two purposes, minimizing bandwidth requirements for


commonly accessed resources and providing far greater performance to the end user. If another user has already loaded the New York Times home page recently, the next user to request that site will be served the content as fast as the local network can carry it from the proxy to the browser. If constantly growing bandwidth is a concern for your organization, and HTTP traffic accounts for the majority of inbound traffic, a caching proxy can be a great help.

Notes from the Underground…

Protect Your Proxies!

When an attacker wants to profile and/or attempt to compromise a Web site, their first concern is to make sure that the activity cannot be easily traced back to them. More advanced hackers will make use of a previously exploited system that they now “own,” launching their attacks from that host or a chain of compromised hosts to increase the chances that inadequate logging on one of the systems will render a trace impossible. Less experienced or accomplished attackers, however, will tunnel their requests through an open proxy, working from the logic that if the proxy is open, the odds that it is being adequately logged are minimal. Open proxies can cause major headaches when an abuse complaint is lodged against your company with logs showing that your proxy was the source address of unauthorized Web vulnerability scans, or worse yet, compromises. Proxies should be firewalled to prevent inbound connections on the service port from noninternal addresses, and should be tested regularly, either manually or with the assistance of a vulnerability assessment service. Some Web servers, too, can be hijacked as proxies, so be sure to include all your Web servers in your scans. If you want to do a manual test of a Web server or proxy, the process is very simple. Use your system’s telnet client to connect to the proxy or Web server’s service port as shown here (the hostname and URL are placeholders):

C:\>telnet proxy.example.com 80
Connecting to proxy.example.com…
GET http://www.example.com/ HTTP/1.0

[HTTP data returned here]

Review the returned data to ascertain whether or not it came from the site you requested. Bear in mind, many Web servers and proxies are configured to return a default page when they are unable to access the data you’ve requested, so although you may get a whole lot of HTML code back from this test, you need to review the contents of the HTML to decide whether or not it is the page you requested. If you’re testing your own proxies from outside, you would expect to see a connection failure, as shown here (again with a placeholder hostname):

C:\>telnet proxy.example.com 80
Connecting to proxy.example.com…
Could not open a connection to host on port 80 : Connect failed

This message indicates that the service is not available from your host, and is what you’d expect to see if you were trying to use your corporate HTTP proxy from an Internet café or your home connection.
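The manual test described in this sidebar can also be scripted for regular checks. The following is a rough Python sketch; the probe URL is a placeholder, and real scans should cover every proxy and Web server you operate:

```python
import socket

def check_open_proxy(host, port=80, timeout=5):
    """Ask host:port to fetch an external URL on our behalf. An open
    proxy returns HTTP data; a properly firewalled one refuses the
    connection. The probe URL below is only a placeholder."""
    request = (b"GET http://www.example.com/ HTTP/1.0\r\n"
               b"Host: www.example.com\r\n\r\n")
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(request)
            reply = s.recv(4096)
    except OSError:
        return "closed"   # what you want to see when testing from outside
    return "open" if reply.startswith(b"HTTP/") else "unclear"
```

As with the manual test, an "open" result still requires a human look at the returned HTML to rule out a default error page.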


Managing the Wildcards: SOCKS Proxying

The SOCKS protocol was developed by David Koblas and further extended by Ying-Da Lee in an effort to provide a multiprotocol relay to permit better access control for TCP services. While dynamic NAT could be used to permit internal users to access an array of external services, there was no way to log accesses or restrict certain protocols from use. HTTP and FTP proxies were common, but there were few proxies available to address less common services such as telnet, gopher, and finger. The first commonly used SOCKS implementation was SOCKS version 4. This release supported most TCP services but did not provide for any active authentication; access control was handled based on source IP address, the ident service, and a “user ID” field. This field could be used to provide additional access rights for certain users, but no facility was provided for passwords. SOCKS version 4 was a very simple protocol; only two methods were available for managing connections: CONNECT and BIND. After verifying access rights based on the user ID field, source IP address, destination IP address, and/or destination port, the CONNECT method would establish the outbound connection to the external service. When a successful CONNECT had completed, the client would issue a BIND statement to establish a return channel to complete the circuit. Two separate TCP sessions were utilized, one between the internal client and the SOCKS proxy, and a second between the SOCKS proxy and the external host. In March 1996, Ying-Da Lee and David Koblas, along with a collection of researchers from companies including IBM, Unify, and Hewlett-Packard, drafted RFC 1928, describing SOCKS protocol version 5. This new version of the protocol extended the original SOCKS protocol by providing support for UDP services, strong authentication, and IPv6 addressing.
In addition to the CONNECT and BIND methods used in SOCKS version 4, SOCKS version 5 added a new method called UDP ASSOCIATE. This method used the TCP connection between the client and SOCKS proxy to govern a UDP service relay. This addition to the SOCKS protocol allowed the proxying of burgeoning services such as streaming media.
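To illustrate just how lightweight the version 4 protocol is, the entire CONNECT request can be assembled in a handful of bytes. A sketch (the destination address and user ID are arbitrary examples):

```python
import socket
import struct

def socks4_connect_request(dst_ip: str, dst_port: int, user_id: str = "") -> bytes:
    """Build a SOCKS version 4 CONNECT request: version byte (4),
    command byte (1 = CONNECT), destination port and IPv4 address in
    network byte order, then the user ID field terminated by a NUL."""
    return (struct.pack("!BBH4s", 4, 1, dst_port, socket.inet_aton(dst_ip))
            + user_id.encode("ascii") + b"\x00")

req = socks4_connect_request("192.0.2.10", 80, "alice")
```

The proxy's reply is similarly terse: a version byte, a result code (90 for "request granted"), and echoed port and address fields. The absence of any password field is visible right in the wire format, which is exactly the gap version 5 closed.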

Who, What, Where? The Case for Authentication and Logging

Although proxies were originally conceived and created in order to facilitate and simplify outbound network access through firewall devices, by centralizing outbound access they provided a way for administrators to see how their bandwidth was being utilized. Some organizations even adopted billing systems to distribute the cost of maintaining an Internet presence across their various departments or other organizational units. Although maintaining verbose logs can be a costly proposition in terms of storage space and hidden administrative costs, the benefits far outweigh these costs. Access logs have provided the necessary documentation for addressing all sorts of security and personnel issues because they can provide a step-by-step account of all external access, eliminating the need for costly forensic investigations.

Damage & Defense…

The Advantages of Verbose Logging


In one example of the power of verbose logs, the Human Resources department had contacted me in regard to a wrongful termination suit that had been brought against my employer. The employee had been dismissed after it was discovered that he had been posing as a company executive and distributing fake insider information on a Web-based financial discussion forum. The individual had brought a suit against the company, claiming that he was not responsible for the posts and seeking lost pay and damages. At the time, our organization did not require authentication for Web access, so we had to correlate the user’s IP address with our logs. My co-workers and I contacted the IT manager of the ex-employee’s department and located the PC that he had used during his employment. (This was not by chance—corporate policy dictated that a dismissed employee’s PC be decommissioned for at least 60 days.) By correlating the MAC address of the PC against the DHCP logs from the time of the Web-forum postings, we were able to isolate the user’s IP address at the time of the postings. We ran a simple query against our Web proxy logs from the time period and provided a detailed list of the user’s accesses to Human Resources. When the ex-employee’s lawyer was presented with the access logs, the suit was dropped immediately—not only had the individual executed POST commands against the site in question with times correlating almost exactly to the posts, but each request to the site had the user’s forum login ID embedded within the URL. In this instance, we were able to use asset-tracking documentation, DHCP server logs, and HTTP proxy logs to associate an individual with specific network activity. Had we instituted a proxy authentication scheme, there would have been no need to track down the MAC address or DHCP logs; the individual’s username would have been listed right in the access logs.
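The correlation performed in that investigation can be expressed as a simple join between lease records and proxy entries. The record layouts and values below are invented for illustration; real DHCP and proxy log formats vary by vendor:

```python
from datetime import datetime

# Invented, simplified records for illustration only.
dhcp_leases = [  # (mac, ip, lease_start, lease_end)
    ("00:0c:29:aa:bb:cc", "10.1.2.33",
     datetime(2002, 6, 3, 8, 0), datetime(2002, 6, 3, 18, 0)),
]
proxy_log = [  # (timestamp, client_ip, method, url)
    (datetime(2002, 6, 3, 10, 15), "10.1.2.33", "POST",
     "http://forum.example.com/post?user=jdoe42"),
]

def accesses_for_mac(mac):
    """Return proxy log entries attributable to the host with this MAC,
    by matching the client IP against the DHCP lease active at the time."""
    hits = []
    for m, ip, start, end in dhcp_leases:
        if m != mac:
            continue
        for ts, client_ip, method, url in proxy_log:
            if client_ip == ip and start <= ts <= end:
                hits.append((ts, method, url))
    return hits
```

With proxy authentication in place, the username appears directly in the proxy log and the entire join becomes unnecessary.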
The sidebar example in this section, "The Advantages of Verbose Logging," represents a reactive stance to network abuse. Carefully managed logging provides extensive resources for reacting to events, but how can you prevent this type of abuse before it happens? Even within an organization, Internet access tends to have an anonymous feel to it; because so many people are browsing the Web simultaneously, users are not concerned that their activity is going to raise a red flag. Content filtering software can help somewhat: when the user encounters a filter, she is reminded that access is subject to limitations and, by association, monitoring. In my experience, however, nothing provides a more successful preventive measure than active authentication. Active authentication describes an access control where a user must actually enter her username and password in order to access a resource. Usually, credentials are cached until a certain period of inactivity has passed, to prevent users from having to re-enter their login information each time they try to make a connection. Although this additional login has a certain nuisance quotient, the act of entering personal information reminds users that they are directly responsible for anything they do online. When a user is presented the login dialog, the plain-brown-wrapper illusion of the Internet is immediately dispelled, and the user will police her activity more acutely.
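For reference, under the widely used Basic scheme a proxy challenges the client with 407 Proxy Authentication Required, and the client resends the request with a header like the one built below (the credentials are made up):

```python
import base64

def proxy_auth_header(username: str, password: str) -> str:
    """Build the Proxy-Authorization header a client sends after a
    407 challenge under the Basic scheme: the credentials are merely
    base64-encoded, not encrypted."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Proxy-Authorization: Basic {token}"
```

Because Basic credentials are trivially decodable, the header ties a username to every logged request, which is precisely what gives active authentication its deterrent value; it is not a confidentiality mechanism.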

Handling Difficult Services

Occasionally, valid business justifications exist for greater outbound access than is deemed acceptable for the general user base. Imagine you are the Internet services coordinator for a major entertainment company. You are supporting


roughly 250,000 users, and each of your primary network access points is running a steady 25 Mbps during business hours. You have dozens of proxy devices, mail gateways, firewalls, and other Internet-enabled devices under your immediate control. You manage all of the corporate content filters, you handle spam patrol on your mail gateways, and no one can bring up a Web server until you’ve approved the configuration and opened the firewall. If it comes from outside the corporate network, it comes through you. One sunny California morning, you step into your office and find an urgent message in your inbox. Legal has become aware of rampant piracy of your company’s products and intellectual property, and they want you to provide them instructions on how to gain access to IRC (Internet Relay Chat), Kazaa, Gnutella, and Usenet. Immediately. Before you’ve even had the opportunity to begin spewing profanities and randomly blocking IPs belonging to Legal, another urgent e-mail appears—the CFO’s son is away at computer camp, and the CFO wants to use America Online’s Instant Messenger (AIM) to chat with his kid. The system administrator configured the application with the SOCKS proxy settings, but it won’t connect. Welcome to the land of exceptions! Unless carefully managed, special requests such as these can whittle away at carefully planned and implemented security measures. In this section, I discuss some of the services that make up these exceptions (instant messaging, external e-mail access points, and file-sharing protocols) and provide suggestions on how to minimize their potential impact on your organization.

Instant Messaging

I don’t need to tell you that instant messaging has exploded over the past few years. You also needn’t be told that these chat programs can be a substantial drain on productivity—you’ve probably seen it yourself. The effect of chat on an employee’s attention span is so negative that many organizations have instituted a ban on their use. So how do we as Internet administrators manage the use of chat services? Despite repeated attempts by the various instant-messaging vendors to agree upon a standard open protocol for chat services, each vendor still uses its own protocol for linking the client up to the network. Yahoo’s instant messenger application communicates over TCP/5050, while America Online’s implementation connects on TCP/5190. So blocking these services should be fairly basic: simply implement filters on your SOCKS proxy servers to deny outbound connections to TCP/5050 or 5190, right? Wrong! Instant messaging is a business, and the vendors want as many users as they can get their hands on. Users of instant-messaging applications range from teenagers to grandparents, and the software vendors want their product to work without the user having to obtain special permission from the likes of you. So they’ve begun equipping their applications with intelligent firewall traversal techniques. Try blocking TCP/5050 out of your network and loading up Yahoo’s instant messenger. The connection process will take a minute or more, but it will likely succeed. With absolutely no prompting from the user, the application realized that it was unable to communicate on TCP/5050 and tried to connect to the service on a different port. In my most recent test case, the fallback port was TCP/23, the reserved port for telnet, and the connection was successful.


The next time I opened Yahoo, the application once again used the telnet port and connected quickly. Blocking outbound telnet resulted in Yahoo’s connecting over TCP/80, the HTTP service port, again without any user input. The application makes use of the local Internet settings, so the user doesn’t even need to enter proxy information. Recently, more instant messaging providers have been adding new functionality, further increasing the risks imposed by their software. Instant messaging–based file transfer has provided another potential ingress point for malicious code, and vulnerabilities discovered in popular chat engines such as America Online’s application have left internal users exposed to possible system compromise when they are using certain versions of the chat client.
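The port-hopping behavior described here can be approximated in a few lines. This sketch mirrors the fallback order observed in my tests; actual clients hard-code their own candidate lists:

```python
import socket

# Candidate ports in the order a firewall-traversing IM client might
# try them: native service port, then telnet, then HTTP. Illustrative only.
FALLBACK_PORTS = [5050, 23, 80]

def connect_with_fallback(host, ports=FALLBACK_PORTS, timeout=3):
    """Try each candidate port in turn, returning the first socket that
    connects, or None if every attempt is refused or times out."""
    for port in ports:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue
    return None
```

The lesson for administrators is that port-based blocking alone cannot stop such a client; only a default-deny egress policy, forcing all traffic through authenticated proxies, closes every fallback at once.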

External E-Mail Access Points

Many organizations have statements in their “Acceptable Use Policy” that forbid or limit personal e-mail on company computing equipment; these statements often extend to permit company-appointed individuals to read employee e-mail without obtaining user consent. Such policies have contributed to the rise of external e-mail access points, such as those offered by Hotmail, Yahoo, and other Web portals. The portals offering free e-mail access are almost too numerous to count, and individuals will now set up free e-mail accounts for any number of reasons; for example, Anime Nation offers free e-mail on any of 70 domains for fans of various anime productions. Like instant messaging, these services are a common source of wasted productivity. The security issues with external e-mail access points are plain. They can provide an additional entry point for hostile code. They are commonly used for disseminating information anonymously, which can incur more subtle security risks for data such as intellectual property, or far worse, financial information. Some of these risks are easily mitigated at the desktop. Much effort has gone into developing browser security in recent years. As Microsoft’s Internet Explorer became the de facto standard, multiple exploits were introduced taking advantage of Microsoft’s Visual Basic for Applications scripting language and the limited security features present in early versions of Internet Explorer. Eventually, Microsoft began offering content signatures, such as Authenticode, to give administrators a way to take the decision away from the user. Browsers could be deployed with security features locked in, applying rudimentary policies to what a user could and could not download and install from a Web site. Combined with a corporate gateway HTTP virus scanner, these changes have gone a long way towards reducing the risk of hostile code entering through e-mail access points.

File-Sharing Protocols

Napster, Kazaa, Morpheus, Gnutella, iMesh—the list goes on and on. Each time one file-sharing service is brought down by legal action, three others pop up and begin to grow in popularity. Some of these services can function purely over HTTP, proxies and all, whereas others require unfettered network access or a SOCKS proxy device to link up to their network. The legal issues of distributing and storing copyrighted content aside, most organizations see these peer-to-peer networks as a detriment to productivity and have implemented policies restricting or forbidding their use.


Legislation introduced in 2002 would even allow copyright holders to launch attacks against users of these file-sharing networks who are suspected of making protected content available publicly, without threat of legal action. The bill, the P2P Piracy Prevention Act (H.R. 5211), introduced by Howard Berman, D-California, would exempt copyright holders and the organizations that represent them from prosecution if they were to disable or otherwise impair a peer-to-peer network. The only way to undermine a true peer-to-peer network is to disrupt the peers themselves—even if they happen to be living on your corporate network. Although the earliest popular file-sharing applications limited the types of files they would carry, newer systems make no such distinction and permit sharing of any file, including hostile code. The Kournikova virus reminded system administrators how social engineering can impact corporate security, but who can guess what form the next serious security outbreak will take?

Solving the Problem

Unfortunately, there is no silver bullet to eliminate the risks posed by the services described in the preceding section. Virus scanners at both the server and client level, along with an effective signature update scheme, go a long way towards minimizing the introduction of malicious code, but anti-virus software protects only against known threats, and even then only when the code is either self-propagating or so commonly deployed that customers have demanded detection for it. I have been present on conference calls where virus scanner product managers were providing reasons why Trojans, if not self-propagating, are not “viruses” and are therefore outside the realm of virus defense. As more and more of these applications become proxy-aware, and developers harness local networking libraries to afford themselves the same preconfigured network access available to installed browser services, it should become clear to administrators that the reactive techniques provided by anti-virus software are ineffective. To fully protect the enterprise, these threats must be stopped before they can enter. This means stopping them at the various external access points. Content filters are now a necessity for corporate computing environments. Although many complaints have been lodged against filter vendors over the years (for failing to disclose filter lists, or for over-aggressive filtering), the benefits of outsourcing your content filtering efforts far outweigh the potential failings of an in-house system. One need only look at the proliferation of Web-mail providers to recognize that managing filter lists is a monumental task. Although early filtering devices incurred a substantial performance hit from the burden of comparing URLs to the massive databases of inappropriate content, most commercial proxy vendors have now established partnerships with content filtering firms to minimize the performance impact.
Quite frequently in a large organization, one or more departments will request exception from content filtering, for business reasons. Legal departments, Human Resources, Information Technology, and even Research and Development groups can often have legitimate reasons for accessing content that filters block. If this is the case in your organization, configure these users for an alternate, unfiltered proxy that uses authentication. Many proxies are available today that can integrate into established authentication schemes, and as described in the “Who, What, Where? The Case for Authentication and Logging” section


earlier in this chapter, users subject to outbound access authentication are usually more careful about what they access. Although content filters can provide a great deal of control over outbound Web services, and in some cases can even filter mail traffic, they can be easily circumvented by applications that work with SOCKS proxies. So if you choose to implement SOCKS proxies to handle nonstandard network services, it is imperative that you work from the principle of least privilege. One organization I’ve worked with had implemented a fully authenticated and filtered HTTP proxy system but had an unfiltered SOCKS proxy in place (on the same IP address, no less) that permitted all traffic, including HTTP. One particularly resourceful employee discovered that changing the proxy port to 1080 in Internet Explorer bypassed both the credential prompt and the content filter, and within six months more than 300 users were configured to use only the SOCKS proxy for outbound access. All SOCKS proxies, even the NEC “SOCKS Reference Proxy,” provide access controls based on source and destination addresses and service ports. Many provide varying levels of access based on authentication credentials. If your user base requires access to nonstandard services, make use of these access controls to minimize your exposure. If you currently have an unfiltered or minimally filtered SOCKS proxy, use current access logs to profile the services that your users are passing through the system. Then, implement access controls initially to allow only those services. Once access controls are in place, work with the individuals responsible for updating and maintaining the company’s Acceptable Use Policy document to begin restricting prohibited services, slowly.
By implementing these changes slowly and carefully, you will minimize the impact and will have the opportunity to address legitimate exceptions on a case-by-case basis in an acceptable timeframe. Each successful service restriction will pave the way for a more secure environment.
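Profiling current access logs before tightening the ACL, as suggested above, can start with a simple tally of destination ports. The log format here is invented and deliberately simplified:

```python
from collections import Counter

# Invented, simplified SOCKS access log lines: "client dst_host dst_port"
log_lines = [
    "10.0.0.5 www.example.com 80",
    "10.0.0.7 irc.example.net 6667",
    "10.0.0.5 www.example.com 443",
    "10.0.0.9 www.example.com 80",
]

def profile_ports(lines):
    """Tally destination ports so the initial access-control list can be
    written to permit only the services users actually pass through."""
    return Counter(int(line.split()[2]) for line in lines)
```

The resulting counts give you both the allow-list for the first ACL pass and a ranked list of candidates to review against the Acceptable Use Policy.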

Managing Partner and Vendor Networking

More and more frequently, partners and vendors are requesting and obtaining limited cross-organizational access to conduct business and provide support more easily. Collaborative partnerships and more complicated software are blurring network borders by providing inroads well beyond the DMZ. In this section, I review the implications of this type of access and provide suggestions on developing effective implementations. In many cases, your business partners will require access only to a single host or small group of hosts on your internal network. These devices may be file servers, database servers, or custom gateway applications for managing collaborative access to resources. In any event, your task as a network administrator is to ensure that the solution implemented provides the requisite access while minimizing the potential for abuse, intentional or otherwise. In this section, I present two common methods of managing these types of networking relationships with third-party entities: virtual private networking (VPN) and extranet shared resource management. Figure 1.2 shows how these resource sharing methods differ.

Figure 1.2 Extranet vs. VPN Vendor/Partner Access Methods


Developing VPN Access Procedures

Virtual private networks (VPNs) were originally conceived and implemented to allow organizations to conduct business across public networks without exposing data to intermediate hosts and systems. Before VPNs, large organizations that wanted secure wide area networks (WANs) were forced to develop their own backbone networks at great cost and effort. Aside from the telecommunications costs of deploying links to remote locations, these organizations also had to develop their own network operations infrastructures, often employing dozens of network engineers to support current infrastructures and manage growth. VPNs provided a method for security-conscious organizations to take advantage of the extensive infrastructure developed by large-scale telecommunication companies by eliminating the possibility of data interception through strong encryption. Initially deployed as a gateway-to-gateway solution, VPNs were quickly adapted to client-to-gateway applications, permitting individual hosts outside of the corporate network to operate as if they were on the corporate network. As the need for cross-organizational collaboration or support became more pressing, VPNs presented themselves as an effective avenue for managing these needs. If the infrastructure was already in place, VPN access could be implemented relatively quickly and with minimal cost. Partners were provided VPN clients and permitted to access the network as would a remote employee. However, the VPN approach to partner access has quite a few hidden costs and potential failings when viewed from the perspective of ensuring network security. Few organizations have the resources to analyze the true requirements of each VPN access request, and to minimize support load, there is a tendency to treat all remote clients as trusted entities. Even if restrictions are imposed on these clients, they are usually afforded far more access than


necessary. Due to the complexities of managing remote access, the principle of least-privilege is frequently overlooked. Remote clients are not subject to the same enforcement methods used for internal hosts. Although you have spent countless hours developing and implementing border control policies to keep unwanted elements out of your internal network through the use of content filters, virus scanners, firewalls, and acceptable use policies, your remote clients are free from these limitations once they disconnect from your network. If their local networks do not provide adequate virus defense, or if their devices are compromised due to inadequate security practices, they can carry these problems directly into your network, bypassing all your defenses. This is not to say that VPNs cannot be configured in a secure fashion, minimizing the risk to your internal network. Through the use of well-designed remote access policies, proper VPN configuration and careful supervision of remote access gateways, you can continue to harness the cost-effective nature of VPNs. There are two primary categories that need to be addressed in order to ensure a successful and secure remote access implementation. The first is organizational, involving formal coordination of requests and approvals, and documentation of the same. The second is technical, pertaining to the selection and configuration of the remote access gateway, and the implementation of individual requests.

Organizational VPN Access Procedures

The organizational aspect of your remote access solution should be a well-defined process of activity, commencing when the first request is made to permit remote access, following through the process of activation, and periodically verifying compliance after the request has been granted. The following steps provide some suggestions for developing this phase of a request:

1. Prepare a document template to be completed by the internal requestor of remote access. The questions this document should address include the following:

• Justification for remote access request: Why does the remote party need access? This open-ended question will help identify situations where remote access may not really be necessary, or where access can be limited in scope or duration.

• Anticipated frequency of access: How frequently will this connection be used? If access is anticipated to be infrequent, can the account be left disabled between uses?

• Resources required for task: What system(s) does the remote client need to access? What specific services will the remote client require? It is best if your remote access policy restricts the types of service provided to third-party entities, in which case you can provide a checklist of the service types available and provide space for justification.

• Authentication and access control: What form of authentication and access control is in place on the target systems? It should be made clear to the internal requesting party that once access is approved, the administrator(s) of the hosts


being made available via VPN are responsible for ensuring that the host cannot be used as a proxy to gain additional network access.

• Contact information for resource administrators: Does the VPN administrator know how to contact the host administrator? The VPN administrators should have the ability to contact the administrator(s) of the hosts made accessible to the VPN to ensure that they are aware of the access and that they have taken the necessary steps to secure the target system.

• Duration of access: Is there a limit to the duration of the active account? All too frequently, VPN access is provided in an open-ended fashion; accounts will remain active long after their usefulness has passed. To prevent this, set a limit to the duration, and require account access review and renewal at regular intervals (6 to 12 months).

2. Prepare a document template to be completed by the primary contact of the external party. This document should primarily serve to convey your organization’s remote access policy, obtain contact information, and verify the information submitted by the internal requestor. This document should include the following:

• Complete remote access policy document: Generally, the remote access policy is based on the company’s acceptable use policy, edited to reflect the levels of access provided by the VPN.

• Access checklist: A short document detailing a procedure to ensure compliance with the remote access policy. Because policy documents tend to be quite verbose and littered with legalese, this document provides a simplified list of activities to perform prior to establishing a VPN connection: for example, instructing users to verify their anti-virus signatures and scan their hosts, disconnect from any networks not required by the VPN connection, and so on.

• Acknowledgement form: A brief document to be signed by the external party confirming receipt of the policy document and preconnection checklist, and signaling their intent to follow these guidelines.

• Confirmation questionnaire: A brief document to be completed by the external party providing secondary request justification and access duration. These responses can be compared to those submitted by the internal requestor to ensure that the internal requestor has not approved more access than is truly required by the remote party.

3. Appoint a VPN coordination team to manage remote access requests. Once the documents have been filed, team members will be responsible for validating that the request parameters (reason, duration, etc.) on both internal and external requests are reasonably similar in scope. This team is also tasked with escalating requests that impose additional security risks, such as when a remote party


requires services beyond simple client-server access, like interactive system control or administrative access levels. The processes for approval should provide formal escalation triggers and procedures to avoid confusion about what is and is not acceptable.

4. Once requests have been validated, the VPN coordination team should contact the administrators of the internal devices that will be made accessible, to verify both that they are aware of the remote access request and that they are confident that making the host(s) available will not impose any additional security risks to the organization.

5. If your organization has an audit team responsible for verifying information security policy compliance, involve them in this process as well. If possible, arrange for the audit team to verify that any access limitations are in place before releasing the login information to the remote party.

6. Finally, the VPN coordination team can activate the remote access account and begin their periodic access review and renewal schedule.
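The periodic review-and-renewal cycle from step 6 is easy to automate. The sketch below flags accounts that have passed their review interval; the record fields, user names, and dates are invented for illustration, not drawn from any particular tracking system.

```python
from datetime import date, timedelta

# Hypothetical account records; in practice these would come from the
# VPN coordination team's request-tracking system.
accounts = [
    {"user": "vendor-acme", "activated": date(2002, 1, 15), "review_months": 6},
    {"user": "partner-widgetco", "activated": date(2002, 11, 1), "review_months": 12},
]

def due_for_review(account, today):
    """Return True when the account has passed its review interval."""
    # Approximate a month as 30 days for this illustration.
    deadline = account["activated"] + timedelta(days=30 * account["review_months"])
    return today >= deadline

today = date(2002, 12, 1)
overdue = [a["user"] for a in accounts if due_for_review(a, today)]
```

Run monthly, a report like this gives the coordination team a standing list of accounts to renew or deactivate, rather than relying on anyone's memory.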

Technical VPN Access Procedures The technical aspect of the remote access solution deals with developing a remote-access infrastructure that will support the requirements and granularity laid out in the documents provided in the organizational phase. Approving a request to allow NetBIOS access to a specific file server is moot if your infrastructure has no way of enforcing the destination address limitations. By the same token, if your VPN devices do provide such functionality but are extremely difficult to manage, the VPN administrators may be lax about applying access controls. When selecting your VPN provider, look for the following features to assist the administrators in providing controlled access:

• Easily configurable access control policies, capable of being enabled on a user or group basis.
• Time-based access controls, such as inactivity timeouts and account deactivation.

• Customizable clients and enforcement, to permit administrators to lock down client options and prevent users from connecting using noncustomized versions.

• Client network isolation—when connected to the VPN, the client should not be able to access any resources outside of the VPN. This will eliminate the chance that a compromised VPN client could act as a proxy for other hosts on the remote network.

• If your organization has multiple access points, look for a VPN concentrator that supports centralized logging and configuration to minimize support and maintenance tasks.
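As a rough illustration of the first two features, the sketch below models per-user destination restrictions and an inactivity timeout as a single policy check. The rule structure and field names are hypothetical, not any particular vendor's API; a real concentrator enforces this in the data path.

```python
from datetime import datetime, timedelta

# Illustrative per-user rules: an approved destination list and an
# inactivity timeout. Addresses and timeouts are invented examples.
ACCESS_RULES = {
    "vendor-acme": {"allowed_dests": {("10.1.2.3", 139)}, "idle_timeout_min": 15},
}

def connection_permitted(user, dest_ip, dest_port, last_activity, now):
    rule = ACCESS_RULES.get(user)
    if rule is None:
        return False  # default deny: unknown users get no access
    if (dest_ip, dest_port) not in rule["allowed_dests"]:
        return False  # destination not on the user's approved list
    if now - last_activity > timedelta(minutes=rule["idle_timeout_min"]):
        return False  # idle too long; force re-authentication
    return True
```

The point of the sketch is the default-deny shape: anything not explicitly granted by the coordination team's paperwork simply is not reachable.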

With these features at their disposal, VPN administrators will have an easier time implementing and supporting the requirements they receive from the VPN coordination team. In the next section, I discuss extranets—a system of managing collaborative projects by creating external DMZs with equal trust for each

member of the network. It is possible to create similar environments within the corporate borders by deploying internal DMZs and providing VPN access to these semitrusted networks. Quite often, when interactive access is required to internal hosts, there is no way to prevent “leapfrogging” from that host to other restricted areas of the network. By deploying internal DMZs that are accessible via VPN, you can restrict outbound access from hosts within the DMZ, minimizing the potential for abuse.

Developing Partner Extranets Everyone is familiar with the term “intranet.” Typically, “intranet” is used to describe a Web server inaccessible to networks beyond the corporate borders. Intranets are generally used for collaboration and information distribution, and thanks to the multiplatform nature of common Internet protocols (FTP, HTTP, SMTP), can be used by a variety of clients with mostly the same look and feel from system to system. Logically then, extranets are external implementations of intranets, extending the same benefits of multiplatform collaboration, only situated outside of the corporate network. When full interactive access to a host is not required (such as that required for a vendor to support a device or software program), extranets can usually provide all the collaborative capacity required for most partnership arrangements. By establishing an independent, protected network on any of the partner sites (or somewhere external to both networks), the support costs and overhead associated with VPN implementations can be avoided. In-transit data security is handled through traditional encryption techniques such as HTTP over SSL, and authentication can be addressed using HTTP authentication methods or custom authentication built into the workgroup application. Most partner relationships can be addressed in this fashion, whether the business requirement is supply-chain management, collaborative development, or cross-organizational project management. Central resources are established that can be accessed not only by the internal network users from each of the partners, but also by remote users connecting from any number of locations. Establishing an extranet is no more difficult than creating a DMZ. Extranets are hosted on hardened devices behind a firewall, and can be administered either locally or across the wire using a single administrative VPN, a far more cost-effective solution than providing each extranet client their own VPN link.
When necessary, gateway-to-gateway VPNs can provide back-channel access to resources internal to the various partners, such as inventory databases. The most challenging aspect of deploying an extranet is selecting or developing the applications that will provide access to the clients. In many cases, off-the-shelf collaborative tools such as Microsoft’s SharePoint can be adapted to provide the functionality required; in other cases, custom applications may need to be developed. In most cases, these custom applications are merely front-ends to established workflow systems within the partners’ networks. Extranets avoid many of the difficulties inherent in deploying VPN-based systems, including the most common challenge of passing VPN traffic through corporate firewalls. IPSec traffic can be challenging to proxy properly, operating over neither UDP nor TCP, but IP protocol 50. Authentication and access control are handled at the application level, reducing the risk of excessive privilege created by the complicated nature of VPNs. By establishing a network


that provides only the services and applications that you intend to share across organizations, many support overhead and security issues are circumvented. Although establishing an extranet will represent additional cost and effort at the start of a project versus adapting current VPN models, the initial investment will be recouped when the total cost of ownership is analyzed.
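The proxying difficulty mentioned above comes down to protocol numbers: IPSec's ESP traffic rides IP protocol 50, which carries no TCP or UDP port for a port-based proxy or NAT device to latch onto. A trivial sketch of the distinction (protocol numbers per the IANA assignments):

```python
# Protocol numbers from the IANA "Assigned Internet Protocol Numbers" registry.
IP_PROTOCOLS = {1: "ICMP", 6: "TCP", 17: "UDP", 50: "ESP", 51: "AH"}

def can_port_proxy(proto_number):
    """Port-based proxying only makes sense for protocols that carry ports."""
    return IP_PROTOCOLS.get(proto_number) in ("TCP", "UDP")
```

Since ESP (50) and AH (51) fail this test, firewalls must either pass the raw IP protocol through or rely on vendor-specific encapsulation, which is exactly the support burden the extranet model sidesteps.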

Securing Sensitive Internal Networks When we consider security threats to our networks, we tend to think along the lines of malicious outsiders attempting to compromise our network through border devices. Our concerns are for the privacy and integrity of our e-commerce applications, customer databases, Web servers, and other data that lives dangerously close to the outermost network borders. In most organizations, the security team is formed to manage the threat of outsiders, to prevent hackers from gaining entry to our networks, and so we have concentrated our resources on monitoring the doors. As in a retail store at the mall, nobody cares where the merchandise wanders within the store itself; the only concern is when someone tries to take it outside without proper authorization. Largely, the network security group is not involved in maintaining data security within the organization. Despite the common interest and knowledge base, audit and incident response teams rarely coordinate with those individuals responsible for border security. This demarcation of groups and responsibilities is the primary reason that so many organizations suffer from the “soft and chewy center.” Although great effort is exerted maintaining the patch levels of Internet-facing devices, internal systems hosting far more sensitive data than a corporate Web server are frequently left several patch levels behind. When the Spida Microsoft SQL server worm was making its rounds, I spoke with administrators of large corporate environments that discovered they had as many as 1,100 SQL servers with a blank sa password, some hosting remarkably sensitive information. Many IT administrators discount the technical capabilities of their user bases, mistakenly assuming that the requisite technical skills to compromise an internal host render their internal network secure by default.
Most of these administrators have never taken courses in penetration testing, and they are unaware of how easily many very damaging attacks can be launched. A would-be hacker need not go back to school to obtain a computer science degree; a couple of books and some Web searches can quickly impart enough knowledge to make even moderately savvy users a genuine threat. Although securing every host on the internal network may not be plausible for most organizations, there are a number of departments within every company that deserve special attention. For varying reasons, these departments host data that could pose significant risk to the welfare of the organization should the information be made available to the wrong people. In the following sections, I review some of the commonly targeted materials and the networks in which they live, and provide suggestions for bridging the gap between internal and external security by working to protect these environments. Before beginning discussion of how to correct internal security issues in your most sensitive environments, you need to determine where you are most vulnerable. Whereas a financial services company will be most concerned about protecting the privacy and integrity of their clientele’s fiscal data, a company


whose primary product is software applications will place greater stock in securing their development networks. Every location where your organization hosts sensitive data will have a different profile that you will need to take into account when developing solutions for securing it. In the following sections, I review two user groups common to all organizations and address both the threat against them and how to effectively manage that risk.

Protecting Human Resources and Accounting Earnings reports. Salary data. Stock options and 401k. Home addresses. Bonuses. This is just some of the information that can be discovered if one gains access to systems used for Human Resources and Accounting. Out of all the internal departments, these two administrative groups provide the richest landscape of potential targets for hostile or even mischievous employees. And yet in many organizations, these systems sit unfiltered on the internal network, sometimes even sporting DNS or NetBIOS names that betray their purpose. Due to the sensitivity of the information they work with, Accounting and Human Resources department heads are usually more receptive to changes in their environment to increase security. The users and managers understand the implications of a compromise of their data, and are quick to accept suggestions and assistance in preventing this type of event. This tendency makes these departments ideal proving grounds for implementing internal security. Since these groups also tend to work with a great deal of sensitive physical data, it is common for these users to be physically segregated from the rest of the organization. Network security in this case can follow the physical example; by implementing similar network-level segregation, you can establish departmental DMZs within the internal network. Done carefully, this migration can go unnoticed by users, and if your first internal DMZ is deployed without significant impact to productivity, you will encounter less resistance when mounting other internal security initiatives. This is not to imply that you should not involve the users in the deployment; on the contrary, coordination should take place at regular intervals, if only to provide status updates and offer a forum for addressing their concerns. Deploying internal DMZs is less difficult than it may initially sound.
The first step involves preparing the network for isolation by migrating the address schemes used by these departments to one that can be routed independently of surrounding departments. Since most large organizations use DHCP to manage internal addressing, this migration can occur almost transparently from a user perspective. Simply determine the machine names of the relevant systems, and you can identify the MAC addresses of the hosts using a simple nbtstat sweep. Once the MAC addresses have been identified, the DHCP server can handle doling out the new addressing scheme; just ensure that routing is in place. Now that your sensitive departments are logically separated from other networks, you can begin developing the rest of the infrastructure necessary to implement a true DMZ for this network. Deploy an open firewall (any-any) at the border of the network, and implement logging. Since it is important to the success of the project and future endeavors that you minimize the impact of this deployment, you will need to analyze the departments’ current resource requirements before you can begin to implement blocking. In particular, when reviewing logs, you will want to see what kind of legitimate inbound traffic exists. To reduce the risk of


adverse impact to NetBIOS networking (assuming these departments are primarily Windows-based), you may want to arrange the deployment of a domain controller within your secured network. As you gain a clearer understanding of the traffic required, you can begin to bring up firewall rules to address the permitted traffic, and by logging your (still open) cleanup rule you will have a clear picture of when your ruleset is complete. At that point, you can switch the cleanup to deny, and your implementation project will be complete. Remember, you must maintain a solid relationship with any department whose security you support. If their needs change, you must have clearly defined processes in place to address any new networking requirements with minimal delay. Think of your first internal DMZ as your first customer; their continued satisfaction with your service will speak volumes more than any data you could present when trying to implement similar initiatives in the future.
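The log-review phase described above can be partly automated: tally the inbound services recorded by the open (any-any) logging rule, and draft permit rules from the traffic actually observed. A minimal sketch, with an invented log format standing in for whatever your firewall emits:

```python
from collections import Counter

# Fabricated log lines for illustration; real firewalls each have their
# own format, so the parsing below would need adjusting.
log_lines = [
    "allow in src=10.9.8.7 dst=10.1.2.3 proto=tcp dport=139",
    "allow in src=10.9.8.7 dst=10.1.2.3 proto=tcp dport=139",
    "allow in src=10.4.4.4 dst=10.1.2.9 proto=udp dport=137",
]

def tally_services(lines):
    """Count hits per (destination, protocol, port) triple."""
    counts = Counter()
    for line in lines:
        fields = dict(f.split("=") for f in line.split() if "=" in f)
        counts[(fields["dst"], fields["proto"], fields["dport"])] += 1
    return counts

services = tally_services(log_lines)
```

Each distinct triple in the tally is a candidate permit rule; once the cleanup rule stops logging anything new, the ruleset is complete and the cleanup can flip to deny.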

Protecting Executive and Managerial Staff Managerial staff can be some of the best or worst partners of the IT and network security teams. Depending on their prior experiences with IT efforts, they can help to pave the way for new initiatives or create substantial roadblocks in the name of productivity. A manager whose team lost two days’ worth of productivity because his department was not made aware of a major network topology shift will be far less eager to cooperate with IT than the manager who has never had such difficulties. This is the essence of security versus usability—when usability is adversely impacted, security efforts suffer the consequences. Most managers are privy to more information than their subordinates, and bear the responsibility of keeping that data private. Often, the information they have is exactly what their subordinates want to see, be it salary reviews, disciplinary documentation, or detailed directives from above regarding company-wide layoffs. But since management works closely with their teams, they do not lend themselves to the DMZ security model like Accounting and Human Resources. Further complicating matters, managers are not usually the most tech-savvy organizational members, so there is little they can do on their own to ensure the privacy and integrity of their data. Fortunately, there are tools available that require minimal training and can provide a great deal of security to these distributed users, shielding them from network-based intrusions and even protecting sensitive data when they leave their PC unattended and forget to lock the console. Two of the most effective of these tools are personal firewalls and data encryption tools. Personal firewalls have gotten a bad rap in many organizations because they can be too invasive and obtrusive to a user. Nobody likes to have a window pop up in front of the document they’re working on informing them of some event in arcane language they don’t understand.
The default installations of these applications are very intrusive to the user, and the benefits to these intrusions do not outweigh the hassle imposed when these programs interrupt workflow. Some of these tools even attempt to profile applications and inform the user when new applications are launched, potentially representing a substantial intrusion to the user. Vendors of personal firewalls have heard these complaints and reacted with some excellent solutions to managing these problems. Many personal firewall vendors now provide methods for the software to be installed with


customized default settings, so you can design a policy that minimizes user interaction while still providing adequate protection. Desktop protection can be substantially improved simply by denying most unsolicited inbound connections. Although it is important to provide network-level defense for personnel who have access to sensitive information, it is far more likely that intrusions and information disclosure will occur due simply to chance. An executive who has just completed a particularly long and involved conference call may be pressed for time to attend another meeting, and in haste, neglect to secure their PC. Now any data that resides on the PC or is accessible using their logged-in credentials is open to whoever should happen to walk by. Granted, this sort of event can be protected against in most cases by the use of a password-protected screensaver, and physical security (such as locking the office on exit) further minimizes this risk. But have you ever misaddressed an e-mail, perhaps by clicking on the wrong line item in the address book or neglecting to double-check what your e-mail client auto-resolved the To addresses to? It’s happened to large corporations such as Cisco (a February 6, 2002 earnings statement was mistakenly distributed to a large number of Cisco employees the evening before public release) and Kaiser-Permanente (“Sensitive Kaiser E-mails Go Astray,” August 10, 2000, Washington Post). Data encryption, both in transit and locally, can prevent accidents such as these by rendering data unreadable except to those who hold approved keys. A step beyond password protection, encrypted data does not rely on the application to provide security, so not even a byte-wise review of the storage medium can ascertain the contents of encrypted messages or files.
A number of encryption applications and algorithms are available to address varying levels of concern, but perhaps the most popular data encryption tools are those built around the Pretty Good Privacy (PGP) system, developed by Phil Zimmermann in the early 1990s. PGP has evolved over the years, from its initial freeware releases, to purchase and commercialization by Network Associates, Inc., who very recently sold the rights to the software to the PGP Corporation. Compatible versions of PGP are still available as freeware from the International PGP Home Page. The commercial aspects of PGP lie in the usability enhancements and some additional functionality such as enterprise escrow keys and file system encryption. PGP does take some getting used to, and unlike personal firewalls, PGP is an active protection method. However, in my experience users tend to react positively to PGP and its capabilities; there tends to be a certain James Bond-esque quality to dealing with encrypted communications. For e-mail applications, PGP adoption is brought about more by word-of-mouth, with users reminding one another to encrypt certain types of communiqués. In the case of a misdirected financial report, PGP key selection forces the user to review the recipient list one more time, and if encrypted messages later find their way to individuals for whom the message was not intended, they will be unable to decrypt the contents. The commercial versions of PGP also include a file-system encryption tool that allows creation of logical disks that act like a physical hard disk, until either the system is shut down or a certain period of inactivity passes. By keeping sensitive documents on such volumes, the chances of a passerby or a thief gaining access are greatly reduced. These encrypted volumes can be created as small or as large as a user wants, and occupy a subset of a physical hard disk as a standard file, so they can be backed up as easily as any other data. These volumes


can even be recorded to CD or other portable media to allow safe distribution of sensitive files. Many companies are afraid of widespread use of encryption for fear of losing their own data due to forgotten passwords. Currently available commercial PGP suites account for this through the use of escrow keys, a system in which one or more trusted corporate officers maintain a key which can decrypt all communications encrypted by keys generated within the organization.

Developing and Maintaining Organizational Awareness So far, we’ve covered some of the more frequently neglected aspects of managing internal security with effective border control. We’ve focused primarily on establishing your electronic customs checkpoints, with border patrol officers such as firewalls, Web and generic server proxies, logging, VPNs, and extranets. Although adequately managing your network borders can help to prevent a substantial portion of the threats to your environment (and your sanity), there are always going to be access points that you simply cannot hope to control. Users who bring their laptops home with them can easily provide a roaming proxy for autonomous threats such as worms, Trojan horses, and other applications that are forbidden by corporate policy. VPN tunnels can transport similar risks undetected through border controls, due to their encryption. A software update from a vendor might inadvertently contain the next Code Red, as yet undetected in an inactive gestational state, waiting for a certain date six months in the future. No matter how locked down your borders may be, there will always be risks and vulnerabilities that must be addressed. In the remainder of this chapter, I review strategies and techniques for mitigating these risks on an organizational level. Although I touch briefly on the technical issues involving internal firewalling and intrusion detection, our primary focus here will be developing the human infrastructure and resources necessary to address both incident response and prevention.

Quantifying the Need for Security One of the first things that you can do to increase awareness is to attempt to quantify the unknown elements of risk that cross your network on a daily basis. By simply monitoring current network traffic at certain checkpoints, you can get an understanding of what kind of data traverses your network, and with the trained eye afforded by resources such as books like this one, identify your exposure to current known threats and begin to extrapolate susceptibility to future issues. Depending on the resources available to you and your department, both fiscal and time-based, there are a number of approaches you can take to this step. Cataloging network usage can be fun too, for many geeks—you’ll be amazed at some of the things you can learn about how your servers and clients communicate. If your environment is such that you’ve already implemented internal intrusion detection, QoS (quality of service) or advanced traffic monitoring, feel free to skip ahead a paragraph or two. Right now we’re going to offer some suggestions to the less fortunate administrators. Get your hands on a midrange PC system, and build up an inexpensive traffic monitoring application such as the Snort IDS. Snort is an


open source, multiplatform intrusion detection system built around the libpcap packet capture library. Arrange to gain access to a spanning port at one of your internal network peering points. Make sure the system is working by triggering a few of the sensors, verify that there’s enough disk space to capture a fair number of incidents, and leave the system alone for a couple of days. Once you’ve given your impromptu IDS some time to get to know your network, take a look at the results. If you enabled a good number of capture rules, you will undoubtedly have a mighty collection of information about what kind of data is traversing the peering point you chose. Look for some of the obvious threats: SQL connections, NetBIOS traffic, and various attack signatures. If your data isn’t that juicy, don’t make the mistake of assuming that you’re in the clear; many organizations with extensive IDS infrastructures can go days at a time without any sort of alert being generated. Just take what you can from the data you’ve gathered and put the system back online. Regardless of whether you have this data at your fingertips or if you need to deploy bare-bones solutions such as the Snort system described here, your goal is to take this data and work it into a document expressing what kind of threats you perceive on your network. If you’re reading this book, and in particular this chapter, it’s evident that you’re interested in doing something about securing your network. Your challenge, however, is to put together convincing, easily consumed data to help you advance your security agenda to your less security-savvy co-workers and managers. Be careful though—your passion, or paranoia, may encourage you to play upon the fears of your audience. Although fear can be an excellent motivator outside of the office, in the business realm such tactics will be readily apparent to the decision makers.
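Reviewing a few days of capture is easier with a small summarizer. The sketch below counts alerts by signature name from Snort's "fast" alert format; the sample alert lines are fabricated, and real deployments should expect some format variation between Snort versions.

```python
import re
from collections import Counter

# The signature name sits between the generator/SID bracket and the
# second [**] marker in Snort's fast-alert output.
SIG_RE = re.compile(r"\[\*\*\]\s*\[[\d:]+\]\s*(?P<name>.*?)\s*\[\*\*\]")

# Fabricated example alert lines in the fast-alert style.
alerts = [
    "01/05-09:13:02.123456 [**] [1:1418:2] SNMP request tcp [**] [Priority: 2] {TCP} 10.0.0.5:1025 -> 10.0.0.9:161",
    "01/05-09:14:41.654321 [**] [1:615:4] SCAN SOCKS Proxy attempt [**] [Priority: 2] {TCP} 10.0.0.7:3322 -> 10.0.0.9:1080",
    "01/05-09:15:02.000001 [**] [1:1418:2] SNMP request tcp [**] [Priority: 2] {TCP} 10.0.0.5:1026 -> 10.0.0.9:161",
]

def summarize(lines):
    """Tally alert counts per signature name."""
    counts = Counter()
    for line in lines:
        match = SIG_RE.search(line)
        if match:
            counts[match.group("name")] += 1
    return counts

summary = summarize(alerts)
```

A per-signature tally like this is exactly the kind of easily consumed evidence that belongs in the threat-perception document described above.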

Developing Effective Awareness Campaigns Whether you’re an administrator responsible for the security of the systems under your direct control, a CISO, or somewhere in the middle, odds are you do not have direct contact with the mass of individuals who are the last line of defense in the war against downtime. Even if you had the authority, the geographical distribution of every node of your network is likely too extensive for you to hope to manage it, to say nothing of all the other projects on your plate at any given moment. Although the information technology group is tasked with ensuring the security of the enterprise, little thought is given to the true extent of such an edict. In most cases, your job is primarily to investigate, recommend, and implement technological solutions to problems that are typically far more analog in their origin. No amount of anti-virus products, firewalls, proxies, or intrusion detection systems can avert attacks that are rooted in social engineering, or in simple ignorance of security threats. More and more, effective user education and distributed information security responsibility is becoming the most effective means of defense. Currently, if a major security incident occurs in any department within an organization, that group’s IT and the corporate IT groups are primarily held responsible. With the exception of the costs of any downtime, little impact is felt by the executive influence of the offending department. This is as it should be, because they have not failed to fulfill any responsibilities—with no roles or responsibilities pertaining to the systems used by their employees, or policies affecting those systems, they are in the clear. They can point fingers at IT, both


central and local, since they bear no responsibility for preventing the events that led up to the incident. And if they bear no responsibility, their employees cannot either. In order to get the attention of the user base, project leaders need to provide incentive to the managers of those groups to help IT get the word out about how to recognize and respond to potential security threats. Companies are reluctant to issue decrees of additional responsibilities to their various executive and managerial department heads, simply for fear of inundating them with so many responsibilities that they cannot fulfill their primary job functions. So in order to involve the various management levels in information security, project leaders have to make the task as simple as possible. When you assign responsibility to execute predefined tasks, you greatly increase the chances that it will be accomplished. There are many examples of fairly straightforward tasks that can be assigned to these managers; enforcement of acceptable-use policies (though the detection of violations is and always will be an IT responsibility) is one of the most common ways to involve management in information security. And as you’ll see in this section, company-wide awareness campaigns also leave room for engaging management in your information security posture. Although much can be done to protect your users from inadvertently causing harm to the company by implementing technology-based safeguards such as those described earlier in this chapter, in many cases the user base becomes the last line of defense. If we could magically teach users to never leave their workstations unsecured and to recognize and delete suspicious e-mails, a considerable portion of major security incidents would never come to fruition. 
In this section, we are not going to concentrate on the messages themselves, because these run the gamut from the universal, such as anti-virus updates and dealing with suspicious e-mails or system behavior, to more specialized information, such as maintaining the security of proprietary information. In your position, you are the best judge of what risks in your organization can best be mitigated by user education, and so I forgo the contents and instead spend time looking at the distribution methods themselves. I touch on three common approaches to disseminating security awareness materials, and let you decide which methods or combinations best fit your organization and user base:

• Centralized corporate IT department
• Distributed departmental campaigning

• Pure enforcement

Creating Awareness via a Centralized Corporate IT Department In this approach, corporate IT assumes responsibility for developing and distributing security awareness campaigns. Typically, this is implemented secondarily to centralized help-desk awareness programs. Your organization may already have produced mouse pads, buttons, or posters that include the help-desk telephone number and instructions to contact this number for any computer issues. Sometimes, this task is handed to the messaging group, and periodic company-wide e-mails are distributed including information on what to do if you have computer issues. Depending on the creative forces behind the campaign, this method can have varying results. Typically, such help-desk awareness promotions work


passively: when a user has a problem, they look at the poster or search for the most recent e-mail to find the number of the help-desk. The communications received from corporate IT are often given the same attention as spam—a cursory glance before moving on to the next e-mail. Even plastering offices with posters or mouse pads can be overlooked; this is the same effect advertisers work to overcome every day. People are just immune to advertisements today, having trained themselves to look past banner ads, ignore billboards, and skip entire pages in the newspaper. One approach I’ve seen to this issue was very creative, and, I would imagine, far more effective than blanket advertising in any medium. The corporate messaging department issued monthly e-mails, but in order to reduce the number of users who just skipped to the next e-mail, they would include a humorous IT-related anecdote in each distribution. It was the IT equivalent of Reader’s Digest’s “Life in These United States” feature. Readers were invited to provide their own submissions, and published submissions won a $25 gift certificate. Although this campaign resulted in an additional $300 annual line item in the department budget, the number of users who actually read the communications was likely much higher than that of bland policy reminder e-mails. The remainder of the e-mail was designed to complement the entertainment value of the IT story, further encouraging users to read the whole e-mail. Corporate IT has at its disposal a number of communication methods that can provide an excellent avenue for bringing content to the attention of network users. If all Internet traffic flows through IT-controlled proxy servers, it is technologically feasible to take a page from online advertisers and employ popup ads or click-through policy reminders.
Creative e-mails can both convey useful information and get a handle on the number of e-mail clients that will automatically request HTTP content embedded in e-mails. (For example, the monthly e-mail described in the preceding paragraph could include a transparent GIF image linked from an intranet server; the intranet server’s access logs could then provide a list of all clients who requested the image.) But whether the communication and information gathering is passive or active, the most challenging obstacle in centralized awareness campaigns is getting the attention of the user so that the information within can take root.
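As a sketch of the access-log technique just described — the log format and image name here are my own assumptions, not details from the example:

```python
# Sketch: list clients that fetched a tracking image from an
# Apache-style access log. The log path and image name
# ("/pixel.gif") are hypothetical; adapt both to your server.
import re

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+) HTTP')

def clients_that_opened(log_lines, image_path="/pixel.gif"):
    """Return the set of client IPs that requested the image."""
    clients = set()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(2) == image_path:
            clients.add(m.group(1))
    return clients

sample = [
    '10.0.1.5 - - [01/Mar/2003:09:12:01 -0500] "GET /pixel.gif HTTP/1.0" 200 43',
    '10.0.1.9 - - [01/Mar/2003:09:13:44 -0500] "GET /index.html HTTP/1.0" 200 512',
]
print(clients_that_opened(sample))  # clients whose mail reader fetched the image
```

In practice you would read the server's real log file rather than a sample list, and reconcile the client addresses against your workstation inventory.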

Creating Awareness via a Distributed Departmental Campaign In some highly compartmentalized organizations, it may be beneficial to distribute the responsibility for security awareness to individual departments. This approach is useful in that it allows the department to fine-tune the messages relayed to its user base to more accurately reflect the particular resources and output of the group. For example, if global messages that focus heavily on preventing data theft or the inadvertent release of proprietary documents are seen by administrative staff, such as that of the Accounting department, you will immediately lose the attention of those users, both now and in future attempts. When a local department is tasked with delivering certain messages, you place some of the responsibility for user activity in the hands of the department heads. The excuse, “Well, how were we supposed to know?” loses all of its merit. However, if the responsibility is delegated and never executed, you are in a


worse position than if you’d decided on using the centralized IT method described previously. In many cases, departmental campaigning will supplement a centralized general campaign. Issues that can impact users regardless of department are left to IT to manage; more specific concerns such as data privacy and integrity are delegated to the organizational groups who require such advanced defense. For example, although anti-virus awareness programs might happen on a global scale, a sensitive data encryption initiative might focus on departments such as research and development. By placing responsibility for such an initiative in the hands of the department managers (IT or otherwise), you will find that the departments that need help will ask for it, as opposed to playing the ostrich with their heads in the sand. The development of such programs will vary greatly from one organization to the next, but as with any interdepartmental initiative, the first task is to enlist the help of the senior management of your department. Once you convince them of the potential benefits of distributing the load of user education, they should be more than willing to help you craft a project plan, identify the departments most in need of such programs, and facilitate the interdepartmental communication to get the program off the ground.

Creating Awareness via Pure Enforcement In a pure enforcement awareness campaign, you count on feedback from automated defense systems to provide awareness back to your user base. A prime example is a content filter scheme that responds to forbidden requests with a customized message designed not only to inform users that their request has been denied, but also to remind them that when using corporate resources, their activity is subject to scrutiny. This approach can be quite effective, in a fashion similar to that of the authenticated proxy usage described in the “Who, What, Where? The Case for Authentication and Logging” section earlier in this chapter. However, there is the potential for this method to backfire. If users regard their IT department in an adversarial fashion, they may be afraid to ask for help at some of the most critical junctures, such as when erratic system behavior makes them fear they may have a virus. If a user opens an e-mail, finds themselves facing a pop-up dialog box declaring, “Your system has just been infected by the $00p4h-l33t k14n!!!!,” and then decides to go to lunch and hope that another employee takes the fall for introducing the virus, your pure enforcement awareness campaign has just given a new virus free rein on a system. There’s another technique I’ve heard discussed for enforcement-based awareness campaigns, but have never seen put into practice. The idea was to distribute a fake virus-like e-mail to a sampling of corporate users to evaluate how the users handled the message.
The subject would be based on the real-world social-engineering successes of viruses such as LoveLetter or Melissa, such as “Here’s that file you requested.” With the message having a from-address of an imaginary internal user, the idea was to see how many users opened the message, either by using built-in receipt notification, by logging accesses to an intranet Web server resource requested by the message, or even by including a copy of the EICAR test virus (not actually a virus, but an industry-accepted test signature distributed by the European Institute for Computer Anti-Virus Research) to see how many of the recipients contacted the help desk or, if


centralized anti-virus reporting is enabled, created alerts in that system. Depending on the results, users would either receive a congratulatory notice on their handling of the message or be contacted by their local IT administrators to ensure that their anti-virus software was installed and configured correctly, and to explain how they should have handled the message. Again, this approach could be construed as contentious, but if the follow-up direct communication is handled properly, this sort of fire drill could help build an undercurrent of vigilance in an organization. As described in the introduction, there is an element of psychology involved in designing awareness campaigns. Your task is to provide a balance: effectively conveying what users can do to help minimize the various risks to an organization, reminding them of their responsibilities as corporate network users, and encouraging them to ask for help when they need it. The threat of repercussions should be saved for the most egregious offenders; if a user has reached the point where she needs to be threatened, it’s probably time to recommend disciplinary action anyway. Your legal due diligence is provided for in your Acceptable Use Policy (you do have one of those, don’t you?), so in most cases, reiterating the potential for repercussions will ultimately be counterproductive.
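For reference, the EICAR test signature mentioned above is a published, industry-standard string; a minimal sketch of using it to verify that a workstation scanner is alive (the file name and workflow here are my own illustration):

```python
# Sketch: drop the EICAR test file to verify that a workstation's
# on-access anti-virus scanner is working. The string below is the
# standard 68-byte EICAR test signature, not a real virus; a
# correctly configured scanner should flag or quarantine the file.
EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def write_test_file(path="eicar_test.com"):
    """Write the test signature to disk; returns the path written."""
    with open(path, "w") as f:
        f.write(EICAR)
    return path

# After calling write_test_file(), check whether the file still
# exists a few seconds later; if it does, the scanner missed it.
```

Expect on-access scanners to intercept the write itself, so run this only on machines where triggering an alert is the intended outcome.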

Company-Wide Incident Response Teams Most organizations of any size and geographic distribution found themselves hastily developing interdepartmental response procedures in the spring of 1999. As the Melissa virus knocked out the core communication medium, the bridge lines went up and calls went out to the IT managers of offices all over the world. United in a single goal, restoring business as usual, companies that previously had no formal incident response planning spontaneously created a corporate incident response team. At the time, I was working as a deployment consultant for one of the largest providers of anti-virus software, and had the opportunity to join many of these conference calls to help coordinate their response. During those 72 hours of coffee and conference calls, taken anywhere from my car, to my office, and by waking hour 42, lying in the grass in front of my office building, I saw some of the best and worst corporate response teams working to restore their information services and get their offices back online. A few days later, I placed a few follow-up calls to some of our clients, and heard some of the background on how their coordinated responses had come together, and what they were planning for the future in light of that event. Out of the rubble of the Melissa virus, new vigilance had risen, and organizations that had never faced epidemic threats before had a new frame of reference to help them develop, or in some cases create, company-wide incident response teams. Most of this development came about as a solution to problems they had faced in managing their response to the most recent issue. The biggest obstacle most of my clients faced in the opening hours of March 26th was the rapid loss of e-mail communication. 
The initial response of most messaging groups upon detecting the virus was to shut down Internet mail gateways, leaving internal message transfer agents enabled, but it quickly became clear that, with the virus having already entered the internal network and hijacked distribution lists, it was necessary to bring down e-mail entirely. Unfortunately, security administrators were no different from any other users, and relied almost entirely on their e-mail client’s address books for locating contact information for


company personnel. With the corporate messaging servers down, initial contact had to be performed through contact spidering, or simply waiting for the phone to ring at the corporate NOC or help desk. In the days following the virus, intranet sites were developed that provided IT contact information for each of the distributed offices, including primary, secondary, and backup contacts for each department and geographical region. Management of the site was assigned to volunteers from the IT department at corporate headquarters, and oversight was given to the Chief Security Officer, Director of Information Security, or equivalent. A permanent conference line was established, and the details provided to all primary contacts. In the event of a corporate communications disruption, all IT personnel assigned to the response team were to call into that conference. As a contingency plan for issues with the conference line, a cascading contact plan was developed. At the start of an incident involving communications failures, a conference call would be established, and the contact plan would be activated. Each person in the plan was responsible for contacting three other individuals in the tree, and in this manner a single call could begin to disseminate information to all the relevant personnel. There was a common thread I noticed in clients who had difficulties getting back online, even after having gotten all the necessary representatives on a conference call. In most of these organizations, despite having all the right contacts available, there was still contention over responsibilities. In one instance, IT teams from remote organizations were reluctant to take the necessary steps to secure their environments, insisting that the central IT group should be responsible for managing matters pertaining to organizational security. 
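The cascading contact plan described above reaches people geometrically; a quick sketch shows how few rounds a three-call fan-out needs (the staff size is illustrative, not from any particular organization):

```python
# Sketch: rounds of calls needed for a cascading contact plan in
# which each notified person phones `fanout` others before the
# next round begins.
def rounds_to_reach(staff, fanout=3):
    reached, rounds = 1, 0           # the person who starts the cascade
    while reached < staff:
        reached += reached * fanout  # everyone reached makes their calls
        rounds += 1
    return rounds

print(rounds_to_reach(500))  # → 5 rounds for a 500-person IT staff
```

With each round multiplying coverage, even a large organization is fully notified in a handful of rounds, which is why a single conference call can seed the whole tree.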
In another organization, the messaging group refused to bring up remote sites until those sites could provide documentation showing that all desktops at the site had been updated with the latest anti-virus software. It wasn’t until their CIO joined the call and issued a direct order that the messaging group conceded that they could not place ultimatums on other organizations. Each member of an incident response team should have a clearly defined circle of responsibility. These circles should be directly related to the member’s position in an organizational chart, with the relevant corporate hierarchies providing the incident response team’s chain of command. At the top of the chart, where an organizational diagram would reflect corporate headquarters, sits the CIO, CSO, or Director of Information Security. The chart continues down in a multitier format, with remote offices at the bottom of the chart as leaves. So, for example, the team member from corporate IT who acts as liaison to the distributed retail locations would be responsible for ensuring that the proper steps are being taken at each of the retail locations. It is important to keep in mind that incident response could require the skills of any of four different specialties (networking, messaging, desktop, and server support), and at each of the upper levels of the hierarchy there should be representatives of each specialty. By ensuring that each of these specialties is adequately represented in a response team, you are prepared to deal with any emergency, no matter what aspect of your infrastructure is affected. Finally, once the team is developed, you must find a way to maintain the team. At one company I worked with, the Director of Information Security instituted a plan to run a fire drill twice a year, setting off an alarm and seeing how long it took for all the core team members to join the call. After the call, each of the primary contacts was asked to submit updated contact sheets, since


the fire drill frequently identified personnel changes that would have otherwise gone unnoticed. Another company decided to dual-purpose the organizational incident response team as an information security steering committee. Quarterly meetings were held at corporate headquarters and video conferencing was used to allow remote locations to join in. At each meeting, roundtable discussions were held to review the status of various projects and identify any issues that team members were concerned about. To keep the meeting interesting, vendors or industry professionals were invited in to give presentations on various topics. By developing and maintaining an incident response team such as this, your organization will be able to take advantage of the best talents and ideas of your entire organization, both during emergencies and normal day-to-day operations. Properly developed and maintained, this team can save your organization both time and money when the next worst-case scenario finds its way into your environment.

Security Checklist

• Make certain that users are aware of what they can do to help protect company resources. If a user in your organization suspected that they might have just released a virus, what would they do? Do they know who to call? More importantly, would they be afraid to call?

• Periodically review basic internal network security, and document your findings. Use the results to provide justification for continued internal protection initiatives. How chewy is your network? Use common enumeration techniques to try to build a blueprint of your company’s network. Can you access departmental servers? How about databases? If a motivated hacker sat down at a desk in one of your facilities, how much critical data could be compromised?
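A first pass at the “chewiness” check above can be as simple as a TCP connect sweep; the host and port lists below are placeholders, and such a sweep should only be run against systems you are authorized to test:

```python
# Sketch: a crude TCP connect sweep of the sort used to "blueprint"
# an internal network. Hosts and ports are illustrative; a real
# assessment would iterate over your internal address ranges.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting connections on `host`."""
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
        finally:
            s.close()
    return found

# Common service ports an internal attacker would probe first:
# 135/139/445 (Windows), 1433 (SQL Server), 3306 (MySQL).
print(open_ports("127.0.0.1", [135, 139, 445, 1433, 3306]))
```

If a sweep like this, run from an ordinary desk, finds departmental servers and databases wide open, that is your documentation for the internal protection initiatives above.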

• Determine whether you have adequate border policing. Try to download and run some common rogue applications, like file-sharing networks or instant messaging programs. Are your acceptable use policy documents up to date with what you actually permit? Make sure these documents are kept up to date and frequently communicated to users. Refer also to Chapter 2 for more on managing policies.

• Work with the administrators and management staff necessary to make sure you can answer each of these questions. If one of your users uploaded company-owned intellectual property to a public Web site, could you prove it? Are logs managed effectively? Is authentication required to access external network resources? What if the user sent the intellectual property via e-mail?
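One way to approach the “could you prove it?” question is to mine proxy logs for large outbound POSTs; the whitespace-delimited log format below is hypothetical, so the field positions would need to be mapped to your proxy’s actual layout:

```python
# Sketch: flag unusually large outbound POSTs in a proxy log as
# candidate intellectual-property leaks. The (user, method, url,
# bytes) field layout is hypothetical; adapt it to your proxy.
def suspicious_uploads(log_lines, threshold=1_000_000):
    """Return (user, url, size) tuples for POSTs at or above threshold bytes."""
    hits = []
    for line in log_lines:
        user, method, url, size = line.split()
        if method == "POST" and int(size) >= threshold:
            hits.append((user, url, int(size)))
    return hits

sample = [
    "jdoe GET http://news.example.com/ 4096",
    "jdoe POST http://files.example.com/upload 5242880",
]
print(suspicious_uploads(sample))
```

This only works, of course, if the logging and authentication questions in the checklist item above are answered first: without authenticated proxy logs, there is no user name to put in the report.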

Summary At first glance, information security is a fairly straightforward field. When asked by laymen what I do for a living, and receiving blank stares when I reply “information security,” I usually find myself explaining that my job is to keep


hackers out. But as we’ve discussed here, managing information security for an organization is not merely a technical position. As related in the beginning of the chapter, “security” is the careful balancing of the integrity, privacy, ownership, and accessibility of information. Effectively addressing all four of these requirements entails a working knowledge of technology, business practices, corporate politics, and psychology. The one common element in all of these disciplines is the individual: the systems administrator, the executive, the administrative professional, the vendor, or the partner. Despite all the statistics, case studies, and first-hand experiences, this one common element that binds all the elements of an effective security posture together is commonly regarded not as a core resource, but as the primary obstacle. Although steps must indeed be taken to protect the organization from the actions of these corporate entities, concentrating solely on this aspect of security is in fact the issue at the heart of a reactive security stance. In order to transition to a truly proactive security posture, the individuals, every one of them, must be part of the plan. Actively engaging those people who make use of your systems in protecting the resources that they rely on to do their jobs distributes the overall load of security management. Educating users on how to recognize security risks and providing simple procedures for relaying their observations through the proper channels can have a greater impact on the potential for expensive security incidents than any amount of hardware, software, or re-architecting. Although ten years ago the number of people within an organization who had the capacity to cause far-reaching security incidents was very small, in today’s distributed environments anyone with a networked device poses a potential risk. This shift in the potential sources of a threat requires a change in the approaches used to address them.
By implementing the technology solutions provided both in this chapter and elsewhere in this book, in conjunction with the organizational safeguards and techniques provided in the preceding pages, you can begin the transition away from reactive security in your organization. Review your current Internet access controls—is authentication in use? Is there content filtering in place? How long are logs kept? Does your company have any information security awareness programs today? When was the last time you reviewed your remote access policy documents? By asking these questions of yourself and your co-workers, you can begin to allocate more resources towards prevention. This change in stance will take time and effort, but when compared to the ongoing financial and time-based costs of managing incidents after the fact, you will find that it is time well spent.

Solutions Fast Track

Balancing Security and Usability

• Communication and anticipation are key aspects of any security initiative; without careful planning, adverse impact can haunt all your future projects.

• Personnel are both your partners and your adversaries—many major incidents can never take hold without the assistance of human intervention, either passive or active.

• Viruses, and corporate defense thereof, have paved the way for advancing security on all fronts by providing a precedent for mandatory security tools all the way to the desktop.

• The crunchy-outside, chewy-inside exoskeleton security paradigm of recent history has proven itself a dangerous game, time and time again.

Managing External Network Access

• The Internet has changed the way that people work throughout an organization, but left unchecked this use can leave gaping holes in network defense.

• Proxy services and careful implementation of least-privilege access policies can act as a filter for information entering and exiting your organization, letting the good data in and keeping the sharp bits out.

Managing Partner and Vendor Extranets

• Partnerships, mergers, and closer vendor relations are blurring network borders, creating more challenges for network security and perforating the crunchy outside.

• Develop and maintain a comprehensive remote access policy document specifically for third-party partner or vendor relationships, defining stricter controls and acceptable use criteria. Technologies such as virtual private networks (VPNs) and independent extranet solutions can provide secure methods to share information and provide access.

• In many cases, these tools and techniques are managed the same as their purely internal or external counterparts—all that’s required to repurpose the technologies is a new perspective.

Securing Sensitive Internal Networks

• Some parts of the chewy network center are more critical than others, and demand special attention.

• With an implicit need for protection, these sensitive networks can help establish precedent for bringing proven security practices inside of the corporate borders. Education and established tools and techniques such as encryption, firewalling, and network segmentation can be adapted to protect these “internal DMZs.”

Developing and Maintaining Organizational Awareness

• Since security is only as strong as its weakest link, it is imperative to recognize the role of the individual in that chain and develop methods of fortifying their role in network defense.

• By tailoring your message to the specific needs and concerns of the various entities of your organization, you can bring about change in a subtle but effective fashion.

• Capitalize on your previous experiences—learn from previous mistakes and successes to develop better preparedness for future events.

Links to Sites

• CSO Online ( This site and its classic media subscription magazine (free to those who qualify) provide articles targeted to executive-level security personnel. In addition to providing insight into the mindset of the decision makers, its articles tend to focus on organizational security, instead of the more localized approach taken by many other security-related sites and publications.

• SecurityFocus ( This site needs little introduction to those in the computer security industry. It includes insightful articles from people “in the trenches,” from technical how-tos to more strategic discussions on matters pertaining to information security professionals.

• International Information Systems Security Certifications Consortium ( This is the site for the providers of the well-known CISSP and SSCP certifications. (ISC)2, as it is commonly called, has established its mission as compiling a relevant “Common Body of Knowledge” and developing testing to validate a candidate’s understanding of the core principles of information security. (ISC)2 provides a reference list of publications that are representative of the concepts tested for both of its certifications; many of these books are considered to be the authoritative source of information on their realm within information security.

Mailing Lists

• VulnWatch ([email protected]) VulnWatch is a rare mailing list in that it is extensively moderated, with a high signal-to-noise ratio. Filtered to provide nothing but vulnerability announcements, this is an excellent resource for the administrator whose e-mail volume is already much too high.

• Firewall Wizards ( Although far less moderated than VulnWatch, Firewall Wizards can provide valuable insight into industry trends in managing network access. This list concentrates primarily on managing border access control, but its members are well-versed in all aspects of access control.

Frequently Asked Questions

Q: The users in my general populace aren’t technically savvy. Why should I spend our limited resources on protecting against “internal threats”?


A: In years past, diversity of systems, security-by-obscurity, and the rarity of technically knowledgeable individuals lent credibility to the border-patrol approach to corporate security. But recent events have shown time and time again that the most costly security incidents are those that take advantage of today’s standardized systems and minimally defended internal systems to wreak havoc and incur great expense due to lost productivity and response costs. With the availability of information and the abundance of autonomous threats such as worms, Trojans, and viruses, there is little difference in the risks posed by knowledgeable users versus the technically inept.

Q: How do I ensure that roaming laptops don’t pose a threat to my organization?

A: Until an effective method can be developed to provide instantaneous policy-compliance spot checks to ensure new network entities are safe to join the network, roaming laptops need to be treated as a potential threat to the network. Since isolating roaming laptop networks is not cost effective, you need to approach the risks imposed by roaming laptops by minimizing the susceptibility of the rest of the network. Although user education and awareness can make great strides towards minimizing the threat posed by devices not entirely under corporate control, ultimately the defense needs to be distributed to all systems, so that no matter what a roaming laptop carries into your organization, your systems are capable of defending themselves.

Q: How do I make certain that all my clients have adequate personal firewall and/or anti-virus policies on their workstations?

A: Vulnerability assessment tools, organizational standards for defense software, and centralized management systems can help ensure that all networked systems have proper defenses. Competition between vendors, however, has prevented a single all-encompassing solution from being made available to organizations.
If your organization has relationships with security product vendors, encourage them to make public their management protocols and data structures—with the proper incentives for developing an open security communication standard, an all-encompassing management solution may be just over the horizon.

Q: My management has been slow to react to imminent security risks; how can I sell them on preventative measures?

A: In a word, costs. Prevention is a discretionary expense, but reacting to security events is not. Approach the issue from relevant previous experiences—rather than hypothesizing as to the cost of the next major incident, show them the true costs of recent events and show how, to use a cliché, an ounce of prevention is truly worth a pound of cure. By providing clear, indisputable financial justification, and showing your understanding of the issues they are facing, you will foster trust in your motives and initiatives.






Chapter 2

Creating Effective Corporate Security Policies

Solutions in this Chapter:

• The Founding Principles of a Good Security Policy
• Safeguarding Against Future Attacks
• Avoiding Shelfware Policies
• Understanding Current Policy Standards
• Creating Corporate Security Policies
• Implementing and Enforcing Corporate Security Policies
• Reviewing Corporate Security Policies

Introduction The purpose of this chapter is to help the network administrator or security specialist design a plan of attack to develop a corporate security policy strategy. This person may have just been tasked with creating a new security policy, updating an old one, or maintaining one developed by someone else. Regardless, at first glance these are all daunting tasks, and may seem impossible. In this chapter, I break down the process into several defined steps, each of which helps you to create, review, and enforce the policies you need to secure your corporate network. Current technology can be used to create a secure infrastructure, but good policies are necessary to maintain it. Security policies are usually seen as a necessary compliance with some higher power, not as a necessity of function in a network operation. They are often overlooked and undervalued until they are really needed. We can create secure networks, write secure code, and build reliable, survivable systems with current technology today. If configured properly, using the principles outlined in this book, we can accomplish our goals. However, we still come down to the fundamental flaw in securing our networks: people. People, unlike computers, don’t follow instructions exactly as told. They have choices, and their choices can put cracks in the security walls. These cracks can be a personal dual-homed box connected to the outside, bypassing the firewall; an insecure password that’s easy to remember; or a lazy system administrator who leaves ex-employee credentials in the authentication database. The statistics from the industry are shocking.
Jacqueline Emigh demonstrates this in her June 2002 article for Jupitermedia Corporation, Security Policies: Not Yet As Common As You'd Think: “If organizations don't develop and enforce security policies, they're opening themselves up to vulnerabilities,” maintains Richard Pethia, director of the FBI's National Infrastructure Protection Center (NIPC).

Other studies underscore these vulnerabilities. In its recently released 2002 Computer Crime and Security Survey, the Computer Security Institute (CSI) conducted research among 853 security practitioners, mainly in large corporations and government agencies. A full 90 percent admitted to security breaches over the past 12 months, and 80 percent acknowledged financial losses due to these breaches. Frequently detected attacks and abuses included viruses (85 percent), system penetration from the outside (40 percent), denial of service attacks (40 percent), and employee abuses of Internet access privileges, such as downloading pornography or pirated software, or “inappropriate use of e-mail systems” (78 percent). The question is, why weren’t security policies put in place if they could have helped to prevent some of these incidents? Slowly, those in the industry have begun to realize the importance of policies and procedures as a means to protect their information assets. The article Human Error May Be No. 1 Threat to Online Security by Jaikumar Vijayan for Computerworld reports on a story in which VeriSign issued two digital certificates to an individual claiming to be a Microsoft employee. “The whole thing proves that online security isn't about the technology,” said Laura Rime, a vice president at Identrus LLC in New York, which was established by eight leading banks to develop standards for electronic identity verification for e-commerce. “It is more about the operating procedures and processes rather than technology, that is crucial in preventing incidents such as these,” Rime said. These examples demonstrate how employees’ actions put cracks in the walls, and the best way to mitigate the creation of those cracks is with the proper policies—policies that are appropriate, meaningful, understandable, and enforceable. The purpose of this chapter is to help you create a strategy and program to develop a system to create, deploy, administer, and enforce policies.
The chapter starts with a background of policies, and addresses certain important issues in security policies. It leads with the importance of understanding the proper use of policies: what policies are supposed to accomplish, and what they are not meant to accomplish. I review some of the theory behind creating good policies, and how they tie in to the rest of the security principles. Then I address the issue of policy shelfware, the elephant graveyard of policies. On multiple policy review projects, I have witnessed good policies go unused and unnoticed, which only results in poor implementation of policies and poor network security. Finally, I review the current shortcomings of the multiple guidelines and policy review tools currently available. The field of security policies, though it has existed since the 1960s from an IT perspective, still has a way to go in regard to standards. The second half of the chapter covers the development process, from risk assessment to enforcement. These sections discuss the different tools available for performing risk assessments, and review the importance of risk assessments. I then cover some guidelines for the creation of corporate security policies. Though this may seem like it should be the focus of this chapter, it is only one subsection of the larger picture of security policy. Because so many policy creators focus just on creation, they miss the rest of the picture and are unable to perform a thorough job. I later address the issues involved in implementing policies as

procedures, once they are developed. This is where the rubber meets the road, and policies take hold in the corporate environment. Finally, the chapter addresses the areas of reviewing corporate security policies, and the various tools available to help conduct reviews of policies and procedures, and gauges them against guidelines.

The Founding Principles of a Good Security Policy

It has been known for a long time that there is no silver bullet in security: no one device, configuration, or person will be able to solve all the security woes a company may have. Instead, it is commonly accepted that good security is a combination of elements, which together form a wall. Each component brings something unique to the table, and security policy is no exception. Security policy will not solve all your misconfigurations and personnel problems, but it will provide a piece of the puzzle that no other component can provide: structure.

We can think of the structure and use of security policies as a pyramid of goals, with the most important goals on top, supported by the foundation needed to accomplish those goals. Figure 2.1 illustrates this concept.

Figure 2.1 The Pyramid of Goals





Our primary goal, as is the generally accepted goal of information security, is to maintain information confidentiality, integrity, and availability (CIA). We can accomplish this by concentrating on several principles of information security, which are taken from industry experience:

• Principle of least privilege Grant the minimum level of authorization necessary for users, services, and applications to perform their job functions.

• Defense in depth Build security implementations in multiple layers, such that if an outer layer is compromised, the underlying layers are still resistant to attack.

• Secure failure Design systems so that, in the event of a security failure or compromise, they fail into a closed, secure state rather than an open, exposed state.

• Secure weak links Concentrate security efforts on the weakest links in the security infrastructure.

• Universal participation Effective security requires the cooperation of all parties, to ensure that security programs, policies, and procedures are universally accepted, understood, and applied.

• Defense through simplicity Reduce complexity wherever possible, because administrators and users are less likely to make mistakes on simpler systems.

• Compartmentalization Segregate systems such that if one is compromised, the compromise does not reduce the security of the others.

• Default deny By default, deny all access that is not explicitly allowed.

Guidelines, standards, and rules issued by regulatory agencies are created with these principles in mind. The ISO17799, GLBA, and HIPAA all include elements of these security principles. Many of these guidelines, standards, or rules will also explicitly state the goals of security: confidentiality, integrity, and availability. To achieve these eight principles of security, we rely on the proper creation of policies, which is the focus of this chapter. The implementation and enforcement of policies are procedures, which I address at the end of the chapter, though a full explanation requires further research. Now that you have an idea of how policies fit into the bigger picture of an organizational structure for information security, let's get into it.
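To make the default deny principle concrete, here is a minimal sketch of an access check that refuses any request not on an explicit allow list. The names and rule format are hypothetical illustrations, not taken from any particular product:

```python
# Hypothetical sketch of the "default deny" principle: a request is
# granted only when an explicit (user, resource, action) rule exists.
ALLOWED = {
    ("alice", "payroll-db", "read"),
    ("bob", "web-server", "deploy"),
}

def is_allowed(user, resource, action):
    """Return True only for explicitly permitted requests."""
    return (user, resource, action) in ALLOWED

# Anything not on the list is denied by default, including requests
# nobody thought to write a rule about.
assert is_allowed("alice", "payroll-db", "read") is True
assert is_allowed("alice", "payroll-db", "write") is False
```

The design choice to enumerate permissions rather than prohibitions is what lets this check deny attacks nobody anticipated, which is exactly the property the principle is after.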

Safeguarding Against Future Attacks

Good network security can only guard against present, known attacks; tomorrow's vulnerabilities cannot be closed by applying today's patches and upgrades. We have the ability to lock down our servers, configure routers, and patch Internet-facing software to maintain a secure network up to the current moment, but the unknown vulnerability of tomorrow may expose a gaping hole in our networks. Once we stop maintaining our network, security slides, and holes begin to open up to the latest vulnerabilities.

The nice thing about policies is that they are one of the few security tools that help us guard against unknown, unforeseen, future attacks. Policies define what actions need to be taken to maintain secure networks, such as removing inactive users, monitoring firewall activity, and maintaining current, standard server builds. However, proper policies are also not a silver bullet. Policies that are not reviewed, updated, promulgated, or enforced will become outdated and ineffectual. In many ways, policies implemented improperly can be more harmful than no policies at all, because they can give a false sense of security. One requirement for any policy to be effective is management support. Without management support, even the best policy with the best implementation and enforcement will be ineffective.

Damage & Defense… Guard Against the Unknown

Imagine if you were able to stop attacks before they started. Imagine if you were able to patch vulnerabilities before they are discovered. Sounds impossible? Perhaps not. If you implement proper information security policies and procedures, you may be able to prevent attacks before they even start. For example, if your policies require you to follow the principle of defense in depth, and you have a properly implemented security perimeter around your entire network, you are less likely to suffer an impact from a failure in one component of your network.

Another example: if you have proper personnel policies and procedures implemented, such as performing background checks on employees, removing old user accounts from former employees, and evaluating the threat potential of current employees, you may be less likely to suffer an attack from an insider. With the proper personnel controls in place, you may be able to recognize and mitigate threats from a potentially subversive employee before they take action. You may even be able to recognize them as a threat before they get any ideas, and address the problem before it starts.
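The "remove old user accounts" control above is easy to automate. The sketch below is a hypothetical illustration (the account names and the 90-day threshold are my assumptions, not values from any standard) of flagging accounts whose last login exceeds what policy allows:

```python
# Hypothetical sketch of a personnel-policy control: flag user accounts
# that have been inactive longer than the policy threshold, so that
# former employees' accounts get removed promptly.
from datetime import date, timedelta

MAX_INACTIVE_DAYS = 90  # assumed policy threshold

def stale_accounts(accounts, today):
    """Return account names whose last login is older than the threshold."""
    cutoff = today - timedelta(days=MAX_INACTIVE_DAYS)
    return [name for name, last_login in accounts.items() if last_login < cutoff]

accounts = {
    "jdoe": date(2002, 1, 15),   # left the company months ago
    "asmith": date(2002, 6, 1),  # active user
}
print(stale_accounts(accounts, today=date(2002, 6, 10)))  # ['jdoe']
```

A report like this run on a schedule turns a written policy statement into a repeatable procedure, which is the point of the policies-to-procedures progression discussed in this chapter.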

Required: Management Support

In the past, as today, policy tended to be handed off to the low man on the totem pole, passed off as grunt work that nobody really wanted to do. This is counterintuitive when we step back and look at it. Policies are passed to the low man because they don't directly impact the bottom line; no direct cost-benefit analysis can be applied to policy creation and implementation. This was the same argument applied to information security until, just recently, more management began to sit up and take notice. Information security has made some headway in justifying itself without having a direct, accountable impact on the bottom line. Now it's time for security policy to make the same progress.

The importance of security policy becomes clear once management recognizes that policy is where the corporate strategy for security gets implemented. Executive-level management should care about how their corporate strategies are implemented. As security becomes a larger price point for executives in the future, security policy may follow suit. Until then, it will be an uphill battle for many security administrators to get management support for their security initiatives.

However, security policy is one area where management support is critical. Because security policies deal much more with the day-to-day actions of employees, and changes in policy should ideally result in changes in procedures, it is important that security implementers have the backing of management. Effecting procedural change in a corporation where employees are set in their ways can be very difficult, and requires much effort. Before you make an effort to implement any policies, make certain you have specific commitments from management as to their role in your initiative and the support they will provide.

Avoiding Shelfware Policies

Shelfware policies literally sit on the shelf, unread by employees, and are usually not enforced. Oftentimes, employees will not even be aware such policies exist. Multiple times, my clients have been surprised to learn of a policy (which I uncovered during my digging) they did not know existed. Other times, my clients will know that a policy exists, somewhere, but they are unable to locate it and do not know what it covers. Right now, you should ask yourself if you know where your information security policy is located. How long has it been since you looked at it? If your answer is "never" or "not since I started work here," you're in the majority with other shelfware policy users.

Shelfware policies can be more dangerous than no policy at all, because they provide a false sense of security. You probably sleep better at night knowing you have a security policy. But shouldn't it bother you that you have entrusted the security of your network to other users? A good security policy is what protects you from the other users on your network.

Some of the policies I've seen were created because of compliance requirements; others were created because of customer requirements. When policy is created due to external requirements, without management directive, without clear value, and without an owner, it risks becoming shelfware. The easiest way to keep a policy from becoming shelfware, though it sounds obvious, is to avoid letting it sit on the shelf. A good policy is easily readable, referred to often, maintained and updated, valued in the organization, supported by management, enforced, and championed by a clear owner. These are the traits of a "living policy," one that doesn't sit on the shelf. In the following sections, I discuss ways to keep your policy alive.

Make Policies Readable

One of the most critical factors in making a policy useful is making it readable. This section offers multiple tips to make your policies as readable and understandable as possible. Policies are often filled with relevant and useful information, but it's usually buried deep inside; it takes a good writer to make a policy document flow. First and foremost, try to maintain a logical structure in whatever you write. Outline your policies before you write them, then go back to fill in the missing pieces in detail. This gives you the ability to view your policies from a bird's-eye view, and then dig down to the details.

The phrase "shelfware policies" comes from Gary Desilets, who wrote a great article on how to properly compose policies to make them easy to read. The stated emphasis of the article was to "employ selected technical writing skills to improve security policy documentation," and it has many useful tips for anyone writing a security policy, or any technical document in general. Many of the key points Desilets makes come from standard good writing practices. He points out that policies should be written with the reader in mind, which includes presenting the policy in a clear, logical order so that it is easy to understand. The following list gives a short synopsis of each section, but I suggest you read the actual article yourself, as he covers each topic much more extensively:

• "Manage the document set" If your policy includes multiple documents, especially when writing a large policy, consider how you will manage them:

• Consider the placement of content, so that it appears in relevant areas. However, be careful not to duplicate the same content throughout your policy documents, as duplicated content is more difficult to maintain.

• Consider the reader: give them only the information they need to know, and differentiate that from optional information.

• Consider naming conventions, and maintain consistent naming throughout your policy.

• "Do it with style" Plan the style of your writing, including tone, character, and voice, and keep it consistent throughout the document so it's easier to read. Plan this out before you start writing, so you have a blueprint for how your message will sound. For example, consider the use of "he/she" versus "he" or "she" in various instances; "he/she" is less readable, and randomly alternating "he" and "she" will usually suffice.

• "Focus on the readers" Identify the anticipated readers, and write with them in mind. They will be concerned with the policies that affect them and the consequences if they don't comply. Usually, you want to write for someone who is relatively inexperienced; don't assume the reader knows about a particular subject.

• "Follow the action" The main thing is to keep the main thing the main thing. Keep your sections short and to the point, avoid unnecessary background information, and try to use topic sentences at the beginning of each paragraph as often as possible.

• "Be careful what you borrow" Be careful when using an existing security policy template and integrating it into your document. You can introduce readability problems, such as changes in tense and/or focus, extraneous material, or a section that just doesn't fit. Make it your own when you copy and paste.

• "Careful with categories" Categories can be a powerful tool when logically grouping items, but they can also be confusing if used wrongly. Desilets identifies three common mistakes:

• Logical classification When making categories, it is easy to overlook essential sections.

• Broad definition If a category is made to encompass too much, the meaning of the grouping becomes lost, and important limitations within the category are omitted.

• Classifications of unequal value When classifying items into groups, the weight of each group may be inadvertently made equal. If this is not the intent, the value of the varying groupings may be lost.

• "Use the technology" Use the most current, applicable technology to promulgate your policies. This can be as simple as a binder or corporate letter, or as complex as online policy management software. I go over this in more depth in the "Policy Enforcement" section.

All too often, policies are written with only compliance requirements in mind, and the author forgets that the policies need to be read and followed by users. I can't emphasize enough that policies need to be relevant to end users and their needs.

Make Policies Referable

What is the point of developing a policy if nobody is going to refer to it? Your policy may be a great literary work, well written and easy to understand, but if users don't keep the information fresh in their minds, they will forget what they have read. You need to use some strategy to keep that information fresh in your readers' minds, and one way to do that is to make them refer to the policy often. After all, your policy needs to be a living document, and part of being alive is being used often. One way to encourage your readers to review the policy often is to keep it close at hand: your policies need to be easily accessible, in print form, electronic form, or both.

One of the best examples I've seen of keeping policies current was a client that presented them in four ways. First, they distributed hard copies to all employees. Second, they stored all the policies on their intranet, which also held multitudes of other useful information. Third, they hung posters around the building with graphics emphasizing their most important policies; each poster included the name of the policy in reference and the intranet address. Fourth, they gave presentations of all their policies, both to new employees and as refresher courses for current employees. The courses included quizzes that, in some cases, restricted computer use and intranet or Internet access until the employee passed the test.

In contrast, one of the worst cases I've seen was a company that distributed a form of policy "receipt and understanding" without distributing the policy. When employees asked if they could review the policy before signing the statement stating they had done so, they found out it had not been completed. The most frightening part was that several employees signed and returned their statements without reviewing the nonexistent policy.
That makes me wonder if they ever would have read the policy, even if it were available.

Storing a policy on an intranet site that is accessed often, as I stated earlier, is one good way to encourage employees to refer to your policy. Another way is to combine the policy with procedural documents, which may be referred to often. For example, procedural documents may state the snippet of relevant policy at the beginning, so employees will see the policy from which each procedure is derived. Procedural documents may include checklists, access right lists, and important corporate contact information. Procedural development is out of scope for this chapter, because it is much more customized for each organization.

One last tip, which was alluded to earlier, is to quiz employees on policy before they are granted access to certain systems. Some software, such as PoliVec, includes quiz modules that force employees to review policy and pass policy quizzes. Think of this as taking your medicine: your employees might not like it, but your organization as a whole will be healthier in the end.
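The quiz-gating idea can be sketched in a few lines. This is a hypothetical illustration only; the function names and the 80 percent passing score are my own assumptions, not PoliVec's actual interface:

```python
# Hypothetical sketch of gating system access on a policy-quiz score.
PASSING_SCORE = 0.8  # assumed threshold: 80 percent correct to pass

def grade(answers, answer_key):
    """Fraction of quiz questions answered correctly."""
    correct = sum(1 for q, a in answer_key.items() if answers.get(q) == a)
    return correct / len(answer_key)

def grant_intranet_access(answers, answer_key):
    """Access is granted only after the employee passes the policy quiz."""
    return grade(answers, answer_key) >= PASSING_SCORE

answer_key = {"q1": "b", "q2": "a", "q3": "d", "q4": "c", "q5": "a"}
employee = {"q1": "b", "q2": "a", "q3": "d", "q4": "c", "q5": "b"}
print(grant_intranet_access(employee, answer_key))  # True (4 of 5 correct)
```

Whatever tool implements it, the essential design is the same: the access decision consumes the quiz result, so reading the policy becomes a precondition rather than a suggestion.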

Keep Policies Current

Keeping policies current is one of the most important ways to keep them from becoming shelfware. If a policy is constantly updated, hopefully with useful information, it will probably be read. If employees know the policy is static, and hasn't been updated in several years or even months, they are more likely to ignore it.

One of the most important ways to keep a policy current is to treat it as a "living" document: one that is constantly changing, evolving to meet the needs of the company. By treating it as a living document, a few things happen naturally. It comes off the shelf more, because it's updated more often. Because it's updated more often, it will be read more, so long as it is updated with new, useful information. Maintaining a living document keeps it updated with the latest changes in the company, so it is constantly relevant.

To maintain a living document, the policy needs an owner who is responsible for keeping it current; this is discussed later in the "Designate Policy Ownership" section. In addition, a living document needs to be kept in a location that is easily accessible to all employees, such as online or as a help file on a common share. Keep in mind a strategy to easily update your policy with current information when you are designing and developing your document.

Balance Protection and Productivity

Imagine seeing an old car that has a car alarm but no wheels, or a Lamborghini with the keys in the ignition and the doors unlocked. One looks silly; the other is downright dangerous. Both are as irresponsible as poor policy implementation. When creating your policies, you need to adjust your requirements to the appropriate level of security. Create your policies so they protect the assets that need protecting, but still allow your employees to get their jobs done. At the same time, don't sacrifice security for ease of use. A balance between the two is necessary for appropriate creation and implementation of policies. Keep in mind that if your policies are considered unnecessary, they'll often be ignored.

Recognize Your Value

As you invest more time and effort into maintaining your policy and making it useful, it becomes more valuable as an asset to the company. This value needs to be recognized by management, by the users, and by the party responsible for maintaining the policy. If any of those three parties does not consider the policy valuable, it will fall by the wayside and become shelfware. Once it is recognized as an asset, you can treat it as you would an internally developed software product: it can carry a monetary value that depreciates over time, and it requires maintenance to hold its value.

Management needs to recognize the value of the security policy. This is usually assumed if management provides its support for the security policy initiative. If management does not see the value in the policy, it may not give the policy the appropriate financial or resource support. It's the responsibility of management to provide the support necessary for policy development and deployment. If you don't have management supporting and valuing your policy development program, stop right there and focus your attention on management; your efforts will be dead in the water if you don't have them on board.

The importance of management support is that it provides legitimacy to your policy document in the eyes of your users. If users see that management supports the policy document, they may take it seriously. If users consider it a joke or another unnecessary organizational requirement, they will not take it seriously, and if users do not consider the security policy to be of value, they may not follow its guidelines, making it mostly ineffective. If you have management support but are lacking user support, there are only two available options:

• Force users to comply with policy through decree This is rarely effective, since users will sometimes act like children who don't want to be picked up: they will squirm every which way to make it next to impossible to get them to do what you want.

• Focus your attention on getting user support for your policy document If users feel any sort of ownership of your policy, or if they see its value, they will take it more seriously, and it will instantly become more effective. There are several ways to influence acceptance, such as assigning roles in policy development to individual people or holding security policy forums. Any action you can take to provide users with some feeling of ownership or value will bring them closer to you.

Finally, if the person or group in charge of maintaining the security policy sees her efforts going unvalued, she may not perform to the level of quality expected. It's difficult to fight an uphill battle against management and users, but it's impossible to win a battle you believe you will lose. If you do not see the value in creating security policies, you need to pass your responsibilities on to someone who does.

Designate Policy Ownership

A clear owner of the policy should be designated by management and announced to all relevant employees. This person should be the head of the "policy police," a group or individual whose sole responsibility is to maintain and enforce the corporate policies. One of the best information security programs I have reviewed was at a client who had a dedicated employee whose sole responsibility was the creation, maintenance, and enforcement of their security policy. In addition, this employee had the full support of upper management. In larger corporations, this may be feasible given the available resources. Smaller corporations may see it as overkill, but the value of dedicated individuals to maintain policies may increase in the future.

Due to our changing corporate environment, and with ongoing concerns about personal privacy, the extent to which a corporation can monitor its employees' adherence to policies will become an important topic. How far can a corporation go to enforce its policies? Can it read corporate e-mail? Can it monitor Web sites visited by employees? Court rulings have determined that corporations can in fact take these actions. However, violations are usually discovered incidentally, such as by a system administrator investigating why an employee's mailbox keeps going over quota. If an employee were specifically designated to look for policy violations and address them, employees might start to pay more attention to the corporate policies.

Another reason for designating a clear policy owner is that acting as the "policy police" is not a very attractive or exciting job. If this responsibility is assigned to an employee who already has multiple other commitments, policing employees for policy violations may fall to the back burner. Done properly, enforcing corporate policies is a full-time job.

Obtain Management Support

You absolutely must have management on board if you want to prevent your security policy from becoming shelfware. Although supportive management can make a policy effective in the organization, lack of support from management will kill any policy initiative before it even gets out of the gate, for the simple reason that if management doesn't take information security policies seriously, why should the employees?

In addition, management support cannot simply be a rubber stamp. Management needs to be actively involved in the entire process, from policy creation to deployment to enforcement. The owner of the policy needs a higher power to turn to for support on a policy issue. Some management may see policy as an unnecessary burden, only for compliance with regulations or to meet customer requirements; we've already addressed why this attitude is inadequate when developing an information security program. If your management acts this way, explain that security policy is like car insurance: even though many would prefer not to spend money on insurance, to save a penny and avoid the hassle, it's very useful when we need it. It intrinsically provides an added level of reassurance, both to your customers and your employees.

Understanding Current Policy Standards

Currently there are multiple information security policy standards available, and it can be quite confusing to try to understand which one is appropriate for your needs. First, I want to address the importance of standards, and why you should choose to baseline your policies against them.

Using standards to baseline your information security program is a lot like attending an accredited university, or trusting the financial status of a corporation that has been independently audited. Your hope is that the accreditation board or the auditors know more about the subject matter than you, and can provide an unbiased guarantee that the university is legitimate or the corporation's books are true. Due to recent events such as the Enron and Arthur Andersen scandals, the importance of a truly independent third party has become even more apparent. The guidelines published for information security programs are usually created by a collective of people who probably know more than any one individual about what should be included in a program. As it was once explained to me, "these guidelines were created by smart people who put a lot of thought into what they were doing." Although none of them is perfect, many are quite useful in assuring the information security program coordinator that all the relevant issues have been addressed. None should be treated as the ultimate truth; instead, each component should be carefully considered for inclusion.

Another benefit of baselining your security program against a standard is that, much like an accredited institution, you can reliably say that your program does not have any gaping holes in its security web. I have seen security policies where the program wasn't based on a guideline or standard, and although the areas they covered were thorough, they completely forgot to include a physical security policy. Oops.

A common misconception is that a standard should include defined controls, such as best practices for a particular piece of software, a device, or even a technology. In fact, it's up to a knowledgeable administrator of these devices to define such controls. This administrator should be skilled in both security knowledge and configuration, and can use the guidelines defined in the standard, or the general rules defined by the policy administrator, to create the necessary controls.

ISO17799

One of the most widely accepted and endorsed security policy guidelines in use is International Organization for Standardization (ISO) 17799:2000. This document was originally British Standard (BS) 7799, and was submitted to the ISO in late 2000 using the "ISO/IEC JTC 1 Fast Track Process," which allows an existing standard to be submitted for immediate vote without modification. Though the vote for BS 7799 passed, it was not without resistance. Several members, including the United States, protested that the proposed draft was too general for use as a standard; a technical standard would usually include specific controls and how they are to be implemented. However, it has been argued that the generality of the ISO17799 is both its greatest weakness and its greatest strength. Just as no two organizations are alike, no two security policies should be alike. Each should be customized to the specific needs of the company, an issue I get into later in the "Creating Corporate Security Policies" section.

There has been some confusion over the ISO17799, in that you cannot become certified as ISO17799-compliant. When BS 7799 was originally submitted, BSI declined to include BS 7799-2 for approval; BS 7799-2 is a checklist of controls that a company can be audited against. The ISO17799 is not appropriate to be certified against, and therefore the ISO has not offered a certification through its registrars. However, if your company has a desire to be certified against BS 7799-2, which is the closest certification available, you can get more information at BSI's homepage.

Regardless of a company's interest in certification, the ISO17799 provides high-level guidance for almost all of the areas your security policy should cover. In some cases, the ISO17799 may provide details irrelevant to your business practice. For example, if you do not outsource any software development, Section 10.5.5, titled "Outsourced software development," may be irrelevant to you. In other cases, the ISO17799 might not provide enough detail to use for guidance. For example, if you require details on creating a security policy for international trade agreements, you may need another source for more detail than what is provided in Section 4.2, "Security of third party access."

The ISO17799 can be purchased from BSI for under $200, and it is a worthwhile investment. Though not perfect, it is one of the best standards we have, and one of the most widely referred-to security documents. It appears to have good traction and is gaining ground, both in the American and international communities, as a solid security standard.

SAS70

The Statement on Auditing Standards (SAS) No. 70, Service Organizations, is a tool available to auditing firms and CPAs for auditing a company that has already implemented an information security program. The SAS70 does not contain a checklist of security controls; rather, it allows an auditing firm to issue a statement of how well a company is adhering to its stated information security policy. The report issued can be type I or type II: a type I report includes the auditor's report and controls, and a type II report adds testing and verification of the security controls over a period of six months or more.

There has been some controversy over the applicability of the SAS70 to a security review. Namely, it does not contain a checklist of recommended security controls, and it verifies only that stated security controls are followed. If a corporation's information security program has omitted particular controls, as I have seen with several clients and mentioned previously, this is not noted in the SAS70 report. Because the audit is conducted by auditors who do not necessarily have an information security background, they may miss important gaps in the policy.

If you have already implemented a security policy based on a standard such as the ISO17799, the SAS70 may give your information security program additional credibility. Having more accreditation groups give your program a "pass" grade doesn't necessarily mean you have a more secure program, but it can help to make customers happy or meet federal or insurance requirements. Remember that the SAS70 is not appropriate for use as a checklist to create an information security policy.

Many other standards have been created, some of which are listed here:

• Control Objectives for Information and Related Technology (COBIT) A free set of guidelines for information security, published by the Information Systems Audit and Control Association (ISACA)

• ISO 15408/Common Criteria A technical standard published by the ISO, used to support the specification and technical evaluation of IT security features in products.

• Government Information Security Reform Act (GISRA) Requires civilian Federal Agencies to examine the adequacy of their information security policies, among other requirements.

Government Policy

Both parties in the United States government have recognized the importance of protecting a user's online privacy and security. Though each party has a different way of implementing its plans to secure users, one thing we can be sure of is that we will see new regulations regarding our online privacy and security. Some may be new regulations, others suggested guidelines, but the government has taken notice of the need for legislation. Bill Clinton's administration posted the following on May 1, 2000: "The Administration released a new regulation to protect the privacy of electronic medical records. This rule would limit the use and release of private health information without consent; restrict the disclosure of protected health information to the minimum amount of information necessary; establish new requirements for disclosure of information to researchers and others seeking access to health records; and establish new criminal sanctions for the improper use or disclosure of private information."

George W. Bush told the Associated Press on October 6, 2000:

Q: "On Internet privacy: Should the federal government step in to safeguard people's online privacy or can that be done through self-regulation and users' education?"

A: "I believe privacy is a fundamental right, and that every American should have absolute control over his or her personal information. Now, with the advent of the Internet, personal privacy is increasingly at risk. I am committed to protecting personal privacy for every American and I believe the marketplace can function without sacrificing the privacy of individuals."

The following two sections provide overviews of two major developments that may affect your organization: the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA). Note that other legislation may also affect your security policies, such as the Children's Online Privacy Protection Act (COPPA) or the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act. You should be vigilant in ensuring that your corporate policies are in accordance with the law, because regulatory agencies may check your organization for compliance, and fines could be issued if you are not in compliance.

Health Insurance Portability and Accountability Act (HIPAA)

The Health Insurance Portability and Accountability Act was signed into law in 1996. HIPAA came about in response to a need to establish standards for the transfer of patient data among health care providers. This includes health care clearinghouses, health plans, and health care providers who conduct certain financial and administrative transactions electronically. Insurance providers, hospitals, and doctors use a wide array of information systems to store and transfer patient information, and have various claim forms with varying formats, codes, and other details that must be completed for each claim. HIPAA was enacted to simplify the claim process. Privacy and security issues were also addressed in this legislation to protect patient data. A provision in HIPAA gave Congress a three-year time limit to pass legislation for electronic health care transactions, codes, identifiers, and security. After three years, the responsibility passed to the Department of Health and Human Services (HHS). In August of 1999, HHS took over the issue of health privacy regulations. The current timeline for compliance is shown in Table 2.1. Additional standards from HIPAA are still under development.

Table 2.1 HIPAA Compliance Timeline (as of December 2002)

| Standard                         | Rule                                          | Compliance                                                |
|----------------------------------|-----------------------------------------------|-----------------------------------------------------------|
| Electronic transaction standards | Final: August 2000                            | October 16, 2003 (extension); October 16, 2003*           |
| Privacy standards                | Final: December 2000; Revised: Aug. 14, 2002  | April 14, 2003; April 14, 2004*                           |
| Employer identifier              | Final: May 2002                               | July 30, 2004; July 30, 2005*                             |
| Security standards               | Proposed: August 1998; Expected: December 2002 | 24 months after final rule; 36 months after final rule*  |
| National provider identifier     | Proposed: May 1998; Expected: Spring 2003     | 24 months after final rule; 36 months after final rule*   |

* Small health plans

The security standard will apply to all individually identifiable patient information that is transmitted or stored electronically by covered entities. This includes all transmissions that are covered by the HIPAA Electronic Transactions Standards rule. The security rule has been designed with both internal and external threats in mind, and it is meant to act as an established baseline standard for organizations. On a high level, entities are required to do the following:

• Assess potential risks and vulnerabilities
• Protect against threats to information security or integrity, and against unauthorized use or disclosure
• Implement and maintain security measures that are appropriate to their needs, capabilities, and circumstances
• Ensure compliance with these safeguards by all staff

The draft security standards have been divided into four sections, to provide a comprehensive approach to securing information systems. The Department of Health and Human Services has distilled the requirements from the proposed rule into a matrix, which outlines the requirements for each of these four sections:

• Administrative procedures Documented practices to establish and enforce security policies
• Physical safeguards Protection of buildings and equipment from natural hazards and intrusions
• Technical security services Processes that protect, control, and monitor information access
• Technical security mechanisms Controls that restrict unauthorized access to data transmitted over a network

Note that as of the writing of this document, HHS has not yet issued the final rule for the security portion of HIPAA. Based on information from Health Data Management, HHS has indicated that it is due at the end of October 2002. However, as was stated on their site, "Stanley Nachimson, speaking on July 22, was reacting to recent, persistent rumors that the rule, which HHS has said for months would come out in August, now is delayed until October. He declined to say when the rule would be published or why it could be further delayed." Nachimson is part of the group in HHS responsible for creating and distributing the HIPAA administrative simplification rules. As was stated in the HHS Fact Sheet released August 21, 2002: "In August 1998, HHS proposed rules for security standards to protect electronic health information systems from improper access or alteration. In preparing final rules for these standards, HHS is considering substantial comments from the public, as well as new laws related to these standards and the privacy regulations. HHS expects to issue final security standards shortly." Hopefully, we'll see the new security standards soon, so we can implement the privacy regulations without playing the security standard guessing game.

Gramm-Leach-Bliley Act (GLBA)

On November 12, 1999, President Clinton signed the Financial Modernization Act, more commonly known as the Gramm-Leach-Bliley Act (GLBA). This act gave financial institutions the ability to engage in a broad range of financial activities by removing existing barriers between banking and commerce. The GLBA requires certain federal agencies to establish guidelines or rules regarding administrative, technical, and physical information safeguards.

Some of these safeguards are implemented in agency-issued rules, others as guidelines. Though various agencies issued varying guidelines or rules, most were required to "implement the standards prescribed under section 501(b) in the same manner, to the extent practicable…" However, the FTC and SEC were explicitly required to issue the standards prescribed under section 501(b) as rules instead. The following is an excerpt from section 501(b) of the GLBA:

"…Each agency… shall establish appropriate standards for the financial institutions subject to their jurisdiction relating to administrative, technical, and physical safeguards:

1. Insure the security and confidentiality of customer information;
2. Protect against any anticipated threats or hazards to the security or integrity of such information; and
3. Protect against unauthorized access to or use of such information that could result in substantial harm or inconvenience to any customer."

How GLBA Affects You

All financial institutions that handle customer nonpublic information (NPI), regardless of size, are expected to implement the rules or guidelines from their controlling regulatory agency before the compliance deadline. The matrix in Table 2.2 describes the details of each implementation.

Table 2.2 Financial Agencies Affected by GLBA

| Agency | Organizations | Safeguarding Rule | Privacy Rule | Safeguarding Deadline | Privacy Deadline |
|---|---|---|---|---|---|
| Federal Trade Commission | Institutions not explicitly listed below | 16 CFR § 314 | 16 CFR § 313 | May 23, 2003; *May 24, 2004 | July 1, 2001; *July 1, 2002 |
| Office of the Comptroller of the Currency | National banks and federal branches of foreign banks | 12 CFR § 30 | 12 CFR § 40 | July 1, 2001; *July 1, 2003 | July 1, 2001; *July 1, 2002 |
| Board of Governors of the Federal Reserve System | Bank holding companies and member banks of the Federal Reserve System | 12 CFR § 208, 211, 225, 263 | 12 CFR § 216 | July 1, 2001; *July 1, 2003 | July 1, 2001; *July 1, 2002 |
| Federal Deposit Insurance Corporation | FDIC insured banks | 12 CFR § 308, 364 | 12 CFR § 332 | July 1, 2001; *July 1, 2003 | July 1, 2001; *July 1, 2002 |
| Office of Thrift Supervision | FDIC insured savings associations | 12 CFR § 568, 570 | 12 CFR § 573 | July 1, 2001; *July 1, 2003 | July 1, 2001; *July 1, 2002 |
| National Credit Union Administration | Federally insured credit unions | 12 CFR § 748, Appendix A | 12 CFR § 716 | July 1, 2001; *July 1, 2003 | July 1, 2001; *July 1, 2002 |
| Securities and Exchange Commission | Securities brokers and investment companies | 17 CFR § 248, Section 248.30 | 17 CFR § 248 | July 1, 2001; *July 1, 2002 | July 1, 2001; *July 1, 2002 |
| Commodity Futures Trading Commission (CFTC) [added December 21, 2000] | Commodities brokers | 17 CFR § 160, Section 160.30 | 17 CFR § 160 | March 31, 2002; *March 31, 2003 | March 31, 2002; *March 31, 2003 |

* Indicates extension for grandfathering of contracts

If your regulatory agency is listed in Table 2.2, and you handle customer data, you fall under the provisions of the GLBA. By now, most companies should be in compliance with GLBA regulations, based on the due date determined by their regulatory agency. Remaining are the FTC safeguarding requirements, and most safeguarding requirements for third parties that fell under the grandfathering clause.

For the past couple of years, the GLBA privacy requirements have been getting most of the press. Multiple papers have been written on the privacy requirements, and almost all of the privacy requirement deadlines have passed. In addition, privacy requirements are out of scope for this book, so the rest of this chapter concentrates on the safeguarding regulations. The safeguarding requirements vary between most regulatory agencies, though all have the goal of implementing safeguards to meet the requirements in section 501(b). The safeguards for each regulatory agency can be found with a quick search online for the safeguarding rule listed in Table 2.2. Below is an example of how various regulatory agencies passed different rules. The FTC requires a risk assessment to be performed, and controls to be considered, in a minimum of three core areas:

• Employee training and management
• Information systems, including network and software design, as well as information processing, storage, transmission, and disposal

• Detecting, preventing, and responding to attacks, intrusions, or other systems failures

However, the collection of regulatory bodies referred to as the "Agencies" has issued slightly different guidelines. Their guidelines require a risk assessment to be performed for the organization, followed by suggested measures to control those risks. These measures are to be considered and implemented only if appropriate for the institution's circumstances. In all, there are eight suggested measures, as listed here (note that there are additional controls, but I am using this as an example to demonstrate that there are multiple implementations):

• Access controls on customer information systems, including controls to authenticate and permit access only to authorized individuals, and controls to prevent employees from providing customer information to unauthorized individuals who may seek to obtain this information through fraudulent means
• Access restrictions at physical locations containing customer information, such as buildings, computer facilities, and records storage facilities, to permit access only to authorized individuals
• Encryption of electronic customer information, including while in transit or in storage on networks or systems to which unauthorized individuals may have access
• Procedures designed to ensure that customer information system modifications are consistent with the bank's information security program
• Dual control procedures, segregation of duties, and employee background checks for employees with responsibilities for or access to customer information
• Monitoring systems and procedures to detect actual and attempted attacks on or intrusions into customer information systems
• Response programs that specify actions to be taken when the bank suspects or detects that unauthorized individuals have gained access to customer information systems, including appropriate reports to regulatory and law enforcement agencies
• Measures to protect against destruction, loss, or damage of customer information due to potential environmental hazards, such as fire and water damage, or technological failures
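The first measure above calls for controls that authenticate individuals before they touch customer information. As a minimal sketch of one common building block of such a control, salted password verification, here is an illustrative Python fragment; the function names, iteration count, and salt length are my own choices, not requirements drawn from any agency rule:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash to store in place of the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

# Enroll a user, then check a correct and an incorrect login attempt.
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password1", salt, stored))                     # False
```

Storing only the salted hash, rather than the password itself, is one way a policy's "authenticate and permit access only to authorized individuals" language translates into a concrete technical control.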

The general process of achieving GLBA compliance under the FTC requires several clearly defined steps:

1. Organizations must identify one or more employees to coordinate their information security program.

2. Organizations must perform a risk assessment, as mentioned earlier in this section, to identify "reasonably foreseeable" risks to the security, confidentiality, and integrity of customer information that could result in the unauthorized disclosure, misuse, alteration, destruction, or other compromise of such information. This risk assessment must also evaluate the sufficiency of the safeguards in place to protect against these risks.

3. Controls must be designed and implemented to address the risks identified in the risk assessment. In addition, the effectiveness of these controls, systems, and procedures must be regularly tested and monitored.

4. Service providers must be selected that are capable of maintaining appropriate safeguards for customer information. The service providers must be required, under contract, to implement and maintain the identified safeguards.

5. The information security program must be continuously monitored and adjusted, based on the testing of safeguard controls stated earlier in this section. In addition, any changes in technology, threats, business operations, or other circumstances that may affect your information security program must be considered and compensated for.

Once you have achieved compliance with the GLBA, it is your responsibility to maintain that level of compliance. As is demonstrated in this chapter, any decent information security program requires continual monitoring and adjustment. The regulatory agencies recognized this when creating the requirements for their GLBA implementations. To remain compliant with GLBA, an organization must continually monitor its information security policy and make modifications to improve its security. This closely relates to and validates one of the themes in this chapter: to keep a policy alive, it must be continuously evaluated and taken off the shelf. This safeguarding requirement will be in effect indefinitely, until repealed or replaced with new legislation. Yet again, this is more proof that information security management is not a fad, but here to stay.

The following sections of this chapter discuss how to create, implement, enforce, and review information security policies. This is a cyclical process, and the final section, on policy review, discusses how reviews are used to create new policies. Much as GLBA requires continual review, so do your own policies, even if you don't fall under GLBA.
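The risk assessment in step 2 is easier to socialize with management when risks are scored consistently. The snippet below is a minimal, hypothetical likelihood-times-impact register in Python; the threats, 1-5 scales, and triage threshold are invented for illustration, and the FTC rule prescribes no such formula:

```python
# Hypothetical risk register; the threats, scores, and the 1-5 scales are
# invented for illustration -- the FTC rule prescribes no particular formula.
risks = [
    {"threat": "Stolen backup tapes", "likelihood": 2, "impact": 5},
    {"threat": "Phished employee credentials", "likelihood": 4, "impact": 4},
    {"threat": "Web server defacement", "likelihood": 3, "impact": 2},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Rank the register so the highest-scoring risks drive control selection (step 3).
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    action = "mitigate first" if risk["score"] >= 12 else "monitor"
    print(f"{risk['score']:>2}  {action:14}  {risk['threat']}")
```

Even a toy register like this makes the "sufficiency of the safeguards" question concrete: the highest-scoring entries are the ones your chosen controls must demonstrably address.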

Creating Corporate Security Policies

Up to this point, we have discussed the general goals of policy, the general purpose of policy, and the general way you keep your policy from collecting dust. We also discussed some of the current guidelines available to help you in creating your policy. This is the section where we get into the meat of it, where we discuss how to create the policy portion of your information security program. If you already have a good understanding of the world of security policy, this is the section you should start reading.

Creating policies may seem like a daunting task, but compared to the rest of the steps that need to happen first, such as defining the scope, assessing risks, and choosing controls, it is relatively easy. One of the most important things to keep in mind when creating policies is to make them useful. This is so important that I have dedicated an entire previous section to avoiding shelfware policies (see "Avoiding Shelfware Policies" earlier in this chapter). The actual creation of the policy has become relatively easy, thanks to the numerous tools available to the policy creator. Think of creating a security policy like writing a computer program: you want to reuse other people's work as much as possible. The most important part of the policy creator's role is to understand and define what they want the policy to accomplish, and to consider the easiest and most logical way to accomplish that goal. Compile the prewritten policies you have at your disposal to integrate into your document. Think of prewritten policies like classes and functions, the building blocks of policy. Usually, this is a turning point for most policy developers, when they realize how many prewritten policies and templates are already in existence. What once looked like a search for scarce information has now become information overload.
Then it’s a matter of picking the ones applicable to your goals, and modifying the templates and samples for your corporation.

One key point to remember while copying and pasting policy templates: once you paste them, treat them like your own work. Make certain you proofread for inconsistencies, and sanitize your policy if necessary. The last thing you want is to have your policy thoroughly reviewed only after it's published. If inconsistencies are discovered, it hurts your own credibility, and the credibility of the policy and any following policies you may publish.

Policy development, like information security management, is a process. It contains a series of steps that take the user toward a goal, and no single fix can solve all problems. The following is a process that draws from multiple resources to help security managers develop their policy:

1. Justification Formalize a justification for the creation of your security policy. This usually comes from a management directive, hopefully from the Board of Directors. This is your ticket to create what you need to get your job done. Make certain you have a way to check back with the board should it be necessary to get reinforcements.

2. Scope Clearly define the scope of your document: who is covered under your policy, and who isn't. (This is discussed more in the "Defining the Scope" section later in this chapter.)

3. Outline Compose a rough outline of all the areas you need your policy to cover. If you start here, you'll be able to fill in the blanks as you find sample policies, omit redundancies, and create controls to enforce your policies.

4. Management support Management support is different from justification. Justification says "we need this done, and this is why." Management support says "I will help you get this done." This usually comes in the form of support from VPs for smaller organizations, or department managers for larger organizations. Having the support of the Board behind you can make this task much easier.

5. Areas of responsibility This is related to the initial scoping you performed, but on a more detailed level. By now you have identified the general areas where you will be responsible for creating a security policy. For example, in your scoping you may have defined that your policy will cover data centers that are directly controlled by your organization, not third parties. You may also have already contacted the manager of the data center and informed them that you will be creating their security policy. Now you need to define what areas of the data center policy you are responsible for. The immediate manager will probably know more about this than you. For example, you will need to identify the demarcation points for responsibility from third-party suppliers and responsibilities for physical security. If physical security is already covered under a corporate physical security policy, and the data center follows these policies, it may not be necessary to create a second, redundant physical security policy. However, you certainly could integrate the current physical security policy into your document, and include any modifications if necessary, so long as you have the permission of the physical security policy coordinator.

6. Discover current policies The first step in creating a new set of security policies in your organization is to understand where you are in relation to your current security policies. If you don't know where you are today, you won't be able to figure out how to get to where you want to be tomorrow. This notion applies to organizations large and small, both those that have a security policy in place and those that are creating one for the first time. (This is discussed more in the "Discovering Current Policies" section later in this chapter.)

7. Evaluate current policies The evaluation phase of this process is very similar to Step 11. Both cover the same materials, because the processes are very cyclical. In practice, the creation of new policies is just like the review and improvement of existing policies, except that no or few policies currently exist.

8. Risk assessment One of the most important parts of creating a security policy is the risk assessment, and it's often not given the attention it is due. As you are acquiring management support and defining your areas of responsibility, you will begin to get a feel for what data matters more than other data. Formalizing this is a risk assessment. (This is discussed more in the "Evaluating Current Policies" section later in this chapter.)

9. Create new policies The actual creation of security policies is a minor part, so long as you have performed the other steps outlined. If you choose to use templates to create your policy, all you need to do is assess what policies you need and select the appropriate templates. Otherwise, you can use an automated tool to create security policies based on best practices after answering some questions about your environment. Once the outline is created, the risk assessment has been completed, and the controls have been defined, the policy almost writes itself. I also touch on the creation of procedures and their controls later under "A Note on Procedural Development."

10. Implementation and enforcement Once the policy is written, the easy part is over. Distributing it to the rest of your organization, and ensuring it is read and followed, is the hard part. This is one of the most critical steps in developing your policy program. This is where all your work culminates in getting results, and it is one of the most important reasons to have management support. If you have managed your program properly, there are many managers, plus the Board, who have supported you in your policy development process. Hopefully you have maintained good relationships with all of them, and they are anticipating the final result of your work. If you write your policies correctly, they will not be an additional burden on your intended "victims," and there will be little resistance. This is so important that I address implementation and enforcement in the section "Implementing and Enforcing Corporate Security Policies."

11. Review Once you have developed and deployed your policy, you are not done. Changes in regulations, environment, business strategy, structure or organization, personnel, and technology can all affect the way your policy is interpreted and implemented. Remember that this is a "living document," and now it's up to you or a group of policy administrators to keep it alive. Plants need water, sunlight, and food to grow and thrive, and so does your policy. If you leave it to function on its own, it will die, and your project will fail. This is a fundamental change I have been witnessing in the policy industry: policy administration is not a project but a job description. I do not believe that policy administration should be delegated to an administrator who also has the responsibility of maintaining network availability. It should be built into a job description where the primary goal is policy maintenance. This is another section I feel so strongly about that I expound on it in the section "Reviewing Corporate Security Policies."
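The "living document" point in the review step can be enforced mechanically: record each policy's last review date and flag anything past its review interval. Here is a hedged sketch in Python; the policy names and the one-year interval are assumptions for illustration, not mandates:

```python
from datetime import date, timedelta

# Hypothetical register: policy name -> date of its last completed review.
last_reviewed = {
    "Acceptable Use Policy": date(2002, 11, 1),
    "Data Center Physical Security": date(2001, 3, 15),
    "Remote Access Policy": date(2002, 9, 30),
}

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual cycle; tune per policy

def overdue_policies(register, today):
    """Return the policies whose last review is older than the review interval."""
    return sorted(name for name, reviewed in register.items()
                  if today - reviewed > REVIEW_INTERVAL)

print(overdue_policies(last_reviewed, today=date(2003, 1, 1)))
# ['Data Center Physical Security']
```

A report like this, run on a schedule, is one small way to make policy maintenance a standing duty rather than a one-time project.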

Defining the Scope

Defining the scope for your security policy is one of the first steps you need to perform once you've been able to justify why a security policy needs to be created, and it has been approved. The scope you define will be included at the very beginning of your security policy, and if you choose to create multiple policies for different business groups, each of those needs a scope definition as well. The dangers of failing to define your policy's scope are demonstrated in the following example. One of my clients had a data center that created its own security policies, because the data center staff thought it was within their scope of responsibility. However, when the parent organization created the scope for its security policies, it included the data center in its policy creation program. Because the data center had already created its own security program, the staff thought they were not included in the scope of the corporate policy program. When it came time to review their policies, it became apparent that this was not the case. The parent organization's policies were quite different from those of the data center. For example, the data center would track inventories of their equipment by making a "daily check" of the servers they were maintaining. However, they did not have any sort of an asset inventory or tracking system, as the corporate security policy required. The result was that the data center had a lax security program, not due to poor planning, but due to poor scoping of responsibilities. Included in the scoping statement will be a statement of intent, the business units to which the policy pertains, and any special groups that also need to know the policy pertains to them. The scope may also include specifics of responsibility, so groups know whether various policies apply to them and not to other groups.
It’s important to define the areas of responsibility, because many people will go through the most amazing feats to avoid having a policy apply to them. Many times it may be as simple as not signing a statement of receipt or acknowledgem67ent of a policy. Other times, it may goes as far as actively avoiding duties and tasks so they are not subject to the policies underneath. For example, a network administrator may avoid configuring a system on a frequent basis because he doesn’t want to record all the changes in a log book. At this point, it becomes an issue of policy enforcement, instead of simply scoping. This is an extreme case, but it can and does happen. Here is a list of some various entities that may need to be specifically stated in the scope of the policy document: •

• Data centers
• Customer call centers
• Satellite offices
• Business partners
• Professional relationships
• Suppliers
• Temporary workers
• Salaried versus hourly workers

There are other reasons to define the scope of your policy, besides explicitly defining to whom the policy pertains. If multiple policies exist in an organization, they can overlap with each other, and perhaps even conflict with each other. For example, your company may contract with a third party to perform activities such as payment processing. If your security policy has controls to guard the privacy of your employees, their private payment information should have some control to ensure its security. Should that be covered in the third party's privacy policy, your privacy policy, or both? Usually, a scoping requirement will include details such as "This policy pertains to all parties directly employed by company X, but does not explicitly apply to third-party vendors who have a security policy that has been reviewed by company X." In addition, if the scope is allowed to grow too large for one policy, and is not separated into two policies early on, it may grow to an unmanageable size. The policy developer needs to keep in mind how big is too big, and a good way to evaluate that is when the policy gets so large it scares people away. If the scope is maintained so that no policy goes over 10 pages, there is less of a chance that the policies will be intimidating. Also, if the scope gets too big, the entire project will just become unmanageable. Keep an eye out for scope creep. In a related sense, don't make the scope so small that the policy does not apply to anything or anyone. This is common sense, but make sure the policy has a jurisdiction to cover. Finally, if a group thinks they are not included in the policy, they may be very difficult to manage. Groups that are independent, have been grandfathered in, or were purchased without properly being assimilated into the corporate culture may create resistance. If they are explicitly, or at least unambiguously, stated in the scoping statement, they may be less likely to resist adopting the policy document.
One of my clients had a data center that had been operating in the same manner for a long period of time. While the rest of the organization grew, and policies developed, the data center remained the same size, with the same staff members. This was because the organization used an expansion plan of opening new data centers and making acquisitions, instead of building on to the current location. As a result, the initial data center staff had become set in their ways, and heavily resistant to change. When the new corporate policies were pushed down from the organization, without a very clear introduction I might add, the data center resisted the changes. They insisted they were out of scope due to the unique tasks in their work, and insisted they were not included in the initial scope. If the manager of the data center had not gone back to the policy development group and asked for justification of their scoping decisions, the data center would not have accepted the new policies without a lot more work and headache.

Discovering Current Policies
Before you can create new policies for your organization, you have to understand what policies you currently have in place. You probably have policies of which you are unaware. It's rather common during a policy review that I locate policies the client did not know they had; the review process is usually as much a discovery process as a review. Even a small organization that claims to have no security policies will usually have a guideline document, or a procedure that describes a particular function. Most organizations starting this process for the first time will discover that they have multiple policies scattered throughout the organization, written inconsistently and stored in various locations. This is the common problem of "patchwork policies": policies created to fill a need, without any guidance or coordination. Your goal is to identify all the patchwork policies and corral them into one location. Think of this as your opportunity to play detective; there are a multitude of places where policies could be hiding.

Hardcopy
The softcopy version of some policies has been lost, and all that remains is the original hardcopy. Or the softcopy may be buried in some department, and the only copy available to you is the hardcopy. For example, your organization's HR department may have issued an employee handbook, or even a security awareness guidebook, that details a number of guidelines and restrictions. These usually include password policies, access policies, physical security, appropriate-usage guidelines, and so on. However, if your organization does not have a central repository for all policies, including these, it's your responsibility to pull them into your current policy development program. If you can identify the original creator of those policies, an interview may be useful; perhaps you could even partner with them to create your new policies. It's been my experience that most employees have lost their employee handbooks, and you may have trouble finding one. I've been on several projects where we had to specially request one from HR, and even then it took a while to arrive. In addition, you may discover that many employees have not read their HR manuals, which should not be a big surprise. This is a classic indicator of shelfware policies, and demonstrates how common they are.

Online Resources
Your organization may have a network share set up as a common repository for shared documents: Microsoft Outlook public folders, Lotus Notes, an FTP server, or a Windows share. Depending on your organization's infrastructure, you have multiple locations from which to pull policies for your review. In addition, your organization probably has a corporate intranet, which can be a goldmine for your policy document search. However, it can often be more difficult to extract information from the corporate intranet than you might expect. I've been on multiple engagements where we asked the client to "deliver all relevant policies" and they provided only a few, claiming there were not that many on the intranet. Within five minutes, I could usually find double what they had tracked down, and that's without using Google.
Look through every section of your intranet, not just the IT section marked "Policies and Procedures." Browse through the general IT section as well. Then jump over to HR, which will typically have some juicy bits. Next, look at the Legal department or Contracts department, if you have one; there you can usually find waivers and agreements that help give you a bearing on current compliance requirements. Also check any contractor sections you may have. Finally, resort to the search engine, and try terms such as "policy," "procedure," "GLBA," "HIPAA," "COPPA," "security," "awareness," "education," "training," "IT," and "compliance." Usually, these will turn up some useful leads for more exploration.

Interviews
The final source to investigate is the employees themselves. These interviews are meant for discovery; I cover interviews again under the "Evaluating Current Policies" section. If you need to interview at this stage, it's probably because you were unable to find any policies that pertained to your topic. In this case, you should have two goals for your interviews to help you get an idea of current status:
• Question employees to identify additional locations of policies.
• Question employees to get a rough idea of any guidelines or procedures they may follow.

Evaluating Current Policies
The evaluation phase is very similar to the phase described in the "Reviewing Corporate Security Policies" section later in this chapter; as a result, this section introduces some of the points made there. During this phase, you evaluate the policies you collected during the discovery phase, including any interviews used to supplement nonexistent policies. Your goal is to understand the current policies and underlying procedures: where they comply with best practices and where they fall short. Part of the evaluation phase can include performing a risk assessment, as the results are used to revise current security policies, or to create new security policies if none exist.

Gap Analysis
A gap analysis is a common way of representing the discrepancies between best practices, policies, and corporate practices such as procedures. A gap analysis is nothing more than a spreadsheet with requirements detailing each policy or procedural control. This checklist is usually filled out during the course of the review, and it is very useful in identifying systemic problems or completely disregarded policy areas. It is usually performed against industry best practices, such as the ISO17799. From your gap analysis, you'll be able to locate key areas where you need to revise your current policies so they include additional sections. Your gap analysis would look something like Table 2.3, which is based on excerpts from the ISO17799.

Table 2.3 Sample Gap Policy

Best Practice Section: Information security coordination
Best Practice Text: A cross-functional committee made up of members from relevant parts of the organization should be formed to direct an information security initiative. This committee should have one manager designated as the lead.
Relevant Policy: Information Security Policy. The Information Security Policy contains a member statement, which lists the various groups that must be represented on the "Information Security Steering Council."
Recommendation: None.

Best Practice Section: Data labeling and handling
Best Practice Text: Appropriate procedures should be established to define how confidential data should be labeled and handled. Such actions typically include copying, storage, transmission, and destruction.
Relevant Policy: We were unable to locate a policy that identifies requirements for data labeling and handling.
Recommendation: A policy should be created that appropriately addresses the requirements for data labeling and handling, based on needs identified in the risk assessment.
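Because a gap analysis is ultimately just tabular data, it can be maintained in a spreadsheet or generated programmatically. A minimal sketch in Python, where the rows and the CSV filename are illustrative, not taken from any real review:

```python
import csv

# Illustrative gap-analysis rows: (best-practice section, relevant policy, recommendation).
# An empty "relevant policy" cell means no policy was located during discovery.
rows = [
    ("Information security coordination",
     "Information Security Policy",
     "None."),
    ("Data labeling and handling",
     "",
     "Create a policy for data labeling and handling."),
]

def gaps(rows):
    """Return the best-practice sections for which no relevant policy is on file."""
    return [section for section, policy, _ in rows if not policy]

# Persist the checklist so it can be revisited during the next review cycle
with open("gap_analysis.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Best Practice Section", "Relevant Policy", "Recommendation"])
    writer.writerows(rows)

print(gaps(rows))
```

Filtering on the empty "Relevant Policy" column immediately surfaces the disregarded policy areas the text describes.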

Interviews
Another way to evaluate current information security policies is to interview current staff members. Interviews during the discovery phase usually occur because few policies exist; during evaluation, interviewing specific people about the policies currently in place gives a more in-depth understanding of how well those policies are followed. Once you have identified where your policies are lacking, either through gap analysis or interviews, you need to rank the gaps in order to address them. This ranking can be determined through a risk assessment, which identifies where you are most likely to be hurt. Naturally, that is the best place to start building or revising your policies.

Assessing Risks
Performing a risk assessment can be thought of as part of the evaluation phase for policy development, because its results are used to revise the policies so they address identified risks.

A risk assessment is a common tool used when creating policies. However, an inexperienced policy writer may skip this step to save time. The same individual will probably use templates to create security policies, and as a result may adopt template policies that are inappropriate for his organization's risks. A security policy professional will perform a risk assessment using some of the methodologies I discuss later in this section; the inexperienced individual can use these same tools and techniques to create a fair and useful risk assessment.
The necessity of a risk assessment can be brought to light in a simple scenario. Suppose you have completed your security policy, and someone asks, "Why did you define that control?" or "Why did you specify this control over this other one?" Such questions come down to the simple idea of "how do you know which assets require more protection?" All of them can be answered by a risk assessment.
The field of risk assessment is large enough to fill a book, as are most things in policy; this subsection only scratches the surface, and there are multiple tools available to perform risk assessments. Risk is defined, and usually accepted in the industry, as follows:

Risk = Vulnerabilities × Threat × Exposure

Many of you are probably familiar with this formula, but I will recap. Vulnerabilities allow your organization to be injured. These may be due to weaknesses in your hardware or software, but may also be the lack of necessary diligence in staying current on government regulations. These are the holes in your system that can be exploited, either by an attacker or by a surprise change in government regulations. The threat is the probability that a vulnerability will be exploited. A threat can come from a malicious user, an insider, an autonomous worm, or legislation that hurts your company. Your exposure is how much you will be hurt if a threat exploits a vulnerability: how much money or respect your company will lose, or what damage will be done to your assets. The Exposure factor can also be represented as Asset Value; however, Exposure captures the actual loss, whereas Asset Value represents an all-or-nothing view of asset loss.
Let's use an example. Say a new buffer overflow is discovered in the Apache Web server. Your vulnerability is the newly discovered buffer overflow. The threat is the ease with which an attacker can exploit the new vulnerability, such as whether an exploit is available in the wild. Finally, the exposure is how much damage you will suffer if you are exploited. Is this a server with client information, or a corporate honeypot set up by network administrators, with a juicy name and listening ports, to lure hackers away from the real critical servers? A properly configured honeypot has essentially no exposure, because attackers can own it without gaining access to any corporate assets.
You have probably seen other forms of risk assessment formulas, such as calculating Annualized Loss Expectancy (ALE) or Estimated Annual Cost (EAC). I believe the risk formula given earlier is the best for all-around risk assessments, though individual needs may vary.
At this point in your policy creation, you should have a scope defined for each policy you hope to implement, and you should have created a rough outline. You also should have acquired management support and defined areas of responsibility. Management support is critical when performing a risk assessment, because you will be reviewing the organization's most sensitive information. Managers may not feel comfortable telling you where their soft underbelly is located, which of their points are the most vulnerable. There could be any number of reasons why they may not be forthcoming, but in many ways you are like a doctor and they are the patient: they need to tell you truthfully what problems and vulnerabilities they have. It doesn't hurt to have a manager on your side to emphasize the need to be forthcoming with all relevant information. This is also why it is important to have identified areas of responsibility, so you know whom you need to talk to in each organization to get the answers you need. A little legwork beforehand will save you from being bounced around between staff members. It's much more powerful to come in with a manager and an organizational chart or job description, and know exactly whom you need to talk to.
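The Risk = Vulnerabilities × Threat × Exposure formula is easy to mechanize once you assign each factor a relative score. A sketch assuming an arbitrary 1–10 scale per factor; the scale and the example scores are my own illustration, not an industry standard:

```python
def risk_score(vulnerability: int, threat: int, exposure: int) -> int:
    """Risk = Vulnerabilities x Threat x Exposure, each factor scored 1-10."""
    for factor in (vulnerability, threat, exposure):
        if not 1 <= factor <= 10:
            raise ValueError("each factor must be scored 1-10")
    return vulnerability * threat * exposure

# An unpatched web server holding client data, with an exploit in the wild
web_server = risk_score(vulnerability=9, threat=8, exposure=9)

# A honeypot: the same hole and the same exploit, but (ideally) nothing to lose
honeypot = risk_score(vulnerability=9, threat=8, exposure=1)

print(web_server, honeypot)
```

The two scores differ only in the exposure factor, which is the chapter's point about the honeypot: identical flaws, very different risk.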

Performing a Security Risk Assessment
Risk assessments are necessary to understand the assets in your system, and they help you develop your security posture. Often an appropriate risk assessment will bring to light information you did not have about your network assets. It will show you where you store your critical information, and whether that information is easily susceptible to attack. A risk assessment considers this scenario with a multitude of different variables, and assigns a risk number to each item. This can be quite a daunting task to perform on your own, but a number of tools have been developed to help. Here are four products that can help you conduct your risk assessment (note that there are a multitude of products available):
• Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) A process document that provides an extensive risk assessment format.
• GAO Information Security Risk Assessment Case studies of organizations that implemented risk assessment programs.
• RiskWatch Software that asks a series of questions to help individuals perform a risk assessment; also includes modules for review against the ISO17799 standard.
• Consultative, Objective and Bi-functional Risk Analysis (COBRA) Another risk assessment software program; also includes questions that map against the ISO17799.

Let's take a closer look at OCTAVE as an example. It is a tool developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. The goal of OCTAVE is to allow an organization to fully understand and manage the risks it faces. OCTAVE usually has more components than most people want to throw at their risk assessment process, mostly because it was developed with large organizations in mind (over 300 employees). Due to demand for a simpler risk assessment structure, the SEI is creating OCTAVE-S, designed for smaller organizations; OCTAVE-S is due out in Q2 of 2003.
One interesting thing to note is that OCTAVE integrates the Risk = Threat × Vulnerability × Exposure model, but does not map it directly to dollar amounts. Instead of using "Exposure," OCTAVE uses "Information Asset Valuation," which can help to quantify difficult-to-measure assets. Also, OCTAVE treats all "Threat" possibilities as one, because the SEI feels that not enough data is available to create accurate threat values.
Conducting the entire analysis takes about a month for an assessment team, including a series of twelve half- or full-day workshops. It is an asset-driven evaluation approach, meaning that it's important for the analysis team to identify the assets most important to the organization and to focus their efforts on those assets. Factors affecting the time required include the scope, the time available to the assessment team, the time needed to assess vulnerabilities, and project coordination. OCTAVE's core message is that it is a self-directed information security risk evaluation; the reasoning is that your employees know the organization better than anyone else. However, if your organization feels more comfortable hiring an outside expert to conduct the risk assessment, the SEI does offer licensing programs for OCTAVE.
OCTAVE consists of three phases. The inputs to this process include interviews, conducted by a team as defined in the framework. The processes as stated in the framework help you understand the steps to be taken and what each step produces. For a complete description, see the SEI Web site.

• Phase 1: Build asset-based threat profiles This is the self-assessment of the organization, its assets, and the value of those assets. It includes the measures being taken to protect those assets, and the threats against them. The result of this phase is a threat profile.
• Phase 2: Identify infrastructure vulnerabilities This phase takes the identified critical assets and identifies the current vulnerabilities in those systems or related systems.
• Phase 3: Develop security strategy and plans The information from Phase 1 and Phase 2 is combined and analyzed to come up with a strategy to mitigate the identified vulnerabilities and protect the critical assets.

If the OCTAVE process seems to be too extensive for you, and the OCTAVE-S also doesn’t seem to be filling your needs, you can turn to one of the alternative risk assessment tools such as RiskWatch or COBRA. RiskWatch has a variety of options, including support for physical security, information security, HIPAA, and 17799. The COBRA program also provides a series of questions based on the ISO17799. Both programs offer an extensive output report.

Preparing Reports
Most risk assessment tools provide the means to create a risk assessment report. In most cases, this report will contain meaningful information, but it may not be presented in the most straightforward manner. It is up to you, as the report analyst and policy champion, to extract the necessary information. You may be the only person to view the risk assessment report, because it usually will be used to manage risks, the next step in the risk assessment process.

Managing Risks
This final step is really a transition point to creating new policies. Now that you've identified the risks to your organization, you need to mitigate them. This is usually done in one of two ways: through new policies and procedures, or through a quick fix. I like to divide the high-risk findings from the medium- and low-risk findings, to identify the things that need to be fixed right now. After a risk assessment, the high-risk findings are usually glaringly obvious and need to be fixed immediately. It's usually not difficult to get approval for these types of problems, because they are usually embarrassing and management wants them resolved as quickly as possible. Examples include poor authentication routines, server vulnerabilities, and network architecture vulnerabilities. Often, these are also quick fixes, or management is willing to throw enough resources at the problem to get it resolved before something bad happens.
After the high-risk quick fixes are resolved, you can concentrate on the medium- or low-risk vulnerabilities, or the high-risk policy findings. Examples of high-risk policy findings include the lack of an e-mail policy, of a password management system, or of a trouble-ticketing system such as Remedy. This is when you select the appropriate controls to mitigate your risks, based on the results of your risk assessment. These controls will be built into your policies, as discussed in the next section.
NOTE: Some fixes may be relatively quick and easy, such as installing a patch. Others may be more extensive, such as identifying the need to install a trouble-ticketing system.

Creating New Policies
You have already performed the majority of the work necessary to create your policies: gathering support, scoping your project and participants, and selecting controls. Now you can begin to create the policies themselves, and you should start by planning a structure for your policy documents. Many successful policies use a "hierarchy" structure, in which one central document defines the principles and goals of the policy program. Smaller, more specific documents tie into this central policy document to address particular areas, such as e-mail or user management policies. This helps to maintain a structure for policies and to impose consistency and readability throughout all policies in your program.
In this section, I list only a few examples of products that can help you develop, implement, and enforce your policies; this is by no means a complete list. In fact, most enterprise-level policy management software companies, such as Bindview, Symantec, Polivec, and NetIQ, include modules that cover policy development, implementation, and enforcement in their policy software. However, I split the discussion between development, implementation, and enforcement to help administrators who might not be able to afford a full policy management system.
There are a wide variety of tools available to assist you in the policy creation process. In the following sections, I discuss some of the tools and templates available to help you create the actual text to include in your policy. The hardest part for you is deciding what policies to implement to enable your chosen controls and accomplish your goal. I also suggest you read Charl van der Walt's article on SecurityFocus, which gives a thorough overview of information security policies.

When creating your policies, you should have already identified your high-risk areas, and these deserve additional attention. It sounds obvious, but spend more time on your high-risk areas so that you cover all the necessary elements. Pay close attention to the controls you require, and make certain that those controls are covered in the policy. Finally, make certain that the policies in high-risk areas are especially clear and succinct.
When considering the readability and comprehension of your policies, remember that some readers understand concepts best when given examples. For example, your average user may not understand why it is necessary to create strong passwords. If your users do not understand the need to include special characters, numbers, and mixed case in their passwords, they may not take the extra effort to create a strong password. If you do not have the ability to enforce strong passwords at creation time, this may result in weak, even dictionary-word, passwords in your system. However, some of these concerns may be mitigated if users are given justifications, such as examples of good and bad passwords and a basic explanation of a brute-force password-cracking attack. This ties back to Chapter 1, which explains that your users can be both your biggest asset and your biggest weakness in securing your network. If you can get your users on your side by helping them understand why certain policies exist, they will likely be more willing to help you enforce those policies.
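Where you can enforce composition rules at creation time, even a crude checker screens out the most common weak choices. A minimal sketch; the specific rules (length, mixed case, digits, punctuation) are illustrative, not a recommended standard:

```python
import string

def is_strong(password: str, min_length: int = 8) -> bool:
    """Reject passwords lacking length, mixed case, digits, or special characters."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)   # at least one lowercase letter
        and any(c.isupper() for c in password)   # at least one uppercase letter
        and any(c.isdigit() for c in password)   # at least one digit
        and any(c in string.punctuation for c in password)  # at least one special char
    )

print(is_strong("password"))     # a dictionary word: long enough, but fails the rest
print(is_strong("Tr0ub&dor9!"))  # mixes case, digits, and punctuation
```

A checker like this pairs naturally with the user education the text describes: the rejection message is the teaching moment.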

Using Templates
There are a multitude of sample templates available to help you create your written security policy. However, remember that you must review and modify any template you use so that it fits your specific company. You must "make it yours," and make certain that your policy statements properly mesh with your corporate culture, business practices, and regulations. Many smart people have spent a lot of time considering what should be included in good security policies, and they have captured those considerations in multiple guidelines and templates; it would be unwise to ignore their efforts.
One fantastic source for sample policy statements is Charles Cresson Wood's book Information Security Policies Made Easy. Not only does he provide over 700 sample policies, but he includes a supporting argument for each of his policy suggestions. This can help if you choose to include justifications for the policy decisions in your document, as discussed previously.
Another great source for learning how to create security policies is Scott Barman's book Writing Information Security Policies. Barman provides guidelines for selecting your information security policies, including considerations such as what needs protection and from whom. He also addresses the various roles in an information security group, and the responsibilities of each role. Barman provides templates and a discussion of special considerations for major policy sections, such as physical security, Internet security policies, encryption, and software development policies. Finally, he addresses the issue of maintaining the policies through enforcement and review.
There are also a number of sample policies available online or in various books. SANS has published a site with various security policy examples, though it is not as extensive as Wood's book. However, the documentation on the SANS Web site is free, and for those on a tight budget or requiring only a small policy deployment, it provides a good alternative (Barman's book is also relatively inexpensive). The SANS site includes additional information and discussion on current policy-related topics, such as government guidelines and links to additional policy templates; look for the "SANS Security Policy Project." There are also a number of free or commercial policy templates, some of which are listed at the end of the chapter in the "Links to Sites" section.
With the acceptance of BS7799 as ISO17799, we have a worldwide standard on which to base our policy creation decisions. The result is that the bar for policies has been raised to the point where industry best practice sets an extensive minimum baseline. However, many policies in existence today were not created using guidelines or templates, but were thrown together in an ad hoc fashion. It is these policies you should be wary of; if this sounds familiar, you should consider an upgrade.
The following is a template of common items that should be included in most corporate policies. It is a compilation based on some of the better policies I've seen; your policy may not have all these sections, depending on your needs:

• Overview
  • Introduction Introduce the policy, its goals, and why it exists.
  • Purpose What is it meant to accomplish, and what risks does it mitigate?
  • Authority Who approved this policy?
  • Policy Ownership Who owns this policy, who makes changes, and whom do I contact with questions?
  • Scope Where does this apply to the organization, and who is affected?
  • Duration What is the time span of this policy's existence?
  • Related Documents What other documents contribute to this policy?
• Policy
  • Actual Policy Text The actual rules that will be implemented by procedures.
• Roles and Responsibilities
  • Roles Defined and assigned to employees for various classifications.
  • Responsibilities Defined for each role.
• Compliance Requirements How do you comply with this policy, and what constitutes a violation?
• Exceptions to This Policy Those explicitly outside the scope.
• Enforcement of This Policy How is this policy enforced, and what are the consequences of violation?
• Revision History Tracks changes; necessary for handing off to new owners.
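Since the outline above is effectively a checklist, it is easy to verify mechanically that a draft contains every required section. A sketch, assuming policies are kept as plain-text files with one heading per section; the required-heading list follows the template above, and the parsing convention is my own simplification:

```python
REQUIRED_SECTIONS = [
    "Overview", "Purpose", "Authority", "Scope",
    "Policy", "Roles and Responsibilities",
    "Compliance Requirements", "Enforcement", "Revision History",
]

def missing_sections(policy_text: str) -> list[str]:
    """Return the required headings that never appear in the draft."""
    lowered = policy_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

# A deliberately incomplete draft, for illustration
draft = """Overview
Purpose: reduce risk from weak passwords.
Scope: all employees.
Policy: passwords must be at least 8 characters.
"""
print(missing_sections(draft))
```

Running such a check on every draft before review keeps the hierarchy of policy documents structurally consistent.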

Tools
There are a variety of tools available to help you write your information security policies. These are useful if the policy administrator does not have the time or resources to create an information security document from scratch. However, be careful not to place too much trust in prewritten policies. No policy should be created and deployed if it hasn't been reviewed for consistency and checked for conflicts with corporate or government regulations. The tools mentioned here are by no means a complete list, because a number of other companies and products can help you develop your policies. For example, META Security Group's Command Center or Symantec's Enterprise Security Manager 5.5 can both help in creating your policies.
Pentasafe (founded in 1997, purchased by NetIQ in October 2002) offers, as part of its VigilEnt product, a component known as PolicyCenter. PolicyCenter helps the policy administrator create policies and distribute, track, and enforce them. PolicyCenter uses the templates from Wood's Information Security Policies Made Easy, so it has a wide range of policy documents to draw from.
PoliVec (founded in 2000) is a relatively new player in this market, and has released its PoliVec Builder application. Like the NetIQ offering, PoliVec allows you to create policies from general templates, including templates for GLB and HIPAA requirements. Both the PoliVec and NetIQ offerings allow administrators to create, distribute, and enforce their policies. These features help to create a "living policy" document.
Finally, Glendale Consulting Limited (founded 1991), from the UK, has created RUSecure, available as the Information Security Suite. These tools offer a variety of services, including a way to distribute policies through your intranet. The SOS (Security Online Support) System is offered both in Microsoft HTML Help format and as an HTML intranet site. The templates are based on BS7799/ISO17799, and they offer an extensive number of policies to select and customize. Other tools include the Business Continuity Plan Generator, which helps administrators create disaster recovery plans.

A Note on Procedural Development
At this point, we have read the current policies and performed interviews. We may have performed a gap analysis and identified additional areas where we require policies or procedures. We may also have performed a risk assessment, to help us identify the areas that require the most immediate or directed attention. We have probably identified the policies we want to change, or new policies we want to create. This is where lofty policies usually die, and where good policies show their value as guiding documents. This is procedural development, which can be much more extensive than policy development, because there are so many more roles and responsibilities to address. Procedures define the controls that must be followed to enforce the policies we established. Policies without procedures would be like a constitution without laws.

Choosing your controls is a critical decision point in the policy development process, because controls are a direct expense that has always been difficult to cost-justify. Your goal should be to protect your assets by closing your holes in the most cost-effective way possible. For example, if you discover a buffer overflow in your Web server software that allows remote compromise, you can patch it right away. However, you also want to get to the root of the problem so it does not happen again; usually this can be accomplished by creating a build policy for new systems and a maintenance policy for systems already online. A less effective decision would be to migrate your Web server to a different platform with a different OS and server software. Although this may fix the immediate problem, chances are the new platform will eventually become vulnerable as well. Give consideration to long-term fixes; otherwise you might end up with a series of ineffective, short-term, or ill-chosen ones. Other examples of controls include password policies, encrypted disk policies, and laptop handling policies, to name a few.

Some of you may have already implemented controls as you discovered vulnerabilities during your risk assessment, remediating them as soon as they were found, perhaps with little planning or foresight for future situations. The process of choosing controls adds a systematic approach to that ad-hoc method of quickly fixing vulnerabilities. Although it may take longer at first, in the end it will yield more effective results. Note that I am not endorsing leaving a vulnerability exposed while you follow this process: patch vulnerabilities as soon as you find them, but still follow the process to resolve their root cause. Treat the problem, not the symptom. The process involves the following steps:

1. Scoping the vulnerability.
2. Ranking vulnerabilities in order of severity.
3. Evaluating possible options to remediate.
4. Performing a comparative cost-benefit analysis.
5. Selecting the best cost-benefit option.
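As a rough illustration, the five steps above can be sketched in code. Everything here is hypothetical: the vulnerability records, the 1–3 scoring scale, and the dollar figures are stand-ins for your own risk-assessment data.

```python
# A minimal sketch of the five-step control-selection process.
# Scores and costs below are invented for illustration only.

def severity(vuln):
    """Step 2: severity = ease of exploit x likelihood x impact."""
    return vuln["ease"] * vuln["likelihood"] * vuln["impact"]

def best_option(options):
    """Steps 3-5: compare remediation options by net benefit, pick the best."""
    # Net benefit = expected risk reduced (in dollars) minus control cost.
    return max(options, key=lambda o: o["risk_reduced"] - o["cost"])

vulns = [
    {   # Step 1: scope -- record the root cause, not just the symptom.
        "name": "Web server buffer overflow",
        "root_cause": "no build/maintenance policy",
        "ease": 3, "likelihood": 3, "impact": 3,
        "options": [
            {"fix": "patch server only",      "cost": 1_000,  "risk_reduced": 5_000},
            {"fix": "migrate platform",       "cost": 40_000, "risk_reduced": 20_000},
            {"fix": "build + patch policies", "cost": 8_000,  "risk_reduced": 30_000},
        ],
    },
    {
        "name": "Weak workstation passwords",
        "root_cause": "no password policy",
        "ease": 2, "likelihood": 3, "impact": 2,
        "options": [
            {"fix": "password policy + audits", "cost": 3_000, "risk_reduced": 12_000},
        ],
    },
]

# Step 2: address the most severe root causes first.
for v in sorted(vulns, key=severity, reverse=True):
    choice = best_option(v["options"])
    print(f"{v['name']}: {choice['fix']}")
```

Note how the long-term fix (policies) wins over the quick patch once expected risk reduction is priced in, echoing the "treat the problem, not the symptom" point above.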

Scoping the Vulnerability

The first step is to understand the vulnerability. This involves tracking down its root cause. To use the example from the preceding section, where a buffer overflow was discovered in Web server software, you should identify that the root cause is a policy issue, not a software issue: either a lack of build policies or a lack of vulnerability scanning, tracking, and updating. Include possibly related vulnerabilities in your scoping process, so you can select broader-reaching, more effective fixes.

Ranking Vulnerabilities in Order of Severity

Continuing with the Web server software example, a risk analysis should already have been performed to evaluate the ease of exploiting the vulnerability, the likelihood that it will be exploited, and the potential damage if it is. This will help identify which holes need to be fixed immediately, if they weren't taken care of as soon as they were discovered.

Evaluating Possible Options to Remediate

Create a quick list of possible options for fixing your root-cause problem. In our example, could the root cause be most easily fixed with a change of software platform? Perhaps, if it appears the platform is inherently insecure and multiple vulnerabilities are continually being discovered. However, that may not be the most appropriate change if you can dig even deeper to the root cause, which is usually a policy development or enforcement issue. From your quick list, you should have several candidate solutions for each vulnerability.

Performing a Comparative Cost-Benefit Analysis

The purpose of a cost-benefit analysis is to enhance management's decision-making so the organization's resources are used efficiently. In this case, a cost-benefit analysis helps IT management decide which controls to put in their procedures, based on limited time, budget, skill sets, and other resources. Sometimes multiple vulnerabilities can be resolved with a single fix; such controls should usually rank higher in your cost-benefit analysis, since one process resolves several issues. This chapter does not cover how to perform a cost-benefit analysis, or the economic tools available to assign value to assets and information if lost, but many other resources are available, including these IT-specific ones:

• A good description of cost-benefit analysis with examples, including a cost-benefit guide for the National Institutes of Health.
• A cost-benefit analysis for a network intrusion detection system.
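The point about one fix resolving multiple vulnerabilities can be made concrete with a small ranking sketch. The control names, costs, and the flat expected-loss figure are invented for illustration; a real analysis would price each vulnerability separately.

```python
# Rank candidate controls by net benefit, where a control that closes
# several vulnerabilities at once naturally scores higher.
# All names and figures below are hypothetical.

controls = [
    {"name": "patch web server",    "cost": 1_000, "fixes": ["CVE-A"]},
    {"name": "server build policy", "cost": 8_000, "fixes": ["CVE-A", "CVE-B", "CVE-C"]},
    {"name": "disable telnet",      "cost": 500,   "fixes": ["CVE-B"]},
]

# Assume each closed vulnerability avoids an expected loss of $6,000.
EXPECTED_LOSS = 6_000

def net_benefit(control):
    return len(control["fixes"]) * EXPECTED_LOSS - control["cost"]

ranked = sorted(controls, key=net_benefit, reverse=True)
for c in ranked:
    print(f"{c['name']}: net ${net_benefit(c):,}")
```

The build policy costs the most in absolute terms yet ranks first, because it is credited with every vulnerability it prevents.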

Selecting the Best Cost-Benefit Option

After performing your analysis, you should be able to make a clear, confident decision about the best course of action to resolve the root causes of your vulnerabilities. These options become your controls, which can be developed during the course of your analysis. The process may also help you determine the areas that need the most control. If you are having trouble coming up with controls, the ISO17799 is a good guideline of general controls. It does not provide many specifics, and it is certainly not exhaustive, but it does provide a high-level list of options. All suggestions from the ISO17799 will need to be customized to your environment; they should not be copied directly from the guidelines. You can use them to create your control requirements. An added benefit of using the ISO17799 is that the controls you choose will feed directly into your policy creation, as outlined in a following section. A gap analysis can also be helpful; I discuss that further in the later section "Reviewing Corporate Security Policies."

In addition, you may want to reference vendor Web sites when creating controls, because they often publish best practices for secure deployments of their products, or a checklist of safe computing practices for their software or hardware. As examples, here are a few vendor-specific security checklists. Most vendors provide a checklist for their products; if you can't find one from your vendor, a quick online search will usually turn up a couple of templates.

• O'Reilly: a checklist for hardening Cisco routers.
• Microsoft: lockdown instructions for various versions of Microsoft operating systems.
• BIND: security guidelines for configuring the BIND DNS server.

Implementing and Enforcing Corporate Security Policies

Now that we have created our policies, either from templates or tools, we need to implement and enforce them. This is the critical stage at which we create either policy shelfware or a living policy. Even a poorly written policy can be distributed and used to educate employees; but no matter how wonderful and eloquent a policy may be, if it is not distributed and enforced properly, it is not worth the paper it is printed on. The first factor in determining whether our policy will be shelved or used was decided when we created it: did we use a tool such as PoliVec or NetIQ? If we did, then deploying the policy may be easier. A new entrant to this field is Symantec; though Symantec does not have a policy creation module, you can load modules into its scanner that are compliant with certain guidelines. This is discussed in detail later, in the section "Automated Techniques for Enforcing Policies." However, if we are deploying a legacy policy, for example one that was just updated to comply with the latest regulations, we may have a more difficult time implementing and enforcing it. For more information on getting management on board with your security initiatives, refer to Chapter 1. Its section "Developing and Maintaining Organizational Awareness" addresses ways to mitigate risks at a corporate level; much of that chapter covers developing the human infrastructure for incident response and prevention, which is closely related to the enforcement of security policies.

Tools & Traps… Rewriting Your Policies for a Management System

With the advent of automated security policy management systems, there are some things you may want to consider when implementing your information security policy. Do you want to backtrack and implement part of your security policy program using an automated tool? Consider the benefits, but also consider the traps. On one hand, you will be able to monitor continuously for compliance with your security policies, checking everything from patch level to password strength, from access controls to intrusion signatures. This can help secure your hosts and network by holding the reins of policy tight. On the other hand, consider the switching costs involved in porting your existing policies to the new management system. This will take time and resources, and will probably need to be repeated if you later switch to a competing product. In addition, these products are relatively new and untested, and may have their own inherent problems. Finally, they cover only a specific set of procedural controls and still require the attention of a policy administrator. Consider the needs of your network and whether you will benefit from such a system. If your corporate culture or policies require tight compliance monitoring on your hosts and networks, whether due to heightened threats or government regulation, an automated security policy management system may be appropriate. But if you expect it to add security on its own while failing to implement additional policies and controls, you are probably leaving a gaping hole in your security policy.

If you still need to implement policies the old-fashioned way, do not despair; a variety of manual tools and methods are available to help you get your message out. Once a policy is created, you need to develop procedures to help administrators implement it. In some cases, those procedures are best developed by the administrators themselves, though this may significantly slow your deployment. Another approach is to develop a preliminary list of the procedures you think should be implemented and allow each administrator to add, augment, or remove items. An extensive list of sample procedures is "The Site Security Policies Procedure Handbook." Almost all procedures require technical insight into some area, and many should not be developed without input from experts in those areas. Even physical security requires insight into physical authentication routines, biometrics, and the networking and power considerations of physically siting systems.

Policy Distribution and Education

We have now developed a set of policies and procedures, but unfortunately nobody knows they exist, so the next step is an awareness campaign: informing users about our new policies and procedures and educating them about any changes. First, we have to determine the scope of our recipients. It makes little sense to give our new policies to individuals who don't need to read them, but at the same time it would be a mistake to miss important personnel. The answer is not to distribute all policies to all people in a blanket issuance, but to deliver select, targeted messages to specific users and groups throughout the organization. Mass distribution would backfire completely as personnel are inundated with countless unnecessary policies. What we need to determine is the minimum set of policies and procedures we can distribute to each person or group, such that we get our point across with the smallest amount of information. This greatly increases the likelihood that our policies will actually be read, and can help make them easier to comprehend. There are several ways to accomplish this. If your company has an accurate listing of job descriptions, you can break down your new policy document by job responsibility. If you have to do this manually, be careful to do it in a way that lets you easily update all the policies at once.
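One way to sketch the targeted distribution described above is a simple role-to-policy map. The role and policy names are hypothetical; the idea is that each person receives only the baseline set plus whatever their roles add.

```python
# Targeted policy distribution: everyone gets the baseline, and each
# additional role contributes only its own relevant policies.
# Role and policy names are illustrative.

POLICIES_BY_ROLE = {
    "all staff":  ["acceptable use", "password handling"],
    "developers": ["secure coding", "source control access"],
    "sysadmins":  ["server build", "patch management", "backup"],
    "executives": ["data classification", "incident escalation"],
}

def policies_for(roles):
    """Return the minimal, de-duplicated policy set for a person's roles."""
    needed = set(POLICIES_BY_ROLE["all staff"])   # baseline for everyone
    for role in roles:
        needed.update(POLICIES_BY_ROLE.get(role, []))
    return sorted(needed)

print(policies_for(["developers"]))
# A developer sees four policies, not the whole corpus.
```

Keeping the mapping in one table also satisfies the caution above about updating all the policies at once: changing a policy's audience means editing a single entry.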

Breaking policies and procedures into manageable pieces also helps make them more accessible.

Finally, we need to consider education in our policy distribution program. Some rare employees may take the initiative to familiarize themselves with the corporate security policies; the rest, usually most of them, will require a fair amount of coaxing and convincing before they read the policies, and even more if you want them to sign a form acknowledging their receipt and understanding. Fortunately, a variety of tools are available to help. First, you can vary how you present the policies to employees. For example, if you ask employees to review only a small number of policies immediately relevant to them, they may be more likely to do so; this is especially effective if you first show them the large number of policies that apply to everyone else. Another approach is to deliver policies in installments, each with a statement of receipt and understanding. Presented in bite-sized chunks, such as posters, screensavers, or newsletters, each carrying one small but important piece of the policy program, employees may be more receptive to reading them. Given enough time, most of your important policies can be distributed through the organization.

Finally, usually more effective than requiring a statement of understanding is requiring employees to take a quiz on their policy knowledge. A quiz initially appears to belong to enforcement, but it is really awareness in disguise. Questions are usually simple enough that employees can guess the correct answer, but difficult enough that they need to think about the question. Though this puts an additional burden on employees, it will usually result in a more secure and productive environment in the long run. I have seen clients take this as far as restricting intranet, Internet, even terminal access until the appropriate policy quizzes are passed. Another advantage is that this provides an audit trail you can check to see how many of your employees have taken and passed the quiz. Though not required, some tools can help you quiz your employees, and allow you to dig deeper to identify areas of the policies where employees are having particular trouble. Here are some resources that are useful in creating an awareness program:

• An example of a tool you can use to quiz your employees on your security policies.
• A small guide created by NIST to help assemble an awareness program.
• NIST guidelines for creating an awareness program.
• The SANS section dedicated to security awareness.
• An excerpted chapter that reviews the basics of a security awareness program.
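A minimal quiz tracker along the lines described above might look like the following sketch. The pass mark, employee names, and quiz names are made up; the point is the audit trail and the `has_passed` gate that could back an access restriction.

```python
# Record quiz attempts, gate access on a passing score, and keep an
# audit trail of who has passed. Figures are illustrative.

PASS_MARK = 0.8
audit_trail = []   # entries: (employee, quiz, score, passed)

def record_attempt(employee, quiz, correct, total):
    score = correct / total
    passed = score >= PASS_MARK
    audit_trail.append((employee, quiz, score, passed))
    return passed

def has_passed(employee, quiz):
    """Used, e.g., to restrict intranet access until the quiz is passed."""
    return any(e == employee and q == quiz and ok
               for e, q, _, ok in audit_trail)

record_attempt("alice", "password-policy", 9, 10)   # passes
record_attempt("bob",   "password-policy", 5, 10)   # fails, must retake
print(has_passed("alice", "password-policy"))  # True
print(has_passed("bob", "password-policy"))    # False
```

The audit trail doubles as the reporting data mentioned above: counting passed entries tells you how many employees have taken and passed each quiz.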

Policy Enforcement

Policy enforcement was alluded to in the previous section, in requiring employees to take policy comprehension quizzes. Quizzes can act as both a means of education and a means of enforcement, depending on how they are structured; usually they perform both roles, and they can be given using either manual or automated tools, discussed below. There are two approaches to policy enforcement. One is the old-fashioned way, using manual techniques such as quizzes, spot checks, and discipline. The other uses automated techniques, for which multiple vendors offer software that performs checks, produces quizzes, and even tracks individual employees' compliance.

Manual Techniques for Enforcing Policies

A wide variety of tools and techniques are available to help policy administrators check for compliance and enforce their policies. I have already addressed the utility of quizzing employees on their policy knowledge; this can be performed once, annually, or on any appropriate schedule, with quizzes delivered on intranet sites, as stand-alone software, or on paper. The most popular, and essential, enforcement tool is the policy and procedure review, usually performed as a gap analysis against some baseline standard. Once a review is performed, your security policy administrator can note the areas where policies, procedures, or procedural enforcement are lacking. One proposed method of policy enforcement, by Charl van der Walt, is the "Resource Centre Model," in which the policy is self-policing: the guidelines are located in a central resource, individuals have access to the policies and are responsible for bringing themselves into compliance, and compliance is then enforced by the audit department through spot checks or network scans. One of the more effective, but expensive, enforcement techniques returns to one of the themes of this chapter: designating an individual responsible for policy and procedure enforcement. This human enforcer, call them the policy police, is responsible for knowing the current corporate policies and checking employees for compliance. This person may become one of the least liked people in the company if they are not careful about how they enforce the policies. The policy police have a wide array of tools to detect and enforce policy violations. Some groups perform red team penetration tests at unexpected times, to check that network controls are properly in place and patches are kept up to date.
Some groups hire external consultants to perform these penetration tests if the resources are not available internally; a multitude of companies, such as Foundstone, ISS, and @stake, offer penetration testing services. Performing your own policy review is covered further in the "Reviewing Corporate Security Policies" section. Once the policy police have identified the areas where policies and procedures are not being followed, they need to take action. We are assuming here that the policies and procedures themselves are complete and have no gaping holes. Resolution may be as simple as reissuing the policies or procedures, or requiring the offending individuals to take or retake the policy quizzes. If the problem appears habitual, the policy police must be both authorized and capable of taking more severe action, ranging from removing or restricting information access, to employment termination, to criminal prosecution. Employment contracts usually already specify grounds for termination; it is up to the policy police to take appropriate action when warranted by an employee's behavior. The individuals responsible for enforcing policies must be given the appropriate tools and jurisdiction to impose consequences on offending individuals; a policy program with no means of levying consequences is about as effective as a dog without teeth. There must be consequences for violating corporate policies, and those consequences should be clearly explained in the policy document and in employee contracts.

Finally, manual techniques can catch violations that automated scanners, even red teams, will miss. Usually this happens during the interviews of the policy review process. For example, if the interviewer asks, "Are your passwords stored in a secure location?" and the response is "Yes, right here under my keyboard," it becomes clear that policies are not being distributed or enforced properly. Errors or oversights in organizational structure, information handling, and physical security can be identified and remediated through the interview process.
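The escalation of consequences described above can be sketched as a simple lookup. The thresholds and actions here are purely illustrative; the real ladder belongs in your policy document and employment contracts.

```python
# Map a running count of policy violations to an escalating consequence.
# Thresholds and actions are hypothetical examples, not recommendations.

ESCALATION = [
    (1, "reissue policy and require quiz retake"),
    (3, "restrict information access"),
    (5, "refer to HR for possible termination"),
]

def consequence(violation_count):
    """Return the most severe action whose threshold has been reached."""
    action = "no action"
    for threshold, step in ESCALATION:
        if violation_count >= threshold:
            action = step
    return action

print(consequence(1))  # reissue policy and require quiz retake
print(consequence(4))  # restrict information access
```

Writing the ladder down, even this simply, gives the policy police the "teeth" the text above calls for: each response is predetermined rather than improvised.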

Automated Techniques for Enforcing Policies

Automated policy tools have only recently been introduced to the market, and can be thought of as a subset of vulnerability scanners. They can be very helpful if used properly but, like most tools, can hurt you if used improperly. Policy scanners are available from a variety of companies, including NetIQ (formerly Pentasafe, purchased in October 2002; the new brand name is NetIQ VigilEnt), Symantec, and PoliVec. As discussed previously, NetIQ and PoliVec can help you create your policies through policy builder software and then deploy those policies to employees, letting you develop and deploy with the same software. Symantec, a relatively new player in the automated security policy market, provides templates based on industry standards or government guidelines, then scans your network to check compliance with them. BindView, also new to this field, has created the Policy Operations Center, which helps you manage your policies in one resource and track how they are being used. Network scanning for policy compliance can involve deploying an agent on each machine, which may be difficult on a large network without push update software already installed. These scanners can check Registry settings, version and patch levels, even password complexity requirements; some include password cracking software, so you can also audit your users' password strength. Descriptions of the different policy scanners are available on the vendors' Web sites:

• NetIQ (formerly Pentasafe)
• Symantec
What all these scanners have in common is that they give users a way to quickly check their network configurations for policy violations, and each provides some way to verify that network and host configurations comply with published guidelines or standards, including HIPAA and GLBA. These tools make it much easier for a policy, network, or system administrator to check the configuration of individual machines for compliance, catching holes in the infrastructure before they are exploited. Even better, they can help administrators catch potential holes, such as misconfigurations in a host build policy. For example, if your policy scanner reviews your Windows 2000 server Registry and notices that RestrictAnonymous is set to zero (RA=0) while your policy requires at least 1, you can remedy this quickly; and if you solve it by reconfiguring your server build policy, you may find other misconfigurations as well. However, none of these products provides a complete solution. Purchasing any one of them will not check compliance for all your policies, and it is still necessary to conduct a manual, if perhaps abridged, policy review. The biggest danger in security policy is complacency, and the worst offender is the person who thinks somebody else will do what needs to be done.
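In miniature, a policy scanner's compliance check is a comparison of observed settings against a baseline of required values. The setting names and thresholds below are illustrative (the first check mirrors the RestrictAnonymous Registry example above); real scanners gather these values through agents or remote queries.

```python
# A toy compliance scan: evaluate a host's observed settings against
# policy rules and report the violations. Names and thresholds are
# illustrative stand-ins for a real policy baseline.

BASELINE = {
    "restrict_anonymous":  lambda v: v >= 1,   # policy requires at least 1
    "min_password_length": lambda v: v >= 8,
    "patch_level":         lambda v: v >= 4,
}

def scan(host, settings):
    """Return the names of all settings that violate the baseline."""
    return [name for name, ok in BASELINE.items()
            if not ok(settings.get(name, 0))]

# Example host with RestrictAnonymous mistakenly left at 0 (RA=0).
found = scan("web01", {"restrict_anonymous": 0,
                       "min_password_length": 10,
                       "patch_level": 4})
print(found)  # ['restrict_anonymous']
```

As the text cautions, a check like this covers only the settings someone thought to encode; it supplements, rather than replaces, the manual review.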

Reviewing Corporate Security Policies

Once you have built, implemented, and are actively enforcing your security policies, your efforts can level off. The goal of reviewing your security policies is maintenance: keeping your current policies in the most up-to-date, applicable form possible. The policy administrator's role shifts from builder to maintainer, but it has not decreased in workload or importance. Policies must be maintained with constant diligence, or they will become stale and outdated; and the more outdated policies become, the more difficult it is to bring them, and the company, back into compliance. A tool released by the Human Firewall Council allows administrators to evaluate their current security practices against the ISO17799; it also provides a Security Management Index, a ranking of their security management against others in their industry. I should note that the danger here is not that new regulations may be passed that require a policy update, although that is very important. The danger is that policy administrators may become lax, or even be removed from their duties, because the uninformed believe that policies can be "finished." If this happens, your policies are on their way to becoming shelfware. Policy review is tied very closely to policy enforcement, but each has a clear and distinct role. Both occur after the policies have been finished and distributed; enforcement deals with making certain the current policies are followed, while review deals with keeping them updated and adjusting them based on feedback from users. It is this process of reviewing and updating policies based on the needs of users, the company, and regulations that helps make a "living policy."

Policies can be reviewed by either an internal group or an external audit team, and there are two schools of thought here. An internal group will be intimately knowledgeable about the company culture, assets, risks, and policies. They have probably seen the policies evolve through several stages, know the history of why certain things are done, and know what has been tried in the past and what hasn't worked. However, many of an internal group's benefits are also drawbacks: they may not be able to step far enough outside the company and its culture to give a completely unbiased review. This is where the second school of thought comes in: hiring an external policy expert to review your corporate security policies. An external expert may not be as knowledgeable about your company and corporate culture, and may not know what you have tried in the past or the history behind certain policies. However, they will probably have more experience with policy development, enforcement, and review than your internal group, having seen a number of different policy implementations and what has and has not worked at their clients. And any decent external reviewer will learn your company, your corporate culture, and why your organization made the policy decisions it did. This experience can help you craft the next revision of your policies.

There are some distinct steps performed in almost every policy review process:

1. Perform risk analysis. (See the "Risk Analysis" section earlier in this chapter for an in-depth discussion.) Some guidelines, such as GLBA, require that a risk assessment be performed before any policy controls are established.
In addition, the ISO17799 recommends that a risk analysis be performed in order to choose the appropriate controls.

2. Review current policies. The reviewer needs to be familiar with all your current policies and procedures, and with how they came about. If your company has kept a version document describing policy changes and their justification, it can prove very useful to both internal and external reviewers.

3. Identify key personnel. There are typically a few key managers on whom the policy directly depends. These may be the manager of internal IT, or the legal department responsible for handling third-party contracts. Regardless, identify the person with direct responsibility for assuring that practices accord with each particular policy.

4. Interview personnel to correlate with policies. Once those key personnel are identified, they should be interviewed. This is the reality check for your policies: do your managers know what is supposed to be going on? Check for discrepancies between the interviews and what the policies state.

5. Review implementation of policies as procedures. Your scope may include digging as deep as checking the implementation of those procedures throughout the network. This can be as cursory as a spot check on a few systems, performed in conjunction with a penetration test, or as thorough as a review using a policy scanner and the manual methods described previously. Remember that the scope of a procedural review can get very big, very quickly.

6. Check discrepancies between policies and best practices. Throughout the course of your review, you have probably had a baseline you were contracted to check against. This could be as broad as the ISO17799, or as general as the GLBA guidelines. Whatever your baseline, identify it before you begin work, then note discrepancies throughout the course of your review. This gives your review credibility, and improves the overall work by ensuring all your bases for compliance are covered.

7. Prepare gap analysis. Perform a gap analysis and create a spreadsheet identifying sections of your policy that lack depth or are missing. Once your gap spreadsheet is complete, you will be able to use it again in the future. Your goal is to make policy review a repeatable process, one that can be performed quickly and easily on a regular basis, no less than once a year.

8. Update your policies. The update process is relatively painless, since it involves exactly the same steps of creation, distribution, and enforcement outlined above. Since you have already completed this process once, it will be easier the second time through. You have transitioned from building policies to maintaining them, and that transition is complete after you have conducted your first policy review. This is now a maintainable, repeatable process that will help you update, educate, and enforce your policies.

9. Modify existing policies. The final and most important step of the policy review process is to modify the existing policy to correct any problems you discovered during your review. This can be done by referring back to the policy creation section and following the steps within to update your existing policy. This step ties the review process back into the creation process, completing the loop and creating a closed-circuit information security policy management system. From there, the process can be repeated.
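The gap spreadsheet from the review process can be prototyped as a simple set comparison. The baseline section names loosely follow the ISO17799's high-level areas but are abbreviated and illustrative, as is the list of current policies.

```python
# A first cut at a gap-analysis "spreadsheet": compare the policy
# sections you have against a baseline list and record what is missing.
# Section names are abbreviated illustrations of ISO17799-style areas.

BASELINE_SECTIONS = [
    "security policy", "asset classification", "personnel security",
    "physical security", "access control", "incident management",
    "business continuity", "compliance",
]

current_policies = {
    "security policy", "access control", "physical security", "compliance",
}

gaps = [s for s in BASELINE_SECTIONS if s not in current_policies]
for section in gaps:
    print(f"GAP: no policy covering '{section}'")
```

Because the baseline list is data rather than prose, rerunning the comparison at the next annual review takes seconds, which is exactly the repeatability the review steps call for.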

Security Checklist

As I mentioned in policy review step 9, one of the most important things you can do with your policies is to modify and update them to keep them current. You do this by referring back to your policy creation steps and looking to integrate new information into, or remove outdated information from, your current policy. This security checklist can act as a quick reference guide to the steps you took during the creation process. Take a moment to review it, and ask yourself whether you need to update your policies.

Developing a Policy

• Establish justification.
• Define the scope.
• Compose a rough outline.
• Gather management support.
• Define areas of responsibility.
• Discover current policies.
• Evaluate current policies.
• Perform a risk assessment.
• Create new policies.

Implementing and Enforcing a Policy

• Distribute the policy and educate the users.
• Enforce the policy using manual or automated techniques.

Reviewing the Policy

• Perform risk analysis.
• Review current policies.
• Identify key personnel.
• Interview personnel to correlate with policies.
• Review implementation of policies as procedures.
• Check discrepancies between policies and best practices.
• Prepare gap analysis.
• Update your policies.
• Modify existing policies.

Summary

Corporate policies will only become more important in the future. They will not be going away any time soon, no sooner than the need for corporate security will go away. The two are closely tied together, yet policy development is a much softer security skill than others, such as network administration, and is consequently easily overlooked. Security policy development should not be brushed off lightly, or it is likely to come back to hurt you. The first part of this chapter covered the founding principles of security policy, the ways policies can help protect you from future attacks, ways to avoid creating shelfware policies, and a brief overview of current policy standards. Its purpose was to give an overview of the field of policies, including the theory behind good security policy and current events taking place. The second part covered the practical implementation of security policies, including creating policies, implementing and enforcing them, and reviewing and maintaining them. Its purpose was to provide a roadmap for developing your security policies, to go alongside the multitude of books, software, and online resources available.

The chapter started with a basic overview of the founding principles of good security policies. These included the goal of security, to achieve data confidentiality, integrity, and availability. Good security principles are one of the few tools that provide protection against unforeseen, future threats and vulnerabilities. Good policies imply good procedures and controls, which usually result in a more secure network. If a network is maintained properly in accordance with good policies, it will be more likely to survive an attack than a less secure network with poor policies and controls. One of the key components in building a strong security policy program is to have the full support of management behind the initiative, including resources, funding, strategy development, and statements. The only useful policy is one that is read, understood, and followed. The need to create policies that are less likely to become shelfware is obvious when developing a policy program. Policy creators that focus their efforts with the right motivation can make policies easy to read and referred to often, they can keep them maintained and valued, and they can communicate the importance of having a policy owner. There are multiple policy standards that have come onto the scene lately, but the most popular for general use seems to be the ISO17799. Though this standard is not perfect, it is one of the most extensive we have available, and it can be used to address most security policy areas. The chapter covers several governmental regulations that have recently been enacted, including GLBA. This concludes the first section of the chapter, covering the general field of security policies. The second section of the chapter begins with creating your security policies, though it could also serve as a guideline to revising a current security policy. 
This section includes the general areas to be included in most security policies and a discussion of the importance of properly scoping your policies. It also addresses why it is important to understand the current state of your security policy program before you begin to make any changes, and how to assess that current state through the use of a gap analysis. Assessing the risks to your network is equally important. The area of risk management is extensive, and this chapter’s brief overview includes risk models, risk management, and control selection. The section concludes with the basic resources available to help policy administrators construct their security policies, including software tools, templates, and guidelines for custom creation. The next step in creating your security policy program is to implement the policies you just created. There are tools available for distributing your policies and educating your employees, including software tools and manual techniques. Once they are distributed, current policies must be enforced; the chapter provides an overview of manual tools, such as penetration tests and procedure reviews, and automated tools, such as vulnerability scanners that integrate your current policies into their reporting structure. The review and modification of security policies is a cyclical process that ties back to policy creation. The process involves performing a policy review, at the end of which policies usually need to be modified to correct oversights and mitigate newly discovered risks. In conclusion, security policy management, through a security policy system, is a cyclical process that is never completed. Security policies have been around since the 1960s, but the security policies of today have more theory to support their actions, more guidelines to cover all their critical areas, and more sophisticated tools and techniques. Now that information security controls have advanced significantly, the management of the security policies used to implement those controls is advancing in its own right. As computer security becomes more important in our society, and the number of threats to our networks continues to increase, the importance of security policy will also continue to increase.

Solutions Fast Track

The Founding Principles of a Good Security Policy

• The “pyramid of goals” for information security includes the following factors: confidentiality, integrity, and availability (CIA), and principles, policies, and procedures. Specifically, the principles embodied in the pyramid of goals include: the principle of least privilege; defense in depth; secure failure; secure weak links; universal participation; defense through simplicity; compartmentalization; and default deny.

Safeguarding Against Future Attacks

• Good security policies protect against unforeseen, future attacks.
• Management support is required for a successful policy implementation.

Avoiding Shelfware Policies

• Policies left unused on the shelf can be more dangerous than no policies at all.
• Easily readable policies are more likely to be used.
• Policies referred to often are more likely to be current in users’ minds.
• Policies kept up to date contain relevant information.
• Policies should be recognized for their value.
• Policies with a clear owner are less likely to be forgotten.
• Policies with management support are more likely to be taken seriously.

Understanding Current Policy Standards

• Using a baseline can improve your policy and help to avoid gaps.
• ISO17799 is an internationally recognized standard.
• SAS70 is a compliance tool for audit firms.
• Multiple standards are available, and selecting the right match for your organization can be difficult.

Creating Corporate Security Policies

• The policy development process includes the following steps: justifying the creation of a policy; defining the scope of the document; composing a rough outline; garnering management support; establishing specific areas of responsibility; discovering current policies; evaluating current policies; performing a risk assessment; creating new policies; implementing and enforcing the policies; and reviewing and maintaining the policies.
• Various groups may be involved in your policy’s scope that you didn’t even know existed.
• Gathering your current policies is a process of looking online and in hardcopy manuals and performing interviews.
• Performing a gap analysis to assess your current policies is relatively easy, and there are multiple tools available.

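At its simplest, a gap analysis compares the control areas your current policies cover against a baseline list and reports what is missing. The Python sketch below illustrates the idea; the baseline area names are invented shorthand for whatever standard (such as ISO17799) you adopt, not an official section list.

```python
# Minimal gap-analysis sketch: compare covered policy areas to a baseline.
# The baseline names here are illustrative stand-ins, not official section
# titles from any standard.

BASELINE = {
    "security policy",
    "asset classification",
    "personnel security",
    "physical security",
    "access control",
    "incident management",
    "business continuity",
    "compliance",
}

def gap_analysis(covered_areas):
    """Return the baseline areas with no corresponding policy, sorted."""
    covered = {area.lower() for area in covered_areas}
    return sorted(BASELINE - covered)

# Example: an organization with only three areas documented.
gaps = gap_analysis(["Access Control", "Physical Security", "Compliance"])
```

In practice the baseline would be the section list of your chosen standard, and “covered” would come from your policy inventory interviews.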
• Performing a risk assessment will help you to identify where you need to focus first.
• Most policy management software includes modules to create policies for you.
• Plan the hierarchy of your policies before you start building.
• Your users are some of your greatest threats. Keep that in mind while you are creating your policies.
• Templates can help you to create your policies quickly and more accurately.
• Selecting the best controls to implement sometimes requires additional analysis, such as scoping, ranking, and evaluating the options.

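Ranking risks before selecting controls is often done with a simple likelihood-times-impact score. The following Python sketch is a minimal illustration; the 1–5 scales and the example risks are invented for demonstration, not drawn from any particular methodology.

```python
# Rank risks by a simple likelihood x impact score (both on a 1-5 scale).
# The example risks and ratings below are invented for illustration.

def risk_score(likelihood, impact):
    """Combine 1-5 ratings into a single score; higher ranks first."""
    return likelihood * impact

def rank_risks(risks):
    """risks: list of (name, likelihood, impact) tuples.
    Returns the risk names ordered from highest to lowest score."""
    return [name for name, likelihood, impact in
            sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)]

ranked = rank_risks([
    ("unpatched web server", 4, 5),   # score 20
    ("lost backup tape",     2, 5),   # score 10
    ("password sharing",     5, 3),   # score 15
])
```

The ordering tells you where to spend control budget first; fuller methodologies add asset value, exposure factors, and qualitative adjustments.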
Implementing and Enforcing Corporate Security Policies

• The best-laid policies may go awry if not properly implemented and enforced.
• Multiple policy distribution and education tools and techniques are available to help your awareness program.
• Multiple policy enforcement techniques are available, from manual policy police to automated compliance scanners.

Reviewing Corporate Security Policies

• Policies can be reviewed by an internal group or external consultants.
• Steps usually performed in a review include performing a risk analysis; reviewing current policies; identifying key personnel; interviewing key personnel; reviewing implementation of policies as procedures; reviewing discrepancies; and preparing a gap analysis.
• The conclusion of the review involves the update and maintenance of the existing security policy, completing the loop of the information security policy management system.

Links to Sites

• Article by Charl van der Walt, with a great overview of security policy in general.
• A collection of 24 information security policy templates.
• Over 60 articles on security policy in the SANS reading room. Very useful to browse through.
• A collection of articles about security policy, with a new article every month or so.
• All NIST special publications, many security-related.
• NIST Computer Security Handbook, a great source for policy fodder, with a synopsis available.
• Slideshow presentation on general security policy topics.
• Provides a number of pay policy downloads, in addition to the RUSecure policy samples. The RUSecure program is a fully capable trial version, good for experimenting with various ISO17799 policies.
• Download evaluation version of the COBRA risk analysis tool. Also check out other information on security policy creation, delivery, and compliance.
• Provides a great overview of the HIPAA Security Rule, so you can familiarize yourself with the general concepts before the final rule is established.
• Useful document that matches specific sections of the Security Rule to implementation requirements.
• The main FTC site for the GLBA, filled with relevant information and legislative information for GLBA compliance.
• An article by Gary Desilets on avoiding shelfware policies. Great discussion with tips to keep your policies readable.
• A free tool to compare your organization’s security management against ISO17799.

Mailing Lists

• GAO Reports: The United States General Accounting Office issues a wealth of reports, usually at the request of a congressional member. Many of these reports are useful, and sometimes relevant to information security.
• Privacy Forum (Subscribe: [email protected] with “subscribe privacy” in the body): Moderated list covering technical and nontechnical privacy-related issues.
• RISKS: News and interesting tidbits on risk-related items, which may turn up some useful information.
• HIPAA-REGS (Subscribe: [email protected] with “subscribe HIPAA-REGS first-name last-name” in the body): A listserv from the HHS department that provides recent information regarding HIPAA regulations.
• HIPAAlert: Newsletters on current happenings in the HIPAA industry.
• ComplianceHeadquarters: Provides e-mail updates on laws and regulations.

Frequently Asked Questions

Q: How do I capture the current framework of security controls in place at my company if we currently don’t have any security policies or procedures?
A: First, you need to capture the framework you currently use in your security applications. To begin, you need to understand what has already been done, and why. Create an inventory of your current security solutions, such as your firewalls, IDS, and current policies if you have them. You also need to capture the business decisions behind each of those solutions. How was your firewall rule set developed? Did you consider “Deny All” while creating it? Then you need to analyze the rules that have been put in place on each device throughout your environment. Finally, use current best practices, as established by the product vendor, to identify the controls you should have in place.

Q: How do I begin to create an information security program to deal with compliance requirements?
A: Identify what you want to accomplish with your security program. Are you doing this because your customers want to see that you are committed to information security? Because you must be compliant with government regulations, such as GLBA or (upcoming) HIPAA? Because you have third-party suppliers that require you to implement a security program? Depending on the goal of your security program, choose the guidelines you want to comply with. Then use these guidelines as a starting point for best practice principles. Look around industry and regulatory Web sites for published guidelines to help you become compliant.

Q: How do I go about selecting various controls for a small environment, if we do not want to create full policies and procedures?
A: Based on your identified risks, you should select controls from your chosen compliance guidelines that will help you to mitigate those risks. If you didn’t perform a risk assessment, you can still use the “common man” principle. This involves selecting the sections that a knowledgeable, educated person in your position would choose to protect against known threats. Though not as diligent as a full risk assessment, it will be an improvement over your current position. I prefer ISO/IEC 17799:2000, because it is extensive and widely accepted, though it does have its drawbacks. Keep in mind that you need to implement only the relevant sections, and that each section may not be extensive enough to create procedures.

Q: How do I build my identified policy framework into full policy sections?
A: Once you have your sections selected, use them as a skeleton to develop your new policy program. Combine this with the framework you’ve already captured from your current security installation, and you can document your current situation. Then fill out the rest of the skeleton with best practices to create a comprehensive security policy program. Consider the policy program lifecycle: create, implement, review, modify. You have now created your new security policy program, but you still need to implement it. You should review and modify it at least once a year, to account for changes in technology, threats, or updated government regulations. It is critical to follow this policy lifecycle to maintain a living document and to keep your policy updated.

Q: I have been tasked to create our corporate security policies for my medium-sized company (300 employees), but I’m having difficulty getting commitment from my manager. How can I resolve this issue?
A: First, you need to find a champion for the policy initiative. Start with the person or group that tasked you to create security policies, and implore them to take ownership of their initiative. If they are not an executive in the company, you should work with them to get an executive on board. Explain that a successful policy program implementation requires guidance from the upper echelons of the organization, because it affects every aspect of the organization. Stroke their ego a little bit, and explain that it requires their support. Your goal is to get your CIO and/or CEO on board for this initiative. If they already are part of this, and are still not giving you the resources you need, check to make sure they understand your needs. They might not understand what is involved in creating a security program, and think that it can be done quickly, easily, and cheaply. They might not understand that each security policy program must be customized to the organization’s specific needs, functions, exposures, and vulnerabilities. Finally, provide them with an explicit list of requirements to get your task completed. A survey in the September 2002 issue of Information Security Magazine shows that medium-sized organizations (100–1,000 users) have the least guidance from information security policies. This probably helps to explain why you are being tasked with this assignment now. According to the survey, only 40 percent of respondents indicated that most or all of their IT security decisions were governed by security policies. What’s worse, 26 percent of respondents indicated they either don’t have a security policy or that it’s not followed.



Chapter 3

Planning and Implementing an Active Directory Infrastructure

Solutions in this chapter:
• Plan a strategy for placing global catalog servers.
• Evaluate network traffic considerations when placing global catalog servers.
• Evaluate the need to enable universal group caching.
• Implement an Active Directory directory service forest and domain structure.
• Create the forest root domain.
• Create a child domain.
• Create and configure Application Data Partitions.
• Install and configure an Active Directory domain controller.
• Set an Active Directory forest and domain functional level based on requirements.
• Establish trust relationships. Types of trust relationships might include external trusts, shortcut trusts, and cross-forest trusts.

Introduction

It can be said with little disagreement that Active Directory was the most significant change between Windows NT and Windows 2000. Active Directory gave administrators the flexibility to configure their networks as best fit their environments. Domain structures became much more understandable and flexible, and management of users, groups, policies, and resources became less overwhelming. As wonderful a tool as Active Directory appeared to be, it did not come without its own set of issues. Failing to properly plan an Active Directory structure prior to implementation became a nightmare for many administrators who were used to simple implementation processes for older operating systems such as Windows NT. There were also questions revolving around the best migration path from Windows NT to Windows 2000 Active Directory. Do you upgrade? Do you rebuild your domain from scratch? What are the pros and cons of each choice? What is the cost associated with either choice? Not choosing the best migration path and poor planning were the growing pains of moving to the latest-and-greatest operating system from Microsoft. Now, as you face the decision to move to Windows Server 2003, you will have to face many of these questions again. The good news is that your experience planning your Windows 2000 environment is going to make this transition that much easier. That said, there is still a lot of work to be done and a lot of planning that must take place before you actually sit down at your servers to take that leap. We will begin this chapter by laying out our Active Directory hierarchy.

Designing Active Directory

Active Directory is all about relationships: between the domains it consists of, and between the objects each domain contains. As you probably already know, users, groups, printers, servers, and workstations, along with a host of other types of network resources and services, are represented in Active Directory domains as objects. Each object contains information that describes the individuality of that particular user, computer, and so forth. Domains in Active Directory are arranged in tree structures that form a forest. Moreover, the objects in each domain can be organized in a hierarchical structure – a tree – through which the objects relate to each other. Through a solid design, Active Directory can facilitate administration of the entire network, from password management to installs, moves, adds, and changes. Therefore, the choice to have a single forest or multiple forests, the design of the domains contained within those forests and their tree structures, and the design of the objects within each domain are all critical to a well-functioning network.

Evaluating your Environment

Before you design your future network, you must have a good understanding of the network already in place. The network means not only the existing servers and protocols, but everything down to the wired (or wireless) topology. Let’s look at the elements that you should gather when evaluating your environment. Network topology is the physical shape of your network. Most networks have grown over time and have become hybrids of multiple types of topologies. Not only must you discover the shape of the network at each level, but you must also find out the transmission speed of each link. This will help you in placing the Active Directory servers, called domain controllers, throughout the network. The easiest way to start is to look at an overall “10,000 foot” view of the network, which generally displays the backbone and/or wide area network links. Then, you will drill down into each geographical location and review each building’s requirements, if there are separate buildings. Finally, you will look at every segment in those buildings.

EXERCISE 2.01 EVALUATING A WAN ENVIRONMENT

Let’s look at an example network, which we will use throughout this chapter. Our example company has an existing internetwork that connects three separate offices in Munich, Germany; Paris, France; and Sydney, Australia. The headquarters of the company is located in Munich. Both the networks in Paris and Sydney connect directly to Munich, and all traffic between Paris and Sydney is transmitted through the Munich office. The connections are all leased E1 lines with a 2.048 Mbps transmission speed. Figure 3.1 shows this configuration.

Figure 3.1 A High-Level View of the Example WAN

At this point, you might think, “Cool, done with that.” Not yet. Now, you need to look at the networks within each location. In the Munich location, there are three buildings that are connected by a fiber optic ring running Fiber Distributed Data Interface (FDDI) at 100 Mbps. Neither Paris nor Sydney has multiple buildings. The Munich location is configured as shown in Figure 3.2.

Figure 3.2 General Layout of the Munich Campus Network

Munich’s buildings are named A, B, and C. Both buildings A and B have been upgraded to Gigabit Ethernet throughout over Cat6 copper cabling. Building A holds the servers for the entire Munich campus on a single segment. Both of these buildings have three segments each, connected by a switch, which is then routed into the FDDI ring, as shown in Figure 3.3. Figure 3.3 Buildings A and B Network Configuration


Building C in Munich has a single Token Ring network segment at 16 Mbps and two Ethernet segments running 10BaseT. This is displayed in Figure 3.4.

Figure 3.4 The Building C Network in Munich has Older and Slower Networking Equipment Than Buildings A and B

Paris and Sydney, although far apart, have nearly identical configurations. Each location has two segments of 100BaseT Ethernet, both with servers, and the Ethernet segments are connected to each other by a switch. A router is connected to one segment that leads to the Munich location. This topology is depicted in Figure 3.5.

Figure 3.5 Both Sydney and Paris have Nearly Identical Network Topologies

When describing the physical topology of a network, you may find that a single drawing attempting to include all the items within the network will be too confusing. By breaking the process down and looking at different portions of the network, you can make it easy to document an entire internetwork. Notice that in each of the areas, we have described routers and what types of topology they are routing from and to. In addition to this, you will need to know what protocols are being routed across the internetwork. The network will likely be using TCP/IP, most likely version 4 (IPv4). It is possible that the network could be using IPv6, which is routed differently than IPv4, and just as possible that the network is using both IPv4 and IPv6 on various segments. In addition, the network could be using other routable protocol stacks, such as Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) or AppleTalk. Unroutable protocol stacks such as NetBIOS Enhanced User Interface (NetBEUI) will not need to be routed, but will affect bridging configurations and overhead on the data transmitted. Our example network already uses TCP/IP with IPv4 addresses. The network administrator uses Network Address Translation (NAT) when connecting to the Internet, so the network uses private class B IP addresses internally that are then translated to a class C address for any computer communicating on the Internet. TCP/IP is used throughout the internetwork. The Munich location has two NetWare servers that use IPX/SPX to communicate with clients in Buildings A and C. No other protocols are used on the network. The protocol diagram would appear as shown in Figure 3.6.

Figure 3.6 Protocols can be Mapped to the Segments that Require Them





In addition to knowing the existing protocols, you should know the operating systems on the servers that are currently used, their placement, and the services that run on them. Throughout this exercise, we’ve touched on part of this, but we really haven’t explored it in detail. Servers are a source of data for all clients on the network, which means that traffic tends to centralize around servers. Think of each server as the center of a wheel, with traffic creating logical spokes to all the clients. When you have multiple servers, you end up with multiple wheels overlapping each other. For this reason, you need to know where servers are located so that you can determine traffic patterns. The next step is to list the network operating systems and the services that are shared by those servers. Of particular importance are the servers that provide Domain Name System (DNS) services. These servers are required for Active Directory to function, and will possibly be reconfigured as a result of your Active Directory rollout. For this reason, when you list the DNS servers, you should also list the type of DNS software being used, the version, the zones provided by the DNS server, and whether the server is the primary or secondary zone server for each zone. A discussion of the DNS naming for the organization is also needed, since you may be changing or adding to the naming scheme. In our ongoing example, the Munich location has two NetWare servers, 10 Windows NT 4.0 servers, and three Windows 2000 servers. There is a single Windows NT Primary Domain Controller (PDC) in the company’s single domain. There are also two Backup Domain Controllers (BDCs) at the Munich location. In addition, both Sydney and Paris have a single BDC on site, which provides local Dynamic Host Configuration Protocol (DHCP) service. The NetWare servers provide file and print services. The Windows 2000 member servers and Windows NT member servers all provide file and print services.
Note that you will probably encounter servers that provide services to access a variety of peripherals on the network, such as faxes and printers. The peripheral equipment should be listed in addition to the server that provides that peripheral’s services.

The PDC is the sole DNS server and provides Windows Internet Naming Service (WINS) services. There is a single zone for the example.local domain. In addition to this type of diagram, you should list each server’s hardware and software configuration on a separate sheet. This information may be needed for upgrades and compatibility. Earlier we mentioned that the example company uses NAT to communicate across the Internet. This means that there is an Internet connection, which is in Munich, that enables traffic both to exit the company’s network and to enter it. This leads to the question of whether there is a method of remote access into the network. That remote access can take place across the Internet connection in the form of a Virtual Private Network (VPN), or it can be dialup connections to the network, which in turn provide Internet access. You may choose to combine your description of servers and services with remote access and VPN. If you have a complicated remote access configuration, you should provide a separate diagram. Finally, you should have an understanding of the clients in the network. First, you should know how many users work at each site. Next, you should have an understanding of the types of users who are on the network – whether they are power users or knowledge workers, or whether the focus of their jobs does not include much computer work – as well as their hours of network usage, their applications, and their workstation operating systems. When planning for an Active Directory rollout, you will need to know the users’ IDs in order to ensure a successful upgrade or migration. In addition, you will need to determine administrative areas and powers for users, so you should have an idea of what each user is responsible for and the administrative rights users require to perform their jobs.

Creating a Checklist

Preparing for the Active Directory is a lengthy process. In fact, the last migration I worked on included a far longer preparation period than actual implementation phase. To keep on track during the preparation period, you should create a checklist of the items that you need to look at for each network location, each server, service, peripheral, workstation, and user. The more organized you are, the higher your success rate is likely to be. You may find that some items are required specifically for your own project; however, the basic information that you should collect for each area is as follows:

Network Locations
• Topology
• Transmission speed
• Number of segments
• Number of users at that location
• Servers at that location
• Number of workstations at that location
• Connectivity to other locations
• Protocols used
• IP addressing scheme, if any

Servers
• Hardware configuration
• Network operating system
• Name
• IP address, if any
• Services provided
• DNS configuration, if any
• WINS configuration, if any
• Location
• Protocol configuration

Services
• Windows NT domain structure, if any
• Active Directory structure, if any
• DNS naming scheme
• WINS configuration
• DNS software and version, if not the server’s native DNS service

Peripherals
• Name
• Usage
• IP address, if any
• Server that provides the service for the peripheral
• Location

Workstations
• Operating system
• IP address, if any, or if using DHCP
• User(s) that use the workstation
• Location

Users
• Name
• ID
• Location
• Administrative powers, if any

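The checklist above maps naturally onto simple structured records. As one hypothetical approach, the Python sketch below captures server inventory entries so they can be filtered later, for example to find every DNS server before the Active Directory rollout; all names and addresses in it are invented examples.

```python
# A minimal inventory record for servers, mirroring the checklist fields.
# All server names and addresses below are invented examples.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    location: str
    operating_system: str
    ip_address: str = ""
    services: list = field(default_factory=list)

inventory = [
    Server("MUC-PDC1", "Munich", "Windows NT 4.0", "",
           services=["PDC", "DNS", "WINS"]),
    Server("SYD-BDC1", "Sydney", "Windows NT 4.0", "",
           services=["BDC", "DHCP"]),
]

# Pull out every server offering DNS, since those matter most for AD planning.
dns_servers = [s.name for s in inventory if "DNS" in s.services]
```

The same pattern extends to peripherals, workstations, and users; even a spreadsheet with these columns serves the purpose, as long as the fields are captured consistently.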
Expect the Unexpected As stated in the beginning, most networks have grown over time and as a result, are hybrids of various topologies. When you inventory each location, you are bound to run into some unique configurations. Perhaps you’ll find someone using an archaic operating system on a server just to use a legacy application. For example, I once found a MUMPS server running a database application at a financial company, and at a manufacturing company, I discovered a workstation that was running DOS because a DOS application was custom written to move a mechanical arm and no one had the original code, nor did they have the specifications for the mechanical arm in order to write a new application. In another company, I found that the main DNS server was a UNIX version of BIND that wasn’t compatible with the Active Directory, but it was required for use with another application. Regardless of what you discover, there is likely some way to overcome the challenge. In the MUMPS situation, the database application was migrated to a SQL Server. In the DOS situation, the workstation was left unchanged. In the DNS situation, we created a subdomain structure for DNS just to incorporate the Active Directory. Just make certain that you incorporate enough time in your project schedule as a cushion for handling the unexpected challenges that come your way.

Creating an Active Directory Hierarchy

Once you have a clear picture of your organization’s current environment, you are ready to design your new Active Directory hierarchy. This hierarchy will contain, at a minimum, a forest with a root domain. Depending upon your organization’s needs, you may have child domains and multiple namespaces, configured in several domain trees. The larger the organization and the more complex its needs, the more intricate the Active Directory forest will become.

Planning for your Active Directory Hierarchy

The Active Directory hierarchy of domains within a forest is a key component of the exam. You should expect to see questions that test your knowledge of when, why and where to create new domains. In real life, design of an Active Directory forest and its domains is often based more so on politics and preferences than it is on the design demands of the network environment. Keep in mind that the purist’s viewpoint – based on actual requirements – is how you should approach all Active Directory design scenarios. These are: • Begin with a single forest. •

Create a single root domain using the DNS namespace at the smallest level for the organization. For example, if the company’s name is Example Interiors, Inc. and they have registered the domain name for, then you should use as the root domain of the forest. (By contrast, in real life, you might not want your website’s domain name to be integrated with your secure production Active Directory forest’s root domain. In fact, you might want to use a subdomain of, such as as the forest’s root domain, or you might prefer a different name altogether, such as eii.local.)

When there is a physical discontinuity in the network, you should create a new domain as a subdomain of the root domain. For example, if you have a production plant in South America with intermittent network connectivity to the rest of the network, you should create a subdomain for that plant.

When there is a need for a new security policy for a set of users, you should create a new domain. For example, the users on the network who work on government contracts will require a very strict security policy while users who work on civilian contracts will not. Therefore, you should create two subdomains. (By contrast, in real life and depending upon your government contracts, you might even be forced to create a different forest for such workers, or you might be able to apply that security policy via group policies to a specific OU.)

• When there are specific administrative requirements in a scenario, pay attention to the clues in the question about whether the need is for separation or delegation. In the case of separation of administration, create a subdomain. In the case of delegation of administration, create an OU and delegate its administration.
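The subdomain-versus-OU decision rule above can be sketched as a tiny lookup. This is a hedged illustration only: the function name and requirement strings are invented for this example and are not part of any Microsoft tool.

```python
# Hedged sketch: map the design requirements discussed above onto the
# recommended Active Directory structure. The function and category
# strings are invented for this example.

def placement_for(requirement: str) -> str:
    """Return the design choice suggested for an administrative need."""
    rules = {
        "physical discontinuity": "new subdomain",
        "distinct security policy": "new subdomain",
        "separation of administration": "new subdomain",
        "delegation of administration": "new OU with delegated control",
    }
    return rules.get(requirement, "keep in existing domain")

print(placement_for("separation of administration"))   # new subdomain
print(placement_for("delegation of administration"))   # new OU with delegated control
```

Note how only delegation avoids a new domain: everything else on the list above is a reason to add a subdomain.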

Before You Start

During this exercise, you should make certain that you have all the information you gathered with you; you will be referring to it during the design. It is also helpful to have the contact information for administrators throughout the network.

At the start, you should know what a forest is, what a domain is, and how they will affect your design. The forest is the largest administrative boundary for users and computers in the network, and it logically groups one or more domains together. Even though most organizations require only a single forest, the first thing you should decide is how many forests your organization needs. The decision to have multiple forests should be limited to whether you need:
1. Multiple schemas
2. Administrative separation
3. Organizational separation
4. To work around connectivity issues

A schema lists and defines the types of objects and attributes that are included within the Active Directory database. The schema includes object types such as user accounts, and attribute types such as password or phone number. When a new object is added to the Active Directory, it is created according to the recipe within the schema that defines what that object should be and which attributes it will include. When you add new types of objects and attributes to the Active Directory schema, you are said to be “extending the schema.” For example, when you install Microsoft Exchange 2000 Server or later, you will have new objects in the Active Directory, such as mailbox information. Without extending the schema, the mailbox information is simply not available. If your organization needs a test domain, for use in a lab and to test applications before installing them on the regular network, you should probably consider this a need for a separate schema and create a separate forest for the testing lab.

An administrative need for separation is sometimes a reason to have multiple forests.
Keep in mind that multiple forests will increase the overall administration of the organization, and this reason to create additional forests is usually driven by organizational politics more than actual need. Another cause for multiple forests is organizational separation. In this scenario, more than one organization may share the network. A joint venture, for example, may have users who come from one or more businesses; because the venture is a separate entity from each of the participating businesses, providing it with its own forest is a sound security strategy. Finally, if your network has physical discontinuity between segments such that there is no connectivity, you will probably be forced to have a separate forest at each site, or you should plan to put a connection in place. For example, let’s imagine that our example company builds a large satellite office in the middle of South America in a location that has only dialup lines with poor connectivity. This is a situation that might warrant a separate forest.

Forest Root

For each forest in your design, you should decide the name of the forest root. This is a critical decision because domain names are closely integrated with the DNS naming scheme, which means that the DNS naming scheme should be planned at approximately the same time as the names of your domains. The forest root domain provides its name to the entire forest. For example, if you have a DNS naming scheme in which a registered public name is used for the Web and you plan to use contoso.local for the internal organization, then making the root domain contoso.local means the forest is named contoso.local.

The forest is the largest administrative boundary in the Active Directory. There are few reasons to have multiple forests: the need for multiple schemas, the need for separate global catalogs so that organizations remain logically separate, or connectivity problems that prevent communication between domain controllers.

At the creation of the forest root domain, the first domain controller takes on all operations master roles and the Global Catalog server role. The schema is created using default settings. The installation creates the NTDS.DIT file that holds the Active Directory domain information, along with the default objects within the domain. The forest at its simplest is a single domain, but it can consist of more than one domain. The domains are typically organized in the structure of domain trees, formed by the contiguity of their namespaces.

EXERCISE 2.02 SELECTING A FOREST ROOT DOMAIN NAME

Look at the DNS names that you will be using. Our example company uses “example.local” for its internal DNS naming scheme. Given that the company wants to continue using this naming scheme, the forest root domain can be example.local. Keep in mind, however, that if the company wanted to have a separate DNS name for the Active Directory, it could use sub.example.local, or anothername.local, as the forest root domain name. In our example, though, we will use the “example.local” DNS name for the root, and the resulting design would resemble Figure 3.7.

Figure 3.7 The Forest Root Domain is the Start of the Design and Planning of the Active Directory Hierarchy


example.local Forest

Child Domain

The next task in your plan is to determine whether to have child domains, and then to determine their placement and names. The domain plan will follow the DNS namespace, which means that you should have a good idea of the namespace you intend to use. While there is a trust relationship between the parent and child domains, the administrator of the parent domain does not have automatic authority over the child domain, nor does the child domain’s administrator have authority over the parent domain. Group policies and administrative settings are also unique to each domain.

In our example company, the original scheme has a single Windows NT domain. However, let’s consider that both Paris and Sydney are requesting separate domains. Paris wants a separate domain for the research and development department, which is designing a new e-commerce application requiring logon authentication by extranet users; Paris wants to place that application in its own domain, under a name it will register with InterNIC. Sydney has had a significant growth rate and wants to establish its own domain for administrative purposes. The Sydney domain will become part of the example.local namespace as a subdomain, which will be called sydney.example.local. Note that a child domain does not need to be in the same namespace in order to be a child of the forest root. However, any other domain is only a child domain of the upper levels of its own namespace, which means that the Paris e-commerce domain is not a child domain of sydney.example.local, or vice versa. This design is shown in Figure 3.8.

Figure 3.8 This Forest has Three Domains in its Hierarchy



example.local Forest

You should ensure that there is a need for each domain in each forest. In our example, the need for Sydney to have a separate domain is driven by its growth rate and need for administrative separation. By contrast, Paris’ need for a separate domain is not for administration of all of Paris’ users, but for an application. The design could just as easily have made the Paris e-commerce application’s domain a separate forest, with Sydney’s users remaining part of the single domain just as they had been in the past Windows NT domain. Remember that design decisions are not set in stone; they are based on the discretion of the designer as well as the needs expressed by users and administrators. Child domains should be considered whenever you run into the following issues:
• A location communicates with the rest of the network via the Internet or dialup lines. The intermittent connectivity drives a need for a separate domain.

• A group within the organization requires its own domain-wide security policies. This is probably a poor reason, because group policies can be assigned to separate Organizational Units (OUs) to achieve the same effect.

• There is a need for administrative separation for a group or location. Delegation of administrative duties can overcome many of these claims, so it is not always necessary to create a separate domain. Often, this is the need given when in fact the reason is political.

Whenever deciding to create additional domains, remember that each additional domain adds administrative overhead and increased replication traffic, both of which can result in higher costs.

Domain Trees

A domain tree is simply a set of domains that form a contiguous namespace. For example, if you have the four domains example.local, set1.example.local, set2.example.local, and second.set1.example.local, you have an entire domain tree. If you have another domain in the forest whose name lies outside the example.local namespace, then it is outside the domain tree.
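The contiguity rule can be illustrated with a short sketch. The function name here is invented for this example and is not any Active Directory API: a domain is inside a tree exactly when its DNS name chains down from the tree’s root.

```python
# Illustrative only: test domain-tree membership by DNS suffix contiguity.

def in_tree(domain: str, tree_root: str) -> bool:
    """True if `domain` sits inside the contiguous namespace of `tree_root`."""
    return domain == tree_root or domain.endswith("." + tree_root)

for name in ("example.local", "set1.example.local",
             "second.set1.example.local", "other.example.com"):
    print(name, in_tree(name, "example.local"))
```

All four domains from the text pass the test; a domain in a different namespace fails it and therefore forms (or joins) another tree in the forest.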

Configuring Active Directory

Before you configure the Active Directory, you need to know which servers are going to become domain controllers and in which domain they will be placed. When installing, you must install the domain controllers within the root domain first, and then the domain controllers in the child domains, working your way down each domain tree. Once a domain controller has been installed, you can begin configuring the way that the database will function to meet your objectives. One of the things that you can configure is Active Directory application directory partitions.

Keep in mind that the Active Directory is a data store that contains information about users, groups, computers, and other network services and resources. Each domain controller contains a copy of the Active Directory data store. There are four different types of partitions in the Active Directory data store:
• Domain Contains information about the objects that are placed within a domain.

• Configuration Contains information about the Active Directory’s design, including the forest, domains, domain trees, domain controllers, and global catalog.

• Schema Contains description data about the types of objects that can exist within the Active Directory.

• Application Contains specialized data connected with specific applications. This partition type is new to the Active Directory and is intended for local access or limited replication. The application partition must be specially created and configured; it is not available by default.

The data itself is contained within a file named NTDS.DIT, which resides on each domain controller. A domain controller’s NTDS.DIT file includes only the information for the domain controller’s own domain, not any other domain.

Application Directory Partitions

Application directory partitions are new to the Active Directory. When you configure an application directory partition, the data connected to a specific application’s directory is stored for use by the local application and connected to the Active Directory. Because many applications take advantage of simple directory data, this information can be stored and indexed with the Active Directory data. However, this application data is not needed for much of the administration of the network, nor is it always necessary to replicate it across the entire Active Directory network. For example, imagine that Sydney has implemented a SQL application that stores data within the Active Directory. The only users who take advantage of the SQL application are located in Sydney, so it is not necessary to replicate that data to Munich or Paris. This is where an application directory partition can ensure that the Wide Area Network (WAN) link is not overwhelmed by unnecessary replication traffic.

The configuration principles are simple. Consider that the Active Directory is a large database, and an application directory partition is a smaller database that can be indexed to the Active Directory. If you have information that you want to keep locally, including extensions to the schema, you can place that information within an application directory partition. In addition, a single domain controller can contain multiple instances of application directory partitions.

EXERCISE 2.03 INSTALLING A NEW ACTIVE DIRECTORY PARTITION

To install a new application directory partition, follow these instructions:
1. Click Start | Run.
2. Type CMD in the command line and press Enter to open a command prompt window.
3. At the prompt, type NTDSUTIL.
4. A prompt for the NTDSUTIL tool appears. Type DOMAIN MANAGEMENT.
5. At the next prompt, type CONNECTION.
6. Next, type CONNECT TO SERVER servername, where servername is the DNS name of the domain controller that will contain the new partition.
7. Type QUIT to return to the domain management prompt.
8. Type CREATE NC partitionname servername, where partitionname is the name of the application directory partition in the format dc=newpart,dc=example,dc=local (if you were creating a partition named newpart.example.local), and where servername is the DNS name of the domain controller that will contain the new partition.
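Step 8 expects the partition name in distinguished-name (dc=...) form rather than dotted DNS form. The helper below is a hypothetical convenience for building that string; it is an illustration only and is not part of NTDSUTIL.

```python
# Illustrative helper: convert a dotted DNS name into the dc=... form
# used for the partition name in the CREATE NC step above.

def dns_to_dn(dns_name: str) -> str:
    """Each DNS label becomes one dc= component, in the same order."""
    return ",".join("dc=" + label for label in dns_name.split("."))

print(dns_to_dn("newpart.example.local"))  # dc=newpart,dc=example,dc=local
```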

Managing Partitions

Application directory partitions are interconnected with the Active Directory, which means that they can be managed with the same tools as the Active Directory. As you can already see, application directory partitions are created using NTDSUTIL, an Active Directory utility. Ntdsutil is also used to delete application directory partitions, or to create replicas (copies) of a partition on another domain controller. In addition, you can use the LDP.exe utility to manage an application directory partition using Lightweight Directory Access Protocol (LDAP) commands, and you can use the ADSI (Active Directory Services Interface) Edit tool.

Naming Partitions

When you have multiple instances of application directory partitions running on a single computer, each instance needs a unique name and a different port.

Replication

As we stated earlier, each domain controller contains a set of partitions of the Active Directory. Domain controllers within the same domain contain replicas of the same partition. Replication is the process of ensuring that data is up to date across all replicas. Any data that has been changed, such as a new password for a user, must be copied to all other replicas of that same partition.

The Active Directory uses a multimaster model for replication: each domain controller is equal to all other domain controllers. This means that an administrator can add new objects, delete objects, or make changes to existing objects on any domain controller. Then, when replication takes place, that domain controller transmits the changes to its peers.

Sites are used for efficiency in replication. A site is considered a set of well-connected IP subnets, but it is manually configured by an administrator. “Well-connected” is a concept that usually depends upon the network administrator’s or designer’s discretion. In our example, there are three locations – Munich, Paris, and Sydney. Of these locations, Paris is fairly small and has a full E1 pipe to connect to Munich. Paris could be made its own site, or it could be placed within the Munich site. Sydney, with its size and growth rate, would probably be best as a separate site.

The Knowledge Consistency Checker (KCC) is a process that runs on each domain controller every 15 minutes to automatically create a replication topology, selecting which domain controllers to replicate with and when. This is based upon the configuration that you specify for sites within the Active Directory Sites and Services console. When you manually specify certain items, such as a preferred bridgehead server, the KCC will not override your configuration.

Configuring Replication of an Active Directory Application Directory Partition

Replication of Active Directory application directory partitions takes place between the domain controllers that hold the partition and its replicas. If there is only a single copy of the partition, the data does not replicate; without a replica, the data is not fault tolerant. To configure replication, you simply create a replica of the partition, referencing the partition by its distinguished name as it appears in X.500 naming. You reference the domain controller, however, by its fully qualified domain name (FQDN) as it appears in DNS naming. The process for adding a replica of an application directory partition is:
1. Open a command prompt by clicking Start | Run, typing CMD, and pressing Enter.
2. Type NTDSUTIL at the command prompt and press Enter.
3. Type DOMAIN MANAGEMENT at the prompt and press Enter.
4. Type CONNECTION and press Enter.
5. Type CONNECT TO SERVER domain_controller_name and press Enter.
6. Type QUIT and press Enter.
7. Type ADD NC REPLICA application_partition_name domain_controller_name and press Enter.

Domain Controllers

DNS is integral to the Active Directory and must be configured on the server in order for the Active Directory to be installed. When DNS is not configured with the correct resource records for the new domain controller (or not configured with Dynamic DNS enabled for the future root domain’s DNS zone), the Active Directory wizard will prompt you to configure it, or to allow the wizard to install and configure DNS as a service on the new domain controller. Once you have completed the installation of the first domain controller in the root domain of the forest, you have the following implementation tasks:
• Install the remaining domain controllers within the root domain, if any.

• Create the child domains, if any, by installing the domain controllers for each of them.

• Implement application data partitions, if needed.

• Install and configure additional domain controllers, as needed.

• Set the functional level of the domain(s).

• Establish trust relationships, as needed.

When Windows Server 2003 is installed on a new server, the server automatically becomes a standalone server. It will be able to join a domain as a member server, share files, share printers, and provide applications. For all that, though, you still don’t have an Active Directory forest with a root domain. By contrast, when you install Windows Server 2003 on an existing domain controller, it will upgrade the server’s operating system and then automatically begin the Active Directory wizard. If the domain controller you are upgrading is a Windows 2000 server, the upgrade is automatic. If the domain controller is a Windows NT PDC or BDC, the Active Directory wizard begins so that you can promote the server to a domain controller and configure it anew.

NOTE Before you install the Active Directory, you should make certain that the file system you are using is NTFS. You can convert a volume’s file system with the command convert <drive>: /fs:ntfs.

When you are ready to install the Active Directory, you will use the Active Directory Installation Wizard to promote a standalone server to domain controller status. The first domain controller that you install is installed into the root domain of the forest. The wizard is initiated by

typing DCPROMO at the command prompt. You can also reach this wizard by following these steps:
1. In the Manage Your Server window, select Add or Remove a Role, as shown in Figure 3.9.

Figure 3.9 The Manage Your Server Console

2. In the resulting dialog box, click Next.
3. The computer will locate the services that are currently configured and display those, as well as the ones that are available to be configured. From this list, select Domain Controller (Active Directory), as displayed in Figure 3.10, and click Next.
4. Click Next at the following screen. The DCPROMO wizard will then begin.

Figure 3.10 Selecting the Option to Initialize the Active Directory Wizard

If you currently use a Windows NT network, you will recognize the benefits of DCPROMO. In Windows 2000 and Windows Server 2003, you can take a standard server and promote it to a domain controller without having to reinstall the network operating system (NOS). The same is true of demotion: you can remove the Active Directory from a domain controller and demote it to a standard file server without reinstalling the NOS. Under Windows NT, the only way to change a server’s role in the domain was to remove and reinstall the NOS.

There are several ways of configuring the domain controller. You first must know which domain the domain controller will belong to, and you should have DNS fully configured and functioning before you start. Given the extensive use of Service Resource Records (SRV RRs) in DNS, the optimal configuration for DNS is to have Dynamic DNS enabled so that the new domain controller will register its services in the DNS zone without requiring you to input them manually. Before you begin a domain controller installation, gather the information that you will need for the server:
• Server name

• Domain name
• Directory for placement of the Active Directory file
• Directory for placement of the Active Directory logs
• Directory for placement of the SYSVOL, which contains replicated data
• Domain Administrator’s password
• Directory Services Restore Mode password

EXERCISE 2.04 INSTALLING THE FIRST DOMAIN CONTROLLER IN THE FOREST

The domain controller’s installation is merely the first step toward configuration. After you have completed the Active Directory wizard, you will be ready to configure trust relationships, sites, user accounts, computer accounts, and group policies. To begin:
1. Click Start | Run. Type DCPROMO in the box and press Enter.
2. You will see the Active Directory wizard’s welcome screen. Click Next.
3. Click Next to bypass the warning about compatibility issues with Windows 95 and older Windows NT clients.
4. Select Domain Controller for a New Domain. Click Next.

Figure 3.11 Selecting the Domain in a New Forest Option

5. Select Domain in a New Forest, as shown in Figure 3.11. Click Next.
6. Type in the DNS name for the root domain of your forest. Click Next.

7. Type in the NetBIOS name of the domain, as shown in Figure 3.12, and click Next. Do not give this domain the same name as a Windows NT 4.0 domain on the network, or you will have a conflict.

Figure 3.12 Selecting a NetBIOS Name for the New Domain

8. Verify the directory locations for Active Directory files and click Next.
9. Verify the location for the SYSVOL share. Click Next.
10. DNS will be tested, as shown in the DNS Registration Diagnostics displayed in Figure 3.13. If the test fails, you will be asked to select an option to configure DNS. Click Next.

Figure 3.13 The Improved Active Directory Wizard’s DNS Registration Options

11. Select the permissions level for the domain controller. Click Next.
12. Type in the password for restoring Active Directory services on this domain controller. Don’t lose this password! Type in the password confirmation. Click Next.
13. Verify the summary screen options. Click Next. The Active Directory wizard will take some time to complete the installation. When it is finished, click the Finish button to close the wizard.

Establishing Trusts

Trust relationships are necessary for an administrator to grant users from other domains, Kerberos realms, or entire forests rights to local resources. A trust works simply by enabling the administrator to grant rights: without a trust relationship in place, rights cannot be granted at all, and even with a trust in place, a resource cannot be accessed until rights to it have been granted.

Types of Trusts

There are several types of trusts in an Active Directory forest:
• Implicit Kerberos trusts within the forest

• Explicit external trusts with Windows NT 4.0 domains, domains within other forests, and Kerberos realms

• Forest trusts

• Shortcut trusts

The standard trust relationship in an Active Directory forest is the implicit Kerberos trust. This type of trust is bidirectional and transitive. Bidirectional means that when Domain A trusts Domain B, Domain B also trusts Domain A. Transitive means that when Domain A trusts Domain B and Domain B trusts Domain C, Domain A also trusts Domain C.

When there are Windows NT 4.0 domains, Kerberos realms, or multiple forests within an organization, the explicit external trust relationship can be used to facilitate the granting of rights. An explicit external trust relationship is unidirectional and non-transitive. This means that when Domain A trusts Domain B, Domain B does not have to trust Domain A in return. In addition, if Domain A trusts Domain B, and Domain B trusts Domain C, it does not follow that Domain A trusts Domain C. In fact, explicit external trusts in the Windows Server 2003 Active Directory act exactly the same as the trust relationships between native Windows NT 4.0 domains.

For example, suppose an organization has two forests: one is the network’s main forest, and the other is used for research and development. Users in the lab must still access resources in the main forest, although they typically log on and access resources in the research and development forest daily. Therefore, an explicit trust can be created between the users’ domain and the resource domain in the main forest. The resource domain in the main forest would have to trust the lab users’ domain so that rights to resources in the resource domain can be granted to the lab users. Because the trust relationship is unidirectional and non-transitive, the users will not be able to access resources in any other domain unless additional trusts are created.

Forest trust relationships are new to the Active Directory under Windows Server 2003.
Since forests can contain multiple domains holding both users and resources, a complex set of explicit external trust relationships used to be the only way to give users in one forest access to resources in the domains of another forest. Imagine, for example, that an organization has two forests – one used for lab testing and the other used for standard business applications and resources. Users in the lab testing forest could not access mission-critical applications such as e-mail, or files and printers, without explicit trust relationships in which each domain in the standard forest trusted the domains in the lab testing forest.

The forest trust relationship in the Windows Server 2003 Active Directory makes establishing trust relationships between the domains in one forest and those in another fairly simple. The forest trust is a unidirectional, transitive relationship between the domains in one forest and the domains in a second forest, created through a single trust link between the root domains of each forest. When the trust is created such that Forest A trusts Forest B, the users in any domain within Forest B can be granted rights to access resources within any domain within Forest A. However, this trust will not work in the opposite direction; a separate trust would need to be created whereby Forest B trusts Forest A.

The transitive nature of this type of trust applies only to domains, because any domain within the trusting forest trusts any domain within the trusted forest. The trust is not transitive between entire forests. For example, if

Forest A trusts Forest B and Forest B trusts Forest C, Forest A does not trust Forest C. However, any domain within Forest A will trust any domain in Forest B because of the single trust relationship established between the root domain of Forest A and the root domain of Forest B.

The shortcut trust is created between two domains within a single forest. You might wonder why this is necessary, since transitive Kerberos trusts already connect all the domains within a forest. The need for a shortcut trust appears only in large, complex forests with multiple domains in multiple domain trees: the shortcut trust speeds up the resolution of trust paths between domains that exist deep within two different domain trees.
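The difference between transitive and non-transitive trusts can be modeled in a short sketch. Everything here is invented for illustration, including the function names and the convention of writing a trust link as a (trusting, trusted) pair.

```python
# Toy model of the trust rules above. A trust link is a (trusting, trusted)
# pair. Implicit Kerberos trusts chain transitively; explicit external
# trusts count only when the link is direct.

def transitive_reach(links, trusting, trusted):
    """Follow trust links transitively (implicit Kerberos behavior)."""
    seen, frontier = set(), {trusting}
    while frontier:
        frontier = {b for (a, b) in links if a in frontier} - seen
        seen |= frontier
    return trusted in seen

def external_reach(links, trusting, trusted):
    """External trusts are non-transitive: only a direct link counts."""
    return (trusting, trusted) in links

links = {("A", "B"), ("B", "C")}
print(transitive_reach(links, "A", "C"))  # True: A->B->C chains
print(external_reach(links, "A", "C"))    # False: no direct A->C trust
```

With the same two links, the Kerberos-style rule lets Domain A trust Domain C, while the external-trust rule does not, which is exactly the contrast the text draws.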

EXERCISE 2.05 CREATING A FOREST TRUST RELATIONSHIP

In order to create a forest trust relationship, you must have two forests whose root domains can communicate with each other. Both forests must be set to the Windows Server 2003 forest functional level, described in the following section. To create the forest trust:
1. Click Start | Administrative Tools | Active Directory Domains and Trusts.
2. In the left pane, navigate to the root domain of the forest.
3. Right-click the root domain and select Properties from the popup menu.
4. Click the Trusts tab.
5. Click New Trust to start the Trust Wizard.
6. Click Next at the welcome screen.
7. For the trust name, type the DNS name of the root domain of the other forest. Click Next.
8. Select Forest trust on the trust type dialog. Click Next.
9. Select whether the direction of this trust will be one-way (and if so, whether it is an incoming trust or an outgoing trust) or two-way. Click Next.

Using a Forest Trust for a Lab Environment

One of the major changes for the Active Directory was the addition of the forest trust. In the Windows 2000 Active Directory, Microsoft appeared to view a forest as a single entity that should stand alone and encompass an entire organization’s internetwork. Real life, however, intruded upon that vision. Organizations created multiple forests for a variety of reasons, not the least of which was research and development. Even when an organization created a single forest for its production users, it typically created a test forest for application development, deployment testing, and other research. The test forest was usually much smaller in numbers of users, but often mirrored the same number of domains and had a similar namespace.

Given the many changes that a lab forest was often put through, users who were members of a lab forest found that they had to maintain two user accounts – one in the lab and one in the standard forest – in order to access resources such as files, e-mail, and business applications that existed within the production forest. One of the ways that organizations attempted to make resource access easier for the lab forest users was to create explicit external trust relationships between all the domains within the production forest and the domains within the lab forest. If the lab forest underwent domain changes, new trust relationships had to be established.

Through the use of a forest trust relationship, it is a simple matter to create a single trust relationship between the production forest and the lab forest. Regardless of how the domains change within either forest, the trust relationship remains in place and provides the path for all lab users to access the business applications they need, without logging off of one forest and then logging back onto the other.

Evaluating Connectivity

When you create a trust relationship of any sort, you must have connectivity between the domains and/or realms involved, or the trust relationship cannot be created. Ensuring that you can resolve the names of the domains involved via the Domain Name System (DNS) is one of the first steps in evaluating connectivity. A trust relationship itself requires little bandwidth, but to enable access to resources you will need available bandwidth. When there is no connection between two domains, the trust cannot be created: the domain will not be recognized, and you will be prompted to indicate whether the DNS name you provided was a Kerberos realm.
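As a minimal pre-flight sketch, written in Python purely for illustration (it is not a Microsoft tool), you could confirm that the other domain’s DNS name resolves from the local host before attempting to create the trust:

```python
# Hedged sketch: check that a DNS name resolves from this host before
# attempting to create a trust with the domain it names.

import socket

def resolves(name: str) -> bool:
    try:
        socket.gethostbyname(name)
        return True
    except OSError:  # socket.gaierror (a subclass) signals resolution failure
        return False

print(resolves("localhost"))
```

A name that fails this check will also fail in the Trust Wizard, which is when you would be asked whether the name refers to a Kerberos realm instead.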

Setting Functionality

A domain in a Windows 2000 Active Directory forest had two options: it could run in mixed mode (the default) or in native mode. These modes have evolved into domain functional levels within the Windows Server 2003 Active Directory. Furthermore, there is now a set of forest functional levels that you can achieve. We will look at both domain and forest functionality in this section. You must have certain information about the network environment available to you before you set a functional level for a domain or for the forest:
• Operating systems running on the domain controllers, both current and future
• Whether you plan to use Universal security groups
• Whether you plan to nest groups
• Whether you need security ID (SID) history
• Whether you intend to have a forest trust
• Whether you might need to deactivate schema classes

This information will help you decide which domain and forest functional levels you should use. Even if you have installed only Windows Server 2003 domain controllers, you should not raise your forest functional level to Windows Server 2003 if you plan to install or promote domain controllers running older operating systems in any of the forest's domains. After the forest functional level has been raised, you cannot add domain controllers running Windows NT 4.0 or Windows 2000 to any domain in the forest.

Forest

There are three forest functional levels:
• Windows 2000
• Windows Server 2003 interim
• Windows Server 2003

The Windows 2000 forest functional level provides the same services as a Windows 2000 forest. It can contain domains at any domain functional level, and it can contain domain controllers running the Windows NT 4.0, Windows 2000, and Windows Server 2003 operating systems. The default forest functional level is Windows 2000. The Windows Server 2003 interim forest functional level is a special functional level used for forests that consist solely of Windows Server 2003 domain controllers and Windows NT 4.0 backup domain controllers. The Windows Server 2003 forest functional level is the highest forest functional level; it can contain only Windows Server 2003 domain controllers and domains that are at the Windows Server 2003 functional level. The Windows Server 2003 forest functional level provides the following capabilities:
• The ability to create a forest trust
• Domain renaming capability
• The InetOrgPerson object, designed for Internet administration
• Schema class deactivation
• Improved global catalog and standard replication

EXERCISE 2.06: RAISING THE FOREST FUNCTIONAL LEVEL

Once you raise a forest functional level, you cannot change it back. In addition, you cannot add any domain controllers that are not running Windows Server 2003. To raise the forest functional level:
1. Open the Active Directory Domains and Trusts console by clicking Start | Administrative Tools | Active Directory Domains and Trusts.
2. In the left pane, right-click the top node.
3. Select Raise Forest Functional Level from the popup menu.
4. In the dialog that displays, select the forest functional level.

Domain

There are four domain functional levels available within the Windows Server 2003 Active Directory:
• Windows 2000 mixed
• Windows 2000 native
• Windows Server 2003 interim
• Windows Server 2003

The Windows 2000 mixed domain functional level is the default for all new domains, and is essentially the same as a Windows 2000 mixed mode domain under the Windows 2000 Active Directory. This type of domain can have domain controllers running Windows NT 4.0, Windows 2000, and Windows Server 2003. The Windows 2000 native domain functional level allows Windows 2000 and Windows Server 2003 domain controllers. This functional level offers the use of Universal security groups, group nesting, and SID history. The Windows Server 2003 interim domain functional level is intended only for use when upgrading a Windows NT 4.0 domain directly to Windows Server 2003; it supports only Windows NT 4.0 and Windows Server 2003 domain controllers. The Windows Server 2003 domain functional level can be used only when all domain controllers within the domain are running Windows Server 2003. When the domain has been raised to Windows Server 2003, it supports domain controller renaming, group conversion, SID history, full group nesting, and universal groups as both security groups and distribution groups.

To raise a domain's functional level, you begin in the Active Directory Domains and Trusts console:
1. Click Start | Administrative Tools | Active Directory Domains and Trusts.
2. In the left pane, click the domain whose functional level you wish to raise.
3. Right-click that domain.
4. Select Raise Domain Functional Level from the popup menu.
5. In the resulting dialog, click the drop-down arrow and select the new domain functional level.
6. Click Raise.

Global Catalog Servers

Each forest uses a single Global Catalog across all of its domains. The Global Catalog acts as an index: it contains a small amount of information about the objects that exist across the entire Active Directory forest. Another task relegated to the Global Catalog is processing logons, because the logon and authentication process must be able to determine access rights by querying a user's universal group memberships. Finally, the Global Catalog is instrumental when a user (or application) queries the Active Directory to locate objects.

The global catalog is an indexed data store of the objects that exist across the entire forest. It contains a partial copy of the objects within each domain so that users and applications can query objects regardless of their location within the forest. The global catalog stores only those attributes of each object that are likely to be searched on, such as a printer's location or a user's telephone numbers. This ensures that the size of the global catalog remains manageable, yet still provides a searchable database. A global catalog server is a special domain controller that contains a copy of the global catalog in addition to a full copy of its own domain database. The first domain controller in the forest is automatically a global catalog server. The global catalog enables:
• Querying of objects
• Authentication of user principal names, which take the form user@domainname
• Universal group membership information during logon

When you deploy Active Directory, you need to plan the placement of global catalog servers. In addition, when you determine that a global catalog server is not feasible for a location, you need to evaluate whether you should enable universal group caching so that users can log on.

Planning a Global Catalog Implementation

The global catalog is integral to the logon process. Not only is it involved in any user principal name (UPN) logon, where the user enters a UPN in the form user@domainname, but when a global catalog server is not available, a user's universal group memberships cannot be resolved and the user's actual permissions are not available. Global catalog servers are also accessed whenever a user or application queries the network to search for objects such as printers. Because the global catalog is so intertwined with a user's network interaction, you should plan carefully where to place global catalog servers. Like all planning activities, this requires that you understand the environment, including the underlying network, the users, and how the future Active Directory will be designed. To gain this understanding, you should gather the following documents and information about the organization before you begin your planning and design:
• Wide Area Network (WAN) and Local Area Network (LAN) maps
• Bandwidth consumption across slow links
• Current Windows NT domain and Active Directory domain configuration
• User information, including org charts, current IDs, and general information

The WAN and LAN maps will help you most during the planning process. With the global catalog being so integral to logons, you might think that the easiest thing to do is place a global catalog server at each location. However, that can increase your replication traffic, and it can cost quite a bit if you have many small offices that do not really need local servers, domain controllers, or global catalogs. The tradeoff you must make is based on performance and need. When you plan global catalog server placement, you should review the load distribution across the network as well as the failure rate of your WAN links. For example, if you have two sites connected by a T3 line and there are hundreds of users at each site, you would likely place a global catalog server at each site. The T3 line can withstand the replication traffic, and you would not want hundreds of users' logon and query traffic crossing a WAN link just to connect to the network. If you have a very small site where you will have a domain controller, you may still not want global catalog replication traffic crossing the WAN if the WAN link is a small pipe or is heavily utilized. You should consider the size of your Global Catalog database as well. A Global Catalog with more than five hundred thousand objects requires at least 56 Kbps to 128 Kbps of available bandwidth for replication. On a network with a Global Catalog of that size, there will likely be small offices with few users and a small WAN link that could not easily handle that much bandwidth. In these cases, you should look at enabling universal group membership caching, which we review in the following section. You should always remember these rules when you are planning your Global Catalog servers:
• The first domain controller that you install into the root domain of an Active Directory forest is a global catalog server.
• You can have only one Global Catalog data store in a forest. When you have multiple forests, you will not be able to combine their Global Catalog data. In addition, you will need to know which users access which forests, and plan the placement of global catalog servers for each forest.
• When users log on to the network or query the Active Directory to search for a resource, traffic is generated to a global catalog server or to a domain controller that has universal group membership caching enabled.
• In general, sites that have a domain controller can also maintain a Global Catalog server.
• The larger the forest, in terms of objects, the larger the Global Catalog data store will be. This in turn increases the size of replication traffic.
• Logon and query traffic across a WAN link has a larger impact on the network than replication traffic between sites.
• Users contact Global Catalog servers within their own site when logging on, browsing, or querying the network. If they cannot contact a Global Catalog server or a server with universal group membership caching enabled within their own site, they will contact a Global Catalog server in a remote site.
• The larger the number of Global Catalog servers at sites on the WAN, the higher the replication traffic across the WAN, but the lower the query and logon traffic.

You should look at WAN link failures and load distribution across the network when planning global catalog servers. Let's look at a network that spans four cities: New York, Phoenix, Los Angeles, and Dallas. The headquarters for this company is in New York with 1,600 users, and a large datacenter is in Phoenix with 433 users. Los Angeles is a sales office with nine users, and Dallas is a warehouse with 32 users. A T3 line connects New York and Phoenix, Frame Relay at 256 Kbps connects the Dallas warehouse, and a 56 Kbps line connects the sales office. Not only is the size of the pipe helpful information, but also its usage. Using a network traffic monitoring tool, such as Performance Monitor, you would find that these links have at least 30 percent available bandwidth at all times. Given just this information, you can determine that the headquarters in New York, with 1,600 users, is a good place for a Global Catalog server. In addition, the Phoenix datacenter with 433 users is another good location for a Global Catalog server. The link between these two sites runs at T3 speeds and has plenty of bandwidth available for replication between the Global Catalog servers. Whether to place Global Catalog servers at the Los Angeles and Dallas locations is another question. Given that both of these sites have relatively few users, the need for a Global Catalog server is probably small. In the event that the WAN link was down, there is very little that logging on to the local network would provide, unless there is a mission-critical application that requires network authentication. If the warehouse in Dallas had such an application, then a Global Catalog server would be needed in Dallas in case the WAN link failed.
For Global Catalogs with more than half a million objects, the bandwidth required for replication is between 56 Kbps and 128 Kbps available on the WAN link at all times. That much bandwidth is not available on the link between Dallas and New York; however, this Global Catalog will only reach about 10,000 objects, considering that there are a couple of thousand users, the same number of computers, plus mailboxes and other information. The Los Angeles sales office is another matter entirely: with so few users and a small link, the users can log on across the WAN, so there is no need to place a Global Catalog server at that office. The WAN design of the network will help you in placing global catalog servers. However, there will also be sites in larger internetworks that require additional global catalog servers. To decide the placement of multiple global catalog servers within a single site, you should look at the local area network information. You will need to know the LAN topology as well as the number of users and their usage requirements.
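The link sizing reasoning above can be reduced to a quick arithmetic check. This is a sketch under the chapter's own figures: a T3 is roughly 44,736 Kbps, the monitoring data showed at least 30 percent free bandwidth, and 128 Kbps is taken here as a conservative (upper-end) requirement for replicating a very large Global Catalog.

```shell
# Decide whether a WAN link has enough free bandwidth for Global Catalog
# replication, or whether universal group membership caching is the better fit.
gc_replication_ok() {
  link_kbps=$1; free_pct=$2; required_kbps=$3
  avail=$(( link_kbps * free_pct / 100 ))   # free bandwidth on the link
  if [ "$avail" -ge "$required_kbps" ]; then
    echo "GC replication feasible (${avail} Kbps free)"
  else
    echo "consider universal group membership caching (${avail} Kbps free)"
  fi
}
gc_replication_ok 44736 30 128   # T3 between New York and Phoenix
gc_replication_ok 56 30 128      # 56 Kbps link to the Los Angeles sales office
```

The T3 passes easily, while the 56 Kbps link leaves only about 16 Kbps free, which is exactly the situation where caching beats a local Global Catalog.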

When to Use a Global Catalog

You have very little choice about having a global catalog. A global catalog is automatically created upon the installation of the first domain controller in the root domain of a new forest. When you have multiple domains in the forest, the global catalog provides users a way of finding resources within other domains. The global catalog also provides universal group membership information when processing logons so that a user's credentials can be accurately determined. You can, of course, choose how many global catalog servers you have. When a forest has only a single domain, the need for a global catalog server is extremely small. Domain controllers automatically contain the information for the entire domain, so there is no need for an index of those same objects in a global catalog data store. The advantage of having a global catalog is realized when you have multiple domains in the forest, because it ensures that users within any domain can query the network for resources regardless of where those resources are located. The global catalog indexes information, and the indexed attribute set is configurable by the administrator so that only crucial data is included. When you have a global catalog server in a local site, logons and network queries are faster. The disadvantage of having a global catalog is really the additional traffic caused during replication, queries, browsing, and logons. You can overcome many of these traffic issues when you configure your sites and site links and select whether to use a global catalog server or to enable universal group caching on a domain controller.

Creating a Global Catalog Server

The process of creating a global catalog server is surprisingly simple. A global catalog server must be created on a domain controller; you cannot do so on a member server of the domain. If you have a member server that you wish to reconfigure as a global catalog server, you will first have to install Active Directory using the Active Directory Installation Wizard.

EXERCISE 2.08: CREATING A GLOBAL CATALOG SERVER
1. Log on to the domain controller as a member of the Domain Admins or Enterprise Admins group.
2. Click Start | Programs | Administrative Tools | Active Directory Sites and Services.
3. In the left pane, navigate to the site where the domain controller is located. Expand the site, then expand the Servers container, and finally expand the server itself.
4. Right-click the NTDS Settings object below the server.
5. Select Properties from the popup menu.
6. Check the box marked Global Catalog, as shown in Figure 3.14.

Figure 3.14 Creating a Global Catalog Server
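As a companion to the GUI exercise, the dsquery tool can list which domain controllers are advertising the global catalog (its -isgc switch filters for GC servers). The domain name below is hypothetical, and the helper only prints the command as a sketch; you would run it on a Windows Server 2003 system with the administration tools installed.

```shell
# Print the command that lists a domain's global catalog servers.
# "example.com" is a hypothetical domain name.
gc_list_cmd() {
  printf 'dsquery server -domain %s -isgc\n' "$1"
}
gc_list_cmd example.com
```

Running the printed command after completing the exercise is a quick way to confirm that the checkbox change actually took effect.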

Universal Group Caching

Global catalog servers have a heavy impact on network traffic during replication. Allowing users to log on and query the network across WAN links can be even more of a load, so there is a tradeoff when you place global catalog servers at sites around the network. When a user attempts to log on to the network, a Global Catalog server is contacted so that the user's membership in any universal groups can be resolved. This allows the logon process to determine the user's full rights and permissions. When a Global Catalog server is not available, the user's logon attempt is denied. However, the Windows Server 2003 Active Directory offers you a way to have your cake and eat it too: universal group membership caching.

When to Use UG Caching

Whether you decide to implement universal group membership caching or a global catalog server, you will need a domain controller at the site. This means that you will have a certain amount of replication traffic across the WAN link no matter what, so the main reason to do either is to localize the logon and query traffic. Let's look at a specific situation where it makes more sense to use universal group membership caching than a global catalog server. In this scenario, the forest is extensive, with multiple domains and over half a million objects throughout. The site where universal group membership caching will be enabled is small, with 50 users and a domain controller. The users all belong to a domain with fewer than 10,000 objects in the Active Directory. The WAN link is 56 Kbps and heavily utilized. The impact of users logging on to the network is taking its toll: the users' traffic travels across the WAN to contact a global catalog server to resolve universal group memberships, and the users complain of slow logons. To speed up logons, you can either enable universal group membership caching or enable the global catalog on the domain controller. Since the global catalog has over half a million objects, it requires between 56 Kbps and 128 Kbps for replication to take place, and the WAN link would not be able to carry that replication traffic. Therefore, this is the type of situation where the best option is to enable universal group membership caching. Another situation where universal group membership caching works well is when the global catalog is so large that it taxes the resources of a domain controller. If this is the case, you can either upgrade hardware and enable the global catalog on the domain controller, or you can enable universal group membership caching.

Configuring UG Caching

EXERCISE 2.09: ENABLING UNIVERSAL GROUP MEMBERSHIP CACHING

You configure universal group membership caching by enabling it for a site, rather than for an individual domain controller within the site. To do so:
1. Open the Active Directory Sites and Services console.
2. In the left pane, navigate to the site where universal group membership caching will be enabled.
3. Click the site.
4. In the right pane, right-click the NTDS Site Settings object.
5. Select Properties from the popup menu.
6. Check the box to Enable Universal Group Membership Caching, as shown in Figure 3.15.

Figure 3.15 Enabling Universal Group Membership Caching on a Domain Controller

Effects on Replication

After you enable universal group membership caching, the domain controller will replicate only its own domain data with replication partners. When a user first logs on, the domain controller contacts a global catalog server in another site to pull the user's universal group membership information, and then caches that information. Periodically thereafter, the domain controller refreshes the cached data; the default refresh period is every eight hours. You can configure this option within the Active Directory Sites and Services console.

Summary

Application directory partitions are intended for integrating the forest with applications that are implemented at certain locations in the network. Such an application would likely have the ability to integrate with the Active Directory, but because the application is required at only a small number of sites, the replication impact of that data would be too high for it to be part of a domain partition. Application directory partitions overcome this limitation by providing a locally implemented directory partition for the application that can be configured specifically to meet the needs of a set of users within the forest.

Forest trust relationships are added to the existing trusts: the implicit Kerberos trusts that exist between domains within a forest, the explicit external trusts that can be created with domains and Kerberos realms outside the forest, and the shortcut trusts that can be used to speed up resource access within a forest with multiple domain trees. You should have a solid understanding of how each of these trust relationships works, their transitivity and direction, and when you should implement each type.

The forest and domain functional levels are the new version of "native and mixed mode" domains. You should have a good understanding of how the forest and domain functional levels affect the features that you are able to implement. Being able to design global catalog server placement among the sites you have designed is a critical skill for a forest, since placement dictates how quickly users can log on, whether WAN outages will cause logon failures, and how much replication traffic will be transmitted across WAN links. In the Windows Server 2003 forest, you now have a new option to weigh: whether to use a global catalog server or to enable universal group membership caching. With the new features and functionality available in a Windows Server 2003 Active Directory forest, you will need a solid foundation in understanding their value, benefits, and design. Plus, you should practice configuring each of these features and perform tests to see how users may be affected by their implementation.

Solutions Fast Track

Designing Active Directory
• The forest root domain provides its name to the entire Active Directory forest.
• Design child domains where you need specific separations, whether driven by network discontinuity, business requirements, or administrative separation.
• Gather information, such as network topology maps and organization charts, about the current environment before making your design decisions.

Configuring Active Directory
• Application directory partitions can be added to the Active Directory for use by local applications using the NTDSUTIL utility.
• There are four types of trusts: the implicit Kerberos trusts between domains within a forest, the explicit trusts between an Active Directory domain and an external domain or Kerberos realm, the shortcut trust between domains within a forest, and the forest trust between the root domains of two Windows Server 2003 forests.
• The forest has three functional levels: Windows 2000, Windows Server 2003 interim, and Windows Server 2003.
• Domains have four functional levels: Windows 2000 mixed, Windows 2000 native, Windows Server 2003 interim, and Windows Server 2003.

Global Catalog Servers
• The Global Catalog is a data store with a partial copy of objects across all the domains within a forest.
• Global catalog servers process logons in order to provide the universal group memberships for a user and ensure that the user has the appropriate credentials at logon.
• In the absence of a global catalog server, and without universal group membership caching enabled for the site, a user's logon is denied.
• Universal group membership caching is enabled for an entire site, while the global catalog is enabled on individual domain controllers.

Frequently Asked Questions

Q: When you design the forest root domain, why is it such a big deal to select the right name?
A: The forest root domain will become the name of the forest. If you use a name that will be accessible via the Internet, you will have security issues. If you use a name that is not going to be recognized in your DNS scheme, your users will not be able to log on. If you misspell the name during installation, you will have to rename the domain and forest, either using the domain renaming tool (allowed only at the Windows Server 2003 forest functional level) or by reinstalling. If you upgrade an existing domain and make a serious naming error, you will have to recover your original domain and start from the beginning.

Q: When designing domains for a real network, there seem to be a whole lot of other reasons that people bring up for having more child domains than seem to be in the design rules. Why is that?
A: Politics are a major driver for creating additional separations within a business or organization. The reality is that you can design a single domain and probably achieve everyone's business requirements simply through a good OU and administrative delegation system. However, there is a sense of security when you have your "own domain," and many people will think up a variety of reasons to make that happen for themselves.

Q: Why do you need to have an organization chart when you design your domain hierarchy?
A: The org chart will give you an idea of the political separations within the organization. Even though you may be able to design a single domain, you may need the org chart later on for OU design within the domain.

Q: When would anyone need an application directory partition? There aren't really applications that use it yet.
A: True. Application directory partitions are new, which means that no one really uses them...yet. However, there are TAPI applications that have been developed to use the application directory partition. Plus, this type of partition offers developers a new way to utilize directory service data without directly impacting the main Active Directory.

Q: The forest trust could make things very easy to manage, but we already have a complex set of external trusts between domains in our Windows 2000 Active Directory forests. Should we change over when we upgrade?
A: That all depends upon your organization's needs. You should review which users need to access which resources and what type of security you need to have in place. From there, you can compare whether a forest trust will meet your needs or if you should continue with external trusts.

Q: What's the point of having so many domain and forest functional levels?
A: The domain and forest functional levels are a way to unlock the native capabilities of the Windows Server 2003 Active Directory. If you decide to leave everything at the default levels even though your domain controllers have all been upgraded, the Active Directory will not be able to take advantage of the new features that could be available, such as a forest trust relationship.

Q: Why does the global catalog appear to have more importance than before?
A: There are two reasons. First, the global catalog is an absolute requirement if you have multiple domains in your forest; planning for the global catalog is critical to ensuring that users can log on to the network. Second, there is a new feature, universal group membership caching, that can be implemented in place of the global catalog, so you need to know the differences between the two and when to use each.

Q: How do you go about creating a global catalog server?
A: Use the Active Directory Sites and Services console and locate the domain controller that you are going to turn into a global catalog server. Then, right-click the NTDS Settings of the domain controller to access the NTDS Settings Properties dialog and check the box for Global Catalog.
Q: What do I do if I want to change a global catalog server into a universal group membership caching server?
A: First, remove the global catalog from the server by unchecking the box in the server's NTDS Settings Properties dialog. Then, when you enable universal group membership caching, you will not be doing so for an individual server; you will be enabling it for the entire site that the domain controller is a member of. This is performed in the NTDS Site Settings Properties dialog of the site.


Chapter 4

Managing and Maintaining an Active Directory Infrastructure

Solutions in this chapter:
• Manage an Active Directory forest and domain structure.
• Manage trust relationships.
• Manage schema modifications.
• Managing UPN Suffixes.
• Add or remove a UPN suffix.
• Restore Active Directory directory services.
• Perform an authoritative restore operation.
• Perform a nonauthoritative restore operation.

Introduction

There may come a time in your environment when you will be adding or removing domains from your Active Directory structure. Events such as company mergers, branch closures, and other business-oriented events can trigger a need to reconfigure your structure to accommodate change. In these types of events, you may need to add or remove trusts between domains, add organizational units, or perform other administrative tasks that can have a huge impact on your structure. In this chapter, you will learn how to manage your Active Directory structure, including what tools are at your disposal for these management tasks. Along with these changes to your Active Directory structure, there may come a time when you realize that a change you made to your structure was incorrect. Unfortunately, there is no "Undo" command in the Active Directory tools. However, as with Windows 2000, the Active Directory restore tools are your best friend when these types of problems occur. In this chapter, you will learn the different types of Active Directory restores, and how to properly restore Active Directory. Let's begin this chapter with a discussion of the different ways that you can manage your Active Directory structure.

Choosing a Management Method

Microsoft has provided a number of tools to help you manage Active Directory. You can administer your Active Directory installation using Windows Graphical User Interface (GUI) tools, various command-line utilities, as well as more advanced scripting functions. Each method has certain advantages, so as we perform the many exercises in this chapter, we'll discuss both GUI and command-line procedures to accomplish each task. You'll notice that we'll focus primarily on the GUI interface, since this will likely be your tool of choice in your day-to-day operations. (Not to mention on the 70-296 exam!)


Using a Graphical User Interface

The most common means of administering your Active Directory infrastructure will be through the built-in GUI utilities that are added during the Active Directory installation process (dcpromo.exe). The Microsoft Management Console (MMC) centralizes the graphical tools that you will use to administer your Active Directory installation, as well as most other Windows Server 2003 components, into a single management console that can be run from an administrative workstation or the server itself. Similar to Windows 2000, the MMC provides a common interface and presentation for built-in Microsoft utilities, as well as an increasing number of third-party management tools. You'll use a number of snap-ins to the Management Console in order to manage your Windows Server 2003 Active Directory implementation. The greatest advantage to using the GUI utilities to administer your network is simplicity: Microsoft has distilled the most common tasks into an easy-to-follow Wizard format, where you are prompted for information at each step. Trust relationships, a major component of this chapter, are managed using the Active Directory Domains and Trusts tool. This console is located in the Administrative Tools folder on your domain controller, or you can load the administrative tools onto your local workstation. Administration of Active Directory objects such as users, groups, and Organizational Units (OUs) can be accomplished with the Active Directory Users and Computers tool, and tasks associated with the physical layout of your Active Directory infrastructure can be completed using the Active Directory Sites and Services tool. In addition to the built-in utilities discussed here, there are any number of free and commercial GUI tools available from the Microsoft website and other third-party vendors. Figures 4.1, 4.2, and 4.3 illustrate each of the built-in tools just mentioned.

Figure 4.1: Active Directory Domains and Trusts

Figure 4.2: Active Directory Sites and Services



Figure 4.3: Active Directory Users and Computers

Using the Command Line
For more granular control of administrative functions, you should consider using Microsoft's array of utilities that run from the command-line interface (CLI) to manage your Windows Server 2003 environment. You can choose from pre-installed utilities included in the Windows operating system, as well as additional tools that you can install from the ~\Support\Tools folder of the server source media. Command-line utilities can help to streamline the administrative process in cases where you find yourself issuing the same command or making the same configuration change on a regular basis. As we'll discuss in the "Using Scripting" section that follows, CLI utilities can be integrated into batch files, login scripts and other automated scripting functions in order to speed the administrative process. Some of the command-line utilities also have no equivalent within the GUI environment, such as the CSVDE utility that allows you to import information from a comma-separated (.CSV) text file directly into the Active Directory database. If you have large amounts of information to enter into AD, the command-line utilities discussed here can make your administrative tasks far more efficient.
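As a concrete illustration of the CSVDE workflow just described, the following Python sketch builds an import file in the shape CSVDE expects – a first row of attribute names followed by one row per object. The attribute list, OU path and user names here are hypothetical placeholders, and this shows only one minimal layout; check csvde /? on your own server for the full option set.

```python
import csv
import io

def build_csvde_rows(users, ou_dn):
    """Build rows for a CSVDE import file: a header row of attribute
    names, followed by one row per user object to be created."""
    header = ["DN", "objectClass", "sAMAccountName", "displayName"]
    rows = [header]
    for sam, display in users:
        # The distinguished name places each user inside the target OU.
        dn = f"CN={display},{ou_dn}"
        rows.append([dn, "user", sam, display])
    return rows

def rows_to_csv(rows):
    """Serialize the rows as CSV text (DNs containing commas are quoted)."""
    buf = io.StringIO()
    csv.writer(buf, lineterminator="\n").writerows(rows)
    return buf.getvalue()

# Hypothetical example data
users = [("jsmith", "Jane Smith"), ("bdoe", "Bob Doe")]
print(rows_to_csv(build_csvde_rows(users, "OU=Accounting,DC=example,DC=com")))
```

The resulting file could then be imported on a domain controller with a command along the lines of csvde -i -f users.csv.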

Commands
In Table 4.1, we've included a partial list of the command-line utilities available to Windows Server 2003 administrators – you can find a complete listing on the Microsoft Developer Network (MSDN) site. You can see the syntax and optional parameters of most of these commands by typing utility /? at the Windows command prompt; for example, the ntdsutil /? command will list all possible parameters for the ntdsutil utility.
Table 4.1 Windows Server 2003 Command Line Utilities









Utility – Description
CSVDE – Allows you to import and export information into Active Directory using a comma-separated (CSV) format.
DSADD – Creates users, groups, computers, contacts, and organizational units within the Active Directory database.
DSMOD – Modifies the attributes of an existing object within Active Directory. DSMOD can modify users, groups, computers, servers, contacts, and organizational units.
DSRM – Deletes objects from Active Directory.
DSMOVE – Working from a single domain controller, this will either rename an object without moving it, or move it from its current location in the directory to a new location within the Active Directory tree. (To move objects between domains you'll need to use the Movetree command-line tool.)
DSQUERY – Allows you to find a list of objects in the Active Directory directory using specified criteria. You can use this utility to search for computers, contacts, subnets, groups, organizational units, sites, servers and user objects.
DSGET – Displays specific attributes of object types within Active Directory. You can view attributes of any of the following object types: computers, contacts, subnets, groups, organizational units, servers, sites, and users.
LDIFDE – Creates, modifies, and deletes directory objects. You can also use LDIFDE to extend the Active Directory schema, export user and group information to other applications or services, and populate Active Directory with data from other directory services.
NETDOM – Installed from the ~\Support\Tools directory on the Windows Server 2003 CD, this tool is used primarily in creating, verifying and removing trust relationships on a Windows network. You'll see this tool mentioned several times during the "Managing Trusts" section of this chapter.
NTDSUTIL – The "Swiss Army knife" of Active Directory management tools. Among other things, ntdsutil can perform database maintenance of Active Directory, manage single-master operations, and remove metadata left behind by domain controllers that were removed from the network without being properly uninstalled.

Using Scripting
You can extend the usefulness of the Windows Server 2003 command-line utilities even further by including them in various scripting utilities. The applications of scripting to your network administration tasks are endless, but two of the more readily available tools are Windows Script Host and the Active Directory Service Interfaces (ADSI). ADSI provides an interface for most common scripting languages to query for and manipulate directory service objects, allowing you to automate such tasks as creating users and resetting passwords. Just like using individual command-line utilities, scripting will allow you to increase the efficiency of your administrative tasks even further by letting you automate processes that would otherwise be tedious and time-consuming. For example, a university administrator might create a batch file to automatically create new user accounts for each semester's batch of incoming students, which would prove much more efficient than manually entering each



object’s information into the MMC GUI. The flexibility of the command-line utilities allows you to integrate them into any number of scripting environments, including VBScript, Perl, and Windows logon scripts. These scripts can be launched manually, scheduled to run at regular intervals, or integrated into a web or intranet application to be run on demand – by a user needing to reset their password, for example. While an in-depth discussion of Windows scripting is beyond the scope of this book, you can find a wide variety of information and reference material on the MSDN site.
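The university scenario above can be sketched in a few lines. This hypothetical Python script turns a student roster into a series of dsadd user commands that could be reviewed and pasted into a batch file; the OU path, account names, and switches shown are illustrative assumptions rather than a tested provisioning procedure.

```python
def dsadd_commands(students, ou_dn):
    """Emit one 'dsadd user' command per incoming student.
    Quoting the DN guards against spaces in common names."""
    cmds = []
    for sam, display in students:
        dn = f"CN={display},{ou_dn}"
        cmds.append(
            f'dsadd user "{dn}" -samid {sam} '
            f'-display "{display}" -mustchpwd yes'
        )
    return cmds

# Hypothetical roster for one incoming semester
roster = [("asmith", "Ann Smith"), ("bjones", "Bill Jones")]
for line in dsadd_commands(roster, "OU=Students,DC=example,DC=com"):
    print(line)
```

In practice the roster would come from a registrar's export file rather than a literal list, but the command-generation step is the same.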

Managing Forests and Domains
As an MCSE, you'll be expected to have the skills necessary to manage forests and domains within your Active Directory infrastructure. You'll need to be familiar with performing such tasks as creating new forests, domains and child domains, as well as using the new functionality offered by Windows Server 2003. In this section we'll cover the tasks associated with managing Active Directory at the domain and the forest level.

Managing Domains
Active Directory domains are the cornerstone of a well-formed Active Directory implementation, and provide the most common framework for managing your Active Directory environment. You'll perform some of the tasks described in this section only when your network environment changes; for example, creating a new domain tree or a child domain after creating a new department or merging with another company. Other tasks will be a part of your daily life, including creating and managing organizational units, managing domain controllers, and assigning and managing permissions on Active Directory objects. In the following pages we'll detail the steps necessary to perform a wide array of domain management functions. Knowing how to perform these tasks will not only help you on the 70-296 exam, but also in the real world of network administration. Remember from your Windows 2000 studies that AD domains are used to organize objects within Windows Server 2003, while Active Directory sites map to the physical layout of your network infrastructure. You can have a single domain that spans multiple sites, or you can have a single site that contains many domains. Domains allow you to manage your Active Directory environment in the way that best meets your needs without locking you in to matching your administrative layout to your company's physical structure. Windows Server 2003 domains can contain any combination of Active Directory objects, including servers, organizational units, users, groups and other resources. Windows Server 2003 computers can function as stand-alone servers that house shared resources, as well as domain controllers that handle user authentication and authorization functions.

Creating a New Child Domain
Active Directory is designed to remain flexible enough to meet the changing and growing needs of a company's organizational structure. For example, let's say that you administer an Active Directory domain. As the company has grown, the board of directors has decided to subdivide the production team into two halves, both of which will ultimately report to the main management office. As the IT manager, you decide to create a child domain for each production subdivision. This will allow you to subdivide network resources between the two new divisions, as well as delegate IT management functions for each child domain while still maintaining overall administrative authority on the network. Your new domain structure will resemble the one shown in Figure 4.1. In Exercise 4.01 we'll go through the steps needed to create a new child domain.

EXERCISE 4.01 CREATING A CHILD DOMAIN
1. From a Windows Server 2003 machine, click on Start | Run, then type dcpromo to launch the Active Directory Installation Wizard.
2. If the Operating System Compatibility page appears, read the information presented and click Next.
3. On the Domain Controller Type screen shown in Figure 4.4, select Domain controller for a new domain. Click Next to continue.
Figure 4.4 Creating a Domain Controller

4. On the Create New Domain page, select Child domain in an existing domain tree, and then click Next.
5. The next screen, shown in Figure 4.5, will prompt you for the username, password, and domain of the user account with the necessary rights to create a child domain. Enter the appropriate information and click Next.
Figure 4.5 Creating a Child Domain

6. On the Child Domain Installation screen, verify the name of the parent domain and enter the new child domain name, then click Next to continue.
7. The NetBIOS Domain Name page, shown in Figure 4.6, will suggest a default NetBIOS name that downlevel clients will use to connect to this domain. Accept the suggested default or type in a NetBIOS domain name of your choosing, then click Next.
Figure 4.6 Specifying the NetBIOS Domain Name



8. On the Database and Log Folders screen shown in Figure 4.7, enter the location in which you want to install the database and log folders, or else click Browse to navigate to the location using Windows Explorer. Click Next when you’re ready to continue. Figure 4.7 Database and Log Folder Locations



9. From the Shared System Volume page, type or browse to the location where you want to install the SYSVOL folder and then click Next.
10. The DNS Registration Diagnostics screen will prompt you to verify that the computer's DNS configuration settings are accurate. Click Next to move to the next step.
11. From the Permissions screen, select one of the following options:
• Select Permissions compatible with pre-Windows 2000 server operating systems if your network still contains Windows NT domain controllers.
• Choose Permissions compatible only with Windows 2000 or Windows Server 2003 operating systems if your domain controllers are running exclusively Windows 2000 or later.
12. The Directory Services Restore Mode Administrator Password screen will prompt you to enter the password that you want to use if you ever need to start the computer in Directory Services Restore Mode. Click Next when you've entered and confirmed the password.
13. Review the Summary page – if you are satisfied with your selections, click Next to begin the Active Directory installation. The installation will take several minutes and will require you to reboot the machine when it completes. This server will be the first domain controller in the new child domain.
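The wizard steps above can also be performed unattended. The Python sketch below generates a dcpromo answer file for a child-domain installation; the [DCInstall] key names shown are hedged assumptions drawn from commonly published Windows Server 2003 unattended-install references, and the domain names and password are placeholders – verify the keys against your own documentation before running dcpromo /answer:<file>.

```python
def dcpromo_answer_file(parent_domain, child_name, netbios, dsrm_password):
    """Sketch of an unattended-install answer file for creating a child
    domain. Every key name here is an assumption to verify against the
    Windows Server 2003 unattend reference before use."""
    return "\n".join([
        "[DCInstall]",
        "ReplicaOrNewDomain=Domain",     # create a new domain...
        "NewDomain=Child",               # ...as a child of an existing tree
        f"ParentDomainDNSName={parent_domain}",
        f"ChildName={child_name}",
        f"DomainNetBiosName={netbios}",
        f"SafeModeAdminPassword={dsrm_password}",
    ]) + "\n"

# Hypothetical names only
print(dcpromo_answer_file("example.com", "production", "PRODUCTION", "S3cret!"))
```

Generating the file programmatically is mainly useful when you need to stand up several child domains with consistent settings.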

WARNING
Windows Server 2003, Web Edition, cannot run Active Directory; it can participate on a Windows network as a member server only. Your Windows Server 2003 domain controller must be running Standard Edition, Enterprise Edition, or Datacenter Edition.

Managing a Different Domain
If you have administrative rights to multiple Windows Server 2003 domains, you can manage all of them from a single desktop. For example, if you are the administrator of one domain, you can perform administrative functions for a second domain to cover for someone who is on vacation or on sick leave. You can also use the steps described in this section to manage any Windows 2000 domains that still exist within your Active Directory forest. To manage a different domain in Active Directory Users and Computers, for example, right-click on the current domain name and click Connect to Domain. You'll see the dialog box shown in Figure 4.8, where you can specify a new domain name and optionally set this as the default domain name for the current console. You can use this functionality to create customized Management Consoles that will allow you to quickly access all of the Windows Server 2003 domains that you administer.



Figure 4.8 Connecting to a Different Domain

Removing a Domain
There are a number of situations in which you might need to remove an Active Directory domain: you may be restructuring your Active Directory environment, or reorganizing departments or locations within your company's business structure. The process of removing an Active Directory domain is relatively straightforward; however, there are a number of considerations to keep in mind before doing so. First and most obvious, removing an Active Directory domain will permanently destroy any user, group and computer accounts stored within that domain. Additionally, if you are removing the last domain in a forest, removing the domain will also automatically delete the entire forest. If you are certain that you are ready to remove an Active Directory domain, it's also important to remember the following points:
• If the domain in question contains any child domains, the domain cannot be deleted. You must delete all child domains before proceeding. If you attempt to delete a domain that contains a child domain, the procedure described in this section will fail.
• In a multi-domain environment, be certain that the domain controllers in the domain being removed do not hold the Domain Naming Master or Schema Master operations roles. These are operations master roles (see "Understanding Operations Masters" later in this chapter) that exist on only one machine in each forest. If a controller in the domain is performing one of these functions, you'll need to use ntdsutil to transfer the role to a domain controller in another domain before continuing, in order to allow your Windows Server 2003 forest to continue to function properly.
You'll need to follow this procedure for every domain controller associated with the domain you wish to remove.
1. Click on Start | Run, then type dcpromo. Click Next from the opening screen of the Active Directory Installation Wizard.
2. On the Remove Active Directory screen shown in Figure 4.9, place a check-mark next to This server is the last domain controller in the domain and click Next to continue.



3. Follow the prompts until the wizard completes, then click Finish to begin removing the Active Directory domain. The process will take several minutes, after which you’ll be prompted to reboot. Figure 4.9 Removing Active Directory

Deleting Extinct Domain Metadata
If one of your Windows Server 2003 domain controllers suffers a catastrophic failure and you are unable to remove it from the domain in a graceful manner, you can use the following steps to delete the Active Directory metadata associated with that controller. Metadata here refers to information within Active Directory that keeps track of the information housed on each of your domain controllers – if a DC fails before you can remove it from the domain, its configuration information will still exist within the Active Directory database. This out-of-date information can cause data corruption or troubleshooting issues if it is not removed from Active Directory. It's important that you only follow these steps to remove the metadata of a domain controller that could not be cleanly decommissioned: do not delete the metadata of any domain controllers that are still functioning on your Windows Server 2003 network. In order to delete the metadata associated with a failed Active Directory controller, you'll use the ntdsutil command-line utility.
1. Click on Start | Programs | Accessories | Command Prompt.
2. Type ntdsutil and press Enter. You'll see the following prompt: ntdsutil:

3. At the ntdsutil prompt, type metadata cleanup and press Enter. You'll see the following:



metadata cleanup:

4. From this prompt, type connection and press Enter to go to the connection prompt: connection:

5. Type connect to server Server, where Server is the name of a functioning controller in your domain. Press Enter, then type quit to return to the metadata cleanup: prompt. metadata cleanup:

6. At the metadata cleanup prompt, type select operation target and press Enter to go to the associated prompt. select operation target:

7. From select operation target, type list sites and press Enter. You'll see a list of available sites, each with a number next to it.
8. Type select site SiteNumber, where SiteNumber is the number next to the site in question.
9. Again from the select operation target prompt, type list domains in site. Repeat the process from Step 8 by typing select domain DomainNumber, selecting the appropriate domain number from the list of domains in the site you selected.
10. Type list servers in site. Select the number of the server whose metadata you wish to remove, then type select server ServerNumber and press Enter.
11. Once you have selected the appropriate site, domain and server, type quit to return to the following prompt: metadata cleanup:

12. Type remove selected server and press Enter to begin the metadata cleanup process.
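The interactive session above can also be captured as a scripted input file so the whole sequence can be reviewed before it touches a production forest. The sketch below simply reproduces the commands from the steps above with hypothetical server and selection numbers; the resulting text could be piped into ntdsutil (for example, ntdsutil < cleanup.txt), but treat this as an assumption-laden sketch and rehearse it in a lab first.

```python
def metadata_cleanup_input(helper_dc, site_no, domain_no, server_no):
    """Return the line-by-line input that mirrors the interactive
    metadata cleanup session described in the steps above."""
    return "\n".join([
        "metadata cleanup",                 # Step 3
        "connection",                       # Step 4
        f"connect to server {helper_dc}",   # Step 5: a healthy DC
        "quit",
        "select operation target",          # Step 6
        "list sites",                       # Step 7
        f"select site {site_no}",           # Step 8
        "list domains in site",             # Step 9
        f"select domain {domain_no}",
        "list servers in site",             # Step 10
        f"select server {server_no}",
        "quit",                             # Step 11
        "remove selected server",           # Step 12
        "quit",
        "quit",
    ]) + "\n"

# Hypothetical helper DC and selection numbers
print(metadata_cleanup_input("DC01", 0, 0, 1))
```

Because the select commands take numbers from lists that ntdsutil prints interactively, you would normally run the list commands first, note the numbers, and only then generate and review the full script.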

Raising the Domain Functional Level
You probably recall that in Windows 2000, you were able to configure your Active Directory domains in either mixed mode or native mode. Mixed-mode operation provided backwards compatibility for any remaining NT4 backup domain controllers (BDCs) still existing on your network. Mixed-mode domains could contain Windows NT 4.0 backup domain controllers, but were unable to take advantage of such advanced Windows 2000 features as universal security groups, group nesting, and security ID (SID) history capabilities, as well as other Microsoft packages such as Exchange 2000. When you set your domain to native mode, these advanced functions became available for your use. Windows Server 2003 takes this concept of domain functionality to a new level, allowing you to establish four different levels of domain functionality with differing feature sets available depending on your network environment. The four domain functional levels available in the new release of Windows Server are as follows:
• Windows 2000 mixed
• Windows 2000 native
• Windows Server 2003 interim
• Windows Server 2003
The default domain functional level is still Windows 2000 mixed, to allow you time to upgrade your domain controllers from Windows NT 4.0 and Windows 2000 to Windows Server 2003. Just as in the previous release of Windows, however, when you raise the functional level, advanced domain-wide Active Directory features become available. Just as NT4 controllers were not able to take advantage of the features available in Windows 2000 native mode, Windows 2000 controllers will not be aware of the features provided by the Windows Server 2003 level of domain and forest functionality. In Table 4.2, you can see the four levels of domain functionality available in Windows Server 2003, and the types of domain controllers that are supported by each:

Table 4.2 Domain Functional Levels within Windows Server 2003
Domain Functional Level – Domain Controllers Supported
Windows 2000 mixed (default) – Windows NT4, Windows 2000, Windows Server 2003 family
Windows 2000 native – Windows 2000, Windows Server 2003 family
Windows Server 2003 interim – Windows NT4, Windows Server 2003 family
Windows Server 2003 – Windows Server 2003 family

The "Windows Server 2003 interim" domain functional level is a special level that's available if you're upgrading a Windows NT 4.0 PDC to become the first domain controller in a new Windows Server 2003 domain. When you raise the domain functional level of your Windows Server 2003 domain, new administrative and security features will become available for your use. Just like setting Windows 2000 to either mixed or native mode, specifying the domain functional level is a one-way operation; it cannot be undone. Therefore, if you still have domain controllers that are running Windows NT 4.0 or earlier, you shouldn't raise the domain functional level to Windows 2000 native. Likewise, if you haven't finished migrating your Windows 2000 controllers to Windows Server 2003, you should leave the domain functional level lower than Windows Server 2003. To raise the functional level of your Windows Server 2003 domain, use the steps that follow:
1. Open Active Directory Domains and Trusts.
2. Right-click on the domain that you want to manage and select Raise Domain Functional Level. On the screen shown in Figure 4.10, you'll see the current functional level of your domain, as well as the following two options to choose from:



3. To raise the domain functional level to Windows 2000 native, select Windows 2000 native and then click Raise.
4. For Windows Server 2003, select the appropriate option and then click Raise to complete the operation.
Figure 4.10 Raising the Domain Functional Level
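Behind the Raise Domain Functional Level dialog, the domain's level is recorded on directory attributes. The mapping sketched below between attribute values and level names is an assumption based on commonly published references (msDS-Behavior-Version, plus ntMixedDomain to split the two Windows 2000 modes); verify these values against your own environment before relying on them.

```python
# Assumed msDS-Behavior-Version values (verify before relying on them):
BEHAVIOR_VERSION_NAMES = {
    1: "Windows Server 2003 interim",
    2: "Windows Server 2003",
}

def describe_level(behavior_version, nt_mixed_domain=0):
    """Translate the raw attribute pair into a functional-level name.
    ntMixedDomain is assumed to be 1 for Windows 2000 mixed, 0 for native."""
    if behavior_version == 0:
        return "Windows 2000 mixed" if nt_mixed_domain else "Windows 2000 native"
    return BEHAVIOR_VERSION_NAMES.get(behavior_version, "unknown")

print(describe_level(0, 1))  # a default, mixed-mode domain
print(describe_level(2))     # a domain raised to the top 2003 level
```

A helper like this is handy when you inventory many domains with an LDAP query and want readable level names in your report.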

Managing Organizational Units
Organizational units in Windows Server 2003 are essentially identical in function to their Windows 2000 counterparts: they serve as Active Directory containers that you can use to organize resources within a single domain. You can use OUs to organize users, groups, printers, computers, and other organizational units, as long as they are within the same domain. (Organizational units cannot contain objects located in other domains.) You can use organizational units to delegate administrative control over a specific group of users and resources without granting administrative access to the rest of the objects within the domain. Using organizational units in this manner will allow you to create a distributed administrative model for your network, while at the same time minimizing the number of domains needed. Delegating administrative tasks allows you to assign a range of responsibilities to specific users and groups, while still maintaining control over domain- and forest-wide administrative functions on your network. For example, you can create an organizational unit containing all user and computer accounts within the Accounting department, and then assign a power user within the department the ability to reset user passwords for Accounting department users only. Another potential use would be to give an administrative assistant the ability to edit user information to update telephone and fax numbers for the users they support. If your administrative model is a decentralized one, delegating control will allow users to take more responsibility for their local network resources.



Delegation of authority also creates added security for your network by minimizing the number of user accounts that you need to add to the powerful Domain Admins and Enterprise Admins groups. You can delegate a wide range of tasks within Windows Server 2003, including the following:
• Create, delete and manage user accounts
• Reset user passwords
• Create, delete and manage groups
• Read user account information
• Modify group memberships
• View and edit Group Policy information
In Exercise 4.02 we'll create a new organizational unit within a Windows Server 2003 domain, then delegate the ability to manage user accounts to a user within the OU.

EXERCISE 4.02 CREATING AN ORGANIZATIONAL UNIT AND DELEGATING CONTROL TO A LOCAL ADMINISTRATOR
1. Open Active Directory Users and Computers.
2. Right-click on the domain, then select New | Organizational Unit. Enter a descriptive name for the OU and click OK.
3. From the MMC console, right-click the OU that you just created. (Hit F5 to refresh the console if you don't see the new OU listed.)
4. Click Delegate Control to start the Delegation of Control Wizard.
5. Click Next to bypass the introduction screen.
6. On the Users or Groups screen, click Add to specify the users who should have the administrative rights you specify for this OU. Click Next when you're ready to continue.
7. In the Tasks to Delegate screen shown in Figure 4.11, you can either select one or more pre-configured tasks to delegate, or create a custom task. In this example, we're going to delegate the ability to "Create, delete and manage user accounts". Make your selection and click Next to continue.
Figure 4.11 Using the Delegation of Control Wizard



8. On the Summary screen, review the selections you’ve made and click Finish to complete the delegation process.

Assigning, Changing, or Removing Permissions on Active Directory Objects or Attributes
Your life as an administrator becomes much simpler when you can assign permissions to groups or organizational units, rather than to individual objects. For example, if Andrew from the Marketing department needs to manage the printers in his department, you can set the necessary permissions on the individual printers in the Marketing OU, or on the Marketing OU itself. In the former case, you'll need to manually specify Andrew's permissions every time you add a new printer to the Marketing OU, whereas if you give Andrew rights at the OU level, any new printer objects created within the Marketing OU will automatically be assigned the same rights as the existing printers. Along with using the Delegation of Control Wizard discussed in the previous section, you can manually assign permissions to any object within the Active Directory database, including users, groups, printers and organizational units. You'll assign these permissions using the Active Directory Users and Computers interface, as we'll cover in the following steps.
1. Open Active Directory Users and Computers. Within the console window, click on View | Advanced Features to access the Security property page for the Active Directory objects within your domain.
2. Right-click on the object that you want to assign permissions to (in this case the Human Resources OU) and click Properties. You'll see the screen shown in Figure 4.12.



Figure 4.12 Assigning Permissions to Active Directory Objects

3. Click Add to create a new entry in the object's Access Control List (ACL), or Remove to delete an existing permission assignment. Select the user or group that you want to grant permissions to, then click OK.
4. You can grant or deny any of the basic permissions, or click on Advanced | View/Edit for the full list of permissions that you can assign.
5. Click on OK when you're done. Repeat Steps 3 and 4 for each additional user or group that you want to assign permissions to.
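If you prefer to script ACL changes rather than click through the Security tab, the Support Tools also include the dsacls utility for viewing and modifying permissions on directory objects. The sketch below only builds a command string; the OU path and trustee are hypothetical, and the /G grant syntax shown is an assumption to verify against dsacls /? before use.

```python
def dsacls_grant(object_dn, trustee, perms):
    """Build a dsacls command granting 'perms' (for example GR,
    assumed here to mean generic read) on an AD object to a trustee.
    The DN is quoted because OU names often contain spaces."""
    return f'dsacls "{object_dn}" /G {trustee}:{perms}'

# Hypothetical OU and account
print(dsacls_grant("OU=Human Resources,DC=example,DC=com",
                   r"EXAMPLE\andrew", "GR"))
```

Generating the commands in a script makes it easy to apply the same grant consistently across many OUs, with the chance to eyeball every command first.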

Managing Domain Controllers
Windows Server 2003 introduces a simplified mechanism for renaming a domain controller when your network's organizational or business needs change. This new functionality, available only if the domain functional level is Windows Server 2003, works to ensure that your clients will suffer no interruptions in their ability to authenticate against the renamed controller or locate any resources hosted on it. When you rename a domain controller, its new name is automatically updated within Active Directory and distributed to the Domain Name System (DNS) servers on your network. The amount of time this propagation takes depends on the specific configuration of your network – replication over a WAN link will be significantly slower than over a local area network, for example. During any



latency in replication, your clients may not be able to access the newly renamed domain controller; however, this should not pose a barrier to client authentication since there should be other controllers available.

Renaming a Domain Controller
1. Open a Command Prompt.
2. Type: netdom computername CurrentComputerName /add:NewComputerName
3. Ensure the computer account updates and DNS registrations are completed, then type: netdom computername CurrentComputerName /makeprimary:NewComputerName
4. Restart the computer.
5. From the command prompt, type: netdom computername NewComputerName /remove:OldComputerName
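The three netdom invocations above can be generated ahead of time so they can be reviewed for typos before you touch a production controller. The names in this sketch are hypothetical, and each FQDN would need to resolve correctly in your DNS; this only assembles the command strings, it does not run them.

```python
def rename_dc_commands(old_name, new_name):
    """Return the three netdom steps from the procedure above, in order.
    A reboot belongs between the second and third commands."""
    return [
        f"netdom computername {old_name} /add:{new_name}",
        f"netdom computername {old_name} /makeprimary:{new_name}",
        # (restart the domain controller here)
        f"netdom computername {new_name} /remove:{old_name}",
    ]

# Hypothetical old and new FQDNs
for cmd in rename_dc_commands("dc1.example.com", "dc2.example.com"):
    print(cmd)
```

Keeping the generated list in your change-control record also documents exactly which commands were (or will be) run.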

Understanding Operations Masters
Windows Server 2003, like its predecessor, supports multimaster replication to share directory data between all domain controllers in the domain, thus ensuring that all controllers within a domain are essentially peers – the concepts of the PDC and the BDC are long gone. However, some domain and forest changes need to be performed from a single machine to ensure consistency of the Active Directory database. As an administrator, you'll designate a single domain controller, called an operations master, to perform these changes. The number and description of operations masters in a Windows Server 2003 domain are identical to those that existed under Windows 2000. Each Windows Server 2003 forest must contain one and only one of the following:
• The Schema master, which controls all updates and modifications to the Windows Server 2003 schema.
• The Domain naming master, which controls the addition and removal of domains within a Windows Server 2003 forest.
Likewise, each Windows Server 2003 domain must contain one of each of the following operations masters:
• The Relative ID (RID) master, which allocates sequences of relative ID numbers to each controller within the domain to allow for consistent updates throughout the domain.
• The Primary domain controller (PDC) emulator master, which provides logon services to any downlevel Windows clients, mimicking the role of an NT4 PDC. If there are any remaining NT4 BDCs on the network, the PDC emulator will replicate directory information to the BDCs as well.
• The Infrastructure master, which coordinates references to any objects from other domains within the forest.

Responding to Operations Master Failures
If a Windows Server 2003 server that holds an operations master role suffers a hardware or software failure, you have the option of forcibly seizing the role and



assigning it to another controller. In most cases, this is a drastic step that shouldn't be undertaken if the cause of the failure is a simple network or hardware issue that can be resolved in a relatively short time. We'll discuss the potential impact of seizing the various operations roles in this section. The following operations master roles should not be seized unless you are completely unable to return the original holder of the role to the Windows network:
• Schema Master
• RID Master
• Domain Naming Master
A temporary loss of any of these three roles will not affect the operations of your users or the availability of your network under most circumstances. (If the schema master has failed, you would not be able to install a new application that needed to extend the schema, for example.) A domain controller whose Schema Master, RID Master or Domain Naming Master role has been seized must never be brought back online. The controller in question must be reformatted and reinstalled before returning to the network, or your Active Directory database will become completely corrupted. If this happened, you would be forced to restore the entire Active Directory structure from backup, rather than simply rebuilding a single server. The loss of the infrastructure master will not be visible to your network users either, and will only affect administration of your network if you need to move or rename a large number of domain accounts. Unlike the three roles discussed in the previous paragraph, though, you can return the original infrastructure master to production without reinstalling the operating system, making the prospect of seizing the infrastructure master a slightly less daunting proposition. The only one of the five operations masters whose loss will be immediately noticeable to your end users is the PDC emulator, especially if you are supporting clients who rely on that role for authentication. As such, you may wish to immediately seize the PDC emulator role if the original master suffers any sort of failure. Like the infrastructure master, you can return the original PDC emulator to the network without reformatting or reinstalling the OS.

Seizing an Operations Master Role

To seize an operations master role and assign it to a different server, follow the steps listed in this section.
1. Open a Command Prompt and type ntdsutil.
2. At the ntdsutil command prompt, type roles.
3. At the fsmo maintenance command prompt, type connections.
4. At the server connections command prompt, type connect to server DomainController, where DomainController is the FQDN of the domain controller that you want to assign the operations master role to.
5. At the server connections prompt, type quit.
6. At the fsmo maintenance command prompt, enter any of the following:



seize schema master

seize domain naming master

seize infrastructure master

seize RID master

seize PDC emulator

7. After you specify which role you want to seize and press Enter, you'll be prompted to confirm the operation. Click Yes to continue or No to cancel.
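The interactive session above can also be expressed as a generated command sequence. This is a hypothetical helper (the function name and the idea of feeding the list to ntdsutil's standard input are our assumptions; the menu commands themselves come from the steps above):

```python
# Sketch: generate the ntdsutil command sequence described in the steps
# above for seizing a given operations master role. Feeding this sequence
# to ntdsutil (e.g. via stdin) is an assumption to verify in your
# environment before relying on it.
VALID_ROLES = ("schema master", "domain naming master",
               "infrastructure master", "RID master", "PDC emulator")

def ntdsutil_seize_script(role: str, target_dc_fqdn: str) -> list[str]:
    if role not in VALID_ROLES:
        raise ValueError(f"unknown operations master role: {role}")
    return [
        "roles",                                 # enter fsmo maintenance
        "connections",                           # enter server connections
        f"connect to server {target_dc_fqdn}",   # bind to the target DC
        "quit",                                  # back to fsmo maintenance
        f"seize {role}",                         # perform the seizure
        "quit",                                  # leave fsmo maintenance
        "quit",                                  # leave ntdsutil
    ]
```
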

Managing Forests

Many of the tasks associated with managing forests in your Active Directory environment should not be undertaken without significant planning and testing, as they will have a broad effect on the entirety of your network infrastructure. Many of the functions discussed in this section revolve around features that are new to Windows Server 2003. Application directory partitions allow Active Directory-aware software to store application data in multiple locations within your Active Directory infrastructure, providing fault tolerance and improved performance since clients will be able to access application data from multiple locations. If all of the domain controllers in a forest are running Windows Server 2003, you now have the option to raise the functional level of the forest to introduce new security features across the entire forest. We'll also cover the steps needed to access the schema, the repository within Active Directory where all directory objects are defined and managed.

Creating a New Domain Tree

Like Windows 2000, a Windows Server 2003 Active Directory forest can contain one or more domain trees. You'll create a new domain tree when you need to create a domain whose DNS namespace is not related to the other domains in the forest, but whose schema, security boundaries, and configuration need to be at least somewhat centrally managed. A good example of this would be the acquisition of another company whose IT management functions will be taken over by the new parent company. In this case, the DNS name of the acquisition's domain (and any of its child domains) does not need to contain the full name of the parent domain. For example, if your company purchased a competing airplane manufacturer that already had an established web presence under its own DNS name, you could create a separate domain tree for that company and its user base. See the illustration in Figure 4.13 for a graphical example of this scenario.

Figure 4.13 Multiple Domain Environments



To create a new domain tree, use the procedure that follows:
1. From the Run line or a command prompt, type dcpromo to begin the Active Directory Installation Wizard.
2. Read the information presented on the Operating System Compatibility page and click Next to continue.
3. On the Domain Controller Type page, select Domain controller for a new domain and click Next.
4. On the Create New Domain page, select Domain tree in an existing forest.
5. On the Network Credentials page, you'll be prompted to enter the username, password, and domain of a user account with the appropriate security to create a new domain tree. Click Next when you're ready to proceed. (As with most domain and forest management functions, the user account that you're using must be a member of the Enterprise Admins group to succeed.)
6. On the New Domain Tree page, enter the full DNS name of the new domain and click Next.
7. Verify or change the NetBIOS name suggested by the Installation Wizard for backwards compatibility. Click Next to continue.
8. On the Database and Log Folders screen, specify the drive letter and directory that will house the database and log folders and then click Next. (You can also use the Browse button to select the directory that you want.)
9. The next screen you'll see will be the Shared System Volume page. From here, manually type or browse to the directory where you want the Sysvol to be installed. Click Next to continue.
10. The DNS Registration Diagnostics screen will prompt you to choose an existing DNS server for name resolution, or to install the DNS Server service on the local machine. Click Next once you've made your selection.
11. From the Permissions page, select one of the following:
• Permissions compatible with pre-Windows 2000 server operating systems
• Permissions compatible only with Windows 2000 or Windows Server 2003 operating systems
12. From the Directory Services Restore Mode Administrator Password screen, enter and confirm the password that you want to assign to the local Administrator account for this server, and then click Next. You'll need this password in order to start the computer in Directory Services Restore Mode. Be sure to store this password in a secure location, as it is not necessarily the same as the administrative password for your Windows Server 2003 Active Directory structure.
13. The Summary screen will allow you to review any changes and settings that you've specified. Click Back to make any changes, or click Next to begin installing Active Directory on this machine. The installation process will take several minutes, after which you'll be prompted to restart the computer.
14. Once the machine has restarted, it will be the first domain controller in the new domain tree. Windows Server 2003 will automatically create a two-way transitive trust relationship between the root of the new domain tree and the other domains within the Active Directory forest.
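Step 7 mentions the NetBIOS name suggested by the Installation Wizard. As a rough sketch, the default suggestion can be approximated as the leftmost DNS label, upper-cased and truncated to the 15-character NetBIOS limit; the exact rules the wizard applies (collision handling, illegal characters) may differ:

```python
def suggested_netbios_name(dns_domain: str) -> str:
    """Approximate the wizard's default NetBIOS suggestion: the leftmost
    DNS label, upper-cased, truncated to the 15-character NetBIOS limit.
    (Our approximation -- the real wizard may adjust further.)"""
    return dns_domain.split(".")[0].upper()[:15]
```

For example, a new tree named biplanes.airplanes.com would get the suggested NetBIOS name BIPLANES, while a label longer than 15 characters would be cut off.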

Raising the Forest Functional Level

Similar to the domain functional level, Windows Server 2003 provides differing forest functional levels that can enable new Active Directory features applying to every domain within an Active Directory forest. When you first create a Windows Server 2003 Active Directory forest, its forest functional level will be set to Windows 2000. Depending on your environment, you can consider raising the forest functional level to Windows Server 2003; however, just like the domain functional level, changing the forest functional level is a one-way operation that cannot be undone. As such, if any of your domain controllers are still running Windows NT 4.0 or Windows 2000, you shouldn't raise your forest functional level to Windows Server 2003 until your existing controllers have been upgraded. The following table details the types of domain controllers that are supported by each of the forest functional levels.

Table 4.3 Controllers Supported by Different Forest Functional Levels

Forest functional level          Domain controllers supported
Windows 2000 (default)           Windows NT 4.0, Windows 2000, Windows Server 2003 family
Windows Server 2003 interim*     Windows NT 4.0, Windows Server 2003 family
Windows Server 2003              Windows Server 2003 family

* Windows Server 2003 interim is a special functional level that's available if you are upgrading a Windows NT 4.0 domain to become the first domain in a new Windows Server 2003 forest.
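The compatibility rules in Table 4.3 can be expressed as a simple membership check (an illustrative sketch, not an API):

```python
# Which domain controller operating systems each forest functional level
# supports, per Table 4.3 above.
SUPPORTED_DCS = {
    "Windows 2000": {"Windows NT 4.0", "Windows 2000", "Windows Server 2003"},
    "Windows Server 2003 interim": {"Windows NT 4.0", "Windows Server 2003"},
    "Windows Server 2003": {"Windows Server 2003"},
}

def can_raise_to(level: str, dc_os_versions) -> bool:
    """A forest functional level is reachable only if every domain
    controller in the forest runs an OS that level supports."""
    return set(dc_os_versions) <= SUPPORTED_DCS[level]
```

So a forest with any remaining Windows 2000 controller cannot be raised to the Windows Server 2003 level, matching the caution in the text.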

Raising the Forest Functional Level

To raise the functional level of your Windows Server 2003 forest, follow the steps included here:
1. Open Active Directory Domains and Trusts.
2. Right-click on the Active Directory Domains and Trusts node and select Raise Forest Functional Level.
3. From Select an available forest functional level, select Windows Server 2003 and then click Raise.
4. If there are servers in your forest that cannot be upgraded to the new forest functional level, click Save As in the Raise Forest Functional Level dialog box to create a log file that specifies which of your domain controllers still need to be upgraded from Windows NT 4.0 or Windows 2000.

New & Noteworthy… Windows Server 2003 Domain and Forest Functionality

When you raise the domain and/or forest functional level within your Active Directory environment, certain advanced features become available for your use. At the domain level, the Windows Server 2003 functional level provides the following advantages that are not available in either Windows 2000 mixed or native mode. You can enable these features on a domain-by-domain basis:
• Domain controller rename tool: this Resource Kit utility allows you to rename a domain controller if your business or organizational structure changes
• SID history allows you to migrate security principals from one domain to another
• Converting groups enables the ability to convert a security group to a distribution group and vice versa
• InetOrgPerson objects ease the migration from other LDAP-enabled directory applications to Active Directory
• The lastLogonTimestamp attribute keeps track of the last logon time for either a user or computer account, providing the administrator with the ability to track the history of the account
Raising the forest functional level provides the following features that you can implement throughout your Windows Server 2003 forest:
• Domain rename allows you to rename an entire Active Directory domain
• Forest trusts enable two-way transitive trusts between separate Windows Server 2003 forests. In Windows 2000, trusts between forests were one-way and intransitive



• InetOrgPerson objects can now be made available throughout your entire Windows Server 2003 forest
• You can now reuse the object identifier, the ldapDisplayName, and the schemaIdGUID that are associated with a defunct schema object, whether a class or an attribute
• Linked value replication allows individual values of a schema attribute to be replicated separately. In Windows 2000, if an administrator or application made a change to a member of a group, for example, the entire group needed to be replicated. With linked value replication, only the group member that has changed is replicated, greatly improving replication efficiency and speed in larger environments
• Dynamic auxiliary classes allow you to link auxiliary schema classes to an individual object, rather than entire classes of objects. This also serves to improve replication under Windows Server 2003
• Global catalog replication has also been improved by propagating only partial changes when possible
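To make the linked value replication point concrete, here is a toy comparison of what each scheme would need to ship when one member is added to a large group. This is purely illustrative, not the actual replication protocol or wire format:

```python
# Toy comparison of replication payloads -- illustrative only, not the
# actual Active Directory replication protocol.
def full_object_replication(old_members, new_members):
    # Windows 2000 style: the entire multi-valued attribute is shipped.
    return list(new_members)

def linked_value_replication(old_members, new_members):
    # Windows Server 2003 style: only the changed links are shipped.
    old, new = set(old_members), set(new_members)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

group_before = [f"user{i}" for i in range(1000)]
group_after = group_before + ["user1000"]
```

With a thousand-member group gaining one user, the linked-value payload describes a single added link, while the full-object payload carries the whole membership list.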

Managing Application Directory Partitions

Windows Server 2003 has introduced the concept of application directory partitions, which allow Active Directory-aware applications to store information specific to their operation in multiple locations within a Windows Server 2003 domain. This provides fault tolerance and load balancing in case one server that houses an application partition fails or is taken offline. You can configure this application-specific data to replicate to one or more domain controllers located anywhere within your Windows Server 2003 forest. Application directory partitions follow the same DNS-based naming structure as the rest of your Windows Server 2003 domain, and can exist in any of the following locations:
• As a child of a domain directory partition.
• As a child of an application directory partition.
• As a new tree in the Active Directory forest.
For example, you can create an application directory partition for an Active Directory-aware database application as a child of the airplanes.com domain. If you named the application directory partition databaseapp, the DNS name of the application directory partition would then become databaseapp.airplanes.com. The distinguished name of the application directory partition would be dc=databaseapp, dc=airplanes, dc=com. You could then create an application directory partition called databaseapp2 as a child of databaseapp.airplanes.com; the DNS name of this partition would be databaseapp2.databaseapp.airplanes.com and the distinguished name would be dc=databaseapp2, dc=databaseapp, dc=airplanes, dc=com. In the final example, if airplanes.com was the root of the only domain tree in your forest and you created an application directory partition with the DNS name of databaseapp (with the distinguished name of dc=databaseapp), this application directory partition would not exist in the same tree as the airplanes.com domain. It would instead become the root of a new tree in the Windows Server 2003 forest.
Application directory partitions are almost always created by the applications that will use them to store and replicate data within the domain structure; however, Enterprise Admins can manually create and manage application directory partitions when testing and troubleshooting is necessary. You can use any of the following tools to create and manage application directory partitions:
• Third-party tools from the vendor who provided the application
• The ntdsutil command-line tool
• Active Directory Service Interfaces (ADSI)
In this section, we'll focus on using the ntdsutil utility to create and manage application directory partitions.
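The correspondence between DNS names and distinguished names described above can be sketched in a few lines (illustrative only; real distinguished names can contain attribute types other than dc=):

```python
def dn_to_dns(dn: str) -> str:
    """Convert a dc=-only distinguished name to its DNS form,
    e.g. 'dc=databaseapp, dc=airplanes, dc=com' -> 'databaseapp.airplanes.com'."""
    parts = [p.split("=", 1)[1] for p in dn.replace(" ", "").split(",")]
    return ".".join(parts)

def dns_to_dn(dns_name: str) -> str:
    """Convert a DNS name to its dc=-style distinguished name."""
    return ",".join(f"dc={label}" for label in dns_name.split("."))
```
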

Creating or Deleting an Application Directory Partition

In this section, we'll discuss the steps necessary to manage application directory partitions.
1. From the Command Prompt, type ntdsutil.
2. Enter the following commands at the ntdsutil menu prompts:

C:\>ntdsutil
ntdsutil> domain management
domain management> connections
server connections> connect to server ServerName
server connections> quit

3. To create an application directory partition, enter the following at the Domain Management prompt: Domain Management> create nc ApplicationDirectoryPartition DomainController

4. To delete an application directory partition, enter the following at the Domain Management prompt: Domain Management> delete nc ApplicationDirectoryPartition DomainController

Use Table 4.4 to determine the values of the ServerName, ApplicationDirectoryPartition, and DomainController variables.

Table 4.4 NTDSUTIL Parameter Definitions

Variable Name                    Definition
ServerName                       The full DNS name of the domain controller to which you want to connect.
ApplicationDirectoryPartition    The distinguished name of the application directory partition that you want to create or delete, for example, dc=databaseapp, dc=airplanes, dc=com.
DomainController                 The full DNS name of the domain controller where you want to create or delete the application directory partition. If you want to create or delete the partition on the controller that you already specified with the ServerName variable, you can type NULL for this value.



For example, to create an application directory partition called application1 as a child of the biplanes.airplanes.com domain on the domain controller called controller1, you would enter the following in Step 3 of this procedure:

create nc dc=application1,dc=biplanes,dc=airplanes,dc=com controller1

If you later decide that you want to delete this partition, you can follow the same procedure using the following syntax:

delete nc dc=application1,dc=biplanes,dc=airplanes,dc=com controller1
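The same operation can be scripted in one shot. Whether ntdsutil accepts its menu commands as quoted command-line arguments in your environment is an assumption to verify; the helper below merely assembles the string described in the steps above:

```python
# Hypothetical helper: assemble a one-shot ntdsutil invocation for creating
# or deleting an application directory partition. The quoted-argument form
# of ntdsutil is an assumption to verify before use; here we connect to the
# target DC first, so the trailing partition argument is NULL as per Table 4.4.
def ntdsutil_nc_command(action: str, partition_dn: str, dc_fqdn: str) -> str:
    if action not in ("create", "delete"):
        raise ValueError("action must be 'create' or 'delete'")
    return (f'ntdsutil "domain management" connections '
            f'"connect to server {dc_fqdn}" quit '
            f'"{action} nc {partition_dn} NULL" quit quit')
```
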

Managing the Schema

Similar to the previous release of the operating system, the Windows Server 2003 Active Directory schema contains the definitions for all objects within Active Directory. Whenever you create a new directory object such as a user or group, the new object is validated against the schema to determine which attributes the object should possess. (A printer object should have very different attributes than a user object or a file folder object, for example.) In this way, Active Directory validates every new object that you create against the appropriate definition within the schema before it records the new object in the Active Directory database. Each forest can contain only one schema, which is replicated along with the rest of the Active Directory database to every controller within the forest. If your implementation or security needs require you to maintain different schemas for different business units, you will need to create a separate forest for each individual schema that you need to maintain. For example, you may create a separate forest for application testing so that any test changes to the schema will not replicate throughout your entire Active Directory forest.
The Windows Server 2003 schema comes pre-loaded with an extensive array of object classes and attributes that will meet the needs of most organizations; however, some applications will extend the schema by adding their own information to it: Exchange 2000 and 2003 are good examples of this. In order to manage the schema directly, you'll need to install the Active Directory Schema snap-in: because of the delicate nature of schema management operations, this utility is not installed on a Windows Server 2003 server by default. Listing all of the schema classes and attributes within Active Directory would require a book unto itself; if you are interested, a comprehensive reference is available on the MSDN website.
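The validation behavior described above can be modeled with a toy schema. The class names and attribute lists below are invented for illustration and are far simpler than the real Active Directory schema:

```python
# Toy model of schema validation: each object class defines mandatory and
# optional attributes; an object is accepted only if it carries all mandatory
# attributes and nothing outside the allowed set. Illustrative only.
TOY_SCHEMA = {
    "user":    {"must": {"cn", "sAMAccountName"}, "may": {"mail", "telephoneNumber"}},
    "printer": {"must": {"cn", "portName"},       "may": {"location"}},
}

def validate(object_class: str, attributes: dict) -> bool:
    rules = TOY_SCHEMA.get(object_class)
    if rules is None:
        return False  # no schema definition for this class
    names = set(attributes)
    return rules["must"] <= names and names <= rules["must"] | rules["may"]
```

A user object missing its mandatory sAMAccountName, or carrying a printer-only attribute like portName, would be rejected, which is the behavior the paragraph above describes.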

Installing the Active Directory Schema Snap-in

This section will walk you through the steps needed to install the Schema snap-in.
1. From a command prompt, type the following to register the necessary .DLL file on your computer: regsvr32 schmmgmt.dll
2. To access the Active Directory Schema snap-in, you'll need to add it to the Microsoft Management Console. Click on Start | Run, then type mmc /a and click OK. You'll see a blank MMC console.
3. Click on File | Add/Remove Snap-in | Add.



4. Browse to Active Directory Schema within the Snap-In menu shown in Figure 4.14. Click Add and then Close to add the snap-in to the MMC console. Figure 4.14 Adding the Schema Management Snap-In

5. Save the console in the system32 directory as schmmgmt.msc. (You can add a shortcut to this tool in the Documents and Settings\All Users\Programs\Administrative Tools folder if you wish.)

Securing the Schema

You can protect the Active Directory schema from unauthorized changes by using access control lists (ACLs) to determine who can make alterations to the schema. When you first install Windows Server 2003, the only users who have write access to the schema are members of the Schema Admins group, and the only default member of this group is the Administrator account in the root domain of the forest. You should restrict membership in the Schema Admins group as much as possible, since careless or malicious alterations to the schema can render your network inoperable. To modify the permissions assigned to your Active Directory schema, follow these steps:
1. Open the Active Directory Schema snap-in.
2. Right-click Active Directory Schema and then click Permissions.
3. Click on the Security tab; in the Group or user names section, select the group whose permissions you wish to change.
4. Under Permissions for Administrators, select Allow or Deny for the permissions you want to change. Click OK when you're done.

Adding an Attribute to the Global Catalog


By default, the global catalog stores a partial set of object attributes so that users can search for information within Active Directory. While the most common attributes are already included in the global catalog, you can speed up search queries across a domain for an attribute that is not included by default by adding it to the global catalog. For example, if you want your users to be able to search for each other's fax numbers, you can add this attribute to the global catalog so that users can easily find other users' fax numbers within Active Directory. Keep in mind that this sort of change will affect all domains within your forest, and will cause a full synchronization of all object attributes that are stored in the global catalog if your forest functional level is not set to Windows Server 2003. This can cause a noticeable spike in network traffic; as such, any additions to the global catalog should be carefully considered and tested before implementing them in a production environment. To add an attribute to the global catalog:
1. Open the Active Directory Schema snap-in.
2. In the console tree, click Attributes, and right-click the name of the attribute that you want to add to the global catalog.
3. Select Properties; you'll see the screen shown in Figure 4.15.

Figure 4.15 Replicating an Attribute to the Global Catalog



4. Place a check mark next to Replicate this attribute to the Global Catalog, and then click OK.
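The effect of adding an attribute to the global catalog can be sketched as filtering objects through a partial attribute set. This is a toy model; facsimileTelephoneNumber is the standard LDAP attribute name for a fax number, but the rest is illustrative:

```python
# Toy model: the global catalog stores only the attributes listed in the
# partial attribute set (PAS). Adding an attribute to the PAS makes it
# searchable forest-wide. Illustrative only, not the real GC mechanism.
partial_attribute_set = {"cn", "sAMAccountName", "mail"}

def gc_view(obj: dict, pas: set) -> dict:
    """The slice of an object that the global catalog would hold."""
    return {k: v for k, v in obj.items() if k in pas}

user = {"cn": "Ed", "sAMAccountName": "ed",
        "facsimileTelephoneNumber": "555-0100"}
```

Before the fax attribute is added to the set, a global catalog search cannot see it; after extending the set, it becomes visible, which is exactly the change the procedure above performs through the GUI.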

Managing Trusts

Just like in previous versions of the Windows server operating system, Windows Server 2003 trusts allow network administrators to establish relationships between domains and forests, so that users from Domain A can access resources in Domain B. Unlike previous releases of Windows, however, Windows 2000 and 2003 allow for the creation of two-way, transitive trusts. This means that if Domain A trusts Domain B, and Domain B trusts Domain C, then Domain A automatically trusts Domain C as well. (You may remember the days of Windows NT 4.0, when the number of trust relationships you needed to create in a large environment became staggeringly large: a network with 10 domains would require the administrator to manually create 90 trust relationships to achieve the kind of full trust that 2000 and 2003 create automatically.) In this section, we'll cover the various types of trust relationships that you can create to allow your users to quickly and easily access the resources they require.

Trusted and Trusting Domains

When you create a new domain in Windows Server 2003, a two-way transitive trust will automatically be created between it and any existing domains in the Windows Server 2003 forest. However, for security reasons you may wish to create a trust relationship that only operates in one direction. In this case, you will have a trusted domain that contains the user accounts that require access, and a trusting domain that contains the resources being accessed. Diagrammatically, this is represented using an arrow pointing towards the trusted domain. Now, I don't know about you, but I had a tough time remembering which domain was the trusted domain versus the trusting domain, and which way the arrow was supposed to point, until one of my instructors explained it like this: Think of the last two letters in "trust-ED" as talking about a guy named Ed. The "trust-ED" domain is the one that contains users, since ED is there. The "trust-ING" domain contains the things that your users are trying to access; it's the "trust-ING" domain because that's where the THINGs are. That way, when you're looking at a diagram of a one-way trust relationship on the 70-296 exam, just remember that the arrow is pointing to ED. Take a look at the diagram in Figure 4.16 and you'll see what I mean.

Figure 4.16 Trusted and Trusting Domains



[Figure 4.16 shows a one-way trust: the arrow points from the trusting (resource) domain to the trusted (account) domain, with the trusting domain saying "Hey Ed! I'm trusting you with the THINGs in this domain!"]
Try to find other humorous anecdotes like this one as you’re preparing for the exam. Rote memorization will only stay with you for so long; personalizing a concept in this way makes it more real for you. (And hence, easier to remember.)
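The NT 4.0 trust arithmetic mentioned above (a full mesh of one-way trusts between every ordered pair of n domains) is simply n × (n − 1):

```python
def nt4_trust_count(domains: int) -> int:
    # Full mesh: each domain needs a one-way trust to every other domain.
    return domains * (domains - 1)
```

For 10 domains this gives the 90 manually created trusts cited in the text, and the count grows roughly with the square of the domain count, which is why transitive trusts were such a relief.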

Creating a Realm Trust

Windows Server 2003 allows you to create a trust relationship with an external Kerberos realm, allowing cross-platform interoperability with other Kerberos services such as UNIX and MIT-based implementations. You can establish a realm trust between your Windows Server 2003 domain and any non-Windows Kerberos V5 realm. This trust relationship will allow pass-through authentication, in which a trusting domain (the domain containing the resources to be accessed) honors the logon authentications of a trusted domain (the domain containing the user accounts). You can grant rights and permissions in the trusting domain to user accounts and global groups in the trusted domain, even though the accounts or groups don't exist in the trusting domain's directory. Realm trusts can be either one-way or two-way. You can create a realm trust using the Active Directory Domains and Trusts GUI, or the netdom command-line utility. To perform this procedure, you must be a member of the Domain Admins or Enterprise Admins group, or you must have been delegated the appropriate authority by a member of one of these groups. (We discussed delegation of authority in the "Managing Organizational Units" and the "Assigning, Changing, or Removing Permissions on Active Directory Objects or Attributes" sections.) To manage trust relationships, you'll need the Full Control permission.

EXERCISE 4.04
CREATING A REALM TRUST USING THE WINDOWS INTERFACE
1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts. Enter the appropriate username and password to access the utility.
2. Right-click on the domain that you want to administer, and select Properties.



3. Click on the Trusts tab, click on New Trust, and then click Next. You'll see the screen shown in Figure 4.17.

Figure 4.17: Specifying the Name of the Target Domain

4. On the Trust Name page, type the name of the Kerberos realm that you want to establish a trust relationship with, and then click Next.
5. On the Trust Type page, select the Realm Trust option, and then click Next.
6. You'll be taken to the screen shown in Figure 4.18. From the Transitivity of Trust page, you have the following options:

Figure 4.18: Transitivity of Trust



• To form a trust relationship between your Windows Server 2003 domain and only the realm specified in the Trust Wizard, click Nontransitive and then click Next.
• To form a trust relationship between the Windows Server 2003 domain, the specified realm, and all other trusted realms, click Transitive and then Next.

7. On the Direction of Trust page, select one of the following options from the screen shown in Figure 4.19. Figure 4.19: Specifying the Direction of the Trust Relationship



• Two-way: this will create a two-way realm trust, where users in your domain and the specified external realm will be able to access resources in either domain or realm.
• One-way: incoming: Users in your Windows Server 2003 domain will be able to access resources in the external realm, but external users will not be able to access any resources in your Windows Server 2003 domain.
• One-way: outgoing: The reverse of one-way: incoming. Users in the external realm will be able to access resources within your domain, but your Windows Server 2003 users will not be able to access any resources in the external realm.

8. Finally, you’ll need to enter the password that will be used to establish the trust relationship. This password will need to be entered by the administrator of the Kerberos realm as well. Enter the trust password on the screen shown in Figure 4.20. 9. Click Next and then Finish to complete the creation of the new realm trust. Figure 4.20: Creating a Trust Password
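The incoming/outgoing naming trips many people up. Our reading of the wizard's convention can be modeled as follows (a sketch of the semantics as we understand them; verify against the wizard's own help text before relying on it):

```python
# Sketch of trust-direction semantics as we read the New Trust Wizard:
# an "incoming" trust makes the local domain the trusted side, so local
# users can be authenticated by (and reach resources in) the specified
# domain or realm; "outgoing" is the mirror image.
def access_allowed(direction: str, user_side: str) -> bool:
    """user_side is 'local' for a local user reaching the specified domain,
    or 'remote' for a user in the specified domain reaching the local one."""
    if direction == "two-way":
        return True
    if direction == "one-way: incoming":
        return user_side == "local"
    if direction == "one-way: outgoing":
        return user_side == "remote"
    raise ValueError(f"unknown direction: {direction}")
```

Under this model, an incoming realm trust lets your users reach the external realm but not the reverse, matching the bullet descriptions above.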

Managing Forest Trusts

Windows Server 2003 has introduced a new feature that allows administrators to easily establish trusts between domains in different forests. Creating a forest trust will form implied trust relationships between every domain in both forests. You must manually establish a forest trust, unlike other types of trusts that are automatically created, such as the trust relationship between a parent and a child domain within the same forest. You can only create this type of trust between the forest root domains of two Windows Server 2003 forests.
Forest trusts are transitive and can be one-way or two-way. A one-way trust will allow members of the trusted forest to access files, applications, and resources that are located in the trusting forest. However, as the name implies, the trust operates in only one direction: if you establish a one-way forest trust between forest A (the trusted forest) and forest B (the trusting forest), members of forest A can access resources located in forest B, but not the other way around. In this example, for users in Forest B to access resources in Forest A, you would instead need to create a two-way forest trust. This would allow users and groups from either forest to utilize resources located in the other forest. Each domain within Forest A will also trust all domains in Forest B, and vice versa. To create a forest trust in your Windows Server 2003 forest root domain, follow these steps:
1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts. If you are using the RunAs function, enter the administrative username and password when prompted.
2. Right-click the forest root domain and select Properties.
3. On the Trusts tab, click New Trust and then click Next.
4. On the Trust Name page, type the DNS name of the target forest and click Next to continue.
5. On the Trust Type page, select Forest trust. Click Next to continue.
6. On the Direction of Trust page, select one of the following options:

• Two-way forest trust: Users in the local and remote forests will be able to access resources in either forest.
• One-way: incoming: Users in the forest specified in Step 2 (your forest) will be able to access resources in the remote forest, but users in the remote forest will not be able to access any resources in your forest.
• One-way: outgoing: The reverse of the previous bullet point. Users in the remote forest will be able to access resources in the forest specified in Step 2, but not the other way around.

Creating a Shortcut Trust

Authentication requests between two domains in different domain trees must travel a trust path, that is, a series of individual trust relationships between the two domains. This can be a somewhat lengthy process within a complex forest structure, but you can reduce it through the use of shortcut trusts. Shortcut trusts are one-way or two-way transitive trusts that you can use to optimize the authentication process when many of your users from one domain need to log onto another domain in the forest structure. As illustrated in Figure 4.21, the shortcut trust between Domain A and Domain F shortens the path traveled for UserA's login request between the two domains. Without it, UserA must access the printer in Domain F by referring to the trust relationship between Domain A and Domain B, then between Domain B and Domain C, and so forth until reaching Domain F. The shortcut trust creates a trust relationship directly between Domain A and Domain F, which greatly shortens the authentication process in an enterprise with a long chain of trust relationships. Use these steps to create a shortcut trust using the GUI interface:

Figure 4.21: Shortcut Trusts

1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts.
2. Right-click on your domain name and select Properties.
3. From the Trusts tab, click on New Trust and then Next.
4. On the Trust Name screen, enter the DNS name of the target domain. Click Next when you're ready to continue.
5. From the Direction of Trust page, select one of the following options:

• Two-way will create a two-way shortcut trust, so that the login process will be optimized in both directions.
• One-way: incoming will hasten the login process when users in your domain authenticate to the target domain. Login requests coming from the target domain into your domain will still need to traverse the usual trust path between the two.
• One-way: outgoing accomplishes the reverse: login requests from users in the target domain to your domain will be able to use this shortcut trust, but outgoing login requests from your domain will not.

6. If you have Domain Admin or Enterprise Admin access to each domain involved in the trust relationship, you can create both sides of the shortcut trust at the same time. To do so, click Both this domain and the specified domain on the Sides of Trust page.
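The trust-path shortening can be sketched as a graph search: count the trust links crossed with and without the shortcut. This is illustrative only; real Kerberos referral traffic is more involved than a hop count:

```python
from collections import deque

def trust_hops(trusts, start, goal):
    """Fewest trust links crossed between two domains, treating the
    two-way transitive trusts as edges of an undirected graph (BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        domain, hops = queue.popleft()
        if domain == goal:
            return hops
        for a, b in trusts:
            nxt = b if a == domain else a if b == domain else None
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None  # no trust path exists

# The chain of parent-child trusts from Figure 4.21 (names illustrative).
chain = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]
```

Without the shortcut, an authentication request from Domain A to Domain F crosses five trust links; adding the shortcut trust ("A", "F") reduces that to one.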

Creating an External Trust

You'll create an external trust to form a non-transitive trust with a domain that exists outside of your Windows Server 2003 forest. External trusts can be one-way or two-way, and should be employed when users need access to resources located in a Windows NT 4.0 domain, or in an individual domain located within a separate Windows 2000 or 2003 forest with which you haven't established a forest trust. You'll use an external trust instead of a forest trust if the trusting domain is running Windows NT 4.0, or if you want to restrict access to another forest to resources within a single domain. External trusts can be created using either the GUI interface or the command line. Like most of the functions discussed in this chapter, you must be a member of the Domain Admins or Enterprise Admins group, or you must have been delegated the appropriate authority by a member of one of these groups, in order to perform these procedures.

Creating an External Trust with the Windows Interface

1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts. Enter the appropriate username and password to run the utility if you've configured the shortcut to use RunAs.
2. Right-click on the domain that you want to create a trust for, and click Properties.
3. From the Trusts tab, click on New Trust and then Next.
4. On the Trust Name screen, enter the DNS or NetBIOS name of the domain that you want to establish a trust with, then click Next.
5. The next screen allows you to establish the Trust Type. Click on External Trust, then Next to continue.
6. From the Direction of Trust screen, select one of the following:

• Two-way will establish a two-way external trust. Users in your domain and the users in the specified domain will be able to access resources in either domain.

• One-way incoming: Users in your Windows Server 2003 domain will be able to access resources in the trusting domain that you specify, but the trusting domain will not be able to access any resources in the 2003 domain.

• One-way outgoing: The reverse of One-way incoming. Users in the external domain can access resources in your domain, but your users will not be able to connect to resources in the external domain.

7. Click Next when you've determined the direction of the trust you're creating. On the Outgoing Trust Properties sheet, you can choose one of the following options:

• To allow users from the external domain to access all resources in your Windows Server 2003 domain, select Allow authentication for all resources in the local domain. (You'll most commonly select this option if both domains are part of the same company or organization.)

• To restrict users in the external domain so that they can access only the resources you explicitly permit, click Allow authentication only for selected resources in the local domain. This option should be used when each domain belongs to a separate organization. Once you've made your selection, click Next to continue.

8. If you have Domain Admin or Enterprise Admin access to each domain involved in the trust relationship, you can create both sides of an external trust at the same time. Click Both this domain and the specified domain on the Sides of Trust page.

Selecting the Scope of Authentication for Users

Once you've created a trust relationship between two separate forests, you'll need to indicate the scope of authentication for users from the trusted domain. You can either allow users in the trusted forest to be treated as members of the Authenticated Users group in the local forest, or specify that users from the other forest must be granted explicit permission to authenticate to local resources. (You'll hear the latter option referred to as an "Authentication Firewall.") If users from the trusted domain are not treated as members of the Authenticated Users group in the trusting domain, they will only be able to access resources for which they have been granted specific permissions. This is a more restrictive means of granting access, and should be used when the trusting domain contains extremely sensitive or compartmentalized data. Specify the scope of authentication for any trusts you've created using the following steps:

1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts.
2. Right-click on the domain that you want to administer, and select Properties.
3. On the Trusts tab, select the trust that you want to administer under Domains trusted by this domain (outgoing trusts) or Domains that trust this domain (incoming trusts) and do one of the following:

• To select the scope of authentication for users that authenticate through an external trust, select the external trust that you want to administer and then click Properties. On the Authentication tab, click either Domain-wide or Selective authentication. If you select Selective authentication, you need to manually enable permissions on the local domain and on each resource to which you want users in the external domain to have access. Otherwise, the users from the trusted domain will automatically be added to the Authenticated Users group in the trusting domain.

• To select the scope of authentication for users authenticating through a forest trust, click the forest trust that you want to administer, and then click Properties. On the Authentication tab, click either Forest-wide or Selective authentication. If you select Selective authentication, you need to manually enable permissions on each domain and resource in the local forest that users in the second forest should be able to access.
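The difference between the two scopes can be sketched as a simple access check. This is a conceptual model of the behavior described above, not Windows security code, and the partner.example forest name is an invented placeholder:

```python
# Toy model of the two authentication scopes for users from a trusted forest.

def can_access(user_forest, resource_acl, scope, allowed_to_auth=()):
    """scope 'forest_wide' treats trusted users as Authenticated Users;
    'selective' requires an explicit allow-to-authenticate grant first."""
    if scope == "selective" and user_forest not in allowed_to_auth:
        return False                 # stopped at the "authentication firewall"
    return "Authenticated Users" in resource_acl or user_forest in resource_acl

acl = {"Authenticated Users"}        # a resource open to authenticated users

print(can_access("partner.example", acl, "forest_wide"))   # True
print(can_access("partner.example", acl, "selective"))     # False
print(can_access("partner.example", acl, "selective",
                 allowed_to_auth={"partner.example"}))      # True
```

Note how, under selective authentication, membership in Authenticated Users is not enough by itself: access is denied until the trusted forest is explicitly permitted to authenticate, which is the extra administrative step the procedure above warns about.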

Verifying a Trust

Once you have created a trust relationship, you may need to verify that the trust was created properly if the users in either domain are not able to access the resources that you think they should. You can perform this troubleshooting technique by using the following steps:

1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts.
2. Right-click on the domain you want to administer and click Properties.
3. From the Trusts tab, click on the trust that you wish to verify and select Properties.
4. Click Validate to confirm that the trust relationship is functioning properly. Select one of the following options:

• If you select No, do not validate the incoming trust, Microsoft recommends that you repeat the procedure on the remote domain to ensure that it is fully functional.

• If you choose Yes, validate the incoming trust, you'll be prompted for a username and password with administrative rights to the remote domain.

Removing a Trust

If you need to delete a trust relationship between two domains, you can do so in one of two ways. From the command line, you can use the netdom Support Tools utility with the following syntax:

    netdom trust TrustingDomainName /d:TrustedDomainName /remove /UserD:User /PasswordD:*

UserD and PasswordD refer to a username and password with administrative credentials for the domain that you're administering. To remove a trust using the Windows interface, follow these steps:

1. Click on Start | Programs | Administrative Tools | Active Directory Domains and Trusts.
2. Right-click on your domain name and select Properties.
3. On the Trusts tab, select the trust that you want to remove, either under Domains trusted by this domain (outgoing trusts) or Domains that trust this domain (incoming trusts), and click Remove.
4. Choose whether you wish to remove the trust relationship on the local domain only, or on both the local and the other domain. If you choose Yes, remove the trust from both the local domain and the other domain, you'll need to have access to a user account and password that has administrative rights in the remote domain. Otherwise, choose No, remove the trust from the local domain only, and have an administrative user with the appropriate credentials repeat the procedure on a controller in the remote domain.

Configuring & Implementing…
Managing Trust Relationships at the Command Line

While the Windows GUI certainly makes creating and managing trust relationships a snap, there are times when you might want or need to do so from the command line. You may be working in a test environment or another scenario in which your domain structures change frequently, which makes the command line a more efficient option for managing your network. In this case, you can turn to the netdom utility found in the \Support directory of the Windows Server CD. The basic syntax of the utility is as follows:

    netdom trust TrustingDomainName /d:TrustedDomainName /add /UserD:administrator /PasswordD:password

TrustingDomainName specifies the DNS name of the target domain in the trust relationship that you're creating, while TrustedDomainName specifies the trusted or account domain. (When using the command line, you supply the user ID and password within the command-line syntax itself. As such, you don't need to use the RunAs function described for use with the Active Directory Domains and Trusts utility.) You can also use netdom to specify the password that you'll use to connect to one or both domains, and to establish the trust as one-way or two-way. For example, to create a two-way trust between DomainA and DomainB, you would type the following at the command prompt:

    netdom trust DomainA /d:DomainB /add /twoway

You can use the above syntax with the netdom utility whether you are creating a forest, shortcut, or external trust. To establish a realm trust from the command line, you'll use a slightly different syntax:

    netdom trust TrustingDomainName /d:TrustedDomainName /add /realm /PasswordT:NewRealmTrustPassword

Just like before, TrustingDomainName specifies the DNS name of the trusting domain in the new realm trust, and TrustedDomainName refers to the DNS name of the trusted domain in the new realm trust. NewRealmTrustPassword is the password that will be used to create the new realm trust. The password that you specify needs to match the one used to create the other half of the trust in the external Kerberos realm, or the creation of the trust relationship will fail. Finally, you can use netdom to verify a trust relationship as follows:

    netdom trust TrustingDomainName /d:TrustedDomainName /verify

The netdom command has numerous other optional command-line parameters that you can view by entering netdom trust | more at the Windows command prompt.
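If your domain structures change frequently, the netdom syntax above is easy to mistype, so some administrators assemble the arguments in a small script. The sketch below is our own illustration, not part of the Support Tools: the build_netdom_trust function and its defaults are assumptions, and it only constructs the command line without executing anything.

```python
# Sketch: build a netdom trust command line from its parts.
# Running the result still requires the Windows Support Tools' netdom.exe
# and credentials with the appropriate rights in both domains.

def build_netdom_trust(trusting, trusted, action="/add", two_way=False,
                       user=None, password=None):
    """Return the netdom argument list for creating, removing, or verifying a trust."""
    args = ["netdom", "trust", trusting, f"/d:{trusted}", action]
    if two_way:
        args.append("/twoway")
    if user is not None:
        args.append(f"/UserD:{user}")
    if password is not None:
        args.append(f"/PasswordD:{password}")
    return args

# Two-way trust between DomainA and DomainB, as in the text:
cmd = build_netdom_trust("DomainA", "DomainB", two_way=True)
print(" ".join(cmd))  # netdom trust DomainA /d:DomainB /add /twoway
```

A wrapper like this is only worthwhile in a lab where trusts are torn down and recreated often; for one-off changes the plain commands shown above are simpler.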

Managing UPN Suffixes

Within the Active Directory database, each user account possesses a logon name, a pre-Windows 2000 user logon name (the equivalent of the Windows NT 4.0 Security Account Manager (SAM) account name), and a UPN suffix. The UPN suffix refers to the portion of the username to the right of the @ character. In a Windows 2000 or 2003 domain, the default UPN suffix for a user account is the DNS domain name of the domain that contains the user account. For example, the UPN suffix of '[email protected]' would be ''. You can add alternative UPN suffixes in order to simplify network administration and streamline the user logon process by creating a single UPN suffix for all users. Consider an Active Directory forest that consists of two discontinuous domain names as the result of a corporate merger: and . Rather than forcing the users from each domain to remember which UPN they need to specify when logging onto the different domain systems, you can create an alternative UPN suffix so that all user accounts can be addressed as [email protected], allowing users from each domain to use a consistent naming syntax when logging onto systems from the two separate domains. Another real-world example might be a company that uses a deep domain structure, which can create long domain names that become difficult for your users to remember. You can use an alternative UPN suffix to allow users to remember user@airplane, rather than [email protected].

To add a new UPN suffix to a Windows Server 2003 domain:

1. Open Active Directory Domains and Trusts.
2. Right-click the Active Directory Domains and Trusts icon and select Properties.
3. On the UPN Suffixes tab, enter an alternative UPN suffix for the forest and click Add.
4. If you wish to add any additional UPN suffixes, repeat Step 3 until you're finished. Click OK when you're done.
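The relationship between a UPN and its suffix can be illustrated in a few lines of code. This is a conceptual sketch only; the user and domain names below are invented placeholders standing in for the redacted examples above, not values drawn from any directory:

```python
# Conceptual sketch: a userPrincipalName has the form logon_name "@" upn_suffix.
# The names below are illustrative placeholders, not real Active Directory data.

def upn_suffix(upn: str) -> str:
    """Return the portion of a UPN to the right of the @ character."""
    name, sep, suffix = upn.partition("@")
    if not sep:
        raise ValueError(f"not a UPN: {upn!r}")
    return suffix

# Default suffix: the DNS name of the domain containing the account.
print(upn_suffix("jsmith@sales.example.com"))   # sales.example.com

# After adding an alternative suffix for the forest, users from two merged
# domains can all log on with one consistent suffix:
merged = {u: f"{u.partition('@')[0]}@example.com"
          for u in ("jsmith@sales.example.com", "bjones@hr.example.net")}
print(merged["bjones@hr.example.net"])          # bjones@example.com
```

The point of the second half of the sketch is that the alternative suffix changes only what users type at logon; the accounts themselves remain in their original domains.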

Restoring Active Directory

Similar to Windows 2000, Windows Server 2003 allows you to restore your Active Directory data in case of a system hardware failure, data corruption, or accidental deletion of critical data. Active Directory restores can only be performed from the local Windows Server 2003 domain controller; you cannot restore the Active Directory database to a remote computer without the aid of a third-party utility. In order to restore this data on a domain controller, you must first restart the controller in Directory Services Restore Mode, using the password that you specified during the initial installation of Windows Server 2003. This will allow you to restore Active Directory directory service information, as well as the SYSVOL directory itself. To access Directory Services Restore Mode, press F8 during startup and select it from the list of startup options. Windows Server 2003 includes the option to perform an authoritative or non-authoritative restore of Active Directory information. The new release also includes a third option, called a primary restore, that was not available in previous versions of Active Directory. We will discuss all three of these options in the upcoming sections.

Performing a Non-authoritative Restore



When restoring objects to the Active Directory database, you can perform either an authoritative or a non-authoritative restore. The non-authoritative restore is the default restore type for Active Directory that will allow restored objects to be updated with any changes held on other controllers in the domain after the restore has completed. For example, let’s say that on a Wednesday you restore jsmith’s Windows user object from the Monday backup file. Between Monday and Wednesday, jsmith’s Department attribute was changed from “Marketing” to “Human Resources.” In this scenario, the jsmith object from the Monday backup tape will still possess the old “Marketing” Department attribute. However, this information will be re-updated to “Human Resources” at the next replication event, since the other controllers in your domain will update the restored controller with their newer information. Using this default restore method, any changes made subsequent to the backup being restored will be automatically replicated from the other controllers in your Windows Server 2003 domain. Just like in Windows 2000, you must first boot into Directory Services Restore Mode in order to restore the System State data on a domain controller. Use the [F8] key to access the Startup options menu during the Windows Server 2003 boot-up process, then scroll to “Directory Services Restore Mode” and press [Enter]. This startup mode will allow you to restore the SYSVOL directory and the Active Directory, as discussed in the next exercise.
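The "newest change wins" behavior in the jsmith example can be sketched in a few lines. This is a toy model of USN-based conflict resolution, not real replication code, and the USN values are invented for illustration:

```python
# Toy model: after a non-authoritative restore, the restored attribute keeps
# its old update sequence number (USN), so the newer value held on the other
# domain controllers wins at the next replication event.

def replicate(restored, replica):
    """Return the attribute value that survives replication: highest USN wins."""
    return restored if restored["usn"] > replica["usn"] else replica

monday_backup = {"attr": "Department", "value": "Marketing", "usn": 1000}
live_replica  = {"attr": "Department", "value": "Human Resources", "usn": 1042}

# Non-authoritative restore leaves the backup's USN unchanged, so the live
# replica's newer value overwrites the restored one.
print(replicate(monday_backup, live_replica)["value"])  # Human Resources
```

This is exactly why a non-authoritative restore is safe for recovering from hardware failure but useless for recovering deleted objects: whatever the other controllers hold as newer, including a deletion, wins.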

EXERCISE 4.05 PERFORMING A NON-AUTHORITATIVE RESTORE

1. Once you have booted into Directory Services Restore Mode, open the Windows Backup utility by clicking Start | Programs | Accessories | System Tools | Backup.
2. Click Next to bypass the Welcome screen, then select Restore Wizard to begin the restore process, as shown in Figure 4.22.

Figure 4.22 Beginning the Restore Process



3. Select the radio button next to Restore files and settings, then click Next to continue to Figure 4.23.

Figure 4.23 Selecting the Files and Information to Restore

4. Place a check mark next to the files and data that you wish to restore. (In this case, the backup only contains the System State data, so that will be the only check mark necessary.) Click Next when you've finished making your selections.
5. Clicking Finish from the Summary screen will launch the restore process using the following default options: files will be restored to their original locations, and existing files will not be replaced. If you wish to change any of these options, click Advanced to continue to Figure 4.24.

Figure 4.24 Selecting a Destination for Restored Files

6. In Figure 4.24, you will select the location to restore the files, folders, and System State information to. If you want the System State to automatically overwrite any existing information, select Original location. Otherwise, you can choose one of the other two options: Alternate location will restore any files and folders to another directory or drive while maintaining the existing directory structure, while Single folder will restore all files into a single directory, regardless of the folders or sub-folders present in the backup file. Click Next when you've made your selection. You'll see the screen shown in Figure 4.25.

Figure 4.25: Choosing How to Restore Existing Files



7. From this screen, you will instruct the Restore Wizard to leave any existing files intact, to replace them if they are older than the files that exist on the backup media, or to overwrite existing files en masse. You must make this decision globally, unlike a Windows Explorer file copy, in which you are prompted to overwrite each individual file. (As stated in the previous Warning, remember that this only applies to user files or folders that you may be restoring in addition to the Active Directory database.) Click Next to continue to Figure 4.26.

Figure 4.26 Selecting Advanced Restore Options



8. Use this screen to change any final security settings, if necessary. Click Next and then Finish to launch the restore process. You'll see a progress window to indicate that the restore is underway.
9. Since you have restored Active Directory data during this process, you'll be prompted to reboot when the restore has completed. After rebooting, check the Event Viewer for any error messages, and verify that the desired information has been restored properly.

Performing an Authoritative Restore

In some cases, you may not want changes made since the last backup operation to be replicated to your restored Active Directory data. In these instances, you want all domain controller replicas to possess the same information as the backed-up data that you are restoring. To accomplish this, you'll need to perform an authoritative restore. This is especially useful if you inadvertently delete users, groups, or organizational units from the Active Directory directory service, and you want to restore the system so that the deleted objects are recovered and replicated. (Otherwise, the replication updates from the more up-to-date controllers will simply "re-delete" the information that you just worked so hard to restore.) When you mark information as authoritative, the restore process changes the objects' Update Sequence Numbers (USNs) so that they are higher, and therefore considered newer, than any other USNs in the domain. This ensures that any data you restore is properly replicated to your other domain controllers. In an authoritative restore, the objects in the restored directory will replace all existing copies of those objects, rather than having the restored items receive updates through the usual replication process. To perform an authoritative restore of Active Directory data, you will need to run the ntdsutil utility after you have restored the System State data, but before you reboot the server at the end of the restore process. The following exercise will cover the steps in using ntdsutil to mark Active Directory objects for an authoritative restore.

EXERCISE 4.06 PERFORMING AN AUTHORITATIVE RESTORE

1. Follow the steps listed in Exercise 4.05 to perform a non-authoritative restore. When the restore process completes and you are prompted to reboot the domain controller, select No. From a command prompt, type ntdsutil and press Enter. You'll see the prompt shown in Figure 4.27.
2. Type authoritative restore and press Enter.
3. To authoritatively restore the entire Active Directory database from your backup media, type restore database and press Enter. Click Yes to confirm; you will see the progress window shown in Figure 4.27.

Figure 4.27 Performing an Authoritative Restore

4. Type quit until you return to the main Command Prompt, then reboot the domain controller. Check the Event Viewer and the Active Directory management utilities to confirm that the restore completed successfully.

Understanding NTDSUTIL Restore Options

Ntdsutil.exe provides a number of optional parameters for performing an authoritative restore of the Active Directory database. In the previous exercise, you simply used the restore database syntax to authoritatively restore the entire Active Directory structure. However, you can exert much more granular control over the Active Directory restore using the command-line syntax discussed here. (You can always type ntdsutil /? for a listing of all available options.) The complete list of available restore options within ntdsutil is as follows: {restore database | restore database verinc %d | restore subtree %s | restore subtree %s verinc %d}. These individual parameters perform the following tasks:

• restore database Marks the entire database as authoritative; all other domain controllers will accept replication data from the restored server as the most current information.

• restore database verinc %d Marks the entire database as authoritative, and increments the version number by %d. In Figure 4.27, you saw that the default syntax increments the version number by 100000, which is usually sufficient to mark the database restore as authoritative. You'll only need to use this option if you need to perform a second authoritative restore over a previous incorrect one. For example, if you perform an authoritative restore using a Tuesday backup, and discover afterwards that you require the Monday tape to correct the problem you're trying to resolve, you should authoritatively restore the Monday backup using a higher version number such as 200000. This ensures that the other controllers in your domain will regard the second restore operation as authoritative.

• restore subtree %s Use this syntax to mark a specific sub-tree (and all children of that sub-tree) as authoritative. The sub-tree is defined by using the fully distinguished name (FDN) of the object.

• restore subtree %s verinc %d Performs the same function as restore database verinc %d for a single sub-tree.
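The version-number arithmetic behind verinc can be modeled in a few lines. This is a conceptual sketch of the behavior described above, not ntdsutil code, and the object names and USN values are invented for illustration:

```python
# Toy model of an authoritative restore: marking an object authoritative bumps
# its version/USN by a large increment (ntdsutil's default is 100000), so it
# compares as newer than every replica and wins the next replication event.

DEFAULT_VERINC = 100000

def authoritative_restore(obj, verinc=DEFAULT_VERINC):
    """Return a copy of the restored object with its version bumped."""
    return {**obj, "usn": obj["usn"] + verinc}

def replicate(a, b):
    """Highest USN wins."""
    return a if a["usn"] > b["usn"] else b

backup  = {"value": "OU=Sales (restored)", "usn": 1000}
replica = {"value": "OU=Sales (deleted)",  "usn": 1042}

marked = authoritative_restore(backup)
print(replicate(marked, replica)["value"])   # OU=Sales (restored)

# A second restore over an incorrect first one needs a larger increment,
# which is exactly what verinc 200000 provides:
second = authoritative_restore({"value": "Monday tape", "usn": 1000},
                               verinc=200000)
print(replicate(second, marked)["value"])    # Monday tape
```

The model also shows why the default increment of 100000 is "usually sufficient": it only fails to win if more than 100000 changes were made to the object's replicas since the backup, or if a previous authoritative restore already consumed that headroom.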

Performing a Primary Restore

You'll perform a primary restore when the server you are trying to restore contains the only existing copy of any replicated data, in this case the SYSVOL directory and the Active Directory data. Using a primary restore will allow you to return the first replica set to your network; do not use this option if you've already restored other copies of the data being restored. Typically, you'll only perform a primary restore when you have lost all of the controllers in your domain and are rebuilding the entire Active Directory structure from your backup media. You'll perform a primary restore very similarly to a non-authoritative restore, but on the final Advanced Options screen, place a check mark next to When restoring replicated data sets, mark the restored data as the primary data for all replicas.

Summary

Once you've implemented your Active Directory infrastructure, you'll need to perform a number of tasks to keep it in top working order. To help you do this, we covered the steps needed to create a forest root domain and any subsequent child domains. Depending on the other controllers installed on your network, you may also be able to raise the Active Directory forest and/or domain functional levels to take advantage of new Windows Server 2003-specific features and functionality. As a final topic in the planning and implementation stage, you'll also need to consider creating any necessary trust relationships within your organization or with external vendors with whom you share information and data on a regular basis. Finally, we discussed the process of performing both authoritative and non-authoritative restores of the Active Directory database in the event of any sort of hardware or software failure. As another means of addressing any potential failures on your network, we also discussed the necessary steps to manage domain controllers and organizational units, as well as the best way to view and modify the Active Directory schema. Both of these will help you maintain and recover your Active Directory installation as painlessly as possible, since the things that can go wrong inevitably do.

Fast Track

Choosing a Management Method

• The most common administrative tools are the graphical user interface (GUI) utilities that are automatically installed when you run dcpromo to install Active Directory. The three most common of these are Active Directory Users and Computers, Active Directory Domains and Trusts, and Active Directory Sites and Services.
• Windows Server 2003 offers an array of command-line utilities that can add, delete, and remove Active Directory objects, create and delete trust relationships, manage domain controllers, and much more.
• Combine command-line utilities with Microsoft or third-party scripting tools like VBScript, Windows Scripting Host, and the like to create powerful utilities that streamline repetitive administrative tasks.

Managing Forests and Domains

• Base your decision to create multiple domains within a single forest on whether you need to maintain a separate security boundary or Active Directory schema for different organizations or business units. Use multiple domains or Organizational Units to delegate some administrative responsibility while still maintaining a centrally administered network. If you need to maintain two discrete entities in terms of security and network management, multiple forests are the way to go.
• Raising the domain or forest functional level allows you to implement security and administrative improvements, but will not allow any Windows NT 4 or 2000 controllers to participate in the domain. You'll need to either upgrade all down-level controllers on your network, or else demote them to standalone server status.
• You can create all necessary trust relationships (forest, shortcut, realm, and so on) using either the Windows GUI or the netdom command-line utility.

Restoring Active Directory

• You can restore the Active Directory or System State using the native Windows Server 2003 Backup utility or a tool from a third-party vendor.
• The default Active Directory restore type is non-authoritative, in which any restored objects will be updated by the other domain controllers within the replication topology to bring the restored objects up to date.
• To prevent the restored copy of an object or objects from receiving updates, use the ntdsutil utility to mark the restored data as authoritative. All other controllers in the domain will take the restored copy of the object as the definitive copy, and will update their own information accordingly.

Frequently Asked Questions

Q: How do I decide between implementing a separate domain versus an organizational unit?
A: You'll want to create a domain if the resources you're attempting to group together have different security requirements than the rest of the existing network. Certain security settings, especially account policies, can only be implemented at the domain level, not at the OU level.

Q: I have a third-party utility that accesses my Active Directory data via LDAP; however, it cannot read signed or encrypted LDAP data. How can I disable this feature?
A: In the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AdminDebug key, create a DWORD value called AdsOpenObjectFlags and set it according to the information in Table 4.5, depending on your needs. (Remember that editing the Registry can be a risky proposition, and that you should have a viable backup on hand in case anything goes awry.)

Table 4.5: Registry Values to Disable Signed and/or Encrypted LDAP Traffic

Value   Disables
1       Signing
2       Encrypting
3       Signing & Encrypting
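As a sketch, the Table 4.5 setting could be captured in a .reg file such as the one below, which shows value 3 (disable both signing and encrypting). This is an illustration assembled from the key and value named above, not a Microsoft-supplied file; verify the key path in your own environment and back up the Registry before importing it.

```
Windows Registry Editor Version 5.00

; Disable both LDAP signing and encrypting for ADSI clients (Table 4.5, value 3).
; Key and value names are taken from the answer above; verify before use.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AdminDebug]
"AdsOpenObjectFlags"=dword:00000003
```

Importing the file (by double-clicking it or running regedit /s) has the same effect as creating the DWORD value by hand in Registry Editor.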



Q: What happens to Windows NT trust relationships when you upgrade to Windows Server 2003? A: When you upgrade a Windows NT domain to a Windows Server 2003 domain, all of your existing Windows NT trusts will be preserved as-is. Remember that trust relationships between Windows Server 2003 domains and Windows NT domains are nontransitive.



Chapter 5

Managing User Identity and Authentication

Solutions in this chapter:
• Identity Management
• Identity Management with Microsoft's Metadirectory
• MMS Architecture
• Password Policies
• User Authentication
• Single Sign-on
• Authentication Types
• Internet Authentication Service
• Creating a User Authorization Strategy
• Using Smart Cards
• Implementing Smart Cards
• Create a password policy for domain users

Identity Management

In today's connected world, providing proof of your identity is often required to ensure that someone else is not trying to use your identity. It used to be that entering a username and password was sufficient to authenticate someone to a network. However, password authentication is only the first step in truly authenticating a user in today's environment. You must have a well-defined password policy, which includes account lockout, password rotation, and other options to ensure limited access to your network. In this chapter, we will develop a password policy for your Windows 2003 network. However, sometimes passwords and password policies are not enough, and we have to take authentication to the next plateau. Tools such as biometric devices, token devices, voice identification, and smart cards are becoming much more mainstream for user authentication as prices continue to drop and acceptance continues to rise. If you have ever seen a large datacenter, you have probably seen biometric tools such as thumbprint scanners or palm scanners at entryways for employees to gain access to the datacenter. Other sites may use smart card readers for access to public computer kiosks. For example, Sun Microsystems requires the use of smart cards for students to sign in each day to class. Each student is assigned a smart card and a four-digit PIN that they must use to sign in each day before class begins. In Windows Server 2003 and Windows XP, Microsoft has implemented smart card technology in the operating system as well as Active Directory to provide you with enhanced authentication abilities and additional security for your network. As a Windows 2003 MCSE, you are required to understand how to implement smart card technologies and manage resources through the use of smart cards. Let's begin with a discussion of password policies.

Identity Management with Microsoft's Metadirectory

Microsoft may have developed Active Directory, but they did not create it in a vacuum. Microsoft made every effort to ensure that Active Directory would be representative of Internet standards, and be able to interoperate with third-party applications. Many enterprise networks have a common set of business requirements for their networked systems, including:

• Single logon and synchronized passwords across systems to simplify network access from the user's perspective, which translates directly to a reduction in support needs.
• Ease of propagating human resources information throughout multiple systems when a user is hired, thus providing network access; and when a user is fired, thus providing a measure of security.
• A single global address book that contains current information for other users, including their e-mail addresses, regardless of the messaging system used.
Metadirectories have become more prevalent in networking because of the proliferation of directory databases. The average enterprise has about 10 directories residing in their multiple network operating systems, electronic messaging, databases, groupware, PBX telephone systems, and infrastructure operating systems. For example, when a new employee is hired, a company might need to enter that employee’s data into an HR database, a security badge database, the PBX voice mail system, an electronic messaging application, a proxy server, Novell Directory Services, a NetWare bindery, a legacy Windows NT domain, Active Directory, and so forth. A metadirectory is somewhat different from a synchronization method of updating directories. Synchronization is the process of ensuring that when an administrator makes a change to one database, that change is synchronized across all other databases. This is like Multi-Master replication among dissimilar databases. As unlikely as it seems, this is a common system already developed for many messaging systems. It enables global address books from different vendors to be synchronized when a change is made to one of those vendors’ directories. This type of synchronization is traditionally implemented through gateway or connector software. A metadirectory, on the other hand, is a superset of all directories. Primarily, these directories manage identity information, but many of them extend into other resource information, such as data, files, printers, shares, applications, telephone information, policy rules, and so on. Not all directories contain the exact same extent of information, but most have a commonality in the identity of users who are allowed to access this information, as shown in the following illustration: Identity Management with a Metadirectory


Page 2 of 52

[Illustration: the metadirectory identity sits at the center, linked to a PBX identity (phone), an NDS identity (resources), an AD identity (resources), a messaging identity (e-mail), a SQL identity (data), a DNS identity (location), and a groupware identity (data).]

The metadirectory is actually a directory itself, or an index, of all the information that can be synchronized between these various databases. There are two approaches to metadirectory products:

• Identity information index

• Single point of administration

The identity index approach enables centralization of the common identity information from the various databases mapped to each other. In the early development of metadirectories, this approach was the most common. The single point of administration approach includes a further extension into the security aspects of the various directories by including the resource information and the rules that apply to how users are granted access to those resources. Regardless of which approach is used, the capability of managing identity from a single point is a major administrative process improvement over the problems incurred through managing an average of 10 directories containing information about the same user identity. The challenge with metadirectories is to establish rules to manage the updates when they can be initiated from any one of the directories. The question at hand is, “which directory owns that particular identity attribute?” For example, is it more sensible to have the messaging database own the e-mail address or the SQL database? Probably the messaging database should own that piece of information. That means, if an administrator made a change to the e-mail address on a SQL database, and another administrator made a change to the messaging database, the change that would win is the messaging database e-mail address. This is done by establishing the messaging database as the master of the e-mail address attribute, whereas other databases are slaves to the messaging master. Microsoft acquired Zoomit Corporation, a company that developed metadirectory technologies, in 1999. This acquisition enables Microsoft to implement a metadirectory that will be able to access and interact natively with Active Directory, and be able to work with other directory services. The new product is Microsoft Metadirectory Services (MMS). Such directories would likely include:

• Messaging address books

• DNS and DHCP databases

• Third-party directory services

• Database directories

• Mainframe and minicomputer account managers

In essence, a metadirectory enables an administrator to have a single interface into multiple directory services, and manage those directory services using intelligent rules. The metadirectory must be able to integrate with those other directory services in a way that can maintain integrity across directories, and translate between different types of data representing the same value. For example, the e-mail address in one directory might be given two fields: a string representing the user ID and a string representing the Internet domain. A different directory might keep the e-mail address in a single field as a string value. Telephone numbers can include area codes and symbols in one directory, represented by a string value, but they could be a seven-digit telephone number in another directory with no symbols and represented by a number. The metadirectory must be able to understand these values and map them between directories. This can be done by using a native API for each directory, or by using a common protocol to access each directory (such as the Lightweight Directory Access Protocol, or LDAP) and then manipulating the data to ensure that the data is correct in each directory that the metadirectory touches. The optimal architecture for a metadirectory is one in which the metadirectory is the central connecting point between all the other directory services (see the following illustration). If a directory service were connected to others in a serial fashion, where directory A connects to directory B and directory B connects to directory C and so on, it would be less likely that the metadirectory could apply business rules regarding the ownership of values in the data.

Hub and Spoke Metadirectory



[Illustration: a legacy NT domain, a Web server, Active Directory, Novell Directory Services, a database, and an Exchange server each connect directly to the central metadirectory.]

Serial Directories

[Illustration: the same directories connected to one another in a chain, with no central connecting point.]

MMS Architecture



VIA was the name of the metadirectory product that Microsoft acquired when it bought Zoomit. It can run as a service or a console on a Windows NT 4 or Windows 2000 server. To access the MMS metadirectory, a client can be one of the following:

• Web browser

• LDAP client—either LDAP v.2 or LDAP v.3

• Zoomit Compass client

The MMS “metaverse” database connects to multiple directories through management agents that work in a bidirectional flow that can be scheduled by the administrator. There are management agents currently available for the following directories (future versions and updates may contain additional management agents):

• Banyan VINES

• GMHS (BeyondMail and DaVinci)

• Lotus Notes

• Microsoft Exchange Server

• Microsoft Mail

• Microsoft Windows NT domains

• Microsoft Windows 2000 Active Directory

• Netscape Directory Server

• Novell NetWare bindery

• Novell Directory Services

• Novell GroupWise (4.x and 5.x)

• SQL databases, via ODBC

• X.500 directories, via LDAP, such as ISOCOR, ICL, and Control Data

Additionally, a report management agent is available for reporting on the metaverse, and a generic management agent is available to use in creating a custom version for a different database. The metaverse can synchronize directories to the attribute level. In fact, new objects can be created in any directory or the metadirectory, or attributes can be changed, and then those objects and attribute changes will be propagated to the metadirectory (if made from a different directory). From the metadirectory, they will be propagated to the rest of the connected directories. MMS also supports ownership of data to the attribute level. This further maintains the referential integrity of the data when there are two or more different sources for identity information.
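The ownership and propagation behavior described above can be sketched in a few lines of code. The following Python fragment is an illustration only, not the MMS API; every class, method, and attribute name here is invented. A central "metaverse" propagates attribute changes to every connected directory, but a change submitted by a directory that does not own the attribute is overridden by the owner's current value.

```python
# Toy sketch of attribute-level synchronization through a central
# metaverse with per-attribute ownership rules. All names are invented
# for illustration; this is not how MMS is actually implemented.

class Metaverse:
    def __init__(self, owners):
        self.owners = owners          # attribute name -> owning directory
        self.directories = {}         # directory name -> {user: {attr: value}}

    def connect(self, name):
        self.directories[name] = {}

    def update(self, source, user, attr, value):
        """Accept a change from one connected directory and propagate it."""
        owner = self.owners.get(attr)
        # Ownership rule: a non-owner may not overwrite the owner's value.
        if owner is not None and source != owner:
            current = self.directories.get(owner, {}).get(user, {})
            if attr in current:
                value = current[attr]  # the owner's value wins
        for directory in self.directories.values():
            directory.setdefault(user, {})[attr] = value

mv = Metaverse(owners={"mail": "Exchange"})
for d in ("Exchange", "NT Domain", "SQL"):
    mv.connect(d)

mv.update("Exchange", "jsmith", "mail", "jsmith@example.com")
mv.update("SQL", "jsmith", "mail", "wrong@example.com")  # ignored: Exchange owns 'mail'
print(mv.directories["SQL"]["jsmith"]["mail"])           # jsmith@example.com
```

Because every change flows through the hub, the business rule ("Exchange owns the mail attribute") is applied in exactly one place, which is the advantage of the hub-and-spoke design over serially connected directories.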

Password Policies



Since they are largely created and managed by end-users, passwords have the potential to be the weak link in any network security implementation. You can install all of the high-powered firewall hardware and VPN clients that you’d like, but if your Vice President of Sales uses the name of her pet St. Bernard as her password for the customer database system, all of your preventative measures might be rendered useless. And since passwords are the “keys to the kingdom” of any computer system, the database that Windows 2003 uses to store password information will be a common attack vector for anyone attempting to hack your network. Luckily, Windows 2003 offers several means to secure passwords on your network. A combination of technical measures along with a healthy dose of user training and awareness will go a long way towards protecting the security of your network systems.

Creating an Extensive Defense Model

In modern computer security, a system administrator needs to create a security plan that uses many different mechanisms to protect the network from unauthorized access. Rather than relying solely on a hardware firewall and nothing else, defense-in-depth would also utilize strong passwords on local client PCs in the event that the firewall were compromised, as well as other security mechanisms. The idea here is to create a series of security mechanisms so that if one of them were circumvented, other systems and procedures would already be in place to help impede an attacker. Microsoft refers to this practice as an “Extensive Defense Model.” The key points of this model include the following:

• A viable security plan needs to begin and end with user awareness, since a technical mechanism is only as effective as the extent to which the users on your network adhere to it. As an administrator, you need to educate your users about how best to protect their accounts from unauthorized attacks. This can include advice about not sharing passwords, not writing them down or leaving them otherwise accessible, and making sure to lock a workstation if the user needs to leave it unattended for any length of time. You can spread security awareness information via e-mail, posters in employee break areas, printed memos, or any other medium that will get your users’ attention.

• Use the system key utility (syskey) on all critical machines on your network. This utility, discussed later in this section, will encrypt the password information that is stored in the Security Accounts Manager (SAM) database. At a minimum, you should secure the SAM database on the domain controllers in your environment; you should also consider protecting the local user database on your workstations in this manner as well.

• Educate your users about the potential hazards of selecting “Save My Password” or a similar feature on mission-critical applications such as remote access or VPN clients. Make sure that they understand that the convenience of saving passwords on a local workstation is far outweighed by the potential security risk if a user’s workstation becomes compromised.

• If you need to create one or more “service accounts” for applications to use to interface with the operating system, make sure that each of these accounts has a different password. Otherwise, compromising one such account will leave multiple network applications open to attack.

• If you suspect that a user account has been compromised, change the password immediately. If possible, consider renaming the account entirely, since it is now a known attack vector.

• Create a password policy and/or account lockout policy that is appropriate to your organization’s needs. (Both of these policies are discussed more fully later in this section.) It’s important to strike a balance between security and usability when designing these types of account policies: a 23-character minimum password length may seem like a good security measure on paper, for example, but any security offered by such a decision will be rendered worthless when your users leave their impossible-to-remember 23-character passwords written down on sticky notes on their monitors for all the world to see.

Strong Passwords

When discussing security awareness with your user community, one of the most critical issues to consider is that of password strength. While a weak password will provide potential attackers with easy access to your users’ computers, and consequently the rest of your company’s network, well-formed passwords will be significantly more difficult to decipher. Even though the password-cracking utilities used by attackers continue to evolve and improve, educating your users about the importance of strong passwords will provide additional security for your network’s computing resources. According to Microsoft, a weak password is one that contains any portion of your name, your company’s name, or your network login ID. So if my username on a network system were hunterle, and my network password were hunter12!@!, that would be considered a weak password. A password that contains any complete dictionary word – password, thunder, protocol – would also be considered weak. (Though it should go without saying, blank passwords are obviously straight out as well.) By comparison, a strong password will contain none of the characteristics described above; it will not contain any reference to your username, company name, or any word found in the dictionary. Strong passwords should also be at least seven characters long, and contain characters from each of the following groups:

• Uppercase letters: A, B, C…

• Lowercase letters: z, y, x…

• Numeric digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9

• Non-alphanumeric characters: !, *, $, }, etc.

Each strong password should be appreciably different from any previous passwords that the user may have created: P!234abc, Q!234abc, and R!234abc, while each meets the above password criteria, would not be considered strong passwords when viewed as a whole. To further complicate matters, an individual password can still be weak even though it meets the criteria listed above. For example, IloveU123! would still be a fairly simple one to crack, even though it possesses the length and character complexity of a strong password.
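The rules above can be expressed as a short programmatic check. The following Python sketch is illustrative only: the dictionary test is reduced to a three-word stand-in list, and the function name and parameters are our own invention, not any Windows API.

```python
import string

# Hedged sketch of the "strong password" rules described above.
COMMON_WORDS = {"password", "thunder", "protocol"}  # stand-in for a real dictionary

def is_strong(password, username="", company=""):
    # At least seven characters long.
    if len(password) < 7:
        return False
    # Must not contain the username, company name, or a dictionary word.
    lowered = password.lower()
    for banned in (username.lower(), company.lower(), *COMMON_WORDS):
        if banned and banned in lowered:
            return False
    # Must draw from all four character groups.
    groups = [string.ascii_uppercase, string.ascii_lowercase,
              string.digits, string.punctuation]
    return all(any(ch in group for ch in password) for group in groups)

print(is_strong("hunter12!@!", username="hunterle"))  # False: contains "hunter"
print(is_strong("P!234abc"))                          # True: meets all four criteria
```

Note that, as the chapter points out, a mechanical check like this is necessary but not sufficient: IloveU123! would pass it and still be relatively easy to crack.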

System Key Utility

Most password-cracking software used in attacking computer networks will attempt to target the SAM database or the Windows directory services in order to access passwords for user accounts. To secure your Windows 2003 password information, you should use the System Key utility (the syskey.exe file itself is located in the ~\System32 directory by default) on every critical machine that you administer. This utility will encrypt password information in either location, providing an extra line of defense against would-be attackers. To use this utility on a workstation or member server, you must be a member of the local Administrators group on the machine in question. (If the machine is a member of a domain, remember that the Domain Admins group is added to the local Administrators group by default.) On a domain controller, you need to be a member of the Domain Admins or Enterprise Admins group. In the following exercise, we’ll go through the steps of enabling the System Key utility on a Windows 2003 server.

EXERCISE 5.01 CREATING A SYSTEM KEY

1. From the Windows 2003 server desktop, click Start | Run, then type syskey and click OK. You’ll see the screen shown in Figure 5.1.

Figure 5.1 Enabling Syskey Encryption

2. Click Encryption Enabled, then click Update.

3. Choose from the security options shown in Figure 5.2. The different options available to you are as follows:



Figure 5.2 Selecting Syskey Encryption Options

• Password Startup, Administrator-Generated Password This will encrypt the account password information and store the associated key on the local computer. In this case, however, you will select a password that will be used to further protect the key. You’ll need to enter this password during the computer’s boot-up sequence. This is a more secure option than storing the startup key locally as described below, since the password used to secure the system key isn’t stored anywhere on the local computer. The drawback to this method is that an administrator must be present to enter the syskey password whenever the machine is rebooted, which might make this a less attractive option for a remote machine that requires frequent reboots.

• System Generated Password, Store Startup Key on Floppy Disk This option stores the system key on a separate floppy disk, which must be inserted during the system startup. This is the most secure of the three possible options, since the system key itself is not stored anywhere on the local computer, and the machine will not be able to boot without the floppy disk containing the system key.

• System Generated Password, Store Startup Key Locally This encrypts the SAM or directory services information using a random key that’s stored on the local computer. You can reboot the machine without being prompted for a password or a floppy disk; however, if the physical machine is compromised, the system key can be modified or destroyed. Of the three possible options when using syskey, this is the least secure.

4. Once you have selected the option that you want, click OK to finish encrypting the account information. You’ll see the confirmation message shown in Figure 5.3.



Figure 5.3 Confirmation of Syskey Success

Defining a Password Policy

Using Active Directory, you can create a policy to enforce consistent password standards across your entire organization. Among the criteria that you can specify are how often passwords must be changed, how many unique passwords a user must utilize when changing their password, and the complexity level of passwords that are acceptable on your network. Additionally, you can specify an account lockout policy that will prevent users from logging in after a certain number of incorrect login attempts. In this section, we’ll discuss the specific steps necessary to enforce password and account lockout policies on a Windows 2003 network.

Applying a Password Policy

EXERCISE 5.02 CREATING A DOMAIN PASSWORD POLICY

1. From the Windows 2003 desktop, open Active Directory Users and Computers. Right-click on the domain that you want to set a password policy for and select Properties.

2. Click on the Group Policy tab, shown in Figure 5.4. You can edit the default domain policy, or click New to create a new policy. In this case, we will click Edit to apply changes to the default policy.



Figure 5.4 Group Policy Tab

3. Navigate to the Password Policy node by clicking on Computer Configuration | Windows Settings | Security Settings | Account Policies | Password Policy. You’ll see the screen shown in Figure 5.5.

Figure 5.5 Configuring Password Policy Settings

4. For each item that you wish to configure, right-click on the item and select Properties. In this case, we’ll enforce a password history of three passwords. In the screen shown in Figure 5.6, place a check mark next to Define this Policy Setting, and then enter the appropriate value. Using password policies, you can configure any of the following settings:



Figure 5.6 Defining the Password History Policy

• Enforce Password History allows you to define the number of unique passwords that Windows will retain. This will prevent users from using the same password again when their password expires. Setting this number to at least 3 or 4 will prevent users from alternating repeatedly between two passwords whenever they’re prompted to change their password.

• Maximum Password Age defines how frequently Windows will prompt your users to change their passwords.

• Minimum Password Age ensures that passwords cannot be changed until they are more than a certain number of days old. This works in conjunction with the first two settings by preventing users from repeatedly changing their passwords to circumvent the Enforce Password History policy.

• Minimum Password Length dictates the shortest allowable length that a user password can be, since longer passwords are typically stronger than shorter ones. Enabling this setting will also prevent users from setting a blank password.

• Password must meet complexity requirements, if enabled, requires that any new password created on your network be a minimum of 6 characters in length and contain 3 of the following 4 character groups: uppercase letters, lowercase letters, numeric digits, and non-alphanumeric characters such as %, !, and [.

• Store Passwords Using Reversible Encryption will store a copy of the user’s password within the Active Directory database using reversible encryption. This is required for certain message digest functions to work properly. This policy is disabled by default, and should only be enabled if you are certain that your environment requires it.
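To see how Enforce Password History, Minimum Password Age, and Minimum Password Length interact when a user tries to change a password, consider the following simplified model. This is not how the Group Policy engine is implemented; the class names and the specific default values are illustrative assumptions.

```python
import datetime as dt

# Illustrative model of three password-policy settings working together.
class PasswordPolicy:
    def __init__(self, history=3, min_age_days=1, min_length=8):
        self.history = history              # Enforce Password History
        self.min_age_days = min_age_days    # Minimum Password Age
        self.min_length = min_length        # Minimum Password Length

class Account:
    def __init__(self, policy, initial, today):
        self.policy = policy
        self.recent = [initial]             # most recent password last
        self.last_change = today

    def change_password(self, new, today):
        # Minimum age blocks rapid cycling through the history list.
        if (today - self.last_change).days < self.policy.min_age_days:
            return "too soon"
        if len(new) < self.policy.min_length:
            return "too short"
        if new in self.recent[-self.policy.history:]:
            return "reused"
        self.recent.append(new)
        self.last_change = today
        return "ok"

acct = Account(PasswordPolicy(), "Winter!2023", dt.date(2023, 1, 1))
print(acct.change_password("Spring!2023", dt.date(2023, 1, 1)))  # too soon
print(acct.change_password("Spring!2023", dt.date(2023, 1, 5)))  # ok
print(acct.change_password("Winter!2023", dt.date(2023, 1, 9)))  # reused
```

Without the minimum-age check, a user could change their password four times in a row to flush the history list and return to a favorite old password, which is exactly the loophole the real settings are designed to close.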



Modifying a Password Policy

You can modify an existing Windows Server 2003 password policy by navigating to the policy section listed in the previous exercise and making whatever changes you desire. Unlike other types of Group Policy settings, where client settings refresh themselves every 30 minutes, new and modified password policies will only take effect on new passwords created on your network. For example, a change to the password policy may not take effect until the next time a user’s password expires. If you make a radical change to your password policy, you will need to force all affected user accounts to change their passwords in order for the change to take effect. Because of this, you should carefully plan your password policy so that you can create all necessary settings before rolling out Active Directory to your clients.

Applying an Account Lockout Policy

In addition to setting password policies, you can configure your network so that user accounts will be locked out after a certain number of incorrect logon attempts. This can either be a “soft lockout,” where the account is re-enabled automatically after, for example, 30 minutes, or a “hard lockout,” where locked accounts can only be re-enabled by the manual intervention of an administrator. Before implementing an account lockout policy, you need to understand the potential implications for your network. While an account lockout policy will increase the likelihood of deterring a potential attack against your network, you also run the risk of locking out authorized users as well. You need to set the lockout threshold high enough that authorized users will not be locked out of their accounts due to the simple human error of mistyping their password before they’ve had their morning coffee – three to five attempts is a commonly used threshold. You should also remember that if a user changes their password on ComputerA while they are already logged onto ComputerB, the session on ComputerB will continue to attempt to log into the Active Directory database using the old (now incorrect) password, which will eventually lock out the user account. This can be a common occurrence in the case of service accounts and administrative accounts. Exercise 5.03 details the necessary steps in configuring account lockout policy settings for your domain.

EXERCISE 5.03 CREATING AN ACCOUNT LOCKOUT POLICY

1. From the Windows 2003 desktop, click on Start | Programs | Administrative Tools | Active Directory Users and Computers.

2. Right-click on the domain you want to administer, then select Properties.

3. Click New to create a new Group Policy, or select Edit to modify the default domain policy.

4. Navigate to the Account Lockout Policy node by clicking on Computer Configuration | Windows Settings | Security Settings | Account Policies | Account Lockout Policy. You’ll see the screen shown in Figure 5.7.



Figure 5.7 Account Lockout Policy Objects

5. For each item that you wish to configure, right-click on the item and select Properties. To illustrate, we’ll create an account lockout threshold of 3 invalid logon attempts. From the screen shown in Figure 5.8, place a check mark next to Define this Policy Setting, and then enter the appropriate value. Using account lockout policies, you can customize the following configuration settings.

Figure 5.8 Configuring the Account Lockout Threshold

• Account lockout duration determines the amount of time that a locked-out account will remain inaccessible. Setting this to zero means that the account will remain locked out until an administrator manually unlocks it. Select a lockout duration that will deter intruders without crippling your authorized users – 30 to 60 minutes is sufficient for most environments.

• Account lockout threshold determines the number of invalid login attempts that can occur before an account is locked out. Setting this to zero means that accounts on your network will never be locked out.

• Reset account lockout counter after defines the amount of time in minutes after a bad login attempt that the “counter” will reset. If this value is set to 45 minutes, and user jsmith types his password incorrectly two times before logging on successfully, his running tally of failed login attempts will reset to zero after forty-five minutes have elapsed. Be careful not to set this too high, or your users may lock themselves out through simple morning typos.
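The interplay of these three settings can be sketched as a small simulation. The Python fragment below is an illustration, not Windows’ actual lockout code: times are expressed as minutes on a simple clock, and the default values mirror the ranges suggested above.

```python
# Sketch of the three account-lockout settings working together.
class LockoutTracker:
    def __init__(self, threshold=3, duration_min=30, reset_after_min=45):
        self.threshold = threshold          # Account lockout threshold
        self.duration = duration_min        # Account lockout duration
        self.reset_after = reset_after_min  # Reset account lockout counter after
        self.failures = []                  # times (in minutes) of recent bad attempts
        self.locked_until = None

    def attempt(self, now_min, password_ok):
        if self.locked_until is not None and now_min < self.locked_until:
            return "locked out"
        # Drop failures older than the reset window.
        self.failures = [t for t in self.failures
                         if now_min - t < self.reset_after]
        if password_ok:
            self.failures.clear()
            return "logged on"
        self.failures.append(now_min)
        if len(self.failures) >= self.threshold:
            self.locked_until = now_min + self.duration
            return "locked out"
        return "bad password"

acct = LockoutTracker()
print(acct.attempt(0, False))   # bad password
print(acct.attempt(1, False))   # bad password
print(acct.attempt(2, False))   # locked out (third failure hits the threshold)
print(acct.attempt(10, True))   # locked out (still inside the 30-minute duration)
print(acct.attempt(40, True))   # logged on (the lockout duration has expired)
```

Setting `duration_min=0` in such a model would correspond to the “hard lockout” described above, where only an administrator can re-enable the account.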

Modifying an Account Lockout Policy

You can modify an existing account lockout policy by navigating to the policy section listed in the previous section and making any necessary changes. Unlike the password policies discussed earlier in this section, account lockout settings will propagate to your network clients every 30 minutes; users will not need to change their passwords in order for new or modified account lockout policies to take effect.

Password Reset Disks

A potential disadvantage of enabling strong passwords on your network is that your users will likely forget their passwords more frequently. It’s only to be expected, since Y!sgf($q is a far more difficult password to remember than, say, goflyers. In previous releases of Windows, if a user forgot their local user account password, the only recourse was for an administrator to manually reset it. When this happened, the user would lose any Internet passwords that were saved on their local computer, as well as any encrypted files or e-mail encrypted with the user’s public key. Windows Server 2003 and Windows XP provide a better solution for forgotten passwords. In the newest release of Windows, your users can create a Password Reset Disk for their local user accounts so that they won’t lose any of their valuable data in the event that they forget their password. When you create a Password Reset Disk, Windows creates a public and private key pair. The private key is stored on the Password Reset Disk itself, while the public key is used to encrypt the user’s local account password. If the user forgets their password, they can use the private key stored on the reset disk to decrypt and retrieve their current password. When you use the Password Reset Disk, you’ll be prompted to immediately change the password for your local user account, which will then be encrypted with the same public and private key pair. Your users will not lose any data in this scenario because they are only changing their password, rather than requiring an administrator to reset it.
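The encrypt-with-the-public-key, decrypt-with-the-private-key flow behind the Password Reset Disk can be demonstrated with a toy example. The textbook-RSA parameters below are deliberately tiny and offer no real security, and Windows uses its own key generation and storage; treat this purely as an illustration of the asymmetric idea.

```python
# Toy illustration of the Password Reset Disk concept: a key pair is
# created, the public key encrypts the stored password, and only the
# private key (the part written to the reset disk) can recover it.
# Textbook RSA with tiny primes -- for demonstration only.

N, E, D = 3233, 17, 2753   # n = 61 * 53; e * d = 1 (mod phi(n))

def encrypt(plaintext, public=(N, E)):
    n, e = public
    return [pow(ord(ch), e, n) for ch in plaintext]

def decrypt(ciphertext, private=(N, D)):
    n, d = private
    return "".join(chr(pow(c, d, n)) for c in ciphertext)

stored = encrypt("goflyers")   # kept on the computer, under the public key
recovered = decrypt(stored)    # possible only with the "reset disk" key
print(recovered)               # goflyers
```

The key property this illustrates is that the machine holding only the public key and the ciphertext cannot recover the password; whoever holds the reset disk (the private key) can, which is why the disk must be stored securely.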

Creating a Password Reset Disk

To create a Password Reset Disk:

1. Press Ctrl+Alt+Del, and click Change Password.

2. In the User name field, enter the logon name of the account that you’re creating the Password Reset Disk for.

3. In the “Log on to” field, make sure that the local computer name is specified, rather than any domain that the computer may be configured to log into.

4. Once you’ve entered the appropriate username, click Backup to begin the Forgotten Password Wizard.



5. Click Next to bypass the Welcome screen of the Forgotten Password Wizard. You’ll be prompted to insert a blank, formatted floppy disk into your A:\ drive.

6. Click Next again to create the Password Reset Disk.

7. Once you’ve finished creating the Password Reset Disk, be sure to store it in a secure location.

Resetting a Local Account

If a user has forgotten the password to their local user account and has not previously created a Password Reset Disk, your only alternative will be to reset their local account password. Remember that doing so will cause the user in question to lose the following information:

• Any e-mail encrypted with the user's public key

• Internet passwords that are saved on the local computer

• Local files that the user has encrypted

EXERCISE 5.04 RESETTING A LOCAL USER ACCOUNT

Follow these steps to reset a local user account:

1. Log onto the workstation using the local administrator account, or an account that is a member of the Domain Admins group on your Windows domain.

2. Open the Computer Management MMC console by clicking on Start | All Programs | Administrative Tools | Computer Management.

3. In the left-hand pane of the Computer Management console, click on Computer Management | System Tools | Local Users and Groups | Users. You’ll see the screen shown in Figure 5.9.

Figure 5.9 Administering Local Users

4. Right-click on the user account whose password you need to reset, and then click Set Password. You’ll see the warning message shown in Figure 5.10.



Figure 5.10 Warning of Potential Data Loss When Resetting a Password

5. Click Proceed to reset the user’s password. You’ll see the screen shown in Figure 5.11, which will give you one last warning regarding the potential data loss associated with resetting a local user account password. Enter a new password that meets the complexity requirements of your domain password policy, then click OK. (Since this is a local password, the complexity requirements of your domain password policy will not be automatically enforced. However, you should nonetheless create a strong password for the local account on your workstation.) A pop-up window will indicate that the password was set successfully. Click OK again to return to the Computer Management console.

Figure 5.11 Resetting the Local User Password

6. If you would like the user to change their password at their first login, right-click on the user object and select Properties. Place a check mark next to User Must Change Password at Next Logon, then click OK.



7. Log out of the workstation and allow the user to log in with their newly reset password.

User Authentication

Any well-formed security model needs to address the following three topics: authentication, authorization, and accounting (you’ll sometimes see the last one referred to as “auditing”). Put simply, authentication deals with who a person is, authorization centers around what an authenticated user is permitted to do, and accounting/auditing is concerned with tracking who did what to a network file, service, or other resource. Windows Server 2003 addresses all three facets of this security model, beginning with the user authentication strategies that we’ll discuss in this chapter. Regardless of which protocol or technical mechanism is used, all authentication schemes need to meet the same basic requirement of verifying that a user or other network object is in fact who it claims to be. This can include verifying a digital signature on a file or hard drive, or verifying the identity of a user or computer that is attempting to access a computer or network resource. Windows Server 2003 offers several protocols and mechanisms to perform this verification, including (but not limited to) the following:

• Kerberos

• NTLM (NT LAN Manager)

• SSL/TLS (Secure Sockets Layer/Transport Layer Security)

• Digest authentication

• Smart cards

• Virtual private networking (VPN)

In the following sections, we’ll describe the particulars of each authentication mechanism available with Windows Server 2003, and the appropriate use for each. The most common authentication mechanism, dating back to the mainframe days, is password authentication. This occurs when the user supplies a password to a server or host computer and the server compares the supplied password with the information that it has stored in association with the username in question. If the two items match, the system permits the user to log on. Concerns regarding password authentication have largely centered on ensuring that user passwords are not transmitted in cleartext over a network connection. In fact, many modern password authentication schemes, such as NTLM and Kerberos, never transmit the actual user password at all. Another concern that is more difficult to address is that of user education: even after years of reminding users of the importance of choosing strong passwords and protecting their login information, many still use their children’s names as passwords. In a world of increasingly connected computing systems, the importance of creating strong password policies as part of your network’s security plan cannot be overstated. To assist in this, Windows Server 2003 allows you to establish password policies to mandate the use of strong, complex passwords, as we discussed earlier in the chapter. You can also mandate that your users log in using smart cards, a topic that we’ll cover in depth in a later section.

Need for Authentication

User authentication is a necessary first step within any network security infrastructure, because it establishes the identity of the user. Without this key piece of information, Windows 2003 access control and auditing capabilities would not be able to function. Once you understand how the various authentication systems function, you’ll be able to use this information to create an effective user authentication strategy for your network. The location of your users, whether they are connected to the LAN via a high-speed network connection or a simple dial-up line, and the client and server operating systems in use throughout your organization will dictate the appropriate authentication strategy for your users. Keep in mind as we go along that a fully functional authentication strategy will almost certainly involve a combination of the strategies and protocols described in this chapter, as a single solution will not meet the needs of an enterprise organization. Your goal as a network administrator is to create an authentication strategy that provides the optimum security for your users, while allowing you to administer the network as efficiently as possible.

Single Sign-on

A key feature of Windows Server 2003 is support for single sign-on, an authentication mechanism that allows your domain users to authenticate against any computer in a domain while only needing to provide their login credentials one time. This allows network administrators to manage a single account for each user, rather than dealing with the administrative overhead of maintaining multiple user accounts across different domains. It also provides greatly enhanced convenience for network users, since only needing to maintain a single password or smart card makes the network login process much simpler. (This also reduces network support calls, reducing the support required to maintain a network even further.)

Whether your network authentication relies on single sign-on or not, any authentication scheme is a two-step process. First, the user must perform an Interactive Logon in order to access their local computer. Once they’ve accessed the local workstation, Network Authentication allows them to access needed network services or resources. In this section, we’ll examine both of these processes in detail.

Interactive Logon

A network user performs an interactive logon when they present their network credentials to the operating system of the physical computer that they are attempting to log into, usually their desktop workstation. The logon name and password can belong to either a local user account or a domain account. When logging on with a local computer account, the user presents credentials that are stored in the Security Account Manager (SAM) database on the local machine. Any workstation or member server can store a local SAM database, but those accounts can only be used for access to that specific computer. When using a domain account, the user’s domain information is authenticated against the



Active Directory database. This allows the user to gain access not only to the local workstation, but also to the Windows 2003 domain and any trusting domains. In this case, the user’s domain account bypasses the workstation’s SAM database, authenticating to the local workstation using the information stored in Active Directory. The diagram in Figure 5.12 provides an illustration of these two processes.

Figure 5.12 Interactive Logons using Local vs. Domain Accounts

Network Authentication

Once a user has gained access to a physical workstation, it’s almost inevitable that they will require access to files, applications, or services hosted by other machines on the local- or wide-area network. Network authentication is the mechanism that confirms the user’s identity to whatever network resource they attempt to access. Windows 2003 provides several mechanisms to enable this type of authentication, including Kerberos and Secure Sockets Layer/Transport Layer Security (SSL/TLS), as well as NTLM to provide backwards compatibility with Windows NT 4.0 systems.

As described under Interactive Logon above, users who log on using a local computer account must provide logon credentials again every time they attempt to access a network resource, since the local computer account only exists within the individual workstation or member server’s SAM database, rather than in a centrally managed directory service like Active Directory. If the user logged on using a domain account, on the other hand, their logon credentials will be automatically submitted to any network services that they need to access. Because of this, the network authentication process is transparent to users in an Active Directory environment; the network operating system handles everything behind the scenes without the need for user intervention. This feature provides the foundation for single sign-on in a Windows 2003 environment by allowing users to access resources in their own domain as well as other trusted domains.

Authentication Types Windows 2003 offers several different authentication types to meet the needs of a diverse user base. The default authentication protocol for a homogeneous



Windows 2003 environment is Kerberos, version 5. This protocol relies on a system of tickets to verify the identity of network users, services and devices. For web applications and users, you can rely on the standards-based encryption offered by the Secure Sockets Layer/Transport Layer Security (SSL/TLS) security protocols, as well as Microsoft Digest. To provide backwards compatibility for earlier versions of Microsoft operating systems, Windows 2003 still provides support for the NTLM protocol as well. In this section, we’ll examine the various authentication options available to you as a Windows administrator.

Kerberos

Within a Windows 2003 domain, the primary authentication protocol is Kerberos version 5. Kerberos provides thorough authentication by verifying not only the identity of network users, but also the validity of the network services themselves. This latter feature was designed to prevent users from attaching to “dummy” services created by malicious network attackers to trick users into revealing their passwords or other sensitive information. Verifying both the user and the service that the user is attempting to use is referred to as mutual authentication.

Only network clients and servers that are running the Windows 2000, Windows Server 2003, or Windows XP Professional operating systems are able to use the Kerberos authentication protocol; any downlevel clients that attempt to use a “kerberized” resource will use NTLM authentication instead. (We’ll discuss NTLM more fully in a later section.) All 2000/2003/XP Professional machines that belong to a Windows Server 2003 or Windows 2000 domain have the Kerberos protocol enabled as the default mechanism for network authentication for domain resources.

The Kerberos authentication mechanism relies on a Key Distribution Center (KDC) to issue tickets that allow client access to network resources. Each domain controller in a Windows 2003 domain functions as a KDC, allowing for fault tolerance in the event that one controller becomes unavailable. Network clients use the Domain Name Service (DNS) to locate the nearest available KDC to acquire a ticket and provide network authentication. Kerberos tickets contain an encrypted password that confirms the user’s identity to the requested service. These tickets remain resident in memory on the client computer system for a specific amount of time, usually 8 or 10 hours. The longevity of these tickets allows Kerberos to provide single sign-on capabilities, so that the authentication process as a whole becomes transparent to the user once they’ve initially entered their logon credentials.

Understanding the Kerberos Authentication Process

When a user enters their network credentials on a Kerberos-enabled system, the following steps take place. These transactions occur entirely behind the scenes; the user is only aware that they’ve entered their password or PIN as part of a normal logon process.

1. Using a smart card or a username/password combination, a user authenticates to the KDC. The KDC issues a ticket-granting ticket (TGT) to the client system. The client retains this TGT in memory until needed.



2. When the client attempts to access a network resource, it presents its ticket-granting ticket to the ticket-granting service (TGS) on the nearest available Windows 2003 KDC.

3. If the user is authorized to access the service that it is requesting, the TGS issues a service ticket to the client.

4. The client presents the service ticket to the requested network service. Through mutual authentication, the service ticket proves the identity of the user as well as the identity of the service.

The Windows Server 2003 Kerberos authentication system can also interact with non-Microsoft Kerberos implementations such as MIT and UNIX-based Kerberos systems. This new “realm trust” feature allows a client in a Kerberos realm to authenticate against Active Directory to access resources, and vice versa. This interoperability allows Windows 2003 domain controllers to provide authentication for client systems running UNIX/MIT Kerberos, including clients that may be running operating systems other than Windows XP Professional or Windows 2000. Conversely, it also allows Windows-based clients to access resources within a UNIX-based Kerberos realm.
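The ticket exchange described in these steps can be sketched as a toy model. This is deliberately simplified — real Kerberos tickets are encrypted and carry session keys, while here they are plain dictionaries so the TGT-to-service-ticket flow stays visible; all names are invented for illustration:

```python
# Toy model of the Kerberos ticket exchange; NOT a real Kerberos implementation.

class KDC:
    """Plays both the authentication service and the ticket-granting service."""
    def __init__(self, accounts, authorizations):
        self.accounts = accounts              # user -> password
        self.authorizations = authorizations  # user -> set of permitted services

    def authenticate(self, user, password):
        """Step 1: verify credentials and issue a ticket-granting ticket (TGT)."""
        if self.accounts.get(user) != password:
            raise PermissionError("bad credentials")
        return {"type": "TGT", "user": user}

    def grant_service_ticket(self, tgt, service):
        """Steps 2-3: exchange a TGT for a service ticket, if authorized."""
        if tgt["type"] != "TGT":
            raise ValueError("not a TGT")
        if service not in self.authorizations.get(tgt["user"], set()):
            raise PermissionError("not authorized for " + service)
        return {"type": "service", "user": tgt["user"], "service": service}

kdc = KDC({"jsmith": "secret"}, {"jsmith": {"fileserver"}})
tgt = kdc.authenticate("jsmith", "secret")            # step 1
ticket = kdc.grant_service_ticket(tgt, "fileserver")  # steps 2-3
# Step 4: the client would now present `ticket` to the file server itself.
```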

SSL/TLS

Any time that you visit a website that uses an https:// prefix instead of http://, you’re seeing Secure Sockets Layer (SSL) encryption in action. SSL is a protocol that operates above the Transport Layer of the OSI model, providing encryption for Application Layer protocols like HTTP, LDAP, and IMAP. SSL provides three major functions in encrypting TCP/IP-based traffic:

• Server authentication allows a user to confirm that an Internet server is really the machine that it is claiming to be. I can’t think of anyone who wouldn’t like the assurance of knowing that they’re looking at the genuine site and not a duplicate created by a hacker before entering their credit card information.

• Client authentication allows a server to confirm a client’s identity. This would be important for a bank that needed to transmit sensitive financial information to a server belonging to a subsidiary office. Combining server and client authentication provides a means of mutual authentication similar to that offered by the Kerberos protocol.

• Encrypted connections allow all data that is sent between a client and server to be encrypted and decrypted, allowing for a high degree of confidentiality. This function also allows both parties to confirm that the data was not altered during transmission.

The Transport Layer Security (TLS) protocol is currently under development by the Internet Engineering Task Force. It will eventually replace SSL as a standard for securing Internet traffic, while remaining backwards compatible with earlier versions of SSL. RFC 2712 describes the way to add Kerberos functionality to the TLS suite, which will potentially allow Microsoft and other vendors to extend its use beyond LAN/WAN authentication to use on the Internet as a whole.
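The server-authentication function above is what an SSL/TLS client library enforces by default. As a brief illustration using Python’s standard `ssl` module (a client-side sketch, not specific to any server in this chapter):

```python
import ssl

# Build a TLS client context with the default, secure settings: the server's
# certificate is verified against the system's trusted CA store, and its
# hostname must match the certificate -- the "server authentication" function.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Client authentication (the second function above) would be enabled by
# loading a client certificate and private key; the file names here are
# placeholders, not files from the text:
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")
```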



SSL and TLS can use a wide range of ciphers to allow connections with a diverse client base. However, you can edit the Registry on the Windows 2003 Server hosting your web presence to restrict these to specific ciphers only. Within the Registry Editor on the server, browse to the following key, as shown in Figure 5.13:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers

Each available cipher has two potential values:

• 0xffffffff (enabled)
• 0x0 (disabled)

Figure 5.13 Editing SSL/TLS Ciphers

NTLM

Versions of Windows earlier than Windows 2000 used NT LAN Manager (NTLM) to provide network authentication. In a Windows 2003 environment, NTLM is used to communicate between two computers when one or both of them is running NT 4.0 or earlier. For example, NTLM authentication would be used in the following situations:

• Workstations or standalone servers that are participating in a workgroup instead of a domain will use NTLM for authentication.

• Windows 2000 or Windows XP Professional computers logging onto an NT 4.0 PDC or BDC.

• A Windows NT 4.0 Workstation client authenticating to an NT 4.0, Windows 2000, or Windows 2003 domain controller.

• Users in a Windows NT 4.0 domain that has a trust relationship with a Windows 2000 or Windows Server 2003 domain or forest.

NTLM encrypts user logon information by applying a mathematical function (or hash) to the user’s password. The NT 4.0 SAM database doesn’t store the user’s password, but rather the value of the hash that is created when NTLM encrypts the password. In addition, the client machine applies the hash to the user’s password before transmitting it to the domain controller; in this way, the user’s password is never actually transmitted across the network. (The hash value itself is also transmitted in an encrypted form, increasing the protocol’s security even further.)

Using simple numbers for the sake of example, let’s say that the NTLM hash takes the value of the password and multiplies it by 2. Let’s say further that user JSmith has a password of ‘3’. The conversation between JSmith, JSmith’s workstation, and the domain controller will go something like this:

JSmith: “My password is ‘3’.”
JSmith’s workstation: “Hey, Domain Controller! JSmith wants to log in.”
Domain Controller: “Send me the hash value of JSmith’s password.”
JSmith’s workstation: “The hash value of her password is ‘6’.”
Domain Controller: “Okay, the number ‘6’ matches the value that I have stored in the SAM database for the hash of JSmith’s password. I’ll let her log in.”
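Replacing the toy “multiply by 2” with a real one-way hash, the same comparison looks like this. Note the hedges: SHA-256 stands in for NTLM’s actual MD4-based hash, and real NTLM adds a challenge/response exchange rather than sending the stored hash directly — this sketch only shows that the server compares hashes, never passwords:

```python
import hashlib

def password_hash(password):
    """Stand-in for the NTLM hash: a one-way function of the password.
    The server stores only this value, never the password itself."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# What the domain controller has on file for JSmith:
stored_hash = password_hash("3")

# What JSmith's workstation derives at logon time (the hash, not the password):
submitted_hash = password_hash("3")

print(submitted_hash == stored_hash)  # True -> logon permitted
```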

Digest Authentication

Microsoft provides Digest Authentication as a means of authenticating Web applications that are running on Internet Information Server. Digest Authentication uses the Digest Access Protocol, which is a simple challenge-response mechanism for applications that are using HTTP or Simple Authentication Security Layer (SASL)-based communications. When Microsoft Digest authenticates a client, it creates a session key that is stored on the web server and used to authenticate subsequent authentication requests without needing to contact a domain controller for each individual request. Similar to NTLM, Digest authentication sends user credentials across the network as an encrypted hash, so that the actual password information cannot be extracted if a malicious attacker attempts to “sniff” the network connection. (A “sniffer” is a device or software application that monitors network traffic for sensitive information, similar to a wiretap on a telephone.)

Before implementing Digest Authentication on your IIS server, you need to make sure that the following requirements have been met:

• Clients who need to access a resource or application that’s secured with Digest authentication need to be using Internet Explorer 5 or later.

• The user attempting to log on to the IIS server, as well as the IIS server itself, need to be members of the same domain, or need to belong to domains that are connected by a trust relationship.

• The authenticating users need a valid account stored in Active Directory on the domain controller.

• The domain that the IIS server belongs to must contain a domain controller running Windows 2000 or 2003. The IIS server itself also needs to be running Windows 2000 or later.

Digest Authentication requires user passwords to be stored in a reversibly encrypted (effectively clear text) format within Active Directory. You can enable this from the Account tab of the user’s Properties sheet in Active Directory Users & Computers, or use a Group Policy to enable the feature for a large number of users. After changing this setting, your users will need to change their passwords so that a reversibly encrypted copy can be stored: the process is not retroactive.
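The underlying challenge-response computation comes from the HTTP Digest standard (RFC 2617, in its basic form without the `qop` extension). A sketch of the arithmetic — the username, realm, URI, and nonce values here are made up for illustration:

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    """RFC 2617 Digest response (no qop): the password never crosses the
    wire; only this derived hash does."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # credentials hash
    ha2 = md5_hex(f"{method}:{uri}")                 # request hash
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# The server issues the nonce as a challenge; the client sends back this
# response value, and the server recomputes it from its stored HA1 to verify.
resp = digest_response("jsmith", "example.com", "secret",
                       "GET", "/reports/q1.html", "abc123")
print(len(resp))  # 32: an MD5 hex digest
```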

Passport Authentication

If you’ve ever logged onto the MCP Secure Site, you’ve probably already seen Passport Authentication in action. Any business that wishes to provide the convenience of single sign-on to its customers can license and use Passport Authentication on its site. Using Passport Authentication enables your company or client to deliver a convenient means for customers to access and transact business on a given site. Sites that rely on Passport Authentication use a centralized Passport server to authenticate users, rather than hosting and maintaining their own proprietary authentication systems. Companies can also use Passport Authentication to map sign-in names to information in a sales or customer database, which can offer Passport customers a more personalized Web experience through the use of targeted ads, content, and promotional information. Using .NET Passport can help your business increase its sales and advertising revenues through improved customer loyalty. As Microsoft Passport has gained acceptance, the Passport sign-on logo (shown in Figures 5.14 and 5.15) has begun to appear on more and more corporate and e-commerce websites.

Figure 5.14: Passport Sign-On Through

Figure 5.15 Passport Sign-On

From a technical perspective, Passport Authentication relies on standards-based Web technologies including Secure Sockets Layer (SSL) encryption, HTTP redirects, cookies, and symmetric key encryption. Because the technology utilized by Passport Authentication is not proprietary, it is compatible with both Microsoft Internet Explorer and Netscape Navigator, as well as some flavours of UNIX systems and browsers. The single sign-on service is similar to the forms-based authentication that is common throughout the Internet; it simply extends the functionality of the sign-on features to work across a distributed set of participating sites.



Passport’s Advantages for Businesses

Microsoft introduced the .NET Passport service in 1999, and since then the system has become responsible for authenticating more than 200 million accounts. Many prominent businesses have integrated .NET Passport into their web authentication schemes, including McAfee, eBay, NASDAQ, Starbucks, and many others. If you are considering integrating Passport authentication into your web authentication strategy, here are some of the advantages that will be available for your use:

• Single sign-in allows your users to sign onto the Passport site once to access information from any participating website. This alleviates the frustration of registering at dozens of different sites and maintaining any number of different sets of logon credentials. The Passport service will allow the over 200 million Passport users quick and easy access to your site.

• The Kids Passport service provides tools that will help your business comply with the legal provisions of the U.S. Children’s Online Privacy Protection Act (COPPA). Your company can use this service to conform with the legal aspects of collecting and using children’s personal information, and to customize your website to provide age-appropriate content.

• Maintain control of your data. Since the Passport service is simply an authentication service, your customer information and data will still be controlled in-house, and is not shared with the Passport servers unless you configure your website to do so.

At the time of this writing, there are two fees for the use of Passport Authentication: a USD $10,000 fee paid by your company on an annual basis, and a periodic testing fee of USD $1,500 per URL. The $10,000 fee is not URL-specific and covers all URLs controlled by a single company. Payment of these fees entitles your company to unlimited use of the Passport Authentication service for as many URLs as you have registered for periodic testing.

Understanding Passport Authentication Security

Microsoft has created several key features within Passport Authentication to ensure that the security and privacy of your customers and users can be maintained at the highest possible level. Some of the security features employed by Passport Authentication are as follows:

• The web pages used to control the sign-in, sign-out, and registration functions are centrally hosted, rather than relying on the security mechanisms of each individual member site.

• All centrally hosted pages that are used to exchange usernames, passwords, or other credential information always use SSL encryption to transmit information.

• Passport Authentication-enabled sites use encrypted cookies to allow customers to access several different sites without retyping their login information. However, an individual site can still opt to require users to return to the Passport sign-in screen when accessing their site for the first time.

• All cookie files related to Passport Authentication use strong encryption: when you set up your site to use Passport, you will receive a unique encryption key to ensure the privacy of your users’ personal information.

• The central Passport servers will transmit sign-in and profile information to your site in an encrypted fashion. You can then use this information to create local cookies, avoiding any further client redirection to the Passport servers.

• A website that participates in Passport Authentication will never actually receive a member’s password. Authentication information is transmitted via a cookie that contains encrypted time stamps that are created when the member first signs onto Passport. The Microsoft Passport sign-out function allows users to delete any Passport-related cookies that were created on their local machine during the time that they were logged onto Microsoft Passport.

• A participating website will only communicate directly with the central Passport server to retrieve configuration files, which are then cached locally by the individual member server. All information that is exchanged between clients and the Passport servers takes place using HTTP redirects, cookies, and encrypted queries.

Internet Authentication Service

Beginning as early as the Option Pack add-on for NT 4.0, Microsoft has offered the Internet Authentication Service (IAS) as a Remote Authentication Dial-In User Service (RADIUS) server. The release of IAS offered with Windows 2003 expands and improves the existing IAS functionality, and includes connection options for wireless clients and proxying to remote RADIUS servers. IAS is available in the Standard, Enterprise, and Datacenter Editions of Windows 2003, but not the Web Edition. Since it functions with a wide range of wireless, remote access, and VPN equipment, IAS can be used for everything from the smallest corporate remote access solution to managing the user base of a major Internet Service Provider (ISP). The Internet Authentication Service can manage all aspects of the login process: directing the user authentication process, verifying a user’s authorization to access various network resources, and collecting logging information to provide accountability for each user’s logins and activity.

IAS supports a variety of authentication methods that can meet the needs of most modern client platforms. In addition, you can add custom authentication methods to meet any specialized requirements of your network security policy. The default authentication methods supported by IAS are password-based Point-to-Point Protocol (PPP) methods and the Extensible Authentication Protocol (EAP). By default, IAS supports two EAP protocols: EAP-MD5 and EAP-TLS. Supported PPP protocols include:

• Password Authentication Protocol (PAP)

• Challenge Handshake Authentication Protocol (CHAP)

• Microsoft Challenge Handshake Authentication Protocol (MS-CHAP)

• MS-CHAP version 2

Once a user has been authenticated, IAS can use a number of methods to verify that the authenticated user is authorized to access the service that they are attempting to connect to. As with authentication methods, you can use the Software Development Kit (SDK) to create custom authorization methods to meet your business needs. Authorization methods supported by IAS include the following:

• Dialed Number Identification Service (DNIS) bases its authorization decision on the phone number that the caller is dialing. As a cost-saving measure, for example, you might want to authorize only users within a local calling area to use a particular number.

• Automatic Number Identification/Calling Line Identification (ANI/CLI) is the opposite of DNIS; it authorizes access based on the number that a user is calling from.

• Guest Authorization allows access to an access point or dial-up number without a username and password. This is becoming more common in airport terminals, coffee shops, and other establishments that provide a wireless access point to their clientele. To protect the access point in question, users connecting with Guest Authorization will typically have a severely curtailed set of operations that they can perform: web browsing only, for example.

• Remote Access Policies are the most effective way to set authorization for Active Directory user accounts. Remote Access Policies can authorize network access based on any number of conditions, such as group membership, time of day, or the access number being used. Once a user has been authorized, you can also use remote access policies to mandate the level of encryption that remote access clients need to be using in order to connect to your network resources, as well as setting any maximum time limits for a remote connection or inactivity timeout values. Packet filters can also control exactly which IP addresses, hosts, and/or port numbers the remote user is permitted to access while connected to your network.
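The condition-based evaluation behind a remote access policy can be sketched as a simple rule check. The group name and hours below mirror the SalesVP example used later in this chapter, but the function itself is a hypothetical model, not IAS’s internal logic:

```python
from datetime import datetime

def authorize_dialup(user_groups, when, allowed_group="SalesVP"):
    """Toy remote access policy: members of one group may connect on
    weekdays between 8:00 AM and 5:00 PM; everyone else is denied."""
    if allowed_group not in user_groups:
        return False
    if when.weekday() > 4:          # Saturday = 5, Sunday = 6
        return False
    return 8 <= when.hour < 17

print(authorize_dialup({"SalesVP"}, datetime(2003, 6, 2, 10, 30)))  # Monday 10:30 -> True
print(authorize_dialup({"Staff"},   datetime(2003, 6, 2, 10, 30)))  # wrong group -> False
```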

New Features in Internet Authentication Service

While IAS has been around in various incarnations since Windows NT 4.0, it has several new features under Windows 2003 that make it an ideal solution for enterprise environments. Some of these new features are as follows:

• RADIUS proxy: In addition to providing its own RADIUS authentication services, an IAS server can be configured to forward authentication requests to one or more external RADIUS servers. The external RADIUS server does not need to be another IAS server; as long as it is running an RFC-compliant RADIUS installation, the external server can be running any type of platform and operating system. IAS can forward these requests according to user name, the IP address of the target RADIUS server, and other conditions as necessary. In a large, heterogeneous environment, IAS can be configured to differentiate between the RADIUS requests that it should handle by itself and those that should be forwarded to external servers for processing.

• Remote-RADIUS-to-Windows-User-Mapping allows you to further segregate the authentication and authorization processes between two separate servers. For example, a user from another company can be authenticated on the RADIUS server belonging to their own company, while he or she receives authorization to access your network through this policy setting on your IAS server.

• Support for Wireless Access Points allows authentication and authorization for users with IEEE 802.1x-compliant wireless network hardware. IAS can authenticate wireless users through the Protected Extensible Authentication Protocol (PEAP), which offers security improvements over EAP.

• SQL logging: IAS can log auditing information to a SQL database for better centralized data collection and reporting.

• Network Access Quarantine Control allows you to severely restrict the network access of remote clients until you can verify that they comply with any corporate security policies, such as mandatory anti-virus protection or service pack installations. Once you have verified the compliance of these remote machines, you can remove them from quarantine and allow them access in accordance with your network’s remote access policy.

• Authenticated Switching Support: A network switch provides filtering and management of the physical packets transmitted over a local- or wide-area network. To prevent unauthorized access to the network infrastructure, many newer switches require users to provide authentication before being allowed physical access to the network. Under Windows 2003, IAS can act as a RADIUS server to process the login requests from these advanced pieces of network hardware.
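The proxy decision — handle a request locally or forward it based on the realm in the user name — can be sketched like this. All realm and server names are placeholders invented for the example, not values from the text:

```python
def route_radius_request(username, forward_table, local_realm="corp.example.com"):
    """Decide whether a RADIUS server should process a request itself or
    proxy it to an external RADIUS server, based on the user's realm."""
    if "@" in username:
        realm = username.rsplit("@", 1)[1].lower()
    else:
        realm = local_realm  # no realm suffix: treat as a local user
    if realm == local_realm:
        return "local"
    return forward_table.get(realm, "reject")

forwarders = {"partner.example.net": "radius1.partner.example.net"}

print(route_radius_request("alice@corp.example.com", forwarders))   # local
print(route_radius_request("bob@partner.example.net", forwarders))  # forwarded
print(route_radius_request("eve@unknown.example.org", forwarders))  # reject
```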

Using IAS for Dial-up and VPN

The RADIUS protocol provided by the IAS service is a popular means of administering remote user access to a corporate network. For example, you can have your users dial a local telephone number for a regional Internet Service Provider, then authenticate against your IAS server using a VPN client. If the remote user is in the same local calling area as your corporate network, you can integrate IAS with the familiar Routing & Remote Access feature to allow them to dial directly into a modem attached to the IAS server. IAS will then use RADIUS to forward the authentication and authorization request to the appropriate Active Directory domain.

In this section we’ll cover the necessary steps to allow dial-up access to your corporate network. For the sake of the exercises in this section, we’ll assume that your users are dialing directly into a remote access server that is running the Internet Authentication Service. In Exercise 5.05, we’ll cover the necessary steps to install and configure IAS on a domain controller in your Windows 2003 domain.



EXERCISE 5.05 CONFIGURING IAS ON A DOMAIN CONTROLLER

1. From the Windows Server 2003 desktop, open the Control Panel by clicking on Start | Programs | Control Panel. Double-click on Add/Remove Programs.

2. Click Add/Remove Windows Components. When the Windows Components Wizard appears, click Networking Services, and then Details. You’ll see the screen shown in Figure 5.16.

Figure 5.16 Installing the Internet Authentication Service

3. Place a check mark next to Internet Authentication Service and then click OK.

4. Click Next to begin the installation. Insert the Windows Server 2003 CD if prompted. Click Finish and Close when the installation is complete.

Now that you’ve installed the Internet Authentication Service, you need to register the IAS server within Active Directory. (This is similar to authorizing a newly created DHCP server.) Registering the IAS server will allow it to access the user accounts within the Active Directory domain.

5. Click on Start | Programs | Administrative Tools | Internet Authentication Service. You’ll see the screen shown in Figure 5.17.



Figure 5.17 The IAS Administrative Console

6. Right-click on the Internet Authentication Service icon and click on Register Server in Active Directory.

7. Click OK at the next screen, shown in Figure 5.18. This will allow IAS to read the dial-in properties for the users in your domain.

Figure 5.18 Configuring Permissions for IAS

Once you’ve installed and registered an IAS server, you can use the Internet Authentication Service icon in the Administrative Tools folder to configure logging, as well as to specify which UDP port IAS will use to transmit logging information. To administer the IAS server, click on Start | Programs | Administrative Tools | Internet Authentication Service. Next, you’ll need to create Remote Access Policies to enable your Active Directory users to access your network through the IAS server.

Creating Remote Access Policies

As in Windows 2000, you can control the remote access capabilities of users and groups by using a remote access policy. You can have multiple policies associated with various users and groups, and each policy can allow or deny remote access to the network based on a number of factors, including date and time, Active Directory group membership, connection type (modem versus VPN), and so on. Your goal as an administrator is to create remote access policies that



reflect the usage needs of your company or clients. If your remote access capabilities are limited to three dial-up modem connections, for example, you may wish to restrict the use of these modems during the day to those users who have a specific need for it. For example, you may have a small number of Regional Sales Directors who work from various locations and need to access reporting data during the day. In the following exercise, we’ll create a remote access policy that limits remote access connections on your network to members of the SalesVP group between the hours of 8am and 5pm, Monday through Friday. Creating this policy will allow your company’s Sales Vice Presidents to access the information they need rather than allowing extraneous remote access connections to tie up your limited resources.

EXERCISE 5.06 CREATING A REMOTE ACCESS POLICY

1. Open the IAS administration utility by clicking on Start | Programs | Administrative Tools | Internet Authentication Service.

2. Right-click on Remote Access Policies and select New Remote Access Policy. Click Next to bypass the initial screen in the wizard. You’ll see the screen shown in Figure 5.19. Click Use the wizard to set up a typical policy for a common scenario, enter a name to describe the policy, and then click Next.

Figure 5.19 Creating a Remote Access Policy

3. From the Access method screen, select the access method that this policy will apply to. You can select one of the following methods:

• VPN Access
• Dial-Up Access
• Wireless Access

4. For the purpose of this example, select Dial-Up Access, then click Next.
5. Decide whether to grant remote access permission on a user or group level. Using groups will provide easier and more efficient administration, since you can group users with common remote access needs and add or remove users from the group as needed. Select Group, and add the SalesVP group. Click Next to continue.
6. On the screen shown in Figure 5.20, select the authentication methods that this remote access policy will use. If your clients are using software that supports the stronger protocols, you can disable weaker authentication methods like CHAP to prevent users from connecting with a less secure protocol.

Figure 5.20 Remote Access Authentication Methods

7. Click Next to continue. On the next screen, select the levels of encryption that your users can employ to connect to the IAS server. You can select an encryption level of 40-, 56-, or 128-bit encryption, or choose not to mandate encryption at all. Click Next and then Finish to set these standard policy settings.
8. Next you’ll want to further modify the remote access policy so that users can only connect to your dial-up modems between 8 AM and 5 PM, Monday through Friday. Right-click on the remote access policy that you just created, and select Properties.



9. Click Add to include another condition to this policy, adding new conditions one at a time. Figure 5.21 illustrates the various conditions that you can use to grant or deny remote access to your clients.

Figure 5.21 Remote Access Policy Conditions

The final step in enabling remote access via IAS is to configure your Active Directory users or groups to use the remote access policy that you just created. To configure the SalesVP group to use the remote access policy, follow these steps:

10. In Active Directory Users and Computers, right-click on the SalesVP group and select Properties.
11. Click on the Remote Access tab, and select Control Access Through Remote Access Policy. Click OK, repeating this step for any other users or groups who require the remote access policy.

Using IAS for Wireless Access

Windows Server 2003 has made it a relatively straightforward matter to enable a Wireless Access Point (WAP) to interact with IAS. Wireless clients can authenticate against an IAS server using smart cards, certificates, or a username/password combination. The actual sequence of events when a wireless device requests access to your wired network proceeds in this manner:

1. When a wireless client comes within range of a wireless access point, the WAP will request authentication information from the client.



2. The client sends its authentication information to the WAP, which forwards the login request to the RADIUS server (in this case, IAS).
3. If the login information is valid, IAS will transmit an encrypted authentication key to the WAP.
4. The WAP will use this encrypted key to establish an authenticated session with the wireless client.

To allow wireless clients to access your network, you need to perform two steps: create a remote access policy that allows wireless connectivity, and add your WAPs as RADIUS clients on the IAS server so that they can forward login information to IAS for processing. (You’ll configure your Wireless Access Point as a RADIUS client according to the instructions provided by the WAP manufacturer.) A remote access policy for wireless users should contain the following information:

• Access Method: Wireless Access
• User or Group: Group, specifying the WirelessUsers group, for example
• Authentication Methods: Smart Card or Other Certificate
• Policy Encryption Level: Strongest Encryption, with all other encryption levels disabled
• Permission: Grant Remote Access Permission
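The four-step exchange above can be modeled in miniature. This Python sketch is purely conceptual; the user database, function names, and session key stand in for Active Directory, the RADIUS protocol, and the encrypted key material:

```python
import secrets

# Toy model of the WAP/RADIUS exchange: the WAP collects credentials,
# forwards them to the RADIUS (IAS) server, and on success receives a
# key used to establish the session. All names and data are illustrative.
USER_DB = {"alice": "s3cret"}     # stands in for Active Directory

def radius_authenticate(username, password):
    """IAS side: validate credentials, return a session key or None."""
    if USER_DB.get(username) == password:
        return secrets.token_hex(16)   # stand-in for the encrypted key
    return None

def wap_handle_client(username, password):
    """WAP side: forward the login request, relay the result."""
    key = radius_authenticate(username, password)
    return {"authenticated": key is not None, "session_key": key}

result = wap_handle_client("alice", "s3cret")
print(result["authenticated"])                               # True
print(wap_handle_client("alice", "wrong")["authenticated"])  # False
```

The important structural point, which the sketch preserves, is that the WAP never validates credentials itself; it only relays the request and acts on the server's answer.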

Other Uses of IAS

You can use IAS in many different situations to provide various types of remote access for your network users. Besides the uses we’ve already covered, you can also configure IAS to handle the following:

• Authenticating switches: You can use remote access policies to allow IAS to act as a RADIUS server for Ethernet switches that have the ability to authenticate to a central server. You can enforce this type of authentication through the use of remote access policies to ensure that no “rogue” or unauthorized switches are brought online within your network infrastructure.
• Outsourcing remote access connections: IAS allows an organization to outsource its remote access infrastructure to a third-party Internet Service Provider. In this situation, a user connects to an ISP’s dial-up service, but their login credentials are forwarded to your corporate IAS server for processing; your IAS server will also handle all logging and usage tracking for your remote users. This can provide a great deal of cost savings for an organization, as it can utilize the ISP’s existing network infrastructure rather than creating its own network of routers, access points, and WAN links. IAS can also provide a similar service for outsourcing wireless access, in which a third-party vendor’s Wireless Access Point forwards the user’s authentication information to your IAS server for processing.

Creating a User Authorization Strategy



Windows Server 2003 offers a wide array of options for user authentication and authorization, allowing you to design a strategy to meet the needs of all of your end users. Rather than being locked into a single technology or protocol, you can mix and match the solutions presented in this section to best meet the needs of your users and organization. When creating a user authorization strategy, you need to keep a few key points in mind:

1. Who are your users? More specifically, what type of computing platforms are they using? If you are using Windows Server 2003 family operating systems for clients and servers across your entire enterprise, you can mandate Kerberos v5 authentication at its highest encryption levels. At that point you can increase the security level of your network by disabling all earlier authentication protocols, since they won’t be in use on your network. If, however, you are supporting downlevel clients like Windows NT 4.0 Server or Workstation, you’ll need to make allowances for these users to authenticate using NTLM or NTLMv2.

2. Where are your users located? If your company operates only in a single location, you can use firewall technologies to render your network resources inaccessible to the outside world. In all likelihood, however, you’ll need to provide some mechanism for remote access, either for traveling users or for customers connecting via a web browser. In this case you’ll want to select the highest level of encryption that can be handled by your remote users and clients. This is a simpler matter for remote users, as you can mandate a corporate software policy dictating that everyone use the most recent version of Internet Explorer. Allowing for customer access creates a more complex environment, as you obviously cannot control which browsers or platforms your customers will be using. While implementing an authentication method like Digest Authentication will require all users to have Internet Explorer 5 or later, most modern web browsers, regardless of software vendor, provide support for other technologies like SSL encryption.
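The first planning point, matching authentication protocols to client platforms, amounts to picking the strongest protocol both sides support. A hedged Python sketch follows; the capability table is illustrative rather than an exhaustive statement of what each platform supports:

```python
# Sketch of "strongest mutually supported protocol" selection. The
# capability map below is a simplified illustration of the planning
# discussion above, not a definitive compatibility matrix.
PREFERENCE = ["Kerberos v5", "NTLMv2", "NTLM"]   # strongest first

CLIENT_SUPPORT = {
    "Windows Server 2003": {"Kerberos v5", "NTLMv2", "NTLM"},
    "Windows 2000":        {"Kerberos v5", "NTLMv2", "NTLM"},
    "Windows NT 4.0 SP4":  {"NTLMv2", "NTLM"},
    "Windows NT 4.0":      {"NTLM"},
}

def negotiate(platform: str) -> str:
    """Return the strongest protocol the given platform can use."""
    supported = CLIENT_SUPPORT[platform]
    return next(p for p in PREFERENCE if p in supported)

print(negotiate("Windows Server 2003"))  # Kerberos v5
print(negotiate("Windows NT 4.0"))       # NTLM
```

The design consequence is the one the text draws: as long as any downlevel platform remains in the table, you cannot disable the weaker protocols it needs.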

Educating Users

While the more highly publicized network security incidents always center on a technical flaw – an overlooked patch that led to a global Denial of Service attack, a flaw that led to the worldwide propagation of an email virus, and so on – many network intrusions are caused by a lack of knowledge among corporate employees. Because of this, user education is a critical component of any security plan. Make sure that your users understand the potential dangers of sharing their login credentials with anyone else, or of leaving that information in a location where others could take note of it – the famed “password on a sticky note” cautionary tale in action. Your users will be far more likely to cooperate and comply with corporate security standards if they understand the reasons behind them, and the damage that could be caused by ignoring security measures. Security education should not only be thorough, but also repetitive. It is not enough to simply provide security information at a new employee orientation and never mention it again. As a network administrator you should take steps to make sure that security awareness remains a part of your users’ daily lives. You



can promote this awareness through the simplest of measures: including a paragraph in an employee newsletter, sending bulletins to the user base when a new virus is becoming a threat, and the like. (At the same time, though, you should avoid sending out so much information that your users become overwhelmed by it; a security bulletin that no one reads is no more useful than one that you don’t send at all.) By combining user education with technical measures such as password policies and strong network authentication, you will be well on your way to creating multiple layers of protection for your network and the data contained therein.

Using Smart Cards

Smart cards provide a portable method of securing a network for such tasks as client authentication and protection of user data. In this section, we’ll provide an overview of smart card technology, as well as the steps involved in utilizing smart cards on your Windows Server 2003 network. Smart card implementations rely in part on the Certificate Authority service, so we’ll spend some time discussing the use of certificates within Windows Server 2003 as well. Support for smart cards is a key feature within the Windows Server 2003 family. Smart cards provide tamper-resistant, safe storage for protecting your users’ private keys, which are used to encrypt and decrypt data, as well as other forms of your users’ personal information. Smart cards also isolate security processes from the rest of the computer, providing heightened security since all authentication operations are performed on the smart card, rather than being transmitted to other parts of the computer or network that do not need to be involved in the process. Finally, smart cards provide your users with a portable means of carrying their logon credentials and other private information with them, regardless of their location.

Smart Cards in Action

The use of smart cards for authentication and data encryption is a new but growing trend within enterprise networks. The cards can be used not just for network authentication, but can also be imprinted with employee information so that they serve as identification badges. A good illustration of this type of implementation is the RSA SecurID card from RSA Security, shown in Figure 5.22. The RSA devices use an internal clock to generate a new PIN every 60 seconds, creating a highly secure authentication method that is as portable and convenient as a common credit card or ATM card.

Figure 5.22 RSA SecurID Card

In some cases, smart card technology can also be integrated into an existing employee identification system by imprinting employee information onto a smart card. Obviously, special care needs to be taken in implementations like this so that the smart card components do not become damaged through everyday use. The advantage to this type of smart card rollout is that users do not have to remember to carry five different pieces of ID with them; the ID card that



gets them in the door is the same one that logs them onto their computers. You’ll also see smart cards that are configured as smaller “fobs” or “tags” that can be stored on a keychain, and some vendors are even considering integrating smart card technology into handheld devices and cellphones. The smart card readers themselves can either be standalone readers, or else a smart card “fob” can be inserted directly into a workstation’s USB port.
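The rotating PIN used by devices like the SecurID card can be illustrated with a time-based one-time-password-style sketch. The actual SecurID algorithm is proprietary; this Python example only demonstrates the general idea of hashing a per-card secret together with a 60-second time counter:

```python
import hashlib
import hmac
import struct

def time_based_pin(secret: bytes, unix_time: int, interval: int = 60) -> str:
    """Derive a six-digit code from a device secret and a time window.

    Illustrative TOTP-style construction only; not the SecurID algorithm.
    """
    # Every timestamp inside the same 60-second window maps to one counter.
    counter = struct.pack(">Q", unix_time // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Reduce the digest to a six-digit decimal code.
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

seed = b"per-card-secret"
code = time_based_pin(seed, 1_000_000_020)
# Two timestamps inside the same 60-second window yield the same code:
print(code == time_based_pin(seed, 1_000_000_050))   # True
print(len(code))                                     # 6
```

Because the server can compute the same function from its copy of the secret, the two sides agree on the code without it ever being transmitted in reusable form.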

Understanding Smart Cards

Using a smart card for network logons provides extremely strong authentication because it requires two authentication factors: something the user knows (the PIN) along with something the user has (the smart card itself). This provides stronger authentication than a password alone, since a malicious user would need access to both the smart card and the PIN in order to impersonate a legitimate user. It’s also difficult for an attacker to perform a smart card attack undetected, because the user would notice that their smart card was physically missing.

When to Use Smart Cards

Smart cards can provide security solutions for a number of business and technical processes within your organization. When deciding whether or not to add smart cards to a given system, you’ll need to weigh the security benefits against the costs of deployment, both in terms of hardware costs and ongoing support. Smart cards can secure any of the following processes within your business:

• Using a smart card for interactive user logons will provide security and encryption for all logon credentials. Relying on smart cards instead of passwords means that you will not need to worry about the quality and strength of user passwords.
• Requiring smart cards for remote access logons will prevent attackers from using dial-up or Internet connections to compromise your network, even if they gain physical access to a remote laptop or home computer.
• Administrator logons are ideal candidates for smart card authentication, since administrative accounts have the potential to wreak far more havoc on a network installation than an account belonging to a less powerful network user. By requiring your administrators to use smart cards, you can greatly reduce the possibility that an attacker can gain administrative access to your network. However, keep in mind that some administrative tasks are not suited for smart card logons; as such, your administrators should have the option of logging on with a username/password combination when necessary.
• Digital signing and encryption of private user information, such as email and other confidential files, can also be secured with smart card credentials.

Implementing Smart Cards



Utilizing smart cards on your network involves a number of preparatory steps that we’ll discuss in this section. First we’ll look at the steps involved in establishing a Certificate Authority on your network, along with the related concepts and terminology. Next we’ll examine the process of establishing security permissions for users and administrators to request certificates to use with their smart cards and smart card readers. Finally, we’ll walk step-by-step through the process of setting up a smart card enrollment station to issue certificates to your end users, as well as the actual procedure to issue a smart card certificate to a user on your network. We’ll end this section with some best practices for providing technical support for the smart card users on your network.

PKI and Certificate Authorities

Smart card authentication relies on certificates to control which users can access the network using their smart cards. Certificates are digitally signed statements that verify the identity of a person, device, or service. Certificates can be used for a wide variety of functions, including Web authentication, securing email, verifying application code validity, and allowing for smart card authentication. The machine that issues certificates is referred to as a certificate authority, and the person or device that receives the certificate is referred to as the subject of the certificate. Certificates will typically contain the following information:

• The subject’s public key value
• Identifying information, such as the username or email address
• The length of time that the certificate will be considered valid
• Identifier information for the company/server that issued the certificate
• The digital signature of the issuer, which attests to the validity of the subject’s public key and their identifying information

Every certificate also contains a Valid From and Valid To date to prevent potential misuse stemming from employee turnover and the like. Once a certificate has expired, the user needs to obtain a new certificate in order to continue to access the associated network resources. Certificate authorities also maintain a certificate revocation list that can be used in case a certificate needs to be cancelled before its regular expiration date arrives. Certificates are perhaps most useful for establishing mutual authentication, in which two entities – users, computers, devices, etc. – need to authenticate to one another and exchange information with a high level of confidence that each entity is who or what it claims to be. Because of this need, many companies will install their own certificate authorities and issue certificates to their internal users and devices in order to heighten the security of their network environment. This provides the assurance not only that the user is who they say they are, but also assures the user that their session is not being misdirected to a “phony” server being used to intercept sensitive information. Support for smart cards is a key feature of the public key infrastructure that’s included with Windows Server 2003. You need to take several steps in order to prepare your Windows 2003 network to allow your company to use smart card devices. The first step is to install Certificate Services on at least one of your Windows 2003 servers. You can accomplish this through the



Add/Remove Programs applet in the Control Panel; you’ll find Certificate Services under the Add/Remove Windows Components screen. This will establish the Windows 2003 server in question as a certificate authority for your Windows 2003 domain. Once you’ve established your server as a certificate authority, you’ll need to create three types of certificate templates to allow for smart card use on your network. Just like a document template in business application software like Microsoft Word, a certificate template allows multiple certificates to be created using the same basic settings. This is critical for this purpose, as it ensures that all certificates issued will contain the same security information. The certificate templates that you’ll need to create are:

• Enrollment Agent Certificate: allows a Windows 2003 machine to act as an enrollment station, creating certificates on behalf of smart card users who need to access the network.
• Smart Card Logon Certificate: allows your users to authenticate to the network by using a smart card inserted into a smart card reader.
• Smart Card User Certificate: will not be covered extensively in this section, but is used to provide the capability to secure email once a user has been authenticated.

You’ll be prompted to create these certificate templates automatically the first time that you open the Certificate Templates MMC console. Click on Start | Run, then type certtmpl.msc and click OK. When you’re prompted to install new certificate templates, click OK. This step will also upgrade any existing templates on your server, if the machine was functioning as a certificate authority under a previous version of Windows.
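The role a template plays is easy to picture in code. In this hypothetical Python sketch, every certificate issued from a template inherits the same baseline settings, with only subject-specific fields filled in per request (the field names are illustrative, not actual template attributes):

```python
import copy

# Illustrative model of the template idea: a shared baseline of settings,
# copied for each issued certificate, plus per-request subject data.
TEMPLATES = {
    "Smart Card Logon": {"purpose": "client authentication", "validity_days": 365},
    "Enrollment Agent": {"purpose": "certificate request agent", "validity_days": 365},
}

def issue_from_template(template_name: str, subject: str) -> dict:
    """Create one certificate record from a named template."""
    cert = copy.deepcopy(TEMPLATES[template_name])   # shared baseline settings
    cert["subject"] = subject                        # per-request field
    return cert

a = issue_from_template("Smart Card Logon", "alice")
b = issue_from_template("Smart Card Logon", "bob")
print(a["purpose"] == b["purpose"])   # True: same baseline from the template
print(a["subject"], b["subject"])     # alice bob
```

The deep copy matters: issuing a certificate must never mutate the template itself, which is exactly the property that keeps all issued certificates consistent.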

Setting Security Permissions

In order to implement PKI certificates, administrators and users need to have the appropriate permissions for the certificate templates that are installed on the certificate authority. You can grant, edit, or remove these permissions in the Certificate Templates management snap-in. In order to edit these permissions, you need to be a member of the Enterprise Admins group, or the Domain Admins group in the forest root domain. To manage permissions on your certificate templates, do the following:

1. Open the Certificate Templates MMC console by clicking on Start | Run, then typing certtmpl.msc and clicking OK. You’ll see the screen shown in Figure 5.23.



Figure 5.23 Managing Certificate Templates

2. Right-click on the certificate template whose permissions you need to change and select Properties.
3. On the Security tab shown in Figure 5.24, add the users and groups who will need to request certificates based on this template. Under the Allow column, place a check mark next to the Read and Enroll permissions. Click OK when you’ve set the appropriate permissions for all necessary users and groups.



Figure 5.24 Setting Permissions for Certificate Templates

Enrollment Stations

To distribute certificates and keys to your users, the Certificate Services component included with Windows Server 2003 provides a smart card enrollment station. The enrollment station allows an administrator to request a smart card certificate on a user’s behalf so that it can be pre-installed onto their smart card. The certificate server signs the certificate request that’s generated on behalf of the smart card user. Before your users can request certificates, you need to prepare the enrollment station to generate certificates for their use. A smart card administrator must have the appropriate security permissions to administer the Enrollment Agent certificate template, as detailed in the last section. Any machine running Windows XP or Windows Server 2003 can act as an enrollment station.

Issuing Enrollment Agent Certificates

To prepare your certification authority to issue smart card certificates, you’ll first need to prepare the Enrollment Agent certificate. Before you begin, make sure that your user account has been granted the Read and Enroll permissions as discussed in the last section. To create an Enrollment Agent certificate, follow the steps included here.

1. Open the Certification Authority snap-in by clicking on Start | Programs | Administrative Tools | Certification Authority.
2. In the console tree, navigate to Certification Authority | ComputerName | Certificate Templates.
3. From the Action menu, click on New | Certificate to Issue. You’ll see the screen shown in Figure 5.25.

Figure 5.25 Issuing a Certificate Template

4. Select the Enrollment Agent template and click OK.
5. Return to the Action menu, and select New | Certificate to Issue. Select one of the following options:

• To create certificates that will only be valid for user authentication, select the Smart Card Logon certificate template and click OK.
• For certificates that can be used both for logon and to encrypt user information like email, click on the Smart Card User certificate template, then click OK.

Once you’ve created the Enrollment Agent certificate, anyone with access to that certificate can generate a smart card on behalf of any user in your organization. The resulting smart card could then be used to log on to the network and impersonate the real user. Because of the capabilities of this certificate, you need to maintain strict controls over who has access to it.

Requesting an Enrollment Agent Certificate

In the following exercise, we’ll prepare a Windows Server 2003 machine to act as a smart card enrollment station. Be sure that the user account you’re using to log on has been granted the Read and Enroll permissions for the Enrollment Agent certificate template.

EXERCISE 5.07 CREATING A SMART CARD CERTIFICATE ENROLLMENT STATION

1. Log onto the machine as the user who will be installing the certificates.
2. Create a blank MMC console by clicking Start | Run, then type mmc and click OK.
3. From the console window, click File | Add/Remove Snap-in, then select Add.
4. Double-click on the Certificates snap-in. Click Close and then OK. You’ll see the Certificates snap-in shown in Figure 5.26.



Figure 5.26 The Certificates Management Console

5. In the console, click on Certificates - Current User | Personal.
6. Click on Action | All Tasks, and then select Request New Certificate. Click Next to bypass the Welcome screen.
7. Select the Enrollment Agent certificate template and enter a description for the certificate, in this case “Smart Card Enrollment Certificate.” Click Next to continue.
8. Click Finish to complete the installation of the enrollment agent.

Enrolling Users

The process of setting up your company’s employees to use smart cards includes hardware, software, and administrative considerations. On the hardware side, you’ll need to purchase and install smart card readers for all of your users’ workstations. Assuming that the reader is Plug-and-Play compatible, the hardware installation process should be fairly uncomplicated. Once the necessary hardware is in place, you’ll then use the enrollment station to install smart card logon or user certificates onto each user’s smart card, as well as setting an initial PIN for each user. Along with these technical pieces, you will also be required to create and document policies regarding identification requirements to receive a smart card or to reset a forgotten PIN. Finally, you’ll need to train your users on the new procedure to log onto a smart card-protected workstation, since the familiar Ctrl+Alt+Del key sequence will be a thing of the past.

Installing a Smart Card Reader

Most smart card readers are Plug-and-Play compatible under the Windows Server 2003 software family, so the actual installation of them is relatively straightforward. If you’re using a reader that is not Plug-and-Play compatible or that has not been tested by Microsoft, you’ll need to obtain installation instructions from the manufacturer of the card reader. As of this writing, the smart card readers listed in Table 5.1 are supported by Windows XP and Windows Server 2003. The corresponding device drivers will be installed on the workstation or server when the card reader has been detected by the operating system.

Table 5.1 Supported Smart Card Readers under Windows 2003

Brand              Smart card reader  Interface  Device driver
American Express   GCR435             USB        Grclass.sys
Bull               SmarTLP3           Serial     Bulltlp3.sys
Compaq             Serial reader      Serial     Grserial.sys
Gemplus            GCR410P            Serial     Grserial.sys
Gemplus            GPR400             PCMCIA     Gpr400.sys
Gemplus            GemPC430           USB        Grclass.sys
Hewlett Packard    ProtectTools       Serial     Scr111.sys
Litronic           220P               Serial     Lit220p.sys
Schlumberger       Reflex 20          PCMCIA     Pscr.sys
Schlumberger       Reflex 72          Serial     Scmstcs.sys
Schlumberger       Reflex Lite        Serial     Scr111.sys
SCM Microsystems   SCR111             Serial     Scr111.sys
SCM Microsystems   SCR200             Serial     Scmstcs.sys
SCM Microsystems   SCR120             PCMCIA     Pscr.sys
SCM Microsystems   SCR300             USB        Stcusb.sys
Systemneeds        External           Serial     Scr111.sys
Omnikey AG         2010               Serial     Sccmn50m.sys
Omnikey AG         2020               USB        Sccmusbm.sys
Omnikey AG         4000               PCMCIA     Cmbp0wdm.sys

To install a smart card reader on your computer, simply attach the reader to an available port, either serial or USB, or insert the reader into an available PCMCIA slot on a laptop. If the driver for the reader is preinstalled in Windows 2003, the installation will take place automatically. Otherwise, the Add Hardware Wizard will prompt you for the installation disk from the card reader manufacturer.

Issuing Smart Card Certificates

Once you’ve established the appropriate security for the certificate templates and installed smart card readers on your users’ workstations, you can begin the process of issuing the smart card certificates that your users will need to access the network. This enrollment process needs to be a controlled procedure. In much the same way that employee access cards are monitored to ensure that unidentified persons do not gain physical access to your facility, smart card certificates need to be monitored to ensure that only authorized users can view network resources. In the following exercise, we will use the Web Enrollment application to set up a smart card with a logon certificate.

EXERCISE 5.08 SETTING UP A SMART CARD FOR USER LOGON

1. Log onto your workstation with a user account that has rights to the Enrollment Agent certificate template in the domain where the user’s account is located.
2. Open Internet Explorer, and browse to http://servername/certsrv, where servername is the name of the Certificate Authority on your network.
3. Click on Request a certificate, then Advanced certificate request. You’ll need to choose one of the following options:

• Smart Card Logon certificate, if you want to issue a certificate that will only be valid for authenticating to the Windows domain
• Smart Card User certificate, which will allow the user to secure email and personal information, as well as log onto the Windows 2003 domain



4. Under Certificate Authority, select the name of the CA for your domain. If there are multiple CAs in your domain, click on the one that you want to issue the smart card certificate.
5. For Cryptographic Service Provider, select the cryptographic service provider (CSP) of the smart card’s manufacturer. This is specific to the smart card hardware; consult the manufacturer’s documentation if you are uncertain.
6. In Administrator Signing Certificate, select the Enrollment Agent certificate that will sign the certificate enrollment request. Click Next to continue.
7. On the User to Enroll screen, click Select User to browse to the user account for which you are creating the smart card certificate. Click Enroll to create a certificate for this user.
8. You’ll be prompted to insert the user’s smart card into the reader on your system. When you click OK to proceed, you’ll be prompted to set an initial PIN for the card.
9. If another user has previously used the smart card that you’re preparing, a message will appear indicating that another certificate already exists on the card. Click Yes to replace the existing certificate with the one you just created.
10. On the final screen, you’ll have the option to either view the certificate you just created or begin a new certificate request.
11. Close your browser when you’ve finished creating certificate requests so that no extraneous certificates can be created if you walk away from the enrollment station.

Assigning Smart Cards

Once you’ve pre-configured your users’ smart cards, you’ll need to establish guidelines defining how cards are assigned to those who require them. This part of your smart card deployment plan is more procedural than technical, as you need to determine acceptable policies and service level agreements for your smart cards and smart card readers. For example, what type of identification will you require in order for a user to obtain their smart card? Even if your organization is small enough that you recognize all of your users on sight, you should still record information from a driver’s license or another piece of photo identification for auditing purposes. Another set of issues revolves around your users’ PINs. How many unsuccessful logon attempts will you allow before locking out the smart card? While this will vary according to your individual business requirements, three or four PIN entry attempts are usually more than sufficient. Next, you’ll need to decide whether you will allow users to reset their own PINs, or whether they’ll need to provide personal information to security or help desk personnel to have them reset by the IT staff. The former will be more convenient for your user base, but that convenience comes at the expense of potential security liabilities. If user PINs need to be reset by the IT staff, decide what type of information the user will need to present in order to verify their identity. Document all applicable security policies and distribute them to your administration and security personnel, and make sure that your users are aware of them before they take possession of their smart cards.
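The PIN-lockout behavior discussed above can be sketched as a simple state machine. This Python example is illustrative only (real smart cards enforce the retry counter in hardware) and assumes a three-attempt limit:

```python
# Illustrative sketch of a PIN-lockout policy: the card blocks itself
# after a fixed number of consecutive wrong entries and must then be
# reset by an administrator. Names and the limit are assumptions.
class SmartCardPin:
    MAX_ATTEMPTS = 3

    def __init__(self, pin: str):
        self._pin = pin
        self.failed_attempts = 0
        self.locked = False

    def verify(self, entered: str) -> bool:
        """Check a PIN entry; lock the card after too many failures."""
        if self.locked:
            return False
        if entered == self._pin:
            self.failed_attempts = 0      # success resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            self.locked = True
        return False

card = SmartCardPin("4711")
print(card.verify("0000"), card.verify("1111"), card.verify("2222"))  # False False False
print(card.locked)            # True: three failures locked the card
print(card.verify("4711"))    # False: even the correct PIN is now refused
```

Note that a successful entry resets the counter, so the limit applies only to consecutive failures, which is the usual behavior for both smart cards and account lockout policies.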

Logon Procedures

To log on to a computer using a smart card, your users will no longer need to enter the CTRL+ALT+DEL key sequence. Rather, they’ll simply insert the smart card into the smart card reader, at which point they’ll be prompted to enter their PIN. Once the PIN is accepted, the user will have access to all local and network resources that their Active Directory user account has been granted permissions to.

Revoking Smart Cards

Along with creating policies for issuing and configuring smart cards, you should consider how your organization will handle revoking the smart card of an employee who resigns or is terminated. To be successful, this should be a joint effort between your company’s administrative functions, such as payroll and human resources, and the IT department. Just as an employee must return ID badges and keys as part of the exit process, they should also be required to return their smart card to the company. (As an added incentive, some companies will withhold the employee’s final paycheck until these items are returned.) Whether the employee exits the company gracefully or not, you should add their smart card certificate(s) to your CA’s certificate revocation list (CRL) at the same time that you disable or delete their other logon IDs and credentials. Depending on the manufacturer, you may also have the option to physically disable the smart card itself on the basis of a serial number or other unique identifier.
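Conceptually, revocation amounts to placing the certificate's serial number on the CA's CRL so that relying parties refuse it at logon time. The toy `RevocationList` below models only that membership check; it is a sketch for illustration, not real X.509 CRL processing, and the class name and serial values are invented for the example.

```python
# Toy model of CRL membership checking; real CRLs are signed X.509 structures.
class RevocationList:
    def __init__(self):
        self._revoked = set()  # serial numbers of revoked certificates

    def revoke(self, serial):
        """Add a certificate's serial number to the revocation list."""
        self._revoked.add(serial)

    def is_valid(self, serial):
        """A certificate is acceptable only if its serial is not on the list."""
        return serial not in self._revoked

crl = RevocationList()
crl.revoke("1A:2B:3C")           # employee leaves; their cert serial goes on the CRL
print(crl.is_valid("1A:2B:3C"))  # False: logon with this card is now refused
print(crl.is_valid("9F:8E:7D"))  # True: other users' cards are unaffected
```

The point the model makes is that revocation is additive and targeted: publishing one serial number invalidates one card without disturbing any other credential in the domain.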

Planning for Smart Card Support

Like any device or technology used to enhance network security, you’ll need to make plans to educate your users on how to use smart cards, as well as provide administrative tools to support their ongoing use. First, make sure that your users understand the purpose of deploying smart cards; you’ll receive a much better response if they comprehend the importance of the added security than if they’re simply handed a smart card and told to use it. Emphasize that the smart card is a valuable resource to protect the company and its assets, rather than simply another corporate procedure designed to annoy them or waste their time. Users should know whom to call for help and technical support, if this is different from their usual support contacts, as well as what to do if their card is lost or stolen. Maintain a printed version of this information, and distribute it to your users when they receive their smart cards. You can also publish this on your corporate intranet if you have one. When orienting your users to the use of smart cards, make sure that you cover the following key points:

• Protect the external smart card chip. If the chip itself becomes scratched, dented, or otherwise damaged, the smart card reader might not be able to read the data on the chip. (This is similar to the magnetic strip on a credit card or an ATM card.)



• Do not bend the card, as doing so can destroy its internal components. This extends to something as simple as a user putting the smart card in a back pocket, since they might sit on the card and break it.

• Avoid exposing the card to extreme temperatures. Leaving a smart card on the dashboard of your car on a hot day can melt or warp the card, while extremes of cold can make the card brittle and cause it to break.

• Keep the smart card away from magnetic sources like credit cards and scanners at retail stores.

• Keep the smart card away from young children and pets, as it presents a potential swallowing or choking hazard.

Along with user education, there are several settings within Active Directory Group Policy that can simplify the administration of smart cards on your network. Some of these, like account lockout policies and restricted logon times, will affect users by default if they rely on their smart cards for domain logons. Other policy settings are specific to managing smart cards on your network. Within Group Policy, you can enable the following settings:

• Smart card required for interactive logon. This prevents a user account from logging onto the network by presenting a username/password combination; the user will only be able to authenticate by using a smart card. This provides strict security for your users; however, you should plan for an alternate means of authentication in case your smart card implementation becomes unavailable for any reason. This policy is not appropriate for users who need to perform administrative tasks like installing Active Directory on a server or joining computers to a Windows Server 2003 domain.

• On smart card removal. This setting allows you to mandate that, when a user removes their smart card from the reader, their session is either logged off or locked, preventing them from leaving an active session running when they walk away. User education is critical if you select the forced logoff option, as users will need to make sure that they’ve saved changes to their documents and files before they remove their smart cards.

• Do not allow smart card device redirection. This setting prevents your users from using smart cards to log onto a Terminal Services session. Set this policy if you’re concerned about conserving network resources associated with your Terminal Server environment.

• Account lockout threshold. While this setting is not specific to smart cards, smart card PINs are more susceptible to password attacks, so your lockout threshold settings should be adjusted accordingly.

From an administrative standpoint, there are several other important considerations in creating a support structure for smart card use. You need to identify the persons within your organization who will be able to perform security-related tasks like resetting PINs or distributing temporary cards to



replace those that are lost or forgotten. You’ll also need to decide how you’ll handle personnel changes like name changes and changes in employment status, as well as any special procedures for high-level employees, traveling users, and support personnel.

Fast Track

Password Policies

• According to Microsoft, complex passwords consist of at least seven characters, including three of the following four character types: uppercase letters, lowercase letters, numeric digits, and non-alphanumeric characters like & $ * !.

• Password policies, including password length and complexity as well as account lockout policies, are set at the domain level. If you have a subset of your user base that requires a different set of account policies and other security settings, you should create a separate domain to meet their requirements.

• Be sure that you understand the implications of an account lockout policy before you enable one in a production environment.
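The complexity rule quoted above (at least seven characters, drawn from at least three of the four character classes) is mechanical enough to sketch as a checker. This is an illustrative approximation of the rule as stated here, not Microsoft's actual implementation, which also rejects passwords containing the account name.

```python
import string

def is_complex(password):
    """Rough check of the rule above: length >= 7 and >= 3 of 4 character classes."""
    classes = [
        any(c in string.ascii_uppercase for c in password),  # uppercase letters
        any(c in string.ascii_lowercase for c in password),  # lowercase letters
        any(c in string.digits for c in password),           # numeric digits
        any(not c.isalnum() for c in password),              # symbols like & $ * !
    ]
    return len(password) >= 7 and sum(classes) >= 3

print(is_complex("Passw0rd"))  # True: upper, lower, and digit classes present
print(is_complex("password"))  # False: only one character class
print(is_complex("P@ss1"))     # False: enough classes, but too short
```

Counting satisfied classes rather than requiring all four is what makes "Passw0rd" pass without any symbol: three of the four classes is the stated threshold.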

User Authentication

• Kerberos version 5 is the default authentication method between two machines that are both running Windows 2000 or later. For down-level clients and servers, NTLM authentication will be used.

• Internet Authentication Service can be used for a variety of applications: as a RADIUS server or proxy, to authenticate network hardware like switches, and to provide remote access and VPN authentication.

• To provide authentication for web applications, you can implement either SSL/TLS for standards-based encryption that is recognized by a wide range of browsers and platforms, or Microsoft Digest, which is specific to Internet Explorer version 5 or later.

Using Smart Cards

• Microsoft Windows Server 2003 relies on its Public Key Infrastructure (PKI) and Certificate Services to facilitate smart card authentication.

• Smart card certificates are based on the following three certificate templates: the Enrollment Agent certificate, used to create certificates for smart card users; the Smart Card Logon certificate, which provides user authentication only; and the Smart Card User certificate, which allows for both authentication and data encryption.

• Several Group Policy settings are specific to smart card implementations, while other account policy settings will also affect smart card users.

Frequently Asked Questions



Q: How can I configure a smart card user to be able to temporarily log onto the network if they’ve forgotten their card?

A: In the user’s Properties sheet within Active Directory Users and Computers, make the following changes on the Account tab:

• Clear the check mark next to Smart Card is Required for Interactive Logon.

• Place a check mark next to User Must Change Password at Next Logon.

Finally, right-click on the user object and select Reset Password. Inform the user of their new password, and that they will need to change it the first time they log on.

Q: What weaknesses does the Kerberos authentication protocol possess?

A: The largest concern when using Kerberos authentication centers on the physical security of your Key Distribution Centers, as well as your local workstations. Since Kerberos attempts to provide single sign-on capabilities for your users, an attacker who gains access to your workstation console will be able to access the same resources that you yourself are able to. Kerberos also does not protect against stolen passwords; if a malicious user obtains a legitimate password, he or she will be able to impersonate a legitimate user on your network.

Q: What are the advantages of implementing a “soft lockout” policy versus a “hard lockout” within the account lockout policies?

A: A hard lockout refers to an account lockout that must be manually cleared by an administrator. This provides the highest level of security, but carries with it the risk that legitimate users will be unable to access network resources – you can effectively create a denial-of-service attack against your own network. A soft lockout that expires after a set amount of time will still help to avert password attacks against your network, while still allowing legitimate users a reasonable chance to get their jobs done.
For example, if your account lockout policy specifies that accounts should be locked out for one hour after two bad logon attempts, this will render even an automated password-guessing utility so slow as to be nearly ineffective.

Q: My organization is in the planning stages of a smart card rollout. What are the security considerations involved when setting up a smart card enrollment station?

A: Since a smart card enrollment station will allow you to create certificates on behalf of any user within your Windows Server 2003 domain, you should secure these machines heavily, both in terms of physical location and software patches. Imagine the damage that could be wrought if a malicious user were able to create a smart card logon certificate for a member of the Domain Admins group and use it to log onto your network at will.

Q: How can I convince my users that the company’s new smart card rollout is something that is protecting them, rather than simply “yet another stupid rule to follow”?

A: One of the most critical components of any network security policy is securing “buy-in” from your users: a security mechanism that is not followed



is little more useful than having no mechanism at all. Try to explain the value of smart card authentication from the end user’s perspective: if you work in a sales organization, ask your sales force how they would feel if their client contacts, price quotes, and contracts fell into the hands of their main competitor. In a situation like this, providing a good answer to “What’s in it for me?” can mean the difference between a successful security structure and a failed one.
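The soft-lockout example earlier in the FAQ (a one-hour lockout after two bad logon attempts) is easy to quantify, and the arithmetic makes the point concrete. The sketch below works through those numbers; the half-keyspace figure assumes, for illustration only, a 4-digit PIN of 10,000 possible values.

```python
# Back-of-the-envelope arithmetic for the soft-lockout example above.
attempts_before_lockout = 2   # bad attempts allowed before the lockout triggers
lockout_minutes = 60          # soft lockout expires after one hour

# An online guesser gets at most two tries per hour against one account.
guesses_per_day = attempts_before_lockout * (24 * 60 // lockout_minutes)
print(guesses_per_day)  # 48 guesses per day

# Assumed keyspace: a 4-digit PIN (10,000 values). On average an attacker
# must try half the space before hitting the right value.
expected_days = (10_000 / 2) / guesses_per_day
print(round(expected_days))  # roughly 104 days for the expected success
```

At 48 guesses per day, even this small keyspace takes over three months of uninterrupted guessing on average, which is the sense in which the policy renders an automated password-guessing utility "so slow as to be nearly ineffective"; any longer secret pushes the figure out by orders of magnitude.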

