

Pro Python System Administration
Second Edition

Rytis Sileika
For your convenience Apress has placed some of the front matter material after the index. Please use the Bookmarks and Contents at a Glance links to access them.

Apress

Contents at a Glance

About the Author
About the Technical Reviewers
Acknowledgments
Introduction
Chapter 1: Reading and Collecting Performance Data Using SNMP
Chapter 2: Managing Devices Using the SOAP API
Chapter 3: Creating a Web Application for IP Address Accountancy
Chapter 4: Integrating the IP Address Application with DHCP
Chapter 5: Maintaining a List of Virtual Hosts in an Apache Configuration File
Chapter 6: Gathering and Presenting Statistical Data from Apache Log Files
Chapter 7: Performing Complex Searches and Reporting on Application Log Files
Chapter 8: A Website Availability Check Script for Nagios
Chapter 9: Management and Monitoring Subsystem
Chapter 10: Remote Monitoring Agents
Chapter 11: Statistics Gathering and Reporting
Chapter 12: Distributed Message Processing System
Chapter 13: Automatic MySQL Database Performance Tuning
Chapter 14: Using Amazon EC2/S3 as a Data Warehouse Solution
Index


Introduction

The role of the system administrator has grown dramatically over the years. The number of systems supported by a single engineer has also increased. As such, it is impractical to handcraft each installation, and there is a need to automate as many tasks as possible. The structure of systems varies from organization to organization, therefore system administrators must be able to create their own management tools. Historically, the most popular programming languages for these tasks were UNIX shell and Perl. They served their purposes well, and I doubt they will ever cease to exist. However, the complexity of current systems requires new tools, and the Python programming language is one of them.

Python is an object-oriented programming language suitable for developing large-scale applications. Its syntax and structure make it very easy to read—so much so that the language is sometimes referred to as "executable pseudocode." The Python interpreter allows for interactive execution, so in some situations an administrator can use it instead of a standard UNIX shell. Although Python is primarily an object-oriented language, it is easily adapted for procedural and functional styles of programming. Given all that, Python makes a perfect fit as a new language for implementing system administration applications. There are a large number of Linux system utilities already written in Python, such as the Yum package manager and Anaconda, the Linux installation program.

The Prerequisites for Using this Book

This book is about using the Python programming language to solve specific system administration tasks. We look at the four distinctive system administration areas: network management, web server and web application management, database system management, and system monitoring. Although I explain in detail most of the technologies used in this book, bear in mind that the main goal here is to display the practical application of the Python libraries so as to solve rather specific issues. Therefore, I assume that you are a seasoned system administrator. You should be able to find additional information yourself; this book gives you a rough guide for how to reach your goal, but you must be able to work out how to adapt it to your specific system and environment.

As we discuss the examples, you will be asked to install additional packages and libraries. In most cases, I provide the commands and instructions to perform these tasks on a Fedora system, but you should be ready to adapt the instructions to the Linux distribution that you are going to use. Most of the examples also work without much modification on a recent OS X release (10.10.X).

I also assume that you have a background in the Python programming language. I introduce the specific libraries that are used in system administration tasks, as well as some lesser known or less often discussed language functionality, such as the generator functions or the class internal methods, but the basic language syntax is not explained here. If you want to refresh your Python skills, I recommend the following books: Pro Python by Marty Alchin and J. Burton Browning (Apress, 2012; but watch for a new edition due to be released in early 2015); Python Programming for the Absolute Beginner by Mike Dawson (Course Technology PTR, 2010); and Core Python Applications Programming by Wesley Chun (Prentice Hall, 2012).

All examples presented in this book assume Python version 2.7. This is mostly dictated by the libraries that are used in the examples. Some libraries have been ported to Python 3; however, some have not. So if you need to run Python 3, make sure you check that the required libraries have Python 3 support.


The Structure of this Book

This book contains 14 chapters, and each chapter solves a distinctive problem. Some examples span multiple chapters, but even then, each chapter deals with a specific aspect of the particular problem. In addition to the chapters, several other organizational layers characterize this book.

First, I grouped the chapters by the problem type. Chapters 1 to 4 deal with network management issues; Chapters 5 to 7 talk about the Apache web server and web application management; Chapters 8 to 11 are dedicated to monitoring and statistical calculations; and Chapters 12 and 13 focus on database management issues.

Second, I maintain a common pattern in all chapters. I start with the problem statement and then move on to gather requirements and proceed through the design phase before moving into the implementation section.

Third, each chapter focuses on one or more technologies and the Python libraries that provide the language interface for the particular technology. Examples of such technologies could be the SOAP protocol, application plug-in architecture, or cloud computing concepts.

More specifically, here's a breakdown of the chapters:

Chapter 1: Reading and Collecting Performance Data Using SNMP. Most network-attached devices expose the internal counters via the Simple Network Management Protocol (SNMP). This chapter explains basic SNMP principles and the data structure. We then look at the Python libraries that provide the interface to SNMP-enabled devices. We also investigate the round robin database, which is the de facto standard for storing the statistical data. Finally, we look at the Jinja2 template framework, which allows us to generate simple web pages.

Chapter 2: Managing Devices Using the SOAP API. Complicated tasks, such as managing the device configuration, cannot be easily done by using SNMP because the protocol is too simplistic. Therefore, advanced devices, such as the Citrix Netscaler load balancers, provide the SOAP API interface to the device management system. In this chapter, we investigate the SOAP API structure and the libraries that enable the SOAP-based communication from the Python programming language. We also look at the basic logging functionality using the built-in libraries. This second edition of the book includes examples of how to use the new REST API to manage the load balancer devices.

Chapter 3: Creating a Web Application for IP Address Accountancy. In this chapter, we build a web application that maintains the list of the assigned IP addresses and the address ranges. We learn how to create web applications using the Django framework. I show you the way the Django application should be structured, tell how to create and configure the application settings, and explain the URL structure. We also investigate how to deploy the Django application using the Apache web server.

Chapter 4: Integrating the IP Address Application with DHCP. This chapter expands on the previous chapter, and we implement the DHCP address range support. We also look at some advanced Django programming techniques, such as customizing the response MIME type and serving AJAX calls. This second edition adds new functionality to manage dynamic DHCP leases using the OMAPI protocol.

Chapter 5: Maintaining a List of Virtual Hosts in an Apache Configuration File. This is another Django application that we develop in this book, but this time our focus is on the Django administration interface. While building the Apache configuration management application, you learn how to customize the default Django administration interface with your own views and functions.


Chapter 6: Gathering and Presenting Statistical Data from Apache Log Files. In this chapter, the goal is to build an application that parses and analyses the Apache web server log files. Instead of taking the straightforward but inflexible approach of building a monolithic application, we look at the design principles involved in building plug-in applications. You learn how to use the object and class type discovery functions and how to perform a dynamic module loading. This second edition of the book shows you how to perform data visualization based on the gathered data.

Chapter 7: Performing Complex Searches and Reporting on Application Log Files. This chapter also deals with the log file parsing, but this time I show you how to parse complex, multi-line log file entries. We investigate the functionality of the open-source log file parser tool called Exctractor, which you can download from http://exctractor.sourceforge.net/.

Chapter 8: A Website Availability Check Script for Nagios. Nagios is one of the most popular open-source monitoring systems, because its modular structure allows users to implement their own check scripts and thus customize the tool to meet their needs. In this chapter, we create two scripts that check the functionality of a website. We investigate how to use the Beautiful Soup HTML parsing library to extract the information from the HTML web pages.

Chapter 9: Management and Monitoring Subsystem. This chapter starts a three-chapter series in which we build a complete monitoring system. The goal of this chapter is not to replace mature monitoring systems such as Nagios or Zenoss but to show the basic principles of the distributed application programming. We look at database design principles such as data normalization. We also investigate how to implement the communication mechanisms between network services using the RPC calls.

Chapter 10: Remote Monitoring Agents. This is the second chapter in the monitoring series, where we implement the remote monitoring agent components. In this chapter, I also describe how to decouple the application from its configuration using the ConfigParser module.

Chapter 11: Statistics Gathering and Reporting. This is the last part of the monitoring series, where I show you how to perform basic statistical analysis on the collected performance data. We use scientific libraries: NumPy to perform the calculations and matplotlib to create the graphs. You learn how to find which performance readings fall into the comfort zone and how to calculate the boundaries of that zone. We also do the basic trend detection, which provides a good insight for the capacity planning.

Chapter 12: Distributed Message Processing System. This is a new chapter for the second edition of the book. In this chapter I show you how to convert the distributed management system to use Celery, a remote task execution framework.

Chapter 13: Automatic MySQL Database Performance Tuning. In this chapter, I show you how to obtain the MySQL database configuration variables and the internal status indicators. We build an application that makes a suggestion on how to improve the database engine performance based on the obtained data.

Chapter 14: Using Amazon EC2/S3 as a Data Warehouse Solution. This chapter shows you how to utilize the Amazon Elastic Compute Cloud (EC2) and offload the infrequent computation tasks to it. We build an application that automatically creates a database server where you can transfer data for further analysis. You can use this example as a basis to build an on-demand data warehouse solution.


The Example Source Code

The source code of all the examples in this book, along with any applicable sample data, can be downloaded from the Apress website by following instructions at www.apress.com/source-code/. The source code stored at this location contains the same code that is described in the book. Most of the prototypes described in this book are also available as open-source projects. You can find these projects at the author's website, http://www.sysadminpy.com/.


CHAPTER 1

Reading and Collecting Performance Data Using SNMP

Most devices that are connected to a network report their status using SNMP (the Simple Network Management Protocol). This protocol was designed primarily for managing and monitoring network-attached hardware devices, but some applications also expose their statistical data using this protocol. In this chapter we will look at how to access this information from your Python applications. We are going to store the obtained data in an RRD (round robin database), using RRDTool—a widely known and popular application and library, which is used to store and plot the performance data. Finally we'll investigate the Jinja2 template system, which we'll use to generate simple web pages for our application.

Application Requirements and Design

The topic of system monitoring is very broad and usually encompasses many different areas. A complete monitoring system is rather complex and often is made up of multiple components working together. We are not going to develop a complete, self-sufficient system here, but we'll look into two important areas of a typical monitoring system: information gathering and representation. In this chapter we'll implement a system that queries devices using the SNMP protocol and then stores the data using the RRDTool library, which is also used to generate the graphs for visual data representation. All this is tied together into simple web pages using the Jinja2 templating library. We'll look at each of these components in more detail as we go along through the chapter.

Specifying the Requirements

Before we start designing our application we need to come up with some requirements for our system. First of all we need to understand the functionality we expect our system to provide. This will help us to create an effective (and we hope easy-to-implement) system design. In this chapter we are going to create a system that monitors network-attached devices, such as network switches and routers, using the SNMP protocol. So the first requirement is that the system be able to query any device using SNMP.

The information gathered from the devices needs to be stored for future reference and analysis. Let's make some assumptions about the use of this information. First, we don't need to store it indefinitely. (I'll talk more about permanent information storage in Chapters 9–11.) This means that the information is stored only for a predefined period of time, and once it becomes obsolete it will be erased. This presents our second requirement: the information needs to be deleted after it has "expired." Second, the information needs to be stored so that graphs can be produced. We are not going to use it for anything else, and therefore the data store should be optimized for the data representation tasks.


Finally, we need to generate the graphs and represent this information on easily accessible web pages. The information needs to be structured by the device names only. For example, if we are monitoring several devices for CPU and network interface utilization, this information needs to be presented on a single page. We don’t need to present this information on multiple time scales; by default the graphs should show the performance indicators for the last 24 hours.

High-Level Design Specification

Now that we have some ideas about the functionality of our system, let's create a simple design, which we'll use as a guide in the development phase. The basic approach is that each of the requirements we specified earlier should be covered by one or more design decisions.

The first requirement is that we need to monitor the network-attached devices, and we need to do so using SNMP. This means that we have to use an appropriate Python library that deals with the SNMP objects. The SNMP module is not included in the default Python installation, so we'll have to use one of the external modules. I recommend using the PySNMP library (available at http://pysnmp.sourceforge.net/), which is readily available on most of the popular Linux distributions.

The perfect candidate for the data store engine is RRDTool (available at http://oss.oetiker.ch/rrdtool/). The round robin database means that the database is structured in such a way that each "table" has a limited length, and once the limit is reached, the oldest entries are dropped. In fact they are not dropped; the new ones are simply written into their position. The RRDTool library provides two distinct functionalities: the database service and the graph-generation toolkit. There is no native support for RRD databases in Python, but there is an external library available that provides an interface to the RRDTool library.

Finally, to generate the web page we will use the Jinja2 templating library (available at http://jinja.pocoo.org, or on GitHub: https://github.com/mitsuhiko/jinja2), which lets us create sophisticated templates and decouple the design and development tasks.

We are going to use a simple Windows INI-style configuration file to store the information about the devices we will be monitoring. This information will include details such as the device address, SNMP object reference, and access control details. The application will be split into two parts: the first part is the information-gathering tool that queries all configured devices and stores the data in the RRDTool database, and the second part is the report generator, which generates the web site structure along with all required images. Both components will be instantiated from the standard UNIX scheduler application, cron. These two scripts will be named snmp-manager.py and snmp-pages.py, respectively.
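Because both scripts will be started by cron, it may help to see what the scheduling could look like. The following crontab sketch is purely illustrative; the five-minute collection interval and the /opt/snmp-monitor installation path are assumptions made for this example, not part of the design:

# m    h  dom mon dow  command
*/5    *  *   *   *    /usr/bin/python /opt/snmp-monitor/snmp-manager.py
0      *  *   *   *    /usr/bin/python /opt/snmp-monitor/snmp-pages.py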

Introduction to SNMP

SNMP (Simple Network Management Protocol) is a UDP-based protocol used mostly for managing network-attached devices, such as routers, switches, computers, printers, video cameras, and so on. Some applications also allow access to internal counters via the SNMP protocol. SNMP not only allows you to read performance statistics from the devices, it can also send control messages to instruct a device to perform some action—for example, you can restart a router remotely by using SNMP commands.

There are three main components in a system managed by SNMP:

- The management system, which is responsible for managing all devices
- The managed devices, which are all devices managed by the management system
- The SNMP agent, which is an application that runs on each of the managed devices and interacts with the management system

This relationship is illustrated in Figure 1-1.


[Figure 1-1. The SNMP network components: the management system connected to the SNMP agent software running on each managed device (Managed device 1, Managed device 2, ..., Managed device X)]

This approach is rather generic. The protocol defines seven basic commands, of which the most interesting to us are get, get bulk, and response. As you may have guessed, the former two are the commands that the management system issues to the agent, and the latter is a response from the agent software.

How does the management system know what to look for? The protocol does not define a way of exchanging this information, and therefore the management system has no way to interrogate the agents to obtain the list of available variables. The issue is resolved by using a Management Information Base (or MIB). Each device usually has an associated MIB, which describes the structure of the management data on that system. Such a MIB would list in hierarchical order all object identifiers (OIDs) that are available on the managed device.

The OID effectively represents a node in the object tree. It contains numerical identifiers of all nodes leading to the current OID starting from the node at the top of the tree. The node IDs are assigned and regulated by the IANA (Internet Assigned Numbers Authority). An organization can apply for an OID node, and when it is assigned it is responsible for managing the OID structure below the allocated node. Figure 1-2 illustrates a portion of the OID tree.


[Figure 1-2. The SNMP OID tree: a portion of the tree from the root through CCITT(0), ISO(1), and CCITT-ISO(2); under ISO(1) sit ORG(3), DOD(6), and INTERNET(1), which branches into Directory(1), Mgmt(2), Experimental(3), and Private(4); the Private(4) branch leads to Enterprise(1) nodes such as Cisco(9) and Novell(23), while the Mgmt(2) branch leads to System(1) nodes such as sysDescr(1), sysObjectID(2), and sysUpTime(3), and to Interfaces(2) nodes such as ifNumber(1) and ifTable(2) with its ifEntry(1) children ifIndex(1), ifDescr(2), and ifType(3)]

Let's look at some example OIDs. The OID tree node that is assigned to the Cisco organization has a value of 1.3.6.1.4.1.9, which means that all proprietary OIDs that are associated with the Cisco manufactured devices will start with these numbers. Similarly, the Novell devices will have their OIDs starting with 1.3.6.1.4.1.23. I deliberately emphasized proprietary OIDs because some properties are expected to be present (if and where available) on all devices. These are under the 1.3.6.1.2.1.1 (System SNMP Variables) node, which is defined by RFC1213. For more details on the OID tree and its elements, visit http://www.alvestrand.no/objectid/top.html. This website allows you to browse the OID tree and it contains quite a large collection of the various OIDs.
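To make the numeric notation a bit more tangible, here is a small illustrative Python snippet (not part of the book's application) that converts a dotted OID string into a tuple of integers (the representation we will later pass to the PySNMP library) and tests whether it belongs to the Cisco enterprise subtree. The sample OID is made up for this example:

CISCO_ENTERPRISE = (1, 3, 6, 1, 4, 1, 9)

def oid_to_tuple(oid_str):
    # '1.3.6.1.4.1.9.2.1.58' -> (1, 3, 6, 1, 4, 1, 9, 2, 1, 58)
    return tuple(int(part) for part in oid_str.strip('.').split('.'))

oid = oid_to_tuple('1.3.6.1.4.1.9.2.1.58')
print oid[:len(CISCO_ENTERPRISE)] == CISCO_ENTERPRISE    # True, so this is a Cisco proprietary OID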

The System SNMP Variables Node

In most cases the basic information about a device will be available under the System SNMP Variables OID node subtree. Therefore let's have a close look at what you can find there. This OID node contains several additional OID nodes. Table 1-1 provides a description for most of the subnodes.


Table 1-1. System SNMP OIDs

OID String | OID Name | Description
1.3.6.1.2.1.1.1 | sysDescr | A string containing a short description of the system or device. Usually contains the hardware type and operating system details.
1.3.6.1.2.1.1.2 | sysObjectID | A string containing the vendor-specific device OID node. For example, if the organization has been assigned an OID node 1.3.6.1.4.1.8888 and this specific device has been assigned a .1.1 OID space under the organization's space, this field would contain a value of 1.3.6.1.4.1.8888.1.1.
1.3.6.1.2.1.1.3 | sysUpTime | A number representing the time in hundredths of a second from the time when the system was initialized.
1.3.6.1.2.1.1.4 | sysContact | An arbitrary string containing information about the contact person who is responsible for this system.
1.3.6.1.2.1.1.5 | sysName | A name that has been assigned to the system. Usually this field contains a fully qualified domain name.
1.3.6.1.2.1.1.6 | sysLocation | A string describing the physical location of the system.
1.3.6.1.2.1.1.7 | sysServices | A number that indicates which services are offered by this system. The number is a bitmap representation of all OSI protocols, with the lowest bit representing the first OSI layer. For example, a switching device (operating on layer 2) would have this number set to 2^1 = 2. This field is rarely used now.
1.3.6.1.2.1.1.8 | sysLastChange | A number containing the value of sysUpTime at the time of a change to any of the system SNMP objects.
1.3.6.1.2.1.1.9 | sysTable | A node containing multiple sysEntry elements. Each element represents a distinct capability and the corresponding OID node value.
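To make the sysServices bitmap more concrete, the following short sketch (illustrative only, not part of our monitoring application) decodes the value into the list of OSI layers the device claims to serve:

def decode_sys_services(value):
    # bit (L - 1) is set when the device offers services at OSI layer L
    return [layer for layer in range(1, 8) if value & (1 << (layer - 1))]

print decode_sys_services(2)     # [2] - a layer 2 switch
print decode_sys_services(72)    # [4, 7] - a typical host offering transport and application services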

The Interfaces SNMP Variables Node

Similarly, the basic interface statistics can be obtained from the Interfaces SNMP Variables OID node subtree. The OID for the interfaces variables is 1.3.6.1.2.1.2 and it contains two subnodes:

- An OID containing the total number of network interfaces. The OID value for this entry is 1.3.6.1.2.1.2.1, and it is usually referenced as ifNumber. There are no subnodes available under this OID.
- An OID node that contains all interface entries. Its OID is 1.3.6.1.2.1.2.2 and it is usually referenced as ifTable. This node contains one or more entry nodes. An entry node (1.3.6.1.2.1.2.2.1, also known as ifEntry) contains the detailed information about that particular interface. The number of entries in the list is defined by the ifNumber node value.

You can find detailed information about all ifEntry subnodes in Table 1-2.


Table 1-2. Interface entry SNMP OIDs

OID String | OID Name | Description
1.3.6.1.2.1.2.2.1.1 | ifIndex | A unique sequence number assigned to the interface.
1.3.6.1.2.1.2.2.1.2 | ifDescr | A string containing the interface name and other available information, such as the hardware manufacturer's name.
1.3.6.1.2.1.2.2.1.3 | ifType | A number representing the interface type, depending on the interface's physical link and protocol.
1.3.6.1.2.1.2.2.1.4 | ifMtu | The largest network datagram that this interface can transmit.
1.3.6.1.2.1.2.2.1.5 | ifSpeed | The estimated current bandwidth of the interface. If the current bandwidth cannot be calculated, this number should contain the maximum possible bandwidth for the interface.
1.3.6.1.2.1.2.2.1.6 | ifPhysAddress | The physical address of the interface, usually a MAC address on Ethernet interfaces.
1.3.6.1.2.1.2.2.1.7 | ifAdminStatus | This OID allows setting the new state of the interface. Usually limited to the following values: 1 (Up), 2 (Down), 3 (Testing).
1.3.6.1.2.1.2.2.1.8 | ifOperStatus | The current state of the interface. Usually limited to the following values: 1 (Up), 2 (Down), 3 (Testing).
1.3.6.1.2.1.2.2.1.9 | ifLastChange | The value containing the system uptime (sysUpTime) reading when this interface entered its current state. May be set to zero if the interface entered this state before the last system reinitialization.
1.3.6.1.2.1.2.2.1.10 | ifInOctets | The total number of bytes (octets) received on the interface.
1.3.6.1.2.1.2.2.1.11 | ifInUcastPkts | The number of unicast packets forwarded to the device's network stack.
1.3.6.1.2.1.2.2.1.12 | ifInNUcastPkts | The number of non-unicast packets delivered to the device's network stack. Non-unicast packets are usually either broadcast or multicast packets.
1.3.6.1.2.1.2.2.1.13 | ifInDiscards | The number of dropped packets. This does not indicate a packet error, but may indicate that the receive buffer was too small to accept the packets.
1.3.6.1.2.1.2.2.1.14 | ifInErrors | The number of received invalid packets.
1.3.6.1.2.1.2.2.1.15 | ifInUnknownProtos | The number of packets that were dropped because the protocol is not supported on the device interface.
1.3.6.1.2.1.2.2.1.16 | ifOutOctets | The number of bytes (octets) transmitted out of the interface.
1.3.6.1.2.1.2.2.1.17 | ifOutUcastPkts | The number of unicast packets received from the device's network stack. This number also includes the packets that were discarded or not sent.
1.3.6.1.2.1.2.2.1.18 | ifOutNUcastPkts | The number of non-unicast packets received from the device's network stack. This number also includes the packets that were discarded or not sent.
1.3.6.1.2.1.2.2.1.19 | ifOutDiscards | The number of valid packets that were discarded. It's not an error, but it may indicate that the send buffer is too small to accept all packets.
1.3.6.1.2.1.2.2.1.20 | ifOutErrors | The number of outgoing packets that couldn't be transmitted because of errors.
1.3.6.1.2.1.2.2.1.21 | ifOutQLen | The length of the outbound packet queue.
1.3.6.1.2.1.2.2.1.22 | ifSpecific | Usually contains a reference to the vendor-specific OID describing this interface. If such information is not available the value is set to an OID 0.0, which is syntactically valid, but is not pointing to anything.
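Note that ifInOctets and ifOutOctets are ever-increasing 32-bit counters, so a traffic rate has to be derived from two successive readings. RRDTool will do this for us automatically later in the chapter, but the underlying arithmetic is simple; a minimal illustrative sketch (with made-up numbers) follows:

COUNTER32_MAX = 2 ** 32

def octet_rate(previous, current, interval):
    # bytes per second between two Counter32 readings taken 'interval' seconds apart
    delta = current - previous
    if delta < 0:               # the 32-bit counter wrapped around since the last reading
        delta += COUNTER32_MAX
    return float(delta) / interval

print octet_rate(1000, 151000, 300)    # 500.0 bytes per second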

Authentication in SNMP

Authentication in earlier SNMP implementations is somewhat primitive and is prone to attacks. An SNMP agent defines two community strings: one for read-only access and the other for read/write access. When the management system connects to the agent, it must authenticate with one of those two strings. The agent accepts commands only from a management system that has authenticated with valid community strings.

Querying SNMP from the Command Line

Before we start writing our application, let's quickly look at how to query SNMP from the command line. This is particularly useful if you want to check whether the information returned by the SNMP agent is correctly accepted by your application.

The command-line tools are provided by the Net-SNMP-Utils package, which is available for most Linux distributions. This package includes the tools to query and set SNMP objects. Consult your Linux distribution documentation for the details on installing this package. For example, on a RedHat-based system you can install these tools with the following command:

$ sudo yum install net-snmp-utils

On a Debian-based system the package can be installed like this:

$ sudo apt-get install snmp

The most useful command from this package is snmpwalk, which takes an OID node as an argument and tries to discover all subnode OIDs. This command uses the SNMP operation getnext, which returns the next node in the tree and effectively allows you to traverse the whole subtree from the indicated node. If no OID has been specified, snmpwalk will use the default SNMP system OID (1.3.6.1.2.1) as the starting point. Listing 1-1 demonstrates the snmpwalk command issued against a laptop running Fedora Linux.


Listing 1-1. An Example of the snmpwalk Command

$ snmpwalk -v2c -c public -On 192.168.1.68
.1.3.6.1.2.1.1.1.0 = STRING: Linux fedolin.example.com 2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686
.1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.8072.3.2.10
.1.3.6.1.2.1.1.3.0 = Timeticks: (110723) 0:18:27.23
.1.3.6.1.2.1.1.4.0 = STRING: Administrator ([email protected])
.1.3.6.1.2.1.1.5.0 = STRING: fedolin.example.com
.1.3.6.1.2.1.1.6.0 = STRING: MyLocation, MyOrganization, MyStreet, MyCity, MyCountry
.1.3.6.1.2.1.1.8.0 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.2.1 = OID: .1.3.6.1.6.3.10.3.1.1
.1.3.6.1.2.1.1.9.1.2.2 = OID: .1.3.6.1.6.3.11.3.1.1
.1.3.6.1.2.1.1.9.1.2.3 = OID: .1.3.6.1.6.3.15.2.1.1
.1.3.6.1.2.1.1.9.1.2.4 = OID: .1.3.6.1.6.3.1
.1.3.6.1.2.1.1.9.1.2.5 = OID: .1.3.6.1.2.1.49
.1.3.6.1.2.1.1.9.1.2.6 = OID: .1.3.6.1.2.1.4
.1.3.6.1.2.1.1.9.1.2.7 = OID: .1.3.6.1.2.1.50
.1.3.6.1.2.1.1.9.1.2.8 = OID: .1.3.6.1.6.3.16.2.2.1
.1.3.6.1.2.1.1.9.1.3.1 = STRING: The SNMP Management Architecture MIB.
.1.3.6.1.2.1.1.9.1.3.2 = STRING: The MIB for Message Processing and Dispatching.
.1.3.6.1.2.1.1.9.1.3.3 = STRING: The management information definitions for the SNMP User-based Security Model.
.1.3.6.1.2.1.1.9.1.3.4 = STRING: The MIB module for SNMPv2 entities
.1.3.6.1.2.1.1.9.1.3.5 = STRING: The MIB module for managing TCP implementations
.1.3.6.1.2.1.1.9.1.3.6 = STRING: The MIB module for managing IP and ICMP implementations
.1.3.6.1.2.1.1.9.1.3.7 = STRING: The MIB module for managing UDP implementations
.1.3.6.1.2.1.1.9.1.3.8 = STRING: View-based Access Control Model for SNMP.
.1.3.6.1.2.1.1.9.1.4.1 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.2 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.3 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.4 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.5 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.6 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.7 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.1.9.1.4.8 = Timeticks: (3) 0:00:00.03
.1.3.6.1.2.1.2.1.0 = INTEGER: 5
.1.3.6.1.2.1.2.2.1.1.1 = INTEGER: 1
.1.3.6.1.2.1.2.2.1.1.2 = INTEGER: 2
.1.3.6.1.2.1.2.2.1.1.3 = INTEGER: 3
.1.3.6.1.2.1.2.2.1.1.4 = INTEGER: 4
.1.3.6.1.2.1.2.2.1.1.5 = INTEGER: 5
.1.3.6.1.2.1.2.2.1.2.1 = STRING: lo
.1.3.6.1.2.1.2.2.1.2.2 = STRING: eth0
.1.3.6.1.2.1.2.2.1.2.3 = STRING: wlan1
.1.3.6.1.2.1.2.2.1.2.4 = STRING: pan0
.1.3.6.1.2.1.2.2.1.2.5 = STRING: virbr0
.1.3.6.1.2.1.2.2.1.3.1 = INTEGER: softwareLoopback(24)
.1.3.6.1.2.1.2.2.1.3.2 = INTEGER: ethernetCsmacd(6)
.1.3.6.1.2.1.2.2.1.3.3 = INTEGER: ethernetCsmacd(6)
.1.3.6.1.2.1.2.2.1.3.4 = INTEGER: ethernetCsmacd(6)
.1.3.6.1.2.1.2.2.1.3.5 = INTEGER: ethernetCsmacd(6)
.1.3.6.1.2.1.2.2.1.4.1 = INTEGER: 16436
.1.3.6.1.2.1.2.2.1.4.2 = INTEGER: 1500
.1.3.6.1.2.1.2.2.1.4.3 = INTEGER: 1500
.1.3.6.1.2.1.2.2.1.4.4 = INTEGER: 1500
.1.3.6.1.2.1.2.2.1.4.5 = INTEGER: 1500
.1.3.6.1.2.1.2.2.1.5.1 = Gauge32: 10000000
.1.3.6.1.2.1.2.2.1.5.2 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.5.3 = Gauge32: 10000000
.1.3.6.1.2.1.2.2.1.5.4 = Gauge32: 10000000
.1.3.6.1.2.1.2.2.1.5.5 = Gauge32: 10000000
.1.3.6.1.2.1.2.2.1.6.1 = STRING:
.1.3.6.1.2.1.2.2.1.6.2 = STRING: 0:d:56:7d:68:b0
.1.3.6.1.2.1.2.2.1.6.3 = STRING: 0:90:4b:64:7b:4d
.1.3.6.1.2.1.2.2.1.6.4 = STRING: 4e:e:b8:9:81:3b
.1.3.6.1.2.1.2.2.1.6.5 = STRING: d6:f9:7c:2c:17:28
.1.3.6.1.2.1.2.2.1.7.1 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.7.2 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.7.3 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.7.4 = INTEGER: down(2)
.1.3.6.1.2.1.2.2.1.7.5 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.8.1 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.8.2 = INTEGER: down(2)
.1.3.6.1.2.1.2.2.1.8.3 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.8.4 = INTEGER: down(2)
.1.3.6.1.2.1.2.2.1.8.5 = INTEGER: up(1)
.1.3.6.1.2.1.2.2.1.9.1 = Timeticks: (0) 0:00:00.00
.1.3.6.1.2.1.2.2.1.9.2 = Timeticks: (0) 0:00:00.00
.1.3.6.1.2.1.2.2.1.9.3 = Timeticks: (0) 0:00:00.00
.1.3.6.1.2.1.2.2.1.9.4 = Timeticks: (0) 0:00:00.00
.1.3.6.1.2.1.2.2.1.9.5 = Timeticks: (0) 0:00:00.00
.1.3.6.1.2.1.2.2.1.10.1 = Counter32: 89275
.1.3.6.1.2.1.2.2.1.10.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.3 = Counter32: 11649462
.1.3.6.1.2.1.2.2.1.10.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.10.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.11.1 = Counter32: 1092
.1.3.6.1.2.1.2.2.1.11.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.11.3 = Counter32: 49636
.1.3.6.1.2.1.2.2.1.11.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.11.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.12.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.12.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.12.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.12.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.12.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.13.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.13.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.13.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.13.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.13.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.14.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.14.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.14.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.14.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.14.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.15.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.15.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.15.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.15.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.15.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.1 = Counter32: 89275
.1.3.6.1.2.1.2.2.1.16.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.3 = Counter32: 922277
.1.3.6.1.2.1.2.2.1.16.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.16.5 = Counter32: 3648
.1.3.6.1.2.1.2.2.1.17.1 = Counter32: 1092
.1.3.6.1.2.1.2.2.1.17.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.17.3 = Counter32: 7540
.1.3.6.1.2.1.2.2.1.17.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.17.5 = Counter32: 17
.1.3.6.1.2.1.2.2.1.18.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.18.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.18.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.18.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.18.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.19.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.19.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.19.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.19.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.19.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.20.1 = Counter32: 0
.1.3.6.1.2.1.2.2.1.20.2 = Counter32: 0
.1.3.6.1.2.1.2.2.1.20.3 = Counter32: 0
.1.3.6.1.2.1.2.2.1.20.4 = Counter32: 0
.1.3.6.1.2.1.2.2.1.20.5 = Counter32: 0
.1.3.6.1.2.1.2.2.1.21.1 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.21.2 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.21.3 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.21.4 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.21.5 = Gauge32: 0
.1.3.6.1.2.1.2.2.1.22.1 = OID: .0.0
.1.3.6.1.2.1.2.2.1.22.2 = OID: .0.0
.1.3.6.1.2.1.2.2.1.22.3 = OID: .0.0
.1.3.6.1.2.1.2.2.1.22.4 = OID: .0.0
.1.3.6.1.2.1.2.2.1.22.5 = OID: .0.0
.1.3.6.1.2.1.25.1.1.0 = Timeticks: (8232423) 22:52:04.23
.1.3.6.1.2.1.25.1.1.0 = No more variables left in this MIB View (It is past the end of the MIB tree)

As an exercise, try to identify some of the listed OIDs using Tables 1-1 and 1-2 and find out what they mean.


Querying SNMP Devices from Python

Now we know enough about SNMP to start working on our own management system, which will be querying the configured systems at regular intervals. First let's specify the configuration that we will be using in the application.

Configuring the Application

As we already know, we need the following information available for every check:

- An IP address or resolvable domain name of the system that runs the SNMP agent software
- The read-only community string that will be used to authenticate with the agent software
- The OID node's numerical representation

We are going to use the Windows INI-style configuration file because of its simplicity. Python includes a configuration parsing module by default, so it is also convenient to use. (Chapter 9 discusses the ConfigParser module in great detail; refer to that chapter for more information about the module.)

Let's go back to the configuration file for our application. There is no need to repeat the system information for every SNMP object that we're going to query, so we can define each system parameter once in a separate section and then refer to the system ID in each check section. The check section defines the OID node identifier string and a short description, as shown in Listing 1-2. Create a configuration file called snmp-manager.cfg with the contents from the listing below; don't forget to modify the IP and security details accordingly.

Listing 1-2. The Management System Configuration File

[system_1]
description=My Laptop
address=192.168.1.68
port=161
communityro=public

[check_1]
description=WLAN incoming traffic
oid=1.3.6.1.2.1.2.2.1.10.3
system=system_1

[check_2]
description=WLAN outgoing traffic
oid=1.3.6.1.2.1.2.2.1.16.3
system=system_1

Make sure that the system and check section IDs are unique, or you may get unpredictable results. We're going to create an SnmpManager class with two methods, one to add a system and the other to add a check. As the check contains the system ID string, it will automatically be assigned to that particular system. In Listing 1-3 you can see the class definition and also the initialization part that reads in the configuration, iterates through the sections, and updates the class object accordingly. Create a file called snmp-manager.py with the contents shown in the listing below. We will work on adding new features to the script as we go along.


Listing 1-3. Reading and Storing the Configuration

import sys
from ConfigParser import SafeConfigParser


class SnmpManager:
    def __init__(self):
        self.systems = {}

    def add_system(self, id, descr, addr, port, comm_ro):
        self.systems[id] = {'description' : descr,
                            'address'     : addr,
                            'port'        : int(port),
                            'communityro' : comm_ro,
                            'checks'      : {}
                           }

    def add_check(self, id, oid, descr, system):
        oid_tuple = tuple([int(i) for i in oid.split('.')])
        self.systems[system]['checks'][id] = {'description': descr,
                                              'oid'        : oid_tuple,
                                             }

def main(conf_file=""):
    if not conf_file:
        sys.exit(-1)
    config = SafeConfigParser()
    config.read(conf_file)
    snmp_manager = SnmpManager()
    for system in [s for s in config.sections() if s.startswith('system')]:
        snmp_manager.add_system(system,
                                config.get(system, 'description'),
                                config.get(system, 'address'),
                                config.get(system, 'port'),
                                config.get(system, 'communityro'))
    for check in [c for c in config.sections() if c.startswith('check')]:
        snmp_manager.add_check(check,
                               config.get(check, 'oid'),
                               config.get(check, 'description'),
                               config.get(check, 'system'))

if __name__ == '__main__':
    main(conf_file='snmp-manager.cfg')

As you see in the example, we first have to iterate through the system sections and update the object before proceeding with the check sections.

Note: This order is important, because if we try to add a check for a system that hasn't been inserted yet, we'll get a dictionary key error.


Also note that we are converting the OID string to a tuple of integers. You’ll see why we have to do this later in this section. The configuration file is loaded and we’re ready to run SNMP queries against the configured devices.
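Before moving on, it is worth sanity-checking the parsed file from an interactive session. This is an optional, illustrative step (on Python 2.7 the sections are reported in the order they appear in the file):

>>> from ConfigParser import SafeConfigParser
>>> config = SafeConfigParser()
>>> config.read('snmp-manager.cfg')
['snmp-manager.cfg']
>>> config.sections()
['system_1', 'check_1', 'check_2']
>>> config.get('check_1', 'oid')
'1.3.6.1.2.1.2.2.1.10.3'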

Using the PySNMP Library

In this project we are going to use the PySNMP library, which is implemented in pure Python and doesn't depend on any precompiled libraries. The pysnmp package is available for most Linux distributions and can be installed using the standard distribution package manager. In addition to pysnmp you will also need the ASN.1 library, which is used by pysnmp and is also available as part of the Linux distribution package selection. For example, on a Fedora system you can install the pysnmp module with the following commands:

$ sudo yum install pysnmp
$ sudo yum install python-pyasn1

Alternatively, you can use the Python package manager (pip) to install this library for you:

$ sudo pip install pysnmp
$ sudo pip install pyasn1

If you don't have the pip command available, you can download and install this tool from http://pypi.python.org/pypi/pip. We will use it in later chapters as well.

The PySNMP library hides all the complexity of SNMP processing behind a single class with a simple API. All you have to do is create an instance of the CommandGenerator class. This class is available from the pysnmp.entity.rfc3413.oneliner.cmdgen module and implements most of the standard SNMP protocol commands: getCmd(), setCmd(), and nextCmd(). Let's look at each of these in more detail.

The SNMP GET Command

All the commands we are going to discuss follow the same invocation pattern: import the module, create an instance of the CommandGenerator class, create three required parameters (an authentication object, a transport target object, and a list of arguments), and finally invoke the appropriate method. The method returns a tuple containing the error indicators (if there was an error) and the result object. In Listing 1-4, we query a remote Linux machine using the standard SNMP OID (1.3.6.1.2.1.1.1.0).

Listing 1-4. An Example of the SNMP GET Command

>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>> cg = cmdgen.CommandGenerator()
>>> comm_data = cmdgen.CommunityData('my-manager', 'public')
>>> transport = cmdgen.UdpTransportTarget(('192.168.1.68', 161))
>>> variables = (1, 3, 6, 1, 2, 1, 1, 1, 0)
>>> errIndication, errStatus, errIndex, result = cg.getCmd(comm_data, transport, variables)
>>> print errIndication
None
>>> print errStatus
0
>>> print errIndex
0
>>> print result
[(ObjectName('1.3.6.1.2.1.1.1.0'), OctetString('Linux fedolin.example.com 2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686'))]
>>>


Let's look at some steps more closely. When we initiate the community data object, we have provided two strings—the community string (the second argument) and the agent or manager security name string; in most cases this can be any string. An optional parameter specifies the SNMP version to be used (it defaults to SNMP v2c). If you must query version 1 devices, use the following command:

>>> comm_data = cmdgen.CommunityData('my-manager', 'public', mpModel=0)

The transport object is initiated with the tuple containing either the fully qualified domain name or the IP address string and the integer port number. The last argument is the OID expressed as a tuple of all node IDs that make up the OID we are querying. Therefore, we had to convert the dot-separated string into a tuple earlier when we were reading the configuration items.

Finally, we call the API command getCmd(), which implements the SNMP GET command, and pass these three objects as its arguments. The command returns a tuple, each element of which is described in Table 1-3.

Table 1-3. CommandGenerator Return Objects

Tuple Element | Description
errIndication | If this string is not empty, it indicates the SNMP engine error.
errStatus | If this element evaluates to True, it indicates an error in the SNMP communication; the object that generated the error is indicated by the errIndex element.
errIndex | If the errStatus indicates that an error has occurred, this field can be used to find the SNMP object that caused the error. The object position in the result array is errIndex-1.
result | This element contains a list of all returned SNMP object elements. Each element is a tuple that contains the name of the object and the object value.
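Putting Table 1-3 to use, a small helper along these lines (an illustrative sketch, not part of the final script) can translate the four returned values into a simple pass/fail decision:

def check_snmp_result(errIndication, errStatus, errIndex, result):
    if errIndication:
        print "SNMP engine error: %s" % errIndication
        return None
    if errStatus:
        # errIndex points (1-based) at the object that caused the error
        failed = result[int(errIndex) - 1] if result else 'unknown object'
        print "SNMP error %s caused by %s" % (errStatus.prettyPrint(), failed)
        return None
    return result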

The SNMP SET Command

The SNMP SET command is mapped in PySNMP to the setCmd() method call. All parameters are the same; the only difference is that the variables section now contains a tuple: the OID and the new value. Let's try to use this command to change a read-only object; Listing 1-5 shows the command-line sequence.

Listing 1-5. An Example of the SNMP SET Command

>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>> from pysnmp.proto import rfc1902
>>> cg = cmdgen.CommandGenerator()
>>> comm_data = cmdgen.CommunityData('my-manager', 'public')
>>> transport = cmdgen.UdpTransportTarget(('192.168.1.68', 161))
>>> variables = ((1, 3, 6, 1, 2, 1, 1, 1, 0), rfc1902.OctetString('new system description'))
>>> errIndication, errStatus, errIndex, result = cg.setCmd(comm_data, transport, variables)
>>> print errIndication
None
>>> print errStatus
6
>>> print errIndex
1
>>> print errStatus.prettyPrint()
noAccess(6)
>>> print result
[(ObjectName('1.3.6.1.2.1.1.1.0'), OctetString('new system description'))]
>>>

What happened here is that we tried to write to a read-only object, and that resulted in an error. What's interesting in this example is how we format the parameters. You have to convert strings to SNMP object types; otherwise they won't pass as valid arguments. Therefore the string had to be encapsulated in an instance of the OctetString class. You can use other methods of the rfc1902 module if you need to convert to other SNMP types; the methods include Bits(), Counter32(), Counter64(), Gauge32(), Integer(), Integer32(), IpAddress(), OctetString(), Opaque(), TimeTicks(), and Unsigned32(). These are all class names that you can use if you need to convert a string to an object of a specific type.
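The same pattern applies to writable objects of other types. For example, assuming the agent were configured with a read/write community string (here the hypothetical 'private'), an interface could be administratively disabled by writing the Integer value 2 (Down) to its ifAdminStatus OID. This is a sketch for illustration only; do not run it against a device you care about:

>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>> from pysnmp.proto import rfc1902
>>> cg = cmdgen.CommandGenerator()
>>> comm_data = cmdgen.CommunityData('my-manager', 'private')
>>> transport = cmdgen.UdpTransportTarget(('192.168.1.68', 161))
>>> variables = ((1, 3, 6, 1, 2, 1, 2, 2, 1, 7, 2), rfc1902.Integer(2))
>>> errIndication, errStatus, errIndex, result = cg.setCmd(comm_data, transport, variables)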

The SNMP GETNEXT Command

The SNMP GETNEXT command is implemented as the nextCmd() method. The syntax and usage are identical to getCmd(); the only difference is that the result is a list of objects that are immediate subnodes of the specified OID node. Let's use this command to query all objects that are immediate child nodes of the SNMP system OID (1.3.6.1.2.1.1); Listing 1-6 shows the nextCmd() method in action.

Listing 1-6. An Example of the SNMP GETNEXT Command

>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>> cg = cmdgen.CommandGenerator()
>>> comm_data = cmdgen.CommunityData('my-manager', 'public')
>>> transport = cmdgen.UdpTransportTarget(('192.168.1.68', 161))
>>> variables = (1, 3, 6, 1, 2, 1, 1)
>>> errIndication, errStatus, errIndex, result = cg.nextCmd(comm_data, transport, variables)
>>> print errIndication
requestTimedOut
>>> errIndication, errStatus, errIndex, result = cg.nextCmd(comm_data, transport, variables)
>>> print errIndication
None
>>> print errStatus
0
>>> print errIndex
0
>>> for object in result:
...     print object
...
[(ObjectName('1.3.6.1.2.1.1.1.0'), OctetString('Linux fedolin.example.com 2.6.32.11-99.fc12.i686 #1 SMP Mon Apr 5 16:32:08 EDT 2010 i686'))]
[(ObjectName('1.3.6.1.2.1.1.2.0'), ObjectIdentifier('1.3.6.1.4.1.8072.3.2.10'))]
[(ObjectName('1.3.6.1.2.1.1.3.0'), TimeTicks('340496'))]
[(ObjectName('1.3.6.1.2.1.1.4.0'), OctetString('Administrator ([email protected])'))]
[(ObjectName('1.3.6.1.2.1.1.5.0'), OctetString('fedolin.example.com'))]
[(ObjectName('1.3.6.1.2.1.1.6.0'), OctetString('MyLocation, MyOrganization, MyStreet, MyCity, MyCountry'))]
[(ObjectName('1.3.6.1.2.1.1.8.0'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.1'), ObjectIdentifier('1.3.6.1.6.3.10.3.1.1'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.2'), ObjectIdentifier('1.3.6.1.6.3.11.3.1.1'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.3'), ObjectIdentifier('1.3.6.1.6.3.15.2.1.1'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.4'), ObjectIdentifier('1.3.6.1.6.3.1'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.5'), ObjectIdentifier('1.3.6.1.2.1.49'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.6'), ObjectIdentifier('1.3.6.1.2.1.4'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.7'), ObjectIdentifier('1.3.6.1.2.1.50'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.2.8'), ObjectIdentifier('1.3.6.1.6.3.16.2.2.1'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.1'), OctetString('The SNMP Management Architecture MIB.'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.2'), OctetString('The MIB for Message Processing and Dispatching.'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.3'), OctetString('The management information definitions for the SNMP User-based Security Model.'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.4'), OctetString('The MIB module for SNMPv2 entities'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.5'), OctetString('The MIB module for managing TCP implementations'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.6'), OctetString('The MIB module for managing IP and ICMP implementations'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.7'), OctetString('The MIB module for managing UDP implementations'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.3.8'), OctetString('View-based Access Control Model for SNMP.'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.1'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.2'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.3'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.4'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.5'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.6'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.7'), TimeTicks('3'))]
[(ObjectName('1.3.6.1.2.1.1.9.1.4.8'), TimeTicks('3'))]
>>>

As you can see, the result is identical to that produced by the command-line tool snmpwalk, which uses the same technique to retrieve the SNMP OID subtree.
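Because each element of the result list is itself a row of (object name, value) pairs, we can print the subtree in an snmpwalk-like form with a simple loop; a minimal sketch reusing the objects from Listing 1-6:

>>> for row in result:
...     for name, value in row:
...         print "%s = %s" % (name.prettyPrint(), value.prettyPrint())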

Implementing the SNMP Read Functionality

Let's implement the read functionality in our application. The workflow will be as follows: we need to iterate through all systems in the list, and for each system we iterate through all defined checks. For each check we are going to perform the SNMP GET command and store the result in the same data structure. For debugging and testing purposes we will add some print statements to verify that the application is working as expected. Later we'll replace those print statements with the RRDTool database store commands. I'm going to call this method query_all_systems(). Listing 1-7 shows the code, which you would want to add to the snmp-manager.py file you created earlier.


Listing 1-7. Querying All Defined SNMP Objects

from pysnmp.entity.rfc3413.oneliner import cmdgen   # make sure this import is present at the top of snmp-manager.py

    def query_all_systems(self):
        cg = cmdgen.CommandGenerator()
        for system in self.systems.values():
            comm_data = cmdgen.CommunityData('my-manager', system['communityro'])
            transport = cmdgen.UdpTransportTarget((system['address'], system['port']))
            for check in system['checks'].values():
                oid = check['oid']
                errInd, errStatus, errIdx, result = cg.getCmd(comm_data, transport, oid)
                if not errInd and not errStatus:
                    print "%s/%s -> %s" % (system['description'],
                                           check['description'],
                                           str(result[0][1]))

If you run the tool you'll get results similar to these (assuming you correctly pointed your configuration to the working devices that respond to the SNMP queries):

$ ./snmp-manager.py
My Laptop/WLAN outgoing traffic -> 1060698
My Laptop/WLAN incoming traffic -> 14305766

Now we're ready to write all this data to the RRDTool database.

Storing Data with RRDTool

RRDTool is an application developed by Tobias Oetiker, which has become a de facto standard for graphing monitoring data. The graphs produced by RRDTool are used in many different monitoring tools, such as Nagios, Cacti, and so on. In this section we'll look at the structure of the RRDTool database and the application itself. We'll discuss the specifics of the round robin database, how to add new data to it, and how to retrieve it later on. We will also look at the data-plotting commands and techniques. And finally we'll integrate the RRDTool database with our application.

Introduction to RRDTool

As I have noted, RRDTool provides three distinct functions. First, it serves as a database management system by allowing you to store and retrieve data from its own database format. It also performs complex data-manipulation tasks, such as data resampling and rate calculations. And finally, it allows you to create sophisticated graphs incorporating data from various source databases. Let's start by looking at the round robin database structure. I apologize for the number of acronyms that you'll come across in this section, but they are all used in the configuration of RRDTool, so it is important to become familiar with them. The first property that makes an RRD different from conventional databases is that it has a limited size. This means that the database size is known at the time it is initialized, and the size never changes. New records overwrite old data, and that process is repeated over and over again. Figure 1-3 shows a simplified version of the RRD to help you visualize the structure.


Figure 1-3.  The RRD structure

Let's assume that we have initialized a database that is capable of holding 12 records, each in its own cell. When the database is empty, we start by writing data to cell number 1. We also update the pointer with the ID of the last cell we've written the data to. Figure 1-3 shows that 6 records have already been written to the database (as represented by the shaded boxes). The pointer is on cell 6, so when the next write instruction is received, the database will write it to the next cell (cell 7) and update the pointer accordingly. Once the last cell (cell 12) is reached, the process starts again, from cell number 1.

The RRD data store's only purpose is to store performance data, and therefore it does not require maintaining complex relations between different data tables. In fact, there are no tables in the RRD, only the individual data sources (DSs). The last important property of the RRD is that the database engine is designed to store time series data, and therefore each record needs to be marked with a timestamp. Furthermore, when you create a new database you are required to specify the sampling rate, the rate at which entries are written to the database. The default value is 300 seconds, or 5 minutes, but this can be overridden if required.

The data that is stored in the RRD is called a Round Robin Archive (RRA). The RRA is what makes the RRD so useful. It allows you to consolidate the data gathered from the DS by applying an available consolidation function (CF). You can specify one of the four CFs (average, min, max, and last) that will be applied to a number of the actual data records. The result is stored in a round robin "table." You can store multiple RRAs in your database with different granularity. For example, one RRA stores average values of the last 10 records and the other one stores an average of the last 100 records. This will all come together when we look at the usage scenarios in the next sections.
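To make the round robin behavior concrete, here is a small sketch (a made-up illustration, not part of the monitoring tool) that mimics the overwrite-and-advance-pointer cycle in plain Python:

class RoundRobinStore(object):
    """Toy illustration of the fixed-size, wrap-around storage used by an RRD."""
    def __init__(self, size=12):
        self.cells = [None] * size   # the database size never changes
        self.pointer = -1            # index of the last record written

    def write(self, value):
        self.pointer = (self.pointer + 1) % len(self.cells)
        self.cells[self.pointer] = value   # old data is overwritten once we wrap

store = RoundRobinStore()
for reading in range(20):
    store.write(reading)
print store.cells   # only the 12 most recent readings survive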


Using RRDTool from a Python Program

Before we start creating the RRDTool databases, let's look at the Python module that provides the API to RRDTool. The module we are going to use in this chapter is called Python RRDTool, and it is available to download at http://sourceforge.net/projects/py-rrdtool/. However, most Linux distributions have it prepackaged and available to install using the standard package management tool. For example, on a Fedora system you would run the following command to install the Python RRDTool module:

$ sudo yum install rrdtool-python

On Debian-based systems the install command is:

$ sudo apt-get install python-rrd

Once the package is installed, you can validate that the installation was successful:

$ python
Python 2.6.2 (r262:71600, Jan 25 2010, 18:46:45)
[GCC 4.4.2 20091222 (Red Hat 4.4.2-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rrdtool
>>> rrdtool.__version__
'$Revision: 1.14 $'
>>>

Creating a Round Robin Database

Let's start by creating a simple database. The database we are going to create will have one data source, which is a simple increasing counter: the counter value increases over time. A classic example of such a counter is bytes transmitted over an interface. The readings are performed every 5 minutes. We are also going to define two RRAs. One averages over a single reading, which effectively instructs RRDTool to store the actual values, and the other averages over six measurements. Following is an example of the command-line tool syntax for creating this database:

$ rrdtool create interface.rrd \
> DS:packets:COUNTER:600:U:U \
> RRA:AVERAGE:0.5:1:288 \
> RRA:AVERAGE:0.5:6:336

Similarly, you can use the Python module to create the same database:

>>> import rrdtool
>>> rrdtool.create('interface.rrd',
...                'DS:packets:COUNTER:600:U:U',
...                'RRA:AVERAGE:0.5:1:288',
...                'RRA:AVERAGE:0.5:6:336')
>>>


The structure of the DS (data source) definition line is:

DS:<name>:<DS type>:<heartbeat>:<min>:<max>

The name field is what you name this particular data source. Since an RRD allows you to store data from multiple data sources, you must provide a unique name for each so that you can access it later. If you need to define more than one data source, simply add another DS line. The DS type (or data source type) field indicates what type of data will be supplied to this data source. There are four types available: COUNTER, GAUGE, DERIVE, and ABSOLUTE:

• The COUNTER type means that the measurement value is increasing over time. To calculate a rate, RRDTool subtracts the last value from the current measurement and divides by the measurement step (or sampling rate) to obtain the rate figure. If the result is a negative number, it needs to compensate for the counter rollover. A typical use is monitoring ever-increasing counters, such as the total number of bytes transmitted through an interface.

• The DERIVE type is similar to COUNTER, but it also allows for a negative rate. You can use this type to check the rate of incoming HTTP requests to your site. If the graph is above the zero line, you are getting more and more requests; if it drops below the zero line, your website is becoming less popular.

• The ABSOLUTE type indicates that the counter is reset every time you read the measurement. Whereas with the COUNTER and DERIVE types RRDTool subtracts the last measurement from the current one before dividing by the time period, ABSOLUTE tells it not to perform the subtraction operation. You use this on counters that are reset at the same rate that you take the measurements. For example, you could read the system load average (over the last 15 minutes) every 15 minutes; this would represent the rate of change of the average system load.

• The GAUGE type means that the measurement is the rate value, and no calculations need to be performed. For example, current CPU usage and temperature sensor readings are good candidates for the GAUGE type.

The heartbeat value indicates how much time to allow for a reading to come in before resetting it to the unknown state. RRDTool allows for data misses, but it does not make any assumptions and uses the special value unknown if the data is not received. In our example the heartbeat is set to 600, which means that the database waits for two readings (remember, the step is 300) before it declares the next measurement to be unknown. The last two fields indicate the minimum and maximum values that can be received from the data source. If you specify those, anything falling outside that range will be automatically marked as unknown.

The RRA definition structure is:

RRA:<CF>:<XFiles factor>:<dataset>:<samples>

The consolidation function (CF) defines what mathematical function will be applied to the dataset values. The dataset parameter is the last dataset measurements received from the data source. In our example we have two RRAs, one with just a single reading in the dataset and the other with six measurements in the dataset. The available consolidation functions are AVERAGE, MIN, MAX, and LAST:

• AVERAGE instructs RRDTool to calculate the average value of the dataset and store it.

• MIN and MAX select either the minimum or maximum value from the dataset and store it.

• LAST indicates to use the last entry from the dataset.


The XFiles factor value shows what percentage of the dataset can have unknown values and the consolidation function calculation will still be performed. For example, if the setting is 0.5 (50%), then three out of six measurements can be unknown and the average value for the dataset will still be calculated. If four readings are missed, the calculation is not performed and the unknown value is stored in the RRA. Set this to 0 (0% miss allowance) and the calculation will be performed only if all data points in the dataset are available. It seems to be a common practice to keep this setting at 0.5. As already discussed, the dataset parameter indicates how many records are going to participate in the consolidation function calculation. And finally, samples tells RRDTool how many CF results should be kept. So, going back to our example, the number 288 tells RRDTool to keep 288 records. Because we’re measuring every 5 minutes, this is 24 hours of data (288/(60/5)). Similarly, the number 336 means that we are storing 7 days’ worth of data (336/(60/30)/24) at the 30-minute sampling rate. As you can see, the data in the second RRA is resampled; we’ve changed the sampling rate from 5 minutes to 30 minutes by consolidating data of every six (5-minute) samples.
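As an illustration of how these parameters combine, here is a sketch that creates a database for a hypothetical temperature sensor (the data source name and value limits are made up for the example): a GAUGE data source with a 1200-second heartbeat and a valid range of -40 to 100, one RRA keeping every 5-minute reading for a day, and MIN/MAX RRAs consolidating 12 readings (one hour) for a week:

import rrdtool

# Hypothetical temperature database; the default step of 300 seconds is assumed
rrdtool.create('temperature.rrd',
               'DS:temp:GAUGE:1200:-40:100',   # heartbeat 1200s, valid range -40..100
               'RRA:AVERAGE:0.5:1:288',        # every reading, 288 rows = 24 hours
               'RRA:MIN:0.5:12:168',           # hourly minimum, 168 rows = 7 days
               'RRA:MAX:0.5:12:168')           # hourly maximum, 168 rows = 7 days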

Writing and Reading Data from the Round Robin Database

Writing data to the RRD data file is very simple. You just call the update command and, if you have defined multiple data sources, supply it a list of data source readings in the same order as you specified them when you created the database file. Each entry must be preceded by the current (or desired) timestamp, expressed in seconds since the epoch (1970-01-01). Alternatively, instead of using the actual number to express the timestamp, you can use the character N, which means the current time. It is possible to supply multiple readings in one command:

$ date +"%s"
1273008486
$ rrdtool update interface.rrd 1273008486:10
$ rrdtool update interface.rrd 1273008786:15
$ rrdtool update interface.rrd 1273009086:25
$ rrdtool update interface.rrd 1273009386:40 1273009686:60 1273009986:66
$ rrdtool update interface.rrd 1273010286:100 1273010586:160 1273010886:166

The Python alternative looks very similar. In the following code, we insert another 20 records, specifying regular intervals (of 300 seconds) and supplying generated measurements:

>>> import rrdtool
>>> for i in range(20):
...     rrdtool.update('interface.rrd',
...                    '%d:%d' % (1273010886 + (1+i)*300, i*10+200))
...
>>>

Now let's fetch the data back from the RRDTool database:

$ rrdtool fetch interface.rrd AVERAGE
                          packets

1272983100: -nan
[...]
1273008600: -nan
1273008900: 2.3000000000e-02
1273009200: 3.9666666667e-02
1273009500: 5.6333333333e-02


1273009800: 4.8933333333e-02
1273010100: 5.5466666667e-02
1273010400: 1.4626666667e-01
1273010700: 1.3160000000e-01
1273011000: 5.5466666667e-02
1273011300: 8.2933333333e-02
1273011600: 3.3333333333e-02
1273011900: 3.3333333333e-02
1273012200: 3.3333333333e-02
1273012500: 3.3333333333e-02
1273012800: 3.3333333333e-02
1273013100: 3.3333333333e-02
1273013400: 3.3333333333e-02
1273013700: 3.3333333333e-02
1273014000: 3.3333333333e-02
1273014300: 3.3333333333e-02
1273014600: 3.3333333333e-02
1273014900: 3.3333333333e-02
1273015200: 3.3333333333e-02
1273015500: 3.3333333333e-02
1273015800: 3.3333333333e-02
1273016100: 3.3333333333e-02
1273016400: 3.3333333333e-02
1273016700: 3.3333333333e-02
1273017000: -nan
[...]
1273069500: -nan

If you count the number of entries, you'll see that it matches the number of updates we've performed on the database. This means that we are seeing results at the maximum resolution—in our case, a sample per record. Showing results at the maximum resolution is the default behavior, but you can select another resolution (provided that it has a matching RRA) by specifying the resolution flag. Bear in mind that the resolution must be expressed as a number of seconds and not as the number of samples in the RRA definition. Therefore, in our example the next available resolution is 6 (samples) * 300 (seconds/sample) = 1800 (seconds):

$ rrdtool fetch interface.rrd AVERAGE -r 1800
                          packets

[...]
1273010400: 6.1611111111e-02
1273012200: 6.1666666667e-02
1273014000: 3.3333333333e-02
1273015800: 3.3333333333e-02
1273017600: 3.3333333333e-02
[...]

Now, you may have noticed that the records inserted by our Python application result in the same number stored in the database. Why is that? Is the counter definitely increasing? Remember, RRDTool always stores the rate and not the actual values. So the figures you see in the result dataset show how fast the values are changing. And because the Python application generates new measurements at a steady rate (the difference between values is always the same), the rate figure is always the same.


What does this number exactly mean? We know that generated values are increasing by 10 every time we insert a new record, but the value printed by the fetch command is 3.3333333333e-02. (For many people this may look slightly confusing, but it’s just another notation for the value 0.0333(3).) Where did that come from? In discussing the different data source types, I mentioned that RRDTool takes the difference between two data point values and divides that by the number of seconds in the sampling interval. The default sampling interval is 300 seconds, so the rate has been calculated as 10/300 = 0.0333(3), which is what is written to the RRDTool database. In other words, this means that our counter on average increases by 0.0333(3) every second. Remember that all rate measurements are stored as a change per second. We’ll look at converting this value to something more readable later in the section. Here’s is how you retrieve the data using the Python module method call:   >>> for i in rrdtool.fetch('interface.rrd', 'AVERAGE'): print i ... (1272984300, 1273071000, 300) ('packets',) [(None,), [...], (None,), (0.023,), (0.03966666666666667,), (0.056333333333333339,),« (0.048933333333333336,), (0.055466666666666671,), (0.14626666666666666,), (0.13160000000000002,), (0.055466666666666671,), (0.082933333333333331,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (None,), [...] (None,)] >>>   The result is a tuple of three elements: dataset information, list of datasources, and result array: u

• Dataset information is another tuple that has three values: the start and end timestamps and the sampling rate.

• The list of datasources simply lists all variables that were stored in the RRDTool database and returned by your query.

• The result array contains the actual values that are stored in the RRD. Each entry is a tuple containing values for every variable that was queried. In our example database we had only one variable; therefore the tuple contains only one element. If a value could not be calculated (is unknown), Python's None object is returned.

You can also change the sampling rate if you need to:   >>> rrdtool.fetch('interface.rrd', 'AVERAGE', '-r', '1800') ((1272983400, 1273071600, 1800), ('packets',), [(None,), [...] (None,), (0.06161111111111111,), (0.061666666666666668,), (0.033333333333333333,), (0.033333333333333333,), (0.033333333333333333,), (None,), [...] (None,)]) 
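Because the timestamps are not included in the result array, you have to reconstruct them yourself from the start time and the step returned in the first tuple. A minimal sketch, using the database created earlier, might look like this:

import rrdtool

(start, end, step), names, rows = rrdtool.fetch('interface.rrd', 'AVERAGE')
# names holds the data source names, ('packets',) in this case
for i, row in enumerate(rows):
    # each row covers one step; the exact alignment of the first row relative
    # to the start timestamp can vary slightly between RRDTool versions
    timestamp = start + (i + 1) * step
    if row[0] is not None:
        print "%d: %f" % (timestamp, row[0])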


N■Note  By now you should have an idea of how the command-line tool syntax maps to the Python module calls. You always call the module method that is named after the RRDTool function, such as fetch, update, and so on. The arguments to the function are an arbitrary list of values; a value in this case is whatever string is separated by spaces on the command line. Basically, you can take the command line and copy it to the function as an argument list. Obviously, you need to enclose each individual string in quotes and separate the strings with commas. To save space and avoid confusion, in further examples I'm only going to provide the command-line syntax, which you should be able to map to the Python syntax quite easily.

Plotting Graphs with RRDTool

Plotting graphs with RRDTool is really easy, and graphing is one reason this tool has become so popular. In its simplest form, the graph-generating command is quite similar to the data-fetching command:

$ rrdtool graph packets.png --start 1273008600 --end 1273016400 --step 300 \
> DEF:packetrate=interface.rrd:packets:AVERAGE \
> LINE2:packetrate#c0c0c0

Even without any additional modification, the result is a quite professional-looking performance graph, as you can see in Figure 1-4.

Figure 1-4.  A simple graph generated by RRDTool

First of all, let's look at the command parameters. All the plotting commands start with a file name for the resulting image and, optionally, the time scale values. You can also provide a resolution setting, which defaults to the most detailed resolution if not specified. This is similar to the -r option of the fetch command. The resolution is expressed in seconds.

The next line (although you can type the whole graph command on one line) is the selector line, which selects the dataset from an RRDTool database. The format of the selector statement is:

DEF:<selector name>=<rrd file>:<DS name>:<CF>

The selector name argument is an arbitrary string that you use to name the resulting dataset. Think of it as an array variable that stores the result from the RRDTool database. You can use as many selector statements as you need, but you need at least one to produce any output.


The combination of the rrd file, data source, and consolidation function variables defines exactly what data needs to be selected. As you can see, this syntax completely decouples the data storage and data representation functions. You can include results from different RRDTool databases on the same graph and combine them in any way you like. The data for the graphs can be collected on different monitoring servers and yet combined and presented in a single image.

The selector statement can be extended with optional parameters that specify the start, end, and resolution values for each data source. The format is as follows, and the string should be appended to the end of the selector statement. Each element is optional, and you can use any combination of them.

:step=<step>:start=<start time>:end=<end time>

So we can rewrite the previous plotting command as:

$ rrdtool graph packets.png \
> DEF:packetrate=interface.rrd:packets:AVERAGE:step=300:start=1273008600:end=1273016400 \
> LINE2:packetrate#c0c0c0

The last element on the command line is the statement that tells RRDTool how to plot the data. The basic syntax of the data plotting statement is:

<plot type>:<selector name>#<color>[:<legend>]

The most widely used plot types are LINE and AREA. The LINE keyword can be followed by a floating-point number to indicate the width of the line. The AREA keyword instructs RRDTool to draw the line and also fill in the area between the x-axis and the graph line. Both keywords are followed by the selector name, which provides the data for the plotting function. The color value is written as an HTML color format string. You can also specify an optional legend argument, which tells RRDTool that a small rectangle of a matching color needs to be displayed at the bottom of the graph, followed by the legend string. As with the data selector statement, you can have as many graphing statements as you need, but you must define at least one to produce a graph.

Let's take a second look at the graph we produced. RRDTool conveniently printed the timestamps on the x-axis, but what is displayed on the y-axis? It may look like measurements in meters, but in fact the m stands for "milli," or one thousandth of the value. So the values printed there are exactly what has been stored in the RRDTool database. This is, however, not intuitive. We don't see the packet size, and the data transfer rate can be either really low or really high, depending on the transmitted packet size. Let's assume that we're working with 4KB packets. In this case the logical solution would be to represent the information as bits per second. What do we have to do to convert packets per second into bits per second? Because the rate interval doesn't change (in both cases we measure the amount per second), only the packets value needs to be multiplied, first by 4096 (the number of bytes in a packet) and then by 8 (the number of bits in a byte). The RRDTool graph command allows you to define a data conversion function that will be applied to any data selector variable. In our example we would use the following statement to convert packets per second into bits per second:

$ rrdtool graph kbps.png --step 300 --start 1273105800 --end 1273114200 \
DEF:packetrate=interface.rrd:packets:AVERAGE \
CDEF:kbps=packetrate,4096,\*,8,\* \
LINE2:kbps#c0c0c0


If you look at the image produced by this command, you’ll see that its shape is identical to Figure 1-4, but the y-axis labels have changed. They are not indicating a “milli” value anymore—all numbers are labeled as k. This makes more sense, as most people feel more comfortable seeing 3kbps rather than 100 milli packets per second.

N■Note  You may be wondering why the calculation string looks rather odd. First of all, I had to escape the * characters so they are passed to the rrdtool application without being processed by the shell. Also, the formula itself has to be written in Reverse Polish Notation (RPN), in which you specify the first argument, then the second argument, and then the function that you want to perform; the result can then be used as a first argument. In my example I effectively tell the application to "take packetrate and 4096 and multiply them, then take the result and 8 and multiply them." It takes some time to adjust, but once you get a handle on it, expressing formulas in RPN is really pretty easy.

Finally, we need to make the graph even more presentable by adding a label to the y-axis, a legend for the value that we are plotting, and a title for the graph itself. This example also demonstrates how to change the size of the generated image:

$ rrdtool graph packets.png --step 300 --start 1273105800 --end 1273114200 \
--width 500 --height 200 \
--title "Primary Interface" --vertical-label "Kbp/s" \
DEF:packetrate=interface.rrd:packets:AVERAGE \
CDEF:kbps=packetrate,4096,\*,8,\* \
AREA:kbps#c0c0c0:"Data transfer rate"

The result is shown in Figure 1-5.

Figure 1-5.  Formatting the RRDTool-generated graph

This introduction to RRDTool has covered only its basic uses. The application, however, comes with a really extensive API, which allows you to change pretty much every aspect of a graph. I recommend reading the RRDTool documentation, which is available at http://oss.oetiker.ch/rrdtool/doc/.
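Following the command-to-argument mapping described in the note earlier, the same graph could plausibly be produced from Python as well; note that the * characters no longer need shell escaping. This is just a sketch of the mapping, not code from the monitoring tool itself:

import rrdtool

rrdtool.graph('packets.png',
              '--step', '300', '--start', '1273105800', '--end', '1273114200',
              '--width', '500', '--height', '200',
              '--title', 'Primary Interface', '--vertical-label', 'Kbp/s',
              'DEF:packetrate=interface.rrd:packets:AVERAGE',
              'CDEF:kbps=packetrate,4096,*,8,*',   # RPN formula, no shell escaping needed
              'AREA:kbps#c0c0c0:Data transfer rate')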


Integrating RRDTool with the Monitoring Solution

We're now ready to integrate RRDTool calls into our monitoring application, so that the information we gather from the SNMP-enabled devices is recorded and readily available for reporting. Although it is possible to maintain multiple data sources in one RRDTool database, it is advisable to do so only for measurements that are closely related. For example, if you're monitoring a multiprocessor system and want to store the interrupt counts of every single CPU, it would make perfect sense to store them all in one data file. Mixing memory utilization and temperature sensor readings, by contrast, is probably not a very good idea, because you may decide that you need a greater sampling rate for one measurement, and you can't easily change that without affecting the other data sources.

In our system, the SNMP OIDs are provided in the configuration file and the application has no idea whether they are related or not. Therefore we will store every reading in a separate data file. Each data file gets the same name as the check section name (for example, check_1.rrd), so make sure to keep those unique. We will also have to extend the configuration file so that each check defines the desired sampling rate. And finally, every time the application is invoked, it will check for the presence of the data store files and create any that are missing. This removes the burden on application users to create the files manually for every new check. You can see the updated script in Listing 1-8.

Listing 1-8.  Updating the RRDs with the SNMP Data

#!/usr/bin/env python

import sys, os.path, time
from ConfigParser import SafeConfigParser
from pysnmp.entity.rfc3413.oneliner import cmdgen
import rrdtool


class SnmpManager:
    def __init__(self):
        self.systems = {}
        self.databases_initialised = False

    def add_system(self, id, descr, addr, port, comm_ro):
        self.systems[id] = {'description' : descr,
                            'address'     : addr,
                            'port'        : int(port),
                            'communityro' : comm_ro,
                            'checks'      : {} }

    def add_check(self, id, oid, descr, system, sampling_rate):
        oid_tuple = tuple([int(i) for i in oid.split('.')])
        self.systems[system]['checks'][id] = {'description'  : descr,
                                              'oid'          : oid_tuple,
                                              'result'       : None,
                                              'sampling_rate': sampling_rate }

    def query_all_systems(self):
        if not self.databases_initialised:
            self.initialise_databases()
            self.databases_initialised = True


        cg = cmdgen.CommandGenerator()
        for system in self.systems.values():
            comm_data = cmdgen.CommunityData('my-manager', system['communityro'])
            transport = cmdgen.UdpTransportTarget((system['address'], system['port']))
            for key, check in system['checks'].iteritems():
                oid = check['oid']
                errInd, errStatus, errIdx, result = cg.getCmd(comm_data, transport, oid)
                if not errInd and not errStatus:
                    file_name = "%s.rrd" % key
                    rrdtool.update(file_name,
                                   "%d:%d" % (int(time.time()),
                                              float(result[0][1])))

    def initialise_databases(self):
        for system in self.systems.values():
            for check in system['checks']:
                data_file = "%s.rrd" % check
                if not os.path.isfile(data_file):
                    print data_file, 'does not exist'
                    rrdtool.create(data_file,
                                   "DS:%s:COUNTER:%s:U:U" % (check,
                                       system['checks'][check]['sampling_rate']),
                                   "RRA:AVERAGE:0.5:1:288",)


def main(conf_file=""):
    if not conf_file:
        sys.exit(-1)
    config = SafeConfigParser()
    config.read(conf_file)
    snmp_manager = SnmpManager()
    for system in [s for s in config.sections() if s.startswith('system')]:
        snmp_manager.add_system(system,
                                config.get(system, 'description'),
                                config.get(system, 'address'),
                                config.get(system, 'port'),
                                config.get(system, 'communityro'))
    for check in [c for c in config.sections() if c.startswith('check')]:
        snmp_manager.add_check(check,
                               config.get(check, 'oid'),
                               config.get(check, 'description'),
                               config.get(check, 'system'),
                               config.get(check, 'sampling_rate'))
    snmp_manager.query_all_systems()

if __name__ == '__main__':
    main(conf_file='snmp-manager.cfg')
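For reference, the configuration file the script expects is an INI-style file whose section names start with "system" or "check" and whose option names match the config.get() calls above. The exact values shown here (addresses, OIDs, and so on) are made up purely for illustration:

[system_1]
description=My Laptop
address=192.168.1.68
port=161
communityro=public

[check_1]
description=WLAN incoming traffic
oid=1.3.6.1.2.1.2.2.1.10.3
system=system_1
sampling_rate=300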


The script is now ready for monitoring. You can add it to the Linux cron scheduler and have it executed every 5 minutes. Don't worry if you configure some checks with a sampling rate greater than 5 minutes; RRDTool is clever enough to store the measurements at the sampling rate that was specified at database creation time. Here's a sample cronjob entry that I used to produce the sample results we'll be using in the next section:

$ crontab -l
*/5 * * * * (cd /home/rytis/snmp-monitor/; ./snmp-manager.py > log.txt)

Creating Web Pages with the Jinja2 Templating System

In the last section of this chapter we are going to create another script, this one generating a simple structure of web pages containing the graphs. The main entry page lists all available checks grouped by system and links to the check details pages. When a user navigates to a details page, she will see the graph generated by RRDTool and some details about the check itself (such as the check description and OID).

Now, this looks relatively easy to implement, and most people would simply start writing a Python script that uses print statements to produce the HTML pages. Although this approach may seem to work, in most cases it soon becomes unmanageable. The functional code often becomes intermingled with the content-producing code, and adding new functionality usually breaks everything, which in turn leads to hours spent debugging the application. The solution to this problem is to use one of the templating frameworks, which allow you to decouple the application logic from the presentation.

The basic principle of a templating system is simple: you write code that performs calculations and other tasks that are not content-specific, such as retrieving data from databases or other sources. Then you pass this information to the templating framework, along with the name of the template that uses this information. In the template code you put all the HTML formatting text together with the dynamic data (which was generated earlier). The framework then parses the template for simple processing statements (like iteration loops and logical tests) and generates the result. You can see the basic flow of this processing in Figure 1-6.

Figure 1-6.  Data flow in the templating framework


This way, your application code is free of all content-generation statements and is much easier to maintain. The template can access all variables presented to it, but it looks more like an HTML page, and loading it into a web browser usually produces acceptable results. So you can even ask a dedicated web developer to create the templates for you, as there is no need to know any Python to modify them.

I'm going to use a templating framework called Jinja, which has syntax very similar to that used by the Django web framework. We're also going to talk about the Django framework in this book, so it makes sense to use a similar templating language. The Jinja framework is also widely used, and most Linux distributions include the Jinja package. On a Fedora system you can install it with the following command:

$ sudo yum install python-jinja2

Alternatively, you can use the pip tool to install it:

$ sudo pip install Jinja2

You can also get the latest development version of the Jinja2 framework from the official website: http://jinja.pocoo.org/.

N■Tip Make sure to install Jinja2 and not the earlier release, Jinja. Jinja2 provides an extended templating language and is actively developed and supported.

Loading Template Files with Jinja2

Jinja2 is designed to be used in web frameworks and therefore has a very extensive API. Most of its functionality is not used in simple applications that only generate a few pages, so I'm going to skip those functions, as they could be a topic for a book of their own. In this section I'll show you how to load a template, pass some variables to it, and save the result. These three functions are what you will use most of the time in your applications. For more extensive documentation on the Jinja2 API, please refer to http://jinja.pocoo.org/docs/api/.

The Jinja2 framework uses so-called loader classes to load the template files. Templates can be loaded from various sources, but most likely they are stored on a file system. The loader class that is responsible for loading templates stored on a file system is called jinja2.FileSystemLoader. It accepts one string, or a list of strings, that are the pathnames on a file system where the template files can be found:

from jinja2 import FileSystemLoader

loader1 = FileSystemLoader('/path/to/your/templates')
loader2 = FileSystemLoader(['/templates1/', '/templates2/'])

Once you have initialized the loader class, you can create an instance of the jinja2.Environment class. This class is the central part of the framework and is used to store the configuration variables, access the templates (via the loader instance), and pass the variables to the template objects. When initializing the environment, you must pass it the loader object if you want to access externally stored templates:

from jinja2 import Environment, FileSystemLoader

loader = FileSystemLoader('/path/to/your/templates')
env = Environment(loader=loader)


When the environment has been created, you can load the templates and render the output. First you call the get_template method, which returns a template object associated with the template file. Next you call the template object's render method, which processes the template contents (loaded by the previously initialized loader class). The result is the processed template code, which can be written to a file. You pass all variables to the template as a dictionary: the dictionary keys are the names of the variables available from within the template, and the dictionary values can be any Python objects that you want to pass to the template.

from jinja2 import Environment, FileSystemLoader

loader = FileSystemLoader('/path/to/your/templates')
env = Environment(loader=loader)
template = env.get_template('template.tpl')
r_file = open('index.html', 'w')
name = 'John'
age = 30
result = template.render({'name': name, 'age': age})
r_file.write(result)
r_file.close()

The Jinja2 Template Language

The Jinja2 templating language is quite extensive and feature-rich. The basic concepts, however, are quite simple, and the language closely resembles Python. For a full language description, please check the official Jinja2 template language definition at http://jinja.pocoo.org/2/documentation/templates. The template statements have to be escaped; anything that is not escaped is not processed and will be returned verbatim after the rendering process. There are two types of language delimiters:

• The variable access delimiter, which indicates a reference to a variable: {{ ... }}

• The statement execution delimiter, which tells the framework that the statement inside the delimiter is a functional instruction: {% ... %}

Accessing Variables

As you already know, the template knows the variables by the names they were given as dictionary keys. Suppose the dictionary passed to the render function was this:

{'name': name, 'age': age}

The template can then access these variables as shown here:

{{ name }} / {{ age }}

The object passed to the template can be any Python object, and the template can access it using the same Python syntax. For example, you can access dictionary or array elements. Assume the following render call:

person = {'name': 'John', 'age': 30}
r = t.render({'person': person})


Then you can use the following syntax to access the dictionary elements in the template:   {{ person.name }} / {{ person.age }}

Flow Control Statements

The flow control statements allow you to perform checks on the variables and select different parts of the template to be rendered accordingly. You can also use these statements to repeat a piece of the template when generating structures such as tables or lists. The for ... in loop statement can iterate through any iterable Python object, returning one element at a time:

Available products
{% for item in products %}
{{ item }}
{% endfor %}

Once in the loop, the special variables listed in Table 1-4 are defined. You can use them to check exactly where you are in the loop; a short example follows the table.

Table 1-4.  The Loop Property Variables

Variable         Description
loop.index       The current iteration of the loop. The index starts with 1; use loop.index0 for a count indexed from 0.
loop.revindex    Similar to loop.index, but counts iterations from the end of the loop.
loop.first       Set to True if the first iteration.
loop.last        Set to True if the last iteration.
loop.length      The total number of elements in the sequence.
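For instance, a template along these lines (a made-up snippet, not one of the templates used later in the chapter) would number the products and avoid printing a comma after the last one:

{% for item in products %}
{{ loop.index }}. {{ item }}{% if not loop.last %},{% endif %}
{% endfor %}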

The logical test function if is used as a Boolean check, similar to the Python if statement:

{% if items %}
{% for item in items %}
{% if item.for_sale %}
{{ item.description }}
{% endif %}
{% endfor %}
{% else %}
There are no items
{% endif %}


The Jinja2 framework also allows for template inheritance. That is, you can define a base template and inherit from it. Each child template then redefines the blocks from the parent template file with appropriate content. For example, the parent template (parent.tpl) may look like this:

MyCompany – {% block title %}Default title{% endblock %}
{% block content %}
There is no content
{% endblock %}

The child template then inherits from the base template and extends the blocks with its own content:

{% extends 'parent.tpl' %}
{% block title %}My Title{% endblock %}
{% block content %}
My content
{% endblock %}
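A short sketch of how such a pair of templates might be rendered, assuming the parent and child templates above are saved as parent.tpl and child.tpl in the directory given to the loader; the {% extends %} statement makes Jinja2 pull in the parent at render time:

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('/path/to/your/templates'))
template = env.get_template('child.tpl')
# Only the child template is requested; parent.tpl is loaded automatically
print template.render()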

Generating Website Pages

The script that generates the pages and the images uses the same configuration file as the check script. It iterates through all system and check sections and builds a dictionary tree. The whole tree is passed to the index generation function, which in turn passes it to the index template. The detailed information for each check is generated by a separate function, which also calls the rrdtool method to plot the graph. All files are saved in the website's root directory, which is defined in a global variable but can be overridden in the function call. You can see the whole script in Listing 1-9.

Listing 1-9.  Generating the Website Pages

#!/usr/bin/env python

from jinja2 import Environment, FileSystemLoader
from ConfigParser import SafeConfigParser
import rrdtool
import sys

WEBSITE_ROOT = '/home/rytis/public_html/snmp-monitor/'

def generate_index(systems, env, website_root):
    template = env.get_template('index.tpl')
    f = open("%s/index.html" % website_root, 'w')
    f.write(template.render({'systems': systems}))
    f.close()

def generate_details(system, env, website_root):
    template = env.get_template('details.tpl')
    for check_name, check_obj in system['checks'].iteritems():
        rrdtool.graph("%s/%s.png" % (website_root, check_name),


                      '--title', "%s" % check_obj['description'],
                      "DEF:data=%(name)s.rrd:%(name)s:AVERAGE" % {'name': check_name},
                      'AREA:data#0c0c0c')
        f = open("%s/%s.html" % (website_root, str(check_name)), 'w')
        f.write(template.render({'check': check_obj, 'name': check_name}))
        f.close()

def generate_website(conf_file="", website_root=WEBSITE_ROOT):
    if not conf_file:
        sys.exit(-1)
    config = SafeConfigParser()
    config.read(conf_file)
    loader = FileSystemLoader('.')
    env = Environment(loader=loader)
    systems = {}
    for system in [s for s in config.sections() if s.startswith('system')]:
        systems[system] = {'description': config.get(system, 'description'),
                           'address'    : config.get(system, 'address'),
                           'port'       : config.get(system, 'port'),
                           'checks'     : {} }
    for check in [c for c in config.sections() if c.startswith('check')]:
        systems[config.get(check, 'system')]['checks'][check] = {
                           'oid'        : config.get(check, 'oid'),
                           'description': config.get(check, 'description'),
        }
    generate_index(systems, env, website_root)
    for system in systems.values():
        generate_details(system, env, website_root)

if __name__ == '__main__':
    generate_website(conf_file='snmp-manager.cfg')

Most of the presentation logic, such as checking whether a variable is defined and iterating through list items, is implemented in the templates. In Listing 1-10, we first define the index template, which is responsible for generating the contents of the index.html page. As you know, on this page we're going to list all defined systems with a complete list of checks available for each system.

Listing 1-10.  The Index Template

System checks
{% if systems %}
{% for system in systems %}
{{ systems[system].description }}
{{ systems[system].address }}:{{ systems[system].port }}
{% if systems[system].checks %}
The following checks are available:
{% for check in systems[system].checks %}


{{ systems[system].checks[check].description }}
{% endfor %}
{% else %}
There are no checks defined for this system
{% endif %}
{% endfor %}
{% else %}
No system configuration available
{% endif %}

The web page generated by this template is rendered as shown in Figure 1-7.

| 192.168.1.1.http: P 1:166(165) ack 1 win 5488 [...] POST /soap/ HTTP/1.1 Host: 192.168.1.1 Accept-Encoding: identity Content-Length: 540 Content-Type: text/xml; charset=utf-8 SOAPAction: "urn:NSConfigAction" [...] nstest« nstest  


11:11:35.567226 IP 192.168.1.1.http > 192.168.1.10.40494: P 1:949(948) ack 706 win 57620
[...]
HTTP/1.1 200 OK
Date: Mon, 29 Jun 2009 11:13:08 GMT
Server: Apache
Last-Modified: Mon, 29 Jun 2009 11:13:08 GMT
Status: 200 OK
Content-Length: 622
Connection: close
Set-Cookie: NSAPI=##F0F402A6574084DB4956184C6443FEE54DD5FC1E1953E3730A5A307BBEC3;Domain=192.168.1.1; Path=/soap
Content-Type: text/xml; charset=utf-8
0Done

We can see that with our initial login request, we send a SOAP message with our credentials, encapsulated in an HTTP POST request. The response is also a SOAP message, encapsulated in an HTTP response. The SOAP response does not carry much useful information; it contains only two pieces of data: a numeric return code (rc) and an alphanumeric string (message). When everything is okay, rc is set to 0 and message is set to Done. The HTTP header carries more important information: it sets a cookie that we need to use with all further requests:

Set-Cookie: NSAPI=##F0F402A6574084DB4956184C6443FEE54DD5FC1E1953E3730A5A307BBEC3; Domain=192.168.1.1;Path=/soap

This cookie value is associated with our account on the Netscaler, so the web service knows that whoever sends this cookie has already been authenticated.
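The mechanics of this cookie round-trip can be illustrated with plain HTTP code. The sketch below uses urllib2 purely for illustration; it is not how the ZSI-based classes in this chapter send their requests, and the URL and SOAP bodies are placeholders:

import urllib2

headers = {'Content-Type': 'text/xml; charset=utf-8',
           'SOAPAction': 'urn:NSConfigAction'}

# First request: the SOAP login envelope (elided here) goes out without a cookie
login_request = urllib2.Request('http://192.168.1.1/soap/', '<login envelope>', headers)
response = urllib2.urlopen(login_request)
session_cookie = response.info().getheader('Set-Cookie')

# Subsequent requests present the cookie, so no further authentication is needed
headers['Cookie'] = session_cookie
next_request = urllib2.Request('http://192.168.1.1/soap/', '<another envelope>', headers)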

Gathering Performance Statistics Data

We have already established the following requirements for the statistics gathering and monitoring tool:

• We want our tool to gather statistical information about:

  • CPU and memory utilization

  • System overview: request rate, data rate, established connections

  • Overview of all virtual servers: up/down status and which services are down within each


• These can be split into two groups:

  • System status (CPU, memory, and request rate readings)

  • Virtual server status (virtual server states)

We can now split our implementation into two parts, which is easier to code and test.

SOAP Methods for Reading Statistical Data and Their Return Values

Table 2-1 lists the methods that are used in our statistics-gathering tool, along with the name and a brief description of each method's Return object. We are going to use some of them in our code. (You should be able to modify the code quite easily and add more items for the tool to query. If you find yourself needing details about more specific items, such as AAA, GSLB, or Compression, please refer to the Netscaler API documentation, available to download from the Netscaler management web page.)

Table 2-1.  Statistic Web Service Methods and Their Return Values Used in Our Example

Method            Return Variable     Description
statsystem        _internaltemp       The internal system temperature in C.
                  _rescpuusage        The combined CPU usage expressed as a percentage.
                  _memusagepcnt       The memory usage expressed as a percentage.
statprotocolhttp  _httprequestsrate   The total HTTP(S) request rate (per second).
statlbvserver     _primaryipaddress   The IP address of the virtual server.
                  _primaryport        The port number of the virtual server.
                  _state              The state of the virtual server: UP: The virtual server is running. DOWN: All services failed in the virtual server. OUT OF SERVICE: The virtual server is disabled.
                  _vslbhealth         The health of the virtual server, expressed as the percentage of services that are in the UP state.
                  _requestsrate       The rate of requests per second the virtual server is receiving.
statservice       _primaryipaddress   The IP address of the service.
                  _primaryport        The port number of the service.
                  _state              The state of the service: UP: The service is running. DOWN: The service is not running on the physical server. OUT OF SERVICE: The service is disabled.
                  _requestsrate       The rate of requests per second the service is receiving.


Reading System Health Data Reading system status data is pretty straightforward; all we need to do is call two methods: one to retrieve readings about hardware and memory status, and another to check the total HTTP and HTTPS requests served by our load balancer. As we can see from Table 2-1, we will be calling the statsystem and statprotocolhttp methods. Neither of these methods requires any input parameters. Listing 2-19 shows a simplified version of the statistics-gathering method in our NSStatApi class. Listing 2-19.  Obtaining System Health Data def system_health_check(self): results = {} [...] req = self.module.statsystem() res = self.soap.statsystem(req)._return results['temp'] = res._List[0]._internaltemp results['cpu'] = res._List[0]._rescpuusage results['mem'] = res._List[0]._memusagepcnt [...] req = self.module.statprotocolhttp() res = self.soap.statprotocolhttp(req)._return results['http_req_rate'] = res._List[0]._httprequestsrate [...] return results   This looks similar to the login request we performed earlier; however, there is one important difference to notice. This time we need to use the _List variable to access the details we are interested in receiving. The reason for this is that all response _return objects contain two required and one optional variable: _rc, _message, or _List. We already know that _rc and _message contain a request return code and a message that provides more details about the request status. However, _List is optional and is an array that may contain one or more instances of the Return object(s). Even if the method will always return a single instance, it is still contained in the array. This is one of the means for providing a standard way of communication: every request is always going to return the same set of variables, so if we needed to, we could write a standard SOAP request dispatcher/response handler. How do we find out what Structure objects are returned in the list? This is very simple. First, you need to look for the methodname response class in the NSStat_services_types.py module that contains all data types used in SOAP communication. In our case, we are searching for statsystemResult_Def class. Once we have found it, we need to look for the Type definition, similar to the following:   TClist = [ZSI.TCnumbers.IunsignedInt(pname="rc", aname="_rc", minOccurs=1, maxOccurs=1, nillable=False, typed=False, encoded=kw.get("encoded")), ZSI.TC.String(pname="message", aname="_message", minOccurs=1, maxOccurs=1, nillable=False, typed=False, encoded=kw.get("encoded")), GTD("urn:NSConfig","systemstatsList",lazy=False)(pname="List", aname="_List", minOccurs=0, maxOccurs=1, nillable=False, typed=False, encoded=kw.get("encoded"))]  


Now, we look for the systemstatsList class definition, shown in Listing 2-20. Listing 2-20.  The systemstatsList Class Definition class systemstatsList_Def(ZSI.TC.Array, TypeDefinition): #complexType/complexContent base="SOAP-ENC:Array" schema = "urn:NSConfig" type = (schema, "systemstatsList") def __init__(self, pname, ofwhat=(), extend=False, restrict=False, attributes=None, **kw): ofwhat = ns0.systemstats_Def(None, typed=False) atype = (u'urn:NSConfig', u'systemstats[]') ZSI.TCcompound.Array.__init__(self, atype, ofwhat, pname=pname, childnames='item', **kw)   In this class definition, we find a reference to the actual class, which is going to contain all the variables we will receive in the response from SOAP. Finally, in Listing 2-21, we search for systemstats_Def class, where the subclass Holder contains all available variables. Listing 2-21.  The Definition of the systemstats Return Type class systemstats_Def(ZSI.TCcompound.ComplexType, TypeDefinition): [...] class Holder: typecode = self def __init__(self): # pyclass self._rescpuusage = None self._memusagepcnt = None self._internaltemp = None [...]   This may look really complicated, but for automated systems it is always the same pattern in accessing the information, which helps to simplify the process.
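As a rough illustration of that recurring pattern (this helper is not part of the chapter's code; the attribute names _rc, _message, and _List are the ones the generated response classes expose), response handling generally boils down to something like this:

def unpack_response(response):
    result = response._return
    if result._rc != 0:
        # _message carries a human-readable explanation of the failure
        raise RuntimeError('Request failed: %s' % result._message)
    # _List is optional; treat a missing list as an empty result set
    return getattr(result, '_List', None) or []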

Reading Service Status Data

Retrieving information about services is very similar; it just involves more steps:

1. We need to retrieve a list of all virtual servers on the Netscaler. This can be achieved with the statlbvserver method, which accepts an optional name parameter. If that is specified, only information about that virtual server will be returned. If name is not specified or is set to blank, information about all virtual servers will be returned.

2. For each virtual server on the list, we create a list of services attached to it. This actually requires using a different SOAP service—the Netscaler configuration SOAP. The Statistics API does not provide functionality to query dependencies between configuration entities, so we are going to use the getlbvserver method from the Configuration API.

3. We check whether the virtual server health score is not 100 percent. If the server is not on the ignore list, we list the unhealthy services that are attached to it. We use the statservice method to retrieve statistics about each service, and if the service is not in the UP state, we indicate that.


N■Note In the Citrix load balancer, the virtual server has a number of services attached to it that serve user requests. The health score for a virtual server is calculated as a percentage of active services in the virtual server pool. If a virtual server contains ten services in its pool and two of them are not responding to the health checks, the score for that virtual server is 80 percent. In the following code listings I show classes and methods that implement health and service statistics gathering. In order to keep the code simple, these examples do not have any error handling. The full source code, which is available to download from the book’s page at www.apress.com, contains additional error handling and reporting functionality. First, in Listing 2-22, we define a new Statistics API wrapper class, which implements two methods: get_ vservers_list and get_service_details. The class inherits all functions from the base NSSoapApi class, which we defined earlier. Listing 2-22.  The Statistics API Wrapper Class class NSStatApi(NSSoapApi): [...] def get_vservers_list(self, name=''): result = {} req = self.module.statlbvserver() req._name = name res = self.soap.statlbvserver(req)._return for e in res._List: result[e._name.strip('"')] = { 'ip': e._primaryipaddress, 'port': e._primaryport, 'status': e._state, 'health': e._vslbhealth, 'requestsrate': e._requestsrate, } return result   def get_service_details(self, service): result = {} req = self.module.statservice() req._name = service res = self.soap.statservice(req)._return result = { 'ip': res._List[0]._primaryipaddress, 'port': res._List[0]._primaryport, 'status': res._List[0]._state, 'requestsrate': res._List[0]._requestsrate, } return result   The get_vservers_list method calls the statlbvserver SOAP method and passes an optional name parameter. If the name string is empty, a list of all virtual servers will be returned. When the list is returned, we create our own dictionary with just a few items from the complete list. The get_service_details method calls the statservice SOAP method and passes a service name as an argument. The SOAP response consists of detailed information about the service. We will extract only the information that is interesting for us and return it as a Python dictionary.


The second class we define, in Listing 2-23, is a Configuration API wrapper class. This class should mainly be used for functions that deal with load balancer configuration, but we need to call one function from this service: getlbvserver. This function returns (among other details about the virtual server) a list of all services that are bound to a particular virtual server. Our method is called get_services_list and simply returns the result as a Python list with service names as elements.

Listing 2-23.  A Configuration API Wrapper Class

class NSConfigApi(NSSoapApi):
    def get_services_list(self, vserver):
        req = self.module.getlbvserver()
        req._name = vserver
        res = self.soap.getlbvserver(req)._return
        result = [e.strip('"') for e in res._List[0]._servicename]
        return result

Finally, in Listing 2-24, we implement our query function, which performs the following steps:

1. Initiates instances of both classes.

2. Retrieves a list of all virtual servers.

3. If the virtual server health is not 100 percent, gets a list of services bound to it.

4. Prints out all unhealthy services.

Listing 2-24.  Retrieving Service Status Data

ns = NSStatApi([...])
ns_c = NSConfigApi([...])

for (vs, data) in ns.get_vservers_list(name=OPTS.vserver_query).iteritems():
    if (data['status'] != 'UP' or data['health'] != 100) and \
       vs not in config.netscalers['primary']['vserver_ignore_list'] or OPTS.verbose:
        print " SERVICE: %s (%s:%s)" % (vs, data['ip'], data['port'])
        print " LOAD: %s req/s" % data['requestsrate']
        print " HEALTH: %s%%" % data['health']
        for srv in sorted(ns_c.get_services_list(vs)):
            service = ns.get_service_details(srv)
            if service['status'] != 'UP' or OPTS.vserver_query or OPTS.verbose:
                print ' * %s (%s:%s) - %s (%s req/sec)' % (srv,
                                                           service['ip'],
                                                           service['port'],
                                                           service['status'],
                                                           service['requestsrate'])

Following is the sample output from the tool. Depending on your load balancer configuration and the operational status of the virtual servers and services, you will obviously get different results.


In this example, the first section displays basic health information about the load balancers: memory usage, CPU usage, temperature, and total HTTP requests. The second section displays information about a virtual server that is not completely healthy. This virtual server is supposed to have 30 services running, but two of them are marked as DOWN:

$ ./ns_stat.py
**************************************************
Health check for loadbalancer: 192.168.1.1
Memory usage: 6.434952%
CPU usage: 15%
Temperature: 47C
Requests: 4926/sec
------------------------------
 SERVICE: main_web_server (192.168.0.5:80)
 LOAD: 1140 req/s
 HEALTH: 92%
 * web_farm_service-13 (192.168.2.13:80) - DOWN (0 req/sec)
 * web_farm_service-14 (192.168.2.14:80) - DOWN (0 req/sec)
------------------------------
$
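Listing 2-24 refers to an OPTS object, which holds the parsed command-line options, and to a config module with an ignore list; neither is shown in the listing. The following is only a sketch of how OPTS could be populated with the standard optparse module; the option names are assumptions based on the attributes used in the listing:

from optparse import OptionParser

parser = OptionParser()
parser.add_option('-q', '--query', dest='vserver_query', default='',
                  help='report on a single virtual server only')
parser.add_option('-v', '--verbose', dest='verbose', action='store_true', default=False,
                  help='also print details of healthy services')
(OPTS, args) = parser.parse_args()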

Automating Some Administration Tasks

The second part of our exercise is to create a management tool for our load balancer. Going back to our original requirements, we know that we want the configuration tool to perform the following tasks:

1.	Disable/enable all services for any of the available virtual servers.
2.	Disable/enable any individual service.
3.	Disable/enable any set of services (may span across multiple virtual servers).

Device Configuration SOAP Methods

The configuration API provides over 2,500 different methods to alter load balancer configuration. Configuring a load balancer is usually a complicated task and goes far beyond the scope of this book. In this section, though, I show how to get a list of services and how to enable and disable them. Other functions behave in a similar fashion, so if you need to create a new virtual server, you would just call the appropriate functions.


Table 2-2 lists the methods we will be using in the configuration tool, along with each one's return variable and a description.

Table 2-2.  Methods Used to Enable and Disable Servers

Method           Return Variable   Description
disableservice   _rc               The return code of the operation (simpleResult type); 0 if successful.
                 _message          A detailed explanation of the result (simpleResult type). “Done” if successful; otherwise a meaningful explanation is provided.
enableservice    _rc               The return code of the operation (simpleResult type); 0 if successful.
                 _message          A detailed explanation of the result (simpleResult type). “Done” if successful; otherwise a meaningful explanation is provided.
getlbvserver     _servicename      A list of all services bound to a particular virtual server.

As you can see, the first two methods for enabling and disabling services are really simple in their responses: they either succeed or fail. Just like the login method, they return a data structure simpleResponse, which contains only a return code and a detailed description of the error in case of failure. The last method is getlbvserver, which we used in a previous section to retrieve a list of all services bound to a virtual server. The same method wrapper will be used here.
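For example, a caller that wants to verify the outcome of a disableservice call can inspect these two fields. The fragment below is only a sketch that follows the same request/response pattern used by the wrapper classes earlier in this chapter; it assumes the same self.module and self.soap attributes:

req = self.module.disableservice()
req._name = 'web_farm_service-13'
res = self.soap.disableservice(req)._return
if res._rc != 0:
    # _message contains a human-readable explanation of the failure
    print 'Failed to disable the service: %s' % res._message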

Setting a Service State

Setting the state of a service is as simple as calling either enableservice or disableservice with a service name as a parameter to the method call. Citrix Netscaler load balancer service and virtual server names are not case sensitive, so when calling either method you do not need to worry about the case of the name parameter.
We define another function in our NSConfigApi class that will implement switching between the states and wrap the two SOAP functions into one convenient, easy-to-use class method. We call this method set_service_state, and it will accept two required arguments: a new state and a Python array that contains the names of all the services whose state we want to change. Listing 2-25 shows the code.

Listing 2-25.  The Wrapper for the SOAP enableservice and disableservice Functions

def set_service_state(self, state, service_list, verbose=False):
    [...]
    for service in service_list:
        if verbose:
            print 'Changing state of %s to %sd... ' % (service, state)
        req = getattr(self.module, '%sservice' % state)()
        req._name = service
        res = getattr(self.soap, '%sservice' % state)(req)._return
    [...]
    return

As you can see, it is a simple function; however, it contains one thing that is worth a bit more attention: we do not explicitly specify the name of the method we are calling; it is automatically constructed during runtime from the argument value that we receive in the state variable.


To achieve this, we use the Python getattr function, which allows us to get a reference to an object's property at runtime without knowing the property name in advance. When we call getattr, we provide two arguments: a reference to an object and the name of the property we are addressing. Therefore, an explicit call to a method looks like this:

result = some_object.some_function()

This would be equivalent to:

result = getattr(some_object, "some_function")()

It is important to note the () after the getattr call. The getattr return value is a reference to an object, and as such does not execute a function. If we are accessing an object variable, it will return the value of the variable, but if we are accessing a function, we would only get a reference to it:

>>> class C():
...     var = 'test'
...     def func(self):
...         print 'hello'
...
>>> o = C()
>>> getattr(o, 'var')
'test'
>>> getattr(o, 'func')
<bound method C.func of <__main__.C instance at 0x...>>
>>> getattr(o, 'func')()
hello
>>>

This technique is often used to implement dispatcher functionality, which we use in our code as well, instead of explicitly testing for the state parameter, as shown here:

if state == 'enable':
    req = self.module.enableservice()
    req._name = service
    res = self.soap.enableservice(req)._return
elif state == 'disable':
    req = self.module.disableservice()
    req._name = service
    res = self.soap.disableservice(req)._return

In our code, we instead construct the name of the function and call it automatically:

req = getattr(self.module, '%sservice' % state)()
req._name = service
res = getattr(self.soap, '%sservice' % state)(req)._return

This is a powerful technique that makes your code much more readable and easier to maintain. In the previous example, we reduced the number of lines from eight to only three. There is a caveat, however: we might reference a property that does not exist. In our example, we must make sure that state is set to either 'enable' or 'disable'; otherwise, getattr will raise an AttributeError.
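One way to protect against an invalid state value is to validate it before the getattr call and raise a meaningful error. The following is a sketch of how the method from Listing 2-25 could be hardened; it is not part of the original NSLib code:

VALID_STATES = ('enable', 'disable')

def set_service_state(self, state, service_list, verbose=False):
    if state not in VALID_STATES:
        raise ValueError('state must be one of %s, got %r' % ('/'.join(VALID_STATES), state))
    for service in service_list:
        if verbose:
            print 'Changing state of %s to %sd... ' % (service, state)
        req = getattr(self.module, '%sservice' % state)()
        req._name = service
        res = getattr(self.soap, '%sservice' % state)(req)._return

A call such as ns_c.set_service_state('disable', ['web_farm_service-13', 'web_farm_service-14']) then either changes the state of both services or fails immediately with a clear error message.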


A Word About Logging and Error Handling

Although they do not affect the functionality of our tools or API access library, it is important to implement basic logging, error reporting, and error handling. At every stage of writing code, we need to anticipate all possible outcomes, especially if we are using external libraries and/or external services, such as the SOAP API.

Using the Python Logging Module

Regardless of the size of our project, it is good practice to report as many details as possible of what is happening in the code. Python comes with a built-in logging module, which is flexible and configurable yet is easy and simple to use.

Logging Levels and Scope

The Python logging module provides five levels of detail. Table 2-3 provides details on when to use each level.

Table 2-3.  Logging Levels and When You Should Use Each

Level      When to Use
DEBUG      As the name suggests, this logging level is for debugging purposes. Use DEBUG to log as much information as possible; messages at this level should contain enough detail for you to identify possible problems with the code.
INFO       This is a less detailed level, and it's usually used to log key events in the system's life cycle, such as contacting an external service or calling a rather complicated subsystem.
WARNING    Report all unexpected events at this logging level. Everything that is not harmful but is out of the ordinary should be reported here. For example, if a configuration file is not found, but we have default settings, we should raise a warning.
ERROR      This level is used to log any event that prevents us from completing a given task but still allows us to proceed with the remaining tasks. For example, if we need to check the status of five virtual servers but one of them cannot be found, we report this as an error and proceed with checking the other servers.
CRITICAL   If we cannot proceed any further, we log the error with this logging level and exit. There is no need to provide detailed information at this point; when it comes to troubleshooting, we will switch to a more detailed level, such as DEBUG.

It is important to think about the scope and purpose of our logging. We must differentiate between regular output from the tool and logging. Regular output and reporting are the primary functions of the tool and thus must not mix with the logging messages from the application. We might choose to use the logging module to write application output messages as well, but they need to go to a different stream. Application logging is purely for reporting the status of the application. For example, if we are not able to connect to the load balancer, we must log that as a critical event and quit. In other words, something happened to our tool that prevented it from finishing its operation.
However, if we get the temperature reading and decide that it is too far above normal, we must not log this as critical in our log stream, because high system temperature has nothing to do with our application. Regardless of the load balancer's health, our tool behaves and functions correctly. Continuing with this example, we might decide either to simply print the warning message or to log it in some other stream, possibly called loadbalancers_health.log.


Configuring and Using the Logger

Depending on what we want to achieve, the logger configuration can be simple or complex. I tend not to overcomplicate it, and keep it as simple as possible. At the end of the day, there are only a handful of things we need in our logger configuration:

•	The logging level. How much output do we want our logger to produce? If the tool is mature, well tested, and stable, realistically we would set the log level to ERROR, but if we're developing, we'd probably stick to DEBUG.
•	The log destination. Do we want log messages on screen or in a file? It is best to write to a file, especially if we are using multiple loggers, one for application status messages and another for the systems that we are managing or monitoring.
•	The logging message format. The default logger message format is not very informative, so we might want to add additional fields to it, which is simple to achieve.

Fortunately, the logging module provides a basicConfig method, which allows us to set all of these with one function call:

import logging
logging.basicConfig(level=logging.DEBUG,
                    filename='NSLib.log',
                    format="%(asctime)s [%(levelname)s] (%(funcName)s() (%(filename)s:%(lineno)d)) %(message)s")

As you might have already guessed, setting the logging level is trivial; we just need to use one of the defined internal variables, whose names match the log level names we used previously: DEBUG, INFO, WARNING, ERROR, or CRITICAL. The log output destination is just a filename. If we do not specify any filename, the logging module will use standard output (stdout) to write all messages.
The logging format is a bit more complicated. The format must be defined following Python string formatting rules, assuming that the right argument is a dictionary. The standard convention of formatting a string in Python with parameters in a hash array is as follows:

>>> string = "%(var1)s %(var2)d %(var3)s" % {'var1': 'I bought', 'var2': 3, 'var3': 'sausages'}
>>> print string
I bought 3 sausages
>>>

Just as in our example, the logging module expects a formatted string on the left of the % operator and provides a standard prepopulated dictionary as the right argument. Table 2-4 lists the most useful parameters to use in the logging format string.


Table 2-4.  Predefined Dictionary Fields that Can Be Used in a Logging Format String

Parameter       Description
%(asctime)s     The time when the log message was generated, in human-readable form, such as 2009-07-07 14:04:39,462. The number after the comma is the time portion in milliseconds.
%(levelname)s   A string representing the log level. Possible default values: DEBUG, INFO, WARNING, ERROR, or CRITICAL.
%(funcName)s    The name of the function where the logging message was generated.
%(filename)s    The name of the file where the logging call was made. This does not contain the full path to the file, just the filename portion.
%(module)s      The name of the module that generated the logging call. This is the same as the filename with the extension stripped out.
%(lineno)d      The line number in the file that issued the logging call. Not always available.
%(message)s     The actual logging message, processed as msg % args from a call such as logging.debug(msg, args).

Once we have configured the logging module, using it is extremely simple: all we have to do is initialize a new instance of the logger and call its methods to write appropriate log messages (the methods are named after the corresponding logging levels), as shown in Listing 2-26.

Listing 2-26.  Initializing a New Logger Instance

logging.basicConfig(level=logging.DEBUG,
                    filename='NSLib.log',
                    format="%(asctime)s [%(levelname)s] (%(funcName)s() (%(filename)s:%(lineno)d)) %(message)s")

logger = logging.getLogger()
logger.critical('Simple message...')
logger.error('Message with one argument: %s', str1)
logger.warning('Message with two arguments. String %s and digit: %d', msg, val)
try:
    not_possible = 1 / 0
except:
    logger.critical('An exception has occurred! Stack trace below:', exc_info=True)

As you can see, the logging module is flexible, yet easy to configure. Use it as much as possible and try to avoid old-style logging using print statements.
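To keep application status messages separate from reports about the systems we monitor (the loadbalancers_health.log stream mentioned earlier), we can also configure two independent logger objects, each with its own file handler. This is a generic sketch rather than code from NSLib; the logger names and filenames are examples only:

import logging

def make_logger(name, filename, level=logging.INFO):
    # Create a named logger that writes to its own file with a simple format
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s [%(levelname)s] %(message)s'))
    logger.addHandler(handler)
    return logger

app_logger = make_logger('nslib', 'NSLib.log', logging.DEBUG)
health_logger = make_logger('health', 'loadbalancers_health.log')

app_logger.debug('Connecting to the load balancer...')
health_logger.warning('Temperature is above normal: %sC', 52)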

Handling Exceptions

Exceptions are errors that prevent our code (or the code of modules that our code is calling) from executing properly and cause execution to terminate. In our previous example, in Listing 2-26, the code fails because we included a statement that instructs Python to perform a division by zero, which is not possible. This raises a ZeroDivisionError exception; without the try: ... except: ... statement, our program would terminate at that point. Python allows us to act on exceptions so we can decide how to handle them appropriately. For example, if we try to establish communication with a remote web service, but the service is not responding, we will get a "connection timed out" exception. If we have more than one service to query, we might just report this as an error and proceed with the other services.


Catching exceptions is easy:

try:
    call_to_some_function()
except:
    do_something_about_it()

As we saw in the previous section, we can log a full exception stack trace just by indicating in the logger function call that we want the exception details logged. In my code I use the following construction to detect an exception, log it, and pass it on. If you are writing a module and you cannot really decide what to do with exceptions that occur, this is one way to deal with them:

try:
    module.function()
except:
    logger.error('An exception has occurred while executing module.function()', exc_info=True)
    raise

It is also possible to catch specific exceptions and perform different actions for each:

try:
    result = divide_two_numbers(arg1, arg2)
except ZeroDivisionError:
    # if this happens, we will return 0
    logger.error('We attempted to divide by zero, setting result to 0')
    result = 0
except:
    # something else has happened, so we reraise it
    logger.critical('An exception has occurred while executing module.function()', exc_info=True)
    raise

If you're writing your own module, you might decide to introduce exceptions specific to that module, so they can be caught and dealt with accordingly. I use this technique in the NSLib.py module. Custom exceptions must be derived from the generic Exception class. If you do not require any specific functionality, you could define a new exception as the following class:

class NSLibError(Exception):
    def __init__(self, error_message):
        self.error_message = error_message

    def __str__(self):
        return repr(self.error_message)


Once the Exception class is defined, you raise it by using the raise statement and passing it an instance of the Exception class:

class NSSoapApi(object):

    def __init__(self, module=None, hostname=None, username=None, password=None):
        [...]
        if not (hostname and username and password):
            self.logger.critical('One or more from the following: hostname, username and password, are undefined')
            raise NSLibError('hostname, username and password must be defined')

Although it is not required, it is good practice to follow the Exception class naming convention, which states that all Exception class names should end with Error. Unless the module is huge and implements distinctly different functionality, you might just define one exception per module or group of modules.
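Code that uses the library can then catch this specific exception and act on it, while letting unrelated exceptions propagate. A short usage sketch (the calling code and its arguments are hypothetical):

import sys

try:
    # hostname is deliberately missing here, so the constructor raises NSLibError
    api = NSSoapApi(username='nsroot', password='nsroot')
except NSLibError as e:
    print 'Cannot initialise the API wrapper: %s' % e
    sys.exit(1)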

NetScaler NITRO API

Now that we know how to use the SOAP API to manage NetScaler devices, let's have a quick look at an alternative way of managing these load balancers. Starting from version 9.x, Citrix introduced a REST-based API, called the NITRO API. Starting with version 10.5 of the NetScaler OS, Citrix also provides a Python module for accessing and using the NITRO API. The rest of the chapter shows how to use this module.

Download

First, we need to get the module and the documentation files. We can retrieve them from a running NetScaler device:

1.	Log on to the NetScaler web console.
2.	Click on “Downloads.”
3.	Download “NITRO API SDK for Python” (nitro-python.tgz).

Once we've downloaded the module and documentation archive files, we need to unpack them:

$ tar zxf nitro-python.tgz
$ ls -l
total 27724
-rw-r--r-- 1 rytis rytis  7700769 Aug 21 17:35 nitro-python.tgz
-rwxr-xr-- 1 rytis rytis 20684800 Jul  3 20:52 ns_nitro-python_tagma_50_10.tar
$ rm nitro-python.tgz
$ tar xf ns_nitro-python_tagma_50_10.tar
$ ls -l
total 20204
drwxr-xr-x 7 rytis rytis     4096 Jul  3 20:26 nitro-python-1.0
-rwxr-xr-- 1 rytis rytis 20684800 Jul  3 20:52 ns_nitro-python_tagma_50_10.tar
$ ls -l nitro-python-1.0/
total 52
drwxr-xr-x 2 rytis rytis  4096 Jul  3 20:26 doc
drwxr-xr-x 2 rytis rytis  4096 Jul  3 20:26 lib
-r-x------ 1 rytis rytis 10351 Jul  3 19:44 License.txt
-r-x------ 1 rytis rytis   109 Jul  3 19:44 MANIFEST.in


drwxr-xr-x 2 rytis rytis  4096 Jul  3 20:26 nitro_python.egg-info
drwxr-xr-x 3 rytis rytis  4096 Jul  3 20:26 nssrc
-rw-r--r-- 1 rytis rytis   353 Jul  3 20:26 PKG-INFO
-r-x------ 1 rytis rytis  1054 Jul  3 19:44 readme_start.txt
drwxr-xr-x 2 rytis rytis  4096 Jul  3 20:26 sample
-rw-r--r-- 1 rytis rytis    59 Jul  3 20:26 setup.cfg
-r-x------ 1 rytis rytis  1573 Jul  3 19:44 setup.py

We can now install the package and check its dependencies. The nitro-python package depends on the requests module, but the required files are delivered in the same package, so we do not need to worry about installing it separately:

$ cd nitro-python-1.0
$ python setup.py install
$ pip freeze
argparse==1.2.1
distribute==0.6.24
nitro-python==1.0
requests==2.3.0
wsgiref==0.1.2
$

Using the Nitro-Python Module

Module Layout

The Citrix Netscaler Python API package has a slightly unusual layout, whereby it follows the typical Java packaging pattern of splitting the classes and methods into lots of subpackages. We are not going to discuss whether this is a good practice in laying out Python projects; I find it slightly unusual, though, and it leads to massive and sometimes inconvenient import statements, as we will see later.
If we inspect the package directory structure, we see how granular the subpackage structure is, with lots of directories (each being a separate subpackage) and very few modules in each. Each module typically contains only one or two classes. We can confirm that by running the following command in the nitro-python-1.0 directory:

$ cd nitro-python-1.0/
$ find nssrc/ -name \*.py -exec grep -c ^class {} \; -print

There are lots of module files in the main package, which indicates that the code was quite possibly generated automatically:

$ cd nitro-python-1.0/
$ find nssrc/ -name \*.py -not -name __init__.py | wc -l
1132

So, how can we find the class or method that we require? The API functions are split into two major parts:

•	Configuration. Functions in this category are used to actively manage the Netscaler appliance.
•	Statistics. Functions in this category are used to gather statistical data from the Netscaler appliance.


The items list in each category is almost identical, since items that can be configured typically can be monitored as well. Table 2-5 shows logical groups of items in each category.

Table 2-5.  Logical Grouping of Functions

Configuration                  Statistics                     Description
Event                          -                              Event framework for subscribing and publishing Netscaler events
AAA                            AAA                            Authentication, authorization, and accounting service
App                            -                              Application resource configuration
Appflow                        Appflow                        AppFlow resources
Application Firewall           Application Firewall           Application Firewall
Appqoe                         Appqoe                         Application Level Quality of Experience
Audit                          Audit                          Auditing resources
Authentication                 Authentication                 Authentication resources
Authorization                  Authorization                  Authorization services
Autoscale                      Autoscale                      Autoscaling
Basic                          Basic                          Basic system configuration resources
Ca                             Ca                             Content Accelerator services
Integrated Caching             Integrated Caching             Integrated Caching services
Cluster                        Cluster                        Netscaler cluster management
Compression                    Compression                    HTTP compression services
Cache Redirection              Cache Redirection              HTTP cache management services
Content Switching              Content Switching              Content aware traffic management services
Db                             -                              Database user configuration
Domain Name Service            Domain Name Service            DNS management
HTTP DoS Protection            HTTP DoS Protection            HTTP Denial-of-Service protection service
Front-end Optimization         Front-end Optimization         Web content optimization service
Filter                         -                              Request content filtering configuration
Global Server Load Balancing   Global Server Load Balancing   Global Server Load Balancing service
High Availability              High Availability              Netscaler High Availability configuration resources
Ipsec                          Ipsec                          IPsec management
Load Balancing                 Load Balancing                 Load Balancing management resources
LLDP                           LLDP                           Link Layer Discovery Protocol resources
Network                        Network                        Network configuration management
NS                             NS                             Global system configuration resources
NTP                            -                              System NTP configuration
Policy                         -                              System policy configuration
Priority Queuing               Priority Queuing               Priority Queuing service
Protocol                       Protocol                       Protocol management
-                              QoS                            Quality of Service statistical data
Responder                      Responder                      Responder services
Rewrite                        Rewrite                        HTTP rewrite services
Rise                           -                              Remote Integrated Service engine configuration
Router                         -                              Router configuration
Sure Connect                   Sure Connect                   SureConnect service
SNMP                           SNMP                           Simple Network Management Protocol services
Spillover                      Spillover                      Spillover management resources
SSL                            SSL                            Secure Socket Layer configuration resources
Stream                         Stream                         Connection streaming management resources
System                         System                         System configuration management resources
Traffic Management             Traffic Management             Traffic service/policy management resources
Transform                      Transform                      URL transformation resources
Tunnel                         -                              SSL VPN tunnel management
Utility                        -                              System technical support tools
SSL VPN                        SSL VPN                        Virtual Private Network management resources
WebInterface                   -                              Netscaler Web Interface configuration

Each group contains resources that deal with the same aspect of the system; for example, the Load Balancing group contains all resources that deal with the Load Balancer resource configuration, Virtual Server resource configuration, and so on. To find out all the details, you will have to download the NetScaler NITRO API documentation archive from a running appliance and use a web browser to read the documentation. It is well laid out, and the information is reasonably easy to find.
All NetScaler resources are defined in subpackages of the nssrc.com.citrix.netscaler.nitro.resource package. The configuration resources can be found in packages inside the nssrc.com.citrix.netscaler.nitro.resource.config package (for example, the LBvserver resource definition is in the nssrc.com.citrix.netscaler.nitro.resource.config.lb.lbvserver module), and the statistics resources can be found in packages inside the nssrc.com.citrix.netscaler.nitro.resource.stat package (for example, the LBvserver statistics resources can be found in the nssrc.com.citrix.netscaler.nitro.resource.stat.lb.lbvserver_stats module).
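In practice this layout translates into rather long import statements. For example, the two lbvserver modules mentioned above are imported as follows. The statistics import is used again later in this chapter; the configuration class is assumed to carry the same name as its module, which is the convention throughout the package:

# Configuration resource for load balancing virtual servers
from nssrc.com.citrix.netscaler.nitro.resource.config.lb.lbvserver import lbvserver
# Statistics resource for the same object type
from nssrc.com.citrix.netscaler.nitro.resource.stat.lb.lbvserver_stats import lbvserver_stats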


Unfortunately, there is no easy way of finding the correct methods; in most cases, we have to follow this checklist:

•	Identify whether we want to get statistical data or make configuration changes.
•	If we are reading statistical data, then we are interested in modules contained in the nssrc/com/citrix/netscaler/nitro/resource/stat directory.
•	If we are configuring resources, then we should be looking into modules that reside in the nssrc/com/citrix/netscaler/nitro/resource/config directory.
•	Service management resources are located in the nssrc/com/citrix/netscaler/nitro/service directory.
•	Utility classes are located in the nssrc/com/citrix/netscaler/nitro/util directory. These are mostly for internal use within the library, but we may find them useful as well.
•	Exception definitions are located in nssrc/com/citrix/netscaler/nitro/.
•	Once we have identified the package that contains functions related to what we are trying to do, we have to manually locate the relevant modules and find out the names of the resources that we need to use.

Continuing with the example, let's assume we want to find the methods related to LBvserver configuration. We know that they are located in nssrc/com/citrix/netscaler/nitro/resource/config, so we change to that directory and list all subdirectories:

$ cd nssrc/com/citrix/netscaler/nitro/resource/config
$ ls -l
total 200
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 aaa
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 app
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 appflow
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 appfw
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 appqoe
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 audit
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 authentication
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 authorization
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 autoscale
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 basic
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ca
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 cache
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 cluster
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 cmp
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 cr
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 cs
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 db
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 dns
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 dos
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 Event
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 feo
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 filter
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 gslb
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ha
-rw-r--r-- 1 rytis rytis  449 Jul  3 20:04 __init__.py
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ipsec
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 lb
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 lldp
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 network
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ns
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ntp
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 policy
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 pq
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 protocol
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 responder
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 rewrite
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 rise
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 router
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 sc
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 snmp
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 spillover
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 ssl
drwxr-xr-x 2 rytis rytis 4096 Aug 28 11:55 stream
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 system
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 tm
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 transform
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 tunnel
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 utility
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 vpn
drwxr-xr-x 2 rytis rytis 4096 Jul  3 20:26 wi

We can see that the subdirectories roughly match the list in Table 2-5, although some of the names are abbreviated. In our example, the Load Balancing group is abbreviated as "lb." Now, if we change to the "lb" directory, we will find a bunch of modules that deal with the Load Balancer resource configuration:

$ cd lb
$ ls -l
total 744
-rw-r--r-- 1 rytis rytis   1369 Jul  3 20:04 __init__.py
-rw-r--r-- 1 rytis rytis   3446 Jul  3 20:04 lbgroup_binding.py
-rw-r--r-- 1 rytis rytis   6986 Jul  3 20:04 lbgroup_lbvserver_binding.py
-rw-r--r-- 1 rytis rytis  16062 Jul  3 20:04 lbgroup.py
-rw-r--r-- 1 rytis rytis   3664 Jul  3 20:04 lbmetrictable_binding.py
-rw-r--r-- 1 rytis rytis   7232 Jul  3 20:04 lbmetrictable_metric_binding.py
-rw-r--r-- 1 rytis rytis   9059 Jul  3 20:04 lbmetrictable.py
-rw-r--r-- 1 rytis rytis   3885 Jul  3 20:04 lbmonbindings_binding.py
-rw-r--r-- 1 rytis rytis   6741 Jul  3 20:04 lbmonbindings.py
-rw-r--r-- 1 rytis rytis   7171 Jul  3 20:04 lbmonbindings_service_binding.py
-rw-r--r-- 1 rytis rytis   6665 Jul  3 20:04 lbmonbindings_servicegroup_binding.py
-rw-r--r-- 1 rytis rytis   3055 Jul  3 20:04 lbmonitor_args.py
-rw-r--r-- 1 rytis rytis   3548 Jul  3 20:04 lbmonitor_binding.py
-rw-r--r-- 1 rytis rytis   8153 Jul  3 20:04 lbmonitor_metric_binding.py
-rw-r--r-- 1 rytis rytis 103635 Jul  3 20:04 lbmonitor.py
-rw-r--r-- 1 rytis rytis   7894 Jul  3 20:04 lbmonitor_service_binding.py


-rw-r--r-- 1 rytis rytis   7954 Jul  3 20:04 lbmonitor_servicegroup_binding.py
-rw-r--r-- 1 rytis rytis  16051 Jul  3 20:04 lbparameter.py
-rw-r--r-- 1 rytis rytis   1102 Jul  3 20:04 lbpersistentsessions_args.py
-rw-r--r-- 1 rytis rytis   8354 Jul  3 20:04 lbpersistentsessions.py
-rw-r--r-- 1 rytis rytis   8144 Jul  3 20:04 lbroute6.py
-rw-r--r-- 1 rytis rytis   8731 Jul  3 20:04 lbroute.py
-rw-r--r-- 1 rytis rytis   7512 Jul  3 20:04 lbsipparameters.py
-rw-r--r-- 1 rytis rytis  10514 Jul  3 20:04 lbvserver_appflowpolicy_binding.py
-rw-r--r-- 1 rytis rytis  10721 Jul  3 20:04 lbvserver_appfwpolicy_binding.py
-rw-r--r-- 1 rytis rytis  11369 Jul  3 20:04 lbvserver_appqoepolicy_binding.py
-rw-r--r-- 1 rytis rytis  14857 Jul  3 20:04 lbvserver_auditnslogpolicy_binding.py
-rw-r--r-- 1 rytis rytis  14877 Jul  3 20:04 lbvserver_auditsyslogpolicy_binding.py
-rw-r--r-- 1 rytis rytis  10881 Jul  3 20:04 lbvserver_authorizationpolicy_binding.py
-rw-r--r-- 1 rytis rytis   9367 Jul  3 20:04 lbvserver_binding.py
-rw-r--r-- 1 rytis rytis  10474 Jul  3 20:04 lbvserver_cachepolicy_binding.py
-rw-r--r-- 1 rytis rytis  11289 Jul  3 20:04 lbvserver_capolicy_binding.py
-rw-r--r-- 1 rytis rytis  10681 Jul  3 20:04 lbvserver_cmppolicy_binding.py
-rw-r--r-- 1 rytis rytis   7236 Jul  3 20:04 lbvserver_csvserver_binding.py
-rw-r--r-- 1 rytis rytis  11386 Jul  3 20:04 lbvserver_dnspolicy64_binding.py
-rw-r--r-- 1 rytis rytis   6143 Jul  3 20:04 lbvserver_dospolicy_binding.py
-rw-r--r-- 1 rytis rytis  11309 Jul  3 20:04 lbvserver_feopolicy_binding.py
-rw-r--r-- 1 rytis rytis  14777 Jul  3 20:04 lbvserver_filterpolicy_binding.py
-rw-r--r-- 1 rytis rytis  14644 Jul  3 20:04 lbvserver_pqpolicy_binding.py
-rw-r--r-- 1 rytis rytis 124769 Jul  3 20:04 lbvserver.py
-rw-r--r-- 1 rytis rytis  10591 Jul  3 20:04 lbvserver_responderpolicy_binding.py
-rw-r--r-- 1 rytis rytis  10514 Jul  3 20:04 lbvserver_rewritepolicy_binding.py
-rw-r--r-- 1 rytis rytis  14450 Jul  3 20:04 lbvserver_scpolicy_binding.py
-rw-r--r-- 1 rytis rytis  11101 Jul  3 20:04 lbvserver_service_binding.py
-rw-r--r-- 1 rytis rytis   8876 Jul  3 20:04 lbvserver_servicegroup_binding.py
-rw-r--r-- 1 rytis rytis   9325 Jul  3 20:04 lbvserver_servicegroupmember_binding.py
-rw-r--r-- 1 rytis rytis  11429 Jul  3 20:04 lbvserver_spilloverpolicy_binding.py
-rw-r--r-- 1 rytis rytis  14590 Jul  3 20:04 lbvserver_tmtrafficpolicy_binding.py
-rw-r--r-- 1 rytis rytis  10554 Jul  3 20:04 lbvserver_transformpolicy_binding.py
-rw-r--r-- 1 rytis rytis   3448 Jul  3 20:04 lbwlm_binding.py
-rw-r--r-- 1 rytis rytis   6260 Jul  3 20:04 lbwlm_lbvserver_binding.py
-rw-r--r-- 1 rytis rytis  10356 Jul  3 20:04 lbwlm.py

The module names typically are a close match to the NetScaler command-line interface names, so if you are familiar with NetScaler command-line configuration, you should be able to identify the correct module that contains the resource definitions. For example, lbvserver.py has a class definition that represents the LBvserver resource.


N■Note If you have only used the command line or the web interface, you may not be aware of some of the intermediate resources, such as binding resources. If you are familiar with basic NetScaler concepts, you know that you can bind services (a service resource represents any service that is running on a dedicated host) to an LBvserver (an LBvserver represents a virtual load balanced service). When you bind multiple services to a virtual load balanced server, you effectively instruct NetScaler to start forwarding all traffic that reaches the LBvserver to the services that are bound to it. The CLI command to bind the services is “bind lb vserver <vserver name> <service name>”. What the CLI does not expose is the intermediate object, which is called a “binding.” This binding object is a one-to-one mapping between an LBvserver and a service. Think of it as a many-to-many table relationship in a relational database: when you have a many-to-many relationship between two tables, you create a third table, which is used to decouple the other two.

Logging On

One of the first things we have to do is authenticate ourselves with the NetScaler. Under the hood, we are sending the authentication details, and the load balancer replies with an authentication token, which we will use with subsequent requests.
To establish a connection, we need to create an instance of the nitro_service class, initialize it with the correct credential details, and call the login() method:

>>> from nssrc.com.citrix.netscaler.nitro.service.nitro_service import nitro_service
>>> client = nitro_service('192.168.0.100', 'http')
>>> client.set_credential('nsroot', 'nsroot')
>>> client.timeout = 10
>>> client.isLogin()
False
>>> client.login()
>>> client.isLogin()
True
>>>

It is important that we set the timeout for the connection as illustrated in the example. If we do not, the default timeout will remain set to zero seconds, and we will get the following exception when we try to establish a connection:

>>> client.login()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rytis/.virtualenvs/nitro-test/local/lib/python2.7/site-packages/nitro_python-1.0-py2.7.egg/nssrc/com/citrix/netscaler/nitro/service/nitro_service.py", line 220, in login
    raise e
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.0.100', port=80): Max retries exceeded with url: /nitro/v1/config/login (Caused by : [Errno 115] Operation now in progress)


Once we have established the connection, we can use the client object to communicate with the NetScaler load balancer.

Gathering Statistical Data

As discovered earlier, all classes that deal with statistics gathering from the device can be found in the nssrc/com/citrix/netscaler/nitro/resource/stat/ subdirectories. In this section, we discover how to gather system-specific and resource (virtual server)-specific statistical data.
Classes that deal with system-specific data are located in nssrc/com/citrix/netscaler/nitro/resource/stat/system/. In this package, we can find the following modules:

•	systembw_stats.py
•	systemcpu_stats.py
•	systemmemory_stats.py
•	system_stats.py

To get the CPU usage details, we have to use the class defined in the systemcpu_stats.py module. First, we need to initialize the session object. It is not required to call the login() method explicitly, because the library will do that automatically for us:

>>> from nssrc.com.citrix.netscaler.nitro.service.nitro_service import nitro_service
>>> from nssrc.com.citrix.netscaler.nitro.resource.stat.system.systemcpu_stats import systemcpu_stats
>>> client = nitro_service('192.168.0.100', 'http')
>>> client.set_credential('nsroot', 'nsroot')
>>> client.timeout = 500

Then, we create an instance of the systemcpu_stats class. We have to pass the client object, so that the systemcpu_stats object knows how to connect to the load balancer:

>>> cpu_stats = systemcpu_stats.get(client)

In my instance, I have an appliance with six CPUs, so the response contains six elements:

>>> len(cpu_stats)
6

Finally, let's read the actual statistical data:

>>> for c in cpu_stats:
...     print c.percpuuse
...
0
2
1
0
2
0

As you can see, the device is not particularly busy.


Similarly, we can retrieve the data about memory usage:

>>> from nssrc.com.citrix.netscaler.nitro.service.nitro_service import nitro_service
>>> from nssrc.com.citrix.netscaler.nitro.resource.stat.system.systemmemory_stats import systemmemory_stats
>>> client = nitro_service('192.168.0.100', 'http')
>>> client.set_credential('nsroot', 'nsroot')
>>> client.timeout = 500
>>> mem_stats = systemmemory_stats.get(client)
>>> mem_stats[0].memtotallocmb
u'1964'

Other interesting properties that are available to read are shown in Table 2-6.

Table 2-6.  Netscaler device specific properties

Property name     Description
shmemallocpcnt    Shared memory in use, in percent.
shmemallocinmb    Shared memory in use, in megabytes.
shmemtotinmb      Total shared memory allowed to allocate, in megabytes.
memtotfree        Total free PE memory in the system.
memusagepcnt      Percentage of memory utilization on the NetScaler.
memtotuseinmb     Total NetScaler memory in use, in megabytes.
memtotallocpcnt   Currently allocated memory, in percent.
memtotallocmb     Currently allocated memory, in megabytes.
memtotinmb        Total memory available (grabbed) for use by the packet engine (PE), in megabytes.
memtotavail       Total system memory available for the PE to grab from the system.
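As a short illustration of how these properties can be combined, the following continues the interactive session from the memory example above and prints a one-line usage summary; the property names are taken from Table 2-6:

>>> mem = systemmemory_stats.get(client)[0]
>>> print 'Memory in use: %s%% (%s MB out of %s MB allocated)' % (
...     mem.memusagepcnt, mem.memtotuseinmb, mem.memtotallocmb)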

You can find detailed information about the available properties in the REST API documentation, which is available to download from the NetScaler management Web UI.
If we want to retrieve the information about the virtual servers that are running on our load balancer device, we need to use the nssrc.com.citrix.netscaler.nitro.resource.stat.lb.lbvserver_stats module. First, let's check the state of the virtual servers. In the following example, we retrieve the statistical data about all virtual servers and then see the name and the state of each server:

>>> from nssrc.com.citrix.netscaler.nitro.service.nitro_service import nitro_service
>>> client = nitro_service('192.168.0.100', 'http')
>>> client.set_credential('nsroot', 'nsroot')
>>> client.timeout = 500
>>> from nssrc.com.citrix.netscaler.nitro.resource.stat.lb.lbvserver_stats import lbvserver_stats
>>> lbvs_stats = lbvserver_stats.get(client)


>>> len(lbvs_stats)
4
>>> for lbvs in lbvs_stats:
...     print "%s: %s" % (lbvs.name, lbvs.state)
...
test_1: UP
test_2: UP
test_3: UP
test_4: DOWN

If we need to retrieve the details of a single virtual server, we specify the name of the server. In that case, the result is not a list, but just a single object:

>>> lbvs_stats = lbvserver_stats.get(client, name="test_4")
>>> len(lbvs_stats)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'lbvserver_stats' has no len()
>>> lbvs_stats.state
u'DOWN'

For a complete list of properties, refer to the NetScaler REST API documentation. The most useful properties are listed in Table 2-7.

Table 2-7.  Virtual server specific properties

Property name        Description
vsvrsurgecount       Number of requests waiting on this vserver.
establishedconn      Number of client connections in the ESTABLISHED state.
inactsvcs            Number of INACTIVE services bound to the vserver.
vslbhealth           Health of the vserver. This gives the percentage of UP services bound to this vserver.
primaryipaddress     IP address of the vserver.
primaryport          The port on which the service is running.
type                 Protocol associated with the vserver.
state                Current state of the server. Possible values are UP, DOWN, UNKNOWN, OFS (Out of Service), TROFS (Transition Out of Service), and TROFS_DOWN (Down When Going Out of Service).
actsvcs              Number of ACTIVE services bound to the vserver.
tothits              Total vserver hits.
hitsrate             Rate (/s) counter for tothits.
totalrequests        Total number of requests received on this service or virtual server. (This applies to HTTP/SSL services and servers.)
requestsrate         Rate (/s) counter for totalrequests.
totalresponses       Number of responses received on this service or virtual server. (This applies to HTTP/SSL services and servers.)
responsesrate        Rate (/s) counter for totalresponses.
totalrequestbytes    Total number of request bytes received on this service or virtual server.
requestbytesrate     Rate (/s) counter for totalrequestbytes.
totalresponsebytes   Number of response bytes received by this service or virtual server.
responsebytesrate    Rate (/s) counter for totalresponsebytes.
totalpktsrecvd       Total number of packets received by this service or virtual server.
pktsrecvdrate        Rate (/s) counter for totalpktsrecvd.
totalpktssent        Total number of packets sent.
pktssentrate         Rate (/s) counter for totalpktssent.
curclntconnections   Number of current client connections.
cursrvrconnections   Number of current connections to the actual servers behind the virtual server.
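Putting a few of these properties together, the following sketch reimplements the earlier SOAP-based health report on top of the NITRO module, listing only the virtual servers whose health is below 100 percent. It is my own example; the property names come from Table 2-7, and the get() call is the one shown above:

>>> for lbvs in lbvserver_stats.get(client):
...     if lbvs.state != 'UP' or int(lbvs.vslbhealth) < 100:
...         print '%s (%s:%s) - %s, health %s%%, %s req/s' % (
...             lbvs.name, lbvs.primaryipaddress, lbvs.primaryport,
...             lbvs.state, lbvs.vslbhealth, lbvs.requestsrate)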

I hope this information sets you on the right path and enables you to find your way around the vast amount of statistical data that is available on the NetScaler devices.

Performing Administration Tasks

Finding administration task methods is equally easy. All methods are defined in the modules available in the nssrc/com/citrix/netscaler/nitro/resource/config/ directory. The following example shows how to disable and enable any specific server:

>>> from nssrc.com.citrix.netscaler.nitro.service.nitro_service import nitro_service
>>> from nssrc.com.citrix.netscaler.nitro.resource.config.basic.server import server
>>> client = nitro_service('192.168.0.100', 'http')
>>> client.set_credential('nsroot', 'nsroot')
>>> srv_obj = server.get(client, name="test_srv_1")
>>> srv_obj.state
u'ENABLED'
>>> srv_obj.state = 'DISABLED'
>>> server.update(client, srv_obj)
>>>
>>> srv_obj = server.get(client, name="test_srv_1")
>>> srv_obj.state
u'DISABLED'


Summary

This chapter demonstrated how to use Python for accessing the SOAP API to monitor and manage Citrix Netscaler load balancers. It also covered how to organize your own project, how to structure your code, and how to handle errors and report the functional status of your module. The following points were made:

•	The SOAP API is a method to call procedures on a remote server, also called a web service.
•	The SOAP protocol defines a message structure for information exchange between service provider and consumer.
•	SOAP messages use the XML language to structure data.
•	The underlying or carrier protocol is HTTP.
•	WSDL is used to describe all services available on a web service and the data structures used in call/response messages.
•	The WSDL definition can be converted to Python helper modules with the wsdl2py tool.
•	It is important to define requirements before you start coding.
•	Errors and exceptions must be handled appropriately.
•	The logging module is used to log messages and group them by severity.
•	Starting from version 10.5, the nitro-python package can be used to access REST API on the NetScaler.


CHAPTER 3

Creating a Web Application for IP Address Accountancy

In this chapter, we are going to build a simple application that will keep track of all IP addresses allocated on the internal network. The chapter covers all phases of developing this application, from gathering and setting the requirements, through designing the application, to the main aspects of the implementation phase.

Designing the Application

Ideally, application design should not be based on the technology that is going to be used to implement it. Having said that, this kind of independence is rarely achievable and in most cases is not practical, as each technology implies its own implementation patterns and best practices. In this chapter, we will define requirements and application design before explaining what technology is going to be used. This way it will be easier for you to understand how to reuse the design phase even if in your own work you will be using different technologies.

Setting out the Requirements

The most important consideration in developing any application is an understanding of exactly what you want from it. Step away from the images of user interfaces you have seen somewhere else, or the functionality of some other (possibly similar) application that you may have used in the past. Instead, take a piece of paper and write down in short sentences what you want your application to do.
Our imaginary organization is a rather large enterprise with a reasonably complicated network infrastructure, so it is important to assign and use IP address space effectively. In the past, addresses were recorded in a simple spreadsheet, and different teams used different structures to represent the same information. In this organization, there is no single authority assigning IP address ranges, so effective and clear communication between teams is important. New systems are being introduced while old ones are being decommissioned. Group policy prevents servers from using dynamic IP allocation; only user machines can obtain address information from DHCP.
Based on this brief description, let's come up with the following list of requirements:

•	The system must be centralized, but accessible by many different users.
•	The application must be able to store both IP ranges and individual IP addresses.
•	The application must provide a means for creating a hierarchical organization of ranges and individual IP addresses.
•	Users must be able to add, remove, and modify entries.
•	Users must be able to search for information.
•	The system must be able to check whether the machines that use IP addresses are responsive.
•	For all IP addresses, the system should attempt to obtain name records.
•	Users must be required to enter a description for any IP reservation they make.
•	It should be easy to extend the system to use DHCP.

Now that we have defined all our requirements, we can go back to them at any time during the development phase and verify that our application does exactly what it is expected to do. We will not be implementing unnecessary functionality; and by comparing the actual implementation against the set of requirements, we will always know how much progress we have made and how much work is still left to do. Going forward, we can even delegate individual tasks to other people if there is a need to do so. If at some point we discover that we have left out some important functionality, we can always go back to our list and modify it accordingly, but that will be a conscious decision that will prevent us from implementing any new functionality “as we go along” with our development.

Making Design Decisions

Once we have the requirements established, we can proceed with some design decisions about how to implement them. Each design decision must attempt to solve some goal stated in the requirements list. Because this is not a massive project, there is no need to create a formal design document; the same informal list of statements should suffice here. So based on the requirements just stated, we can make the following decisions about the application development and structure:

•	The application is going to be web based.
•	It will run on a dedicated web server and will be accessible by anyone in the organization from his or her web browser.
•	The application will be written in Python and will use the Django framework.
•	Implementation is split into two phases: basic IP allocation and reservation functionality, and integration with DHCP. (We'll tackle the first phase in this chapter and move on to DHCP integration in Chapter 4.)

That is it; even as short as this list is, it ensures that we’re not going to deviate from the goals we stated initially, and if we really need to make some variation, that will be recorded. The list here mainly represents the nonfunctional aspects of design; we’ll get to more specific details in the following sections. Formally, this should constitute a detailed design document, but I am only going to describe two things: what data our application is going to operate on, and what the application will do with that data.

Defining the Database Schema

From the requirements just stated, we know that we need to record the following data:

•	The IP range and/or individual IP addresses
•	The parent range that the current range belongs to

•	A description for each record, which is not allowed to be empty
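To make these data requirements a bit more concrete, here is a rough sketch of how such a record could be expressed as a Django model. This is only an illustration of the idea, not the final model developed for the application; the field names are my own:

from django.db import models

class NetworkAddress(models.Model):
    # An individual IP address, or the start of a range in CIDR notation
    address = models.IPAddressField()
    # Number of bits in the network part; 32 means a single host address
    network_size = models.PositiveIntegerField(default=32)
    # A mandatory, human-readable explanation of what the address is used for
    description = models.CharField(max_length=400)
    # The range this entry belongs to; empty for top-level ranges
    parent = models.ForeignKey('self', blank=True, null=True)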


HOW IP ADDRESSES WORK

Before proceeding, let's examine how IP addressing works, so you will better understand some specific database layout and structure decisions we're going to make. The description provided here is somewhat simplified; if you want to learn more about IP networks and specifically about IP addressing, I recommend the Wikipedia entry on CIDR: http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing.

Briefly, each IP address has two parts: the network address part, which identifies the network a particular address belongs to, and the host address within that network. A full IP address in IPv4 is always 32 bits long. Before Classless Inter-Domain Routing (CIDR) was introduced, there were only three available network blocks or classes: class A (8 bits to define the network address, allowing over 16 million unique host addresses), class B (16 bits for the network address, and over 65,000 unique host addresses), and class C (24 bits for the network address with 256 unique host addresses). This was very inefficient, as it did not allow for fine-grained address and range allocation, so the CIDR scheme was introduced, which allows us to use a network address of any length. In CIDR notation, each IP address is followed by the number that defines how many bits the network part comprises. So the address 192.168.1.1/24 tells us that this is an IP from a class C network whose first 24 bits are the network address.
The following image illustrates various configurations of an IP address, which I'll explain a bit later. The example uses a network address range that is much smaller than a default class C, so you can see how that works.

[Figure: several panels showing /27 addresses from the 192.168.1.32/27 range (such as 192.168.1.32, 192.168.1.52, and 192.168.1.63) written out in binary, with the 27-bit network part and the 5-bit host part marked.]
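A few lines of Python make the splitting shown in the figure easy to verify. The snippet below is a standalone illustration (not part of the application we are about to build); it converts the address 192.168.1.52 to an integer and applies a /27 network mask:

import socket, struct

def to_int(dotted):
    # Convert a dotted-quad address into a 32-bit integer
    return struct.unpack('!I', socket.inet_aton(dotted))[0]

def to_dotted(number):
    # Convert a 32-bit integer back into dotted-quad notation
    return socket.inet_ntoa(struct.pack('!I', number))

prefix = 27
mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
address = to_int('192.168.1.52')
print to_dotted(address & mask)      # network part: 192.168.1.32
print address & ~mask & 0xFFFFFFFF   # host part within the /27 network: 20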

>>> import sqlite3
>>> sqlite3.version
'2.6.0'
>>>

If you are using one of the non-mainstream Linux distributions, or if the packages are not installed during the initial installation, refer to the documentation of your Linux distribution for information on installing the latest Python 2.7.x release and SQLite packages.

N■Note  As of this writing, Django 1.6 requires Python 2.6.5 and above. The next Django release, 1.7, will drop Python 2.6 altogether, and the minimum supported Python version will be 2.7. Django versions 1.5 and above officially support Python 3, so you can use it as well.

Most mainstream Linux distributions have a reasonably up-to-date Django version available as a package. For example, this is how you would install Django on a Fedora system:

$ sudo yum install python-django

You can also use the Python package manager (pip) to install the required packages:

$ sudo pip install django


The advantage of using pip is that the packages are typically more up to date. At the time of this writing, the package in Fedora's repository is 1.6.4, and in PyPI (the Python Package Index, the place where pip looks for packages) it is 1.6.5, which is the latest available. The main disadvantage is that you will end up having an application deployed on your system that is not known to the system's package manager.
You can test the Django installation by importing its module from the Python command-line interface:

# python
Python 2.7.5 (default, Feb 19 2014, 13:47:28)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> django.get_version()
'1.6.4'
>>>

The Structure of a Django Application

Django treats any website as a project. In Django terms, a project is a set of web applications and project- (or site-) specific configurations. You can reuse the same applications in different sites just by deploying them in new projects, and they will automatically use new settings, such as database credentials. A project may contain any number of applications. The term project may sound a bit confusing; I find site or website more appropriate.
Creating a new project is simple. Assuming you have installed Django correctly, you just need to run the command django-admin.py in the directory where you want the new project directory to be created. Django's administration tool will create a simple project skeleton with basic configuration files. We will use /var/app/virtual/www_example_com/ as the base directory for the project that will hold all Django applications:

$ mkdir -p /var/app/virtual/
$ cd /var/app/virtual
$ django-admin.py startproject www_example_com
$ ls -lR www_example_com/
total 8
-rw-r--r--  1 rytis  staff  258 10 Jun 21:08 manage.py
drwxr-xr-x  6 rytis  staff  204 10 Jun 21:08 www_example_com

www_example_com//www_example_com:
total 24
-rw-r--r--  1 rytis  staff     0 10 Jun 21:08 __init__.py
-rw-r--r--  1 rytis  staff  1999 10 Jun 21:08 settings.py
-rw-r--r--  1 rytis  staff   306 10 Jun 21:08 urls.py
-rw-r--r--  1 rytis  staff   405 10 Jun 21:08 wsgi.py

In the project directory you'll find the following files:

manage.py: An automatically generated script that you will use to manage your project. Creating new database tables, validating models, and dumping SQL scripts are all done using this tool. It also allows you to invoke a command prompt interface for accessing the data models.

www_example_com/settings.py: A configuration file that holds database information and application-specific settings.


www_example_com/urls.py: A configuration file that acts as a URL dispatcher. Here you define which views should respond to which URLs.

www_example_com/wsgi.py: A WSGI configuration file, used when the Django application runs under a WSGI-compliant web server such as Apache with mod_wsgi enabled.
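The generated wsgi.py is only a few lines long. For a Django 1.6 project named www_example_com it looks roughly like the following sketch, and you normally do not need to edit it:

import os

# Point Django at the project settings module before the WSGI
# application object is created.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "www_example_com.settings")

from django.core.wsgi import get_wsgi_application

# A WSGI server (for example Apache with mod_wsgi) uses this callable
# to hand requests over to Django.
application = get_wsgi_application()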

N■Note  The configuration file location is specific to your project. In this chapter our project is created in /var/app/virtual/www_example_com/, so assume this location when you see references to the manage.py, settings.py, and urls.py files.

Once you have created a new project, you need to specify the database engine that Django should use. As mentioned earlier, in our example we are going to use SQLite. To enable this, we need to make two changes in the settings.py configuration file (referred to as the settings file later in the chapter): specify the database engine and the absolute filename of the database file:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

When the project and database configuration are finished, we can create our application by issuing the following command in our project directory:

$ python manage.py startapp ip_addresses
$ ls -l ip_addresses/
total 12
-rw-r--r-- 1 root root   0 2014-05-24 14:55 __init__.py
-rw-r--r-- 1 root root  57 2014-05-24 14:55 models.py
-rw-r--r-- 1 root root 514 2014-05-24 14:55 tests.py
-rw-r--r-- 1 root root  26 2014-05-24 14:55 views.py

Just like the Django administration tool, the project management script creates a skeleton for our new application. Now that we have our project (or website) set up and one application configured, we need to define the data model, write the view methods, create the URL structure, and finally design the templates. I will describe all of that in more detail in the following sections, but first I need to show how to make our new site available for others to see.
The application will not be available for immediate use; we need to register it in the settings file by appending it to the INSTALLED_APPS list:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'ip_addresses',
)


Using Django with Apache Web Server

Django comes with its own lightweight web server, which is written in Python. This is a great tool for quick testing and during development, but I would strongly advise against using it in a production environment. I have never encountered any problems while using it, but as the developers behind Django say, they are in the web frameworks business, not in the business of developing robust web servers.
One of the most obvious choices for a web server is the Apache web server. It is widespread and used on the vast majority of websites on the Internet. Apache installation packages are included by default in many Linux distributions. It is easy to set up Apache in such a way that it serves both static content (CSS stylesheets and images) and dynamically generated pages (as in a Django application). Our example assumes the following information:

- Name of the website: www.example.com
- IP address of the server: 192.168.0.1
- Directory where the Django code is stored: /var/app/virtual/www_example_com/
- Directory where static contents are stored: /var/www/virtual/www.example.com/

N■Note  You may wonder why the code and content directories are separate. The reason is that separating them is an additional security measure. As you will see later in the chapter, we will instruct the web server to call the mod_python module for all requests made to the virtual server. The exception will be all URIs starting with /static/, which will be our static content. Now, if for some reason we make a mistake in the configuration file, so that mod_python is not called and the code directory is part of the DocumentRoot directive, all our Python files will become downloadable. So, always keep your code files separate and outside DocumentRoot!

Listing 3-1 shows the VirtualHost definition in the Apache web server configuration file. Depending on your Linux distribution, this section may be included directly in httpd.conf or as a separate configuration file alongside other VirtualHost definitions.

Listing 3-1.  The VirtualHost Definition for the Django Web Application

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/virtual/www.example.com
    ErrorLog /var/log/apache2/www.example.com-error.log
    CustomLog /var/log/apache2/www.example.com-access.log combined
    <Location "/">
        SetHandler mod_python
        PythonHandler django.core.handlers.modpython
        PythonPath sys.path+['/var/app/virtual/']
        SetEnv DJANGO_SETTINGS_MODULE www_example_com.settings
        SetEnv PYTHON_EGG_CACHE /tmp
    </Location>
    <Location "/static">
        SetHandler None
    </Location>
</VirtualHost>


The first section of the configuration deals with basic settings, such as the server name, the base directory for all static contents, and the log file locations. This is followed by the mod_python configuration, where the first line tells Apache to pass execution of each web server phase to the mod_python module:

SetHandler mod_python

This directive is followed by the module configuration settings.

WHAT ARE APACHE HANDLERS?

Every request that is received by an Apache web server is processed in phases. For example, a request for a simple index.html file may involve three phases: translate the URI to the absolute location of the file; read the file and send it in an HTTP response; and log the event. The phases involved in each request depend on the server configuration. Each phase is processed by a handler. The Apache server has only basic handlers; more complicated functions are implemented by handlers that are part of loadable modules, one of them being mod_python.

The Python module has handlers for all possible Apache phases, but by default no handlers are called. Each phase needs to be associated explicitly with the appropriate handler in the configuration file.

Django requires only one handler, the generic PythonHandler, which is invoked during the phase when the actual content is generated and served to the requester. The Django framework comes with its own handler and does not require the default mod_python.publisher handler. The following statement tells Apache to call Django's handler:

PythonHandler django.core.handlers.modpython

As you already know, every website in Django is actually a Python module with its own configuration file. The Django handler requires that information so it can load the configuration and find the appropriate functions. This information is provided in the next two lines. The first directive adds our base directory to the default Python path, and the second sets an environment variable that tells the framework which settings module to load:

PythonPath sys.path+['/var/app/virtual/']
SetEnv DJANGO_SETTINGS_MODULE www_example_com.settings

We also need to identify where the temporary Python files will be stored. We make sure this directory is writable by the user that runs the Apache web server:

SetEnv PYTHON_EGG_CACHE /tmp

And finally, we define the exception so that the static contents (everything that starts with /static/) will not be handed to mod_python for processing. Instead, the default Apache handler will be called; it will simply serve any requested file:

SetHandler None

If you have followed these instructions to configure Django, created your first application, and instructed Apache to serve it accordingly, you should now be able to fire up your web browser and navigate to the Django web application. At this moment the data models are not created and the URL dispatcher is not configured, so Django will only serve the generic "It worked!" page, shown in Figure 3-1.



Figure 3-1.  The standard Django application greeting page

N■Tip  If you are seeing a "Server Error" message instead of the standard page, check the Apache error log file; it contains the Python exceptions and Apache error messages that can help you identify the cause of the error.

Implementing Basic Functionality

Once the preparation work that included the Django installation and setting up the Apache web server is finished, we can proceed with the development of the web application. This process can be split into the following parts:

- Create models
- Define the URL schema
- Create views

In my experience, this process is very iterative; I continue modifying my models, adding new URLs, and creating new views as I go along with the development. This approach allows me to get something working very quickly and test some functionality even if the whole application is not finished yet. Do not assume that this approach is chaotic. Quite the contrary; I only work on the elements that I identified and wrote down in the design phase. Thus, this process merely breaks down a huge piece of work into smaller and more manageable chunks that can be developed and tested separately and in stages.

Defining the Database Model

Before proceeding, look back at Table 3-1 and review the fields we are going to use in the data model. Because Django maps objects to a relational database and does so automatically, we need to create a class definition for every concept that we are using in the application; each class will be mapped to a table in the database.


Table 3-2.  Commonly Used Django Field Types

BooleanField: Accepts only True or False values, except when used with a MySQL database, in which case the field stores the values 1 or 0 accordingly. Keep that in mind when testing for the field value.

CharField: Use this field to store strings. It requires a max_length argument to set the maximum length of the string it can store. Do not use this field to store large amounts of text; use TextField instead.

DateField: Stores the date as an instance of the Python datetime.date class. This field class accepts two optional parameters: auto_now, which if set to True sets the field value to the current date every time the object is saved; and auto_now_add, which if set to True sets the field value to the current date only when the object is created for the first time. Both parameters force Django to use the current date, and this cannot be overridden.

DateTimeField: Stores the date and time as a Python datetime.datetime instance. Uses the same optional parameters as DateField.

DecimalField: Used to store fixed-precision decimal numbers. Requires two arguments: max_digits, which sets the maximum number of digits in the number, and decimal_places, which sets the number of decimal places.

EmailField: Similar to CharField but also performs a check for a valid email address.

FileField: Used to store uploaded files. Note that the files are stored not in the database but locally on a file system. This field requires an upload_to argument, which points to a directory relative to MEDIA_ROOT. You can use strftime variables to construct pathnames and filenames depending on the current date and time. MEDIA_ROOT must be set in the settings file for the current project.

FloatField: Stores floating-point numbers.

ImageField: Very similar to FileField, but additionally checks that the file is a valid image. Also has two optional arguments: height_field and width_field, which store the names of model class variables and will be automatically populated from the uploaded image dimensions. Using this field type requires the Python Imaging Library (PIL).

IntegerField: Stores integer values.

PositiveIntegerField: Stores integer values but allows only positive integers.

NullBooleanField: Stores True and False just like BooleanField, but also accepts None. Useful where a combination of Yes/No/Undefined choices is required.

SlugField: Stores text like CharField, but allows only alphanumeric characters, underscores, and hyphens. Useful for storing URLs (without the domain part!). The max_length argument defaults to 50 but can be overridden.

TextField: Used to store large blocks of text.

TimeField: Stores the time as a Python datetime.time instance. Accepts the same optional arguments as DateField.

URLField: Used to store URLs, including the domain name. Has an optional parameter verify_exists, which checks that the URL is valid, actually loads, and does not return 404 or any other error.

XMLField: A TextField that additionally checks whether the text is valid XML and corresponds to an XML schema as defined by RELAX NG (www.relaxng.org). Requires the argument schema_path, which must point to a valid schema definition file.


We only have one table, so let's define the class for it, as shown in Listing 3-2. Add this code to your models.py file just below the default contents.

Listing 3-2.  The Data Class Defining the Application's Network Address Model

class NetworkAddress(models.Model):
    address = models.IPAddressField()
    network_size = models.PositiveIntegerField()
    description = models.CharField(max_length=400)
    parent = models.ForeignKey('self')

The code is really self-explanatory and straightforward. We start by defining a new class, NetworkAddress, which inherits from Django's models.Model class, defined in the django.db module. The class thereby becomes a custom model, which Django will use to create the database table. This model class will also be used to create the database API dynamically; I will show later how this API can be used. Within the class we define the fields by initializing class variables with the appropriate field objects from the models module. Django provides many different types of fields, and Table 3-2 lists the most commonly used types.
To create the database table, we simply use the manage.py utility with the syncdb option. When we run it for the first time, it also creates the tables for the other applications listed in the settings file (authentication, Django content types, and session and site management). The built-in authentication application requires an administrator account, so it will ask a few more questions:

$ python manage.py syncdb
Creating tables ...
Creating table django_admin_log
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table ip_addresses_networkaddress

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'rytis'):
Email address: [email protected]
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
$


This command-line dialog has successfully created all the necessary tables in the database. To see exactly how our table has been structured in the database, we use the following command:

$ python manage.py sql ip_addresses
BEGIN;
CREATE TABLE "ip_addresses_networkaddress" (
    "id" integer NOT NULL PRIMARY KEY,
    "address" char(15) NOT NULL,
    "network_size" integer unsigned NOT NULL,
    "description" varchar(400) NOT NULL,
    "parent_id" integer REFERENCES "ip_addresses_networkaddress" ("id")
)
;
COMMIT;
$

As you can see, Django uses the variable names as the names of the fields in the table, and the table name is constructed from the application and model class names. This is handy because it provides some degree of namespacing, so you don't need to worry about your class name clashing with a class name from another application.
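To get a feel for the database API that Django builds from this model, you can experiment in the interactive interpreter started with python manage.py shell. The following is a minimal sketch; the sample addresses and descriptions are made up, and it assumes the parent field accepts empty values (as the generated SQL above suggests):

from ip_addresses.models import NetworkAddress

# Create a top-level network record
net = NetworkAddress(address='192.168.0.0', network_size=24,
                     description='Office LAN')
net.save()

# Create a host entry that belongs to the network above
host = NetworkAddress(address='192.168.0.10', network_size=32,
                      description='File server', parent=net)
host.save()

# Query host records back using the automatically generated manager
for addr in NetworkAddress.objects.filter(network_size=32):
    print addr.address, addr.description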

URL Configuration

You will find yourself changing the URL configuration quite often during Django development, as you add new views and functions. In order not to leave the process uncontrolled, you need to set out some basic rules for how you define new URLs. Although Django gives you full control over the process, be nice to others (and especially to yourself) by choosing a sensible URL structure and naming convention.
There are no fixed rules or guidelines for how to create URLs. And as a system administrator, you will probably not be developing web systems available to large audiences, so you can be more relaxed in the way you organize them. However, I would suggest some guidelines that I find quite useful to follow:

- Always start with the name of the application. In the IP address example, all URLs (including the domain name) will be http://www.example.com/ip_addresses/[...]. If you ever want to use another application in your website, you will not have to worry about the URL names overlapping. For example, a view named view is quite common; if we had not put the application name in front and we had two applications A and B, we would have an issue if they both wanted to use the URL /view/.

- Put the model name after the application name. If you need a more specific subset of objects of the same type, add the selection criteria after the model name. When possible, avoid using object IDs! So, continuing with our example, we would have /ip_addresses/networkaddress/, which lists all top-level networks. If we navigated to /ip_addresses/networkaddress/192.168.0.0/, it would return either a list of addresses in that particular network, or the details of a specific IP address if that was a host address.

- If you need to operate on any of the objects, add the operation verb after the specific object name. So, in our example, if we wanted to have a link to the delete function for a network address, we would use /ip_addresses/networkaddress/192.168.0.1/delete.

These guidelines can be summarized by the following example URL:

http://www.example.com/<application>/<model>/<object>/<action>/

The URL mapping is defined in the urls.py module, which has the default settings shown in Listing 3-3.


Listing 3-3.  Default Contents of the Site-wide urls.py File

from django.conf.urls import patterns, include, url

from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'www_example_com.views.home', name='home'),
    # url(r'^blog/', include('blog.urls')),

    url(r'^admin/', include(admin.site.urls)),
)

The structure of this file is very simple and straightforward. The most important part is the urlpatterns variable, which is a list of URL entries. Each entry has three parts:

url(<regular expression>, <callback function>, <optional name>)

Here's what happens when the user requests a page from a Django web application: the request is sent to the Apache web server, which in turn invokes its Django handler. The Django framework then goes through all the entries in urlpatterns and attempts to match each regular expression against the URL that has been requested. When a match is found, Django calls the callback function that is coupled with that regular expression. It passes an HttpRequest object (I will discuss this in the views section) and, optionally, a list of parameters captured from the URL.
I strongly recommend that you do not define any application-specific URL rules in the main urls.py file; use the configuration local to the application you are developing. This way you decouple the application URLs from the website, which allows you to reuse the same application in different projects. Let me explain how this works.
Decoupling is fairly simple; all you need to do is define your application-specific URLs in the application module and reference this file for all requests that start with the name of your application. So, in our example, we would have the following entries in the main project urls.py:

urlpatterns = patterns('',
    [...]
    url(r'^ip_addresses/', include('ip_addresses.urls')),
    [...]
)

whereas the application-specific configuration file, ip_addresses/urls.py, contains:

urlpatterns = patterns('',
    [...]
)

As you can see, the main urls.py will capture all URLs that begin with ip_addresses/, and the remainder of the URL is sent to ip_addresses/urls.py for further processing.
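Following the naming guidelines above, the application-specific ip_addresses/urls.py will gradually fill up with patterns as views are added. The following is only a sketch of what it might contain at this stage; the view names display and delete are placeholders for views introduced later, not code generated for you:

from django.conf.urls import patterns, url

urlpatterns = patterns('ip_addresses.views',
    # List the top-level networks
    url(r'^networkaddress/$', 'display'),
    # Show the contents of one network, for example 192.168.0.0/24
    url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2})/$', 'display'),
    # Delete a specific record
    url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2})/delete/$', 'delete'),
)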


Using the Management Interface

We could now go ahead and create some views and forms to display the records, and to add and remove them, but before we do that, here's how to enable the Django administration interface. This is a really handy tool that provides immediate access to your data, with full and rich functionality that allows you to add, remove, modify, search, and filter the records stored in the database. It is also very useful during the development phase, letting you add new records and create display views before you create the forms for adding new records.

Enabling the Management Interface

N■Note  In earlier versions of Django, the management interface was disabled by default. In more recent versions it is enabled automatically, so you do not have to do anything. It is still a good idea to go through the instructions here and familiarize yourself with the configuration file structure.

There is very little you need to do to enable the administration interface: add it to the applications list in the site configuration, enable the URL rules, and configure Apache to serve the static content for the interface (mostly CSS and JavaScript files). You modify the INSTALLED_APPS list in the settings.py module so that it contains the administration package:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'ip_addresses',
)

Once you've done that, you need to rerun the syncdb command so that the new tables for the administration application are created in the database:

$ python manage.py syncdb
Creating table django_admin_log
Installing index for admin.LogEntry model
$

You then uncomment all the lines in the urls.py module that are related to the administration plug-in, and make sure your urls.py looks like Listing 3-4.

Listing 3-4.  Enabling the Administration Interface in the urls.py Module

from django.conf.urls import patterns, include, url

from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'www_example_com.views.home', name='home'),
    # url(r'^blog/', include('blog.urls')),


    url(r'^admin/', include(admin.site.urls)),

    # ip_addresses application
    url(r'^ip_addresses/', include('ip_addresses.urls')),
)

You create a link in the DocumentRoot directory, so that the contents of the Django admin media directory (/usr/share/django/django/contrib/admin/media in this example) are served by Apache from the URL www.example.com/static/admin:

$ ln -s /usr/share/django/django/contrib/admin/media \
/var/www/virtual/www.example.com/static/admin

Once you have done all this preparation work, you should be able to navigate to www.example.com/admin and see the administration interface login page, shown in Figure 3-2.
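Keep in mind that the administration interface only displays models that have been registered with it. The standard way to do this is to add a small admin.py module to the application; the following sketch shows the minimal registration call (the book's downloadable source may structure this differently):

from django.contrib import admin
from ip_addresses.models import NetworkAddress

# Make NetworkAddress records browsable and editable in the admin interface
admin.site.register(NetworkAddress)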


Figure 3-5.  Form widget on HTML page

N■Note  Newer versions of Django (starting from 1.5) enforce the Cross Site Request Forgery (CSRF) check by default. In a nutshell, CSRF is when a malicious website attempts to perform some action on your site using data entered on that website. This is bad because it means someone can pretend to be a legitimate website and collect sensitive user data. To make sure that this does not happen, Django generates a unique token every time it builds a form, and this token is sent back with the form data. Django then checks whether the supplied token matches the locally stored one. If there is a match, the request is genuine; otherwise, someone else is attempting to send the data and such a request needs to be ignored.

And finally, we make sure to add two new rules to the urls.py file: one for adding addresses to a subnet range, and one for adding top-level addresses:

url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2})/add/$', 'add'),
url(r'^networkaddress/add/$', 'add'),
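On the template side this means the form must emit the CSRF token, which Django's template language provides through the csrf_token tag. The following is a minimal sketch of the relevant part of the add template; the surrounding markup is illustrative and not copied from the book's template:

<form action="" method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Add">
</form>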


Modifying Existing Records

The form and view for object modification are very similar to the add form and view. The only difference is that there are even fewer fields that users are allowed to edit. Realistically, if the user decides to change an IP, he or she needs to delete the record and recreate it within another network. So we will only allow users to change the description of a record. Therefore, the only field in our form object is the description, as shown in Listing 3-14.

Listing 3-14.  The Modify Form Class

class NetworkAddressModifyForm(ModelForm):
    class Meta:
        model = NetworkAddress
        fields = ('description',)

As you can see, instead of excluding fields, we use the fields list, which tells Django which fields to include; all other fields are ignored.
The view method is very similar to the one that is used to add new records. In fact, everything is the same with one exception: upon the first view the form is prepopulated with data from the database, because users are changing existing data instead of creating new records. Saving the changes is the same, because Django works out that the record is already present and updates it instead of adding a new one. As you can see from Listing 3-15, even the template is reused without any changes.

Listing 3-15.  The Modify View Method

def modify(request, address=None):
    if request.method == 'POST':
        # submitting changes
        ip, net_size = address.split('/')
        address_obj = NetworkAddress.objects.get(address=ip, network_size=int(net_size))
        form = NetworkAddressModifyForm(request.POST, instance=address_obj)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect("..")
    else:
        # first time display
        ip, net_size = address.split('/')
        address_obj = NetworkAddress.objects.get(address=ip, network_size=int(net_size))
        form = NetworkAddressModifyForm(initial={
            'description': address_obj.description,
        })
    return render_to_response('add.html',
                              {'form': form,},
                              context_instance=RequestContext(request))

We also add two new rules to the URL dispatcher configuration file, urls.py:

url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2})/modify/$', 'modify'),
url(r'^networkaddress/modify/$', 'modify'),


Summary

In this chapter we walked through how to design an application, going from requirement gathering and specification through design to the actual implementation. We also covered how to use the Django framework for rapid web application development.

- Always start with the requirements specification. It will act as a reference point and simplify testing. It also helps manage user expectations.
- Design the data model first and make sure the design is in line with the requirements.
- Decouple the application from the project (or website), so it can be reused multiple times.


CHAPTER 4

Integrating the IP Address Application with DHCP

In the previous chapter, we implemented a simple IP accounting application that allows users to keep track of their IP address estate. I described the full lifecycle of the application, from requirements gathering through the design phase, and finally to the implementation. The emphasis was on the importance of the requirements and design phases, because these allow a developer to validate the implementation.
You may have noticed that, although we implemented most of the initial requirements, we did not get to all of them! I deliberately left out a few, such as the search function, DNS resolution, and the active check. I did that primarily to demonstrate how easy it is to validate your implementation and show what's missing in it, but also simply to keep the chapter to a manageable size and not overwhelm you with information. So, in this chapter we are going to implement the missing components and extend the original design with new functionality by adding support for the DHCP service.

Extending the Design and Requirements

I mentioned "support for DHCP" as a requirement in the previous chapter, but what do we really want from it? Let's take a look at how DHCP is used in a typical organization. I will assume the ISC DHCP service, which is widely available with most Linux distributions. When assigning addresses on a subnet, we have the following options:

- Assign the IP addresses statically, in which case we configure each device with its own IP address.
- Assign the IP addresses dynamically, depending on a set of rules, using the DHCP service.

N■Tip  Before proceeding with this chapter, you may want to install the ISC DHCP server package. You can do that by using the package manager available with your Linux distribution (on RedHat Linux it can be done with the command yum install dhcp, or on Debian-based systems it would be apt-get install isc-dhcp-server). Alternatively, you can download it from the official ISC DHCP website at http://www.isc.org/software/dhcp.


Let's quickly recap what DHCP can do and how it is configured. ISC DHCP allows you to define very complicated sets of rules. The simplest set would contain no rules at all, in which case any request for an IP would be granted and a unique address from the available pool would be assigned, assuming there are free IPs in the address pool.
One commonly used rule is to assign IP addresses depending on a hardware MAC address. This method allows you always to assign the same IP address to the same machine, but does not require it to be configured locally on the server. We can use the DHCP group directive to configure such a host:

group {
    ...
    host host001 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.0.1;
    }
    ...
}

A more advanced use of client grouping is DHCP client class separation, whereby clients are grouped into classes that satisfy some criteria. The ISC DHCP server provides many options for such separation. For example, you can use various DHCP request fields (or even parts of them) to group the nodes, such as using part of the DHCP hostname to identify what is sending the request. You can also see what DHCP options are available by reading the UNIX manual page for dhcp-options. The following example uses the DHCP option vendor-class-identifier to group all Sun Ultra 5 machines into one class:

class "sun-ultra-5" {
    match if option vendor-class-identifier "SUNW.Ultra-5_10";
}

For a second example, this code matches the beginning of a DHCP hostname and puts the client into a separate class if the hostname starts with "server":

class "server" {
    match if substring (option host-name, 0, 6) = "server";
}

For the sake of simplicity, let's assume that all subnets are on the same physical network as the DHCP server, which means we will be using the shared-network directive when defining new subnets.
As you can see, the simple process of investigation is gradually evolving into making certain design decisions. This blurring of tasks should be avoided whenever possible, and I've demonstrated it here just to show how easy it is to get carried away and amend your design to accommodate the limitations (or features) of any particular product. So first, let's ignore everything we know about the particular DHCP server product and list all the things we want it to do.
Our imaginary organization has multiple networks that are subdivided into smaller subnets, which in turn can contain even smaller subnets. Usually, if a subnet is subdivided into smaller networks, it rarely contains IP addresses of physical (or virtual) hosts. The only IPs that will be present in such a network are the IP addresses of the networking devices, such as routers, switches, and load balancers, none of which gets its IP from DHCP. Therefore, we will only create DHCP-managed subnets that are right at the bottom of the subnet tree and are not subdivided into smaller networks.


Within the DHCP-managed network, we want to have the following:

- Statically assigned addresses that are completely out of the DHCP server's control. In other words, the DHCP server should have no knowledge of that range and should not be configured to offer any address from it. Each IP address is configured manually on the device that uses it.
- Static addresses assigned by the DHCP server. For example, we want IP addresses to be offered depending on the requestor's MAC address.
- IP addresses assigned depending on properties of the client, such as hardware vendor, DHCP parameter values, and so on. Since we know how many of these IPs we will need, we have to be able to assign a predefined range of addresses. We also don't want to be limited to just a set of DHCP options; we should have full control over all available options.
- IP addresses assigned to all other clients. As in the previous requirement, we need to be able to specify a range of IPs to use here.

As you can see, the listed requirements are very similar to the ones set out earlier, but they do not contain any references to a particular implementation. This approach has two major advantages: you are free to choose any product you think is the best fit for the purpose, and you can outsource the implementation to other teams. As long as the result satisfies the requirements, we don't really care about the technical implementation details. Also, having this list at hand helps you quickly identify and select the appropriate product.
In addition to the network management and IP allocation requirements, we have some operational needs:

- We need the configuration to be generated but not immediately applied, so it can be reviewed and the changes applied manually.
- We do not require manual changes made to the configuration file to be propagated back to the application database. For example, if we manually add a few hosts to the DHCP configuration, we do not need the application to update the database entries accordingly.
- At this stage we do not want the application to control the DHCP service; this will be done manually using standard OS commands.

Now that we have identified what is required, we can start making basic design decisions:

- We will use ISC DHCP because it allows us to implement all the listed requirements.
- We will use the same web application framework and language, because this project is an extension of the earlier project.
- The configuration file will be generated by the same web application (that is, there are no external tools that read from the database and generate a configuration file).

Just as in the previous example, we now need to do two things: define the extended data model and create an application workflow.

Extending the Database Schema

This time the data model is a lot more complicated than it was when we just had to collect information about the networks and IP addresses. Now we need to store the DHCP server's view of the network topology, consisting of all classification rules and address ranges within each DHCP subnet. Therefore, we are going to break this down into several iterations of defining the DB model classes, writing the view functions, and testing. This gradual approach is easier to tackle, and we can catch errors more easily.


At this time we have only identified that we are going to have the following data model classes:

- The DHCP network model, which points to its "sponsor" network class. This can only be created for a network that does not have any subnetworks.
- The address pool model, which defines address ranges within a DHCP network and must have an associated rule.
- The rules model, which defines rules for classifying DHCP requests. Each rule can be assigned to one or more address pools.
- The "static" DHCP address rule model, which allows assigning IPs depending on the requestor's hardware MAC address.

Making Additions to the Workflow

There are some additions to the workflow as well. First, we need to add a link to create (or delete) a DHCP network for every network that has no subnets. We also need to allow users to add and delete information about the DHCP network pools, rules, and static IP addresses. Each of these options will be available within the DHCP network listing.

Adding DHCP Network Data

In the first iteration we are going to add support for the DHCP network definitions. We will use an approach similar to what we would do with a larger project: define the data models, define the workflows, and then go to the implementation phase.

The Data Models

Let's start by adding a new data class that is going to store information about the DHCP network. This class is going to point to its "sponsor" physical network class and contain several DHCP options that are required by the clients, such as the router address, DNS server, and domain name. Listing 4-1 shows what we're going to add to the models.py file.

Listing 4-1.  The Data Model Class for the DHCP Subnet

class DHCPNetwork(models.Model):
    physical_net = models.OneToOneField(NetworkAddress)
    router = models.IPAddressField()
    dns_server = models.ForeignKey(DNSServer)
    domain_name = models.ForeignKey(DomainName)

    def __unicode__(self):
        return "DHCP subnet for %s" % self.physical_net

In this example we also refer to two new entities: DNSServer and DomainName. Classes for them are also defined in models.py, and they only contain information about the IP of the DNS server(s) and the domain names, with brief comments. The reason for separating them from the DHCPNetwork class is that if we ever want to change the IP address of our DNS server, we won't need to go through each DHCP network entry and change it. You can find the definitions of the other classes in the source code available on the Apress website.
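The exact definitions of DNSServer and DomainName are in the downloadable source code; as a rough sketch of the shape described above (the field names here are assumptions, not the book's exact code), they could look like this:

class DNSServer(models.Model):
    address = models.IPAddressField()
    comment = models.CharField(max_length=400, blank=True)

    def __unicode__(self):
        return "%s (%s)" % (self.address, self.comment)


class DomainName(models.Model):
    name = models.CharField(max_length=255)
    comment = models.CharField(max_length=400, blank=True)

    def __unicode__(self):
        return self.name

Note that because DHCPNetwork refers to these classes directly, they need to be defined above it in models.py (or referenced by name as strings in the ForeignKey calls).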


Additional Workflows

What extra workflows does the generic DHCP network support require? Obviously, we want to add or remove a DHCP network to or from a subnet, assuming the subnet can have a corresponding DHCP network. When a DHCP network is defined, we also want to be able to modify its settings. As in the previous chapter, each workflow action is going to have its own view function, and Add and Modify will have their own data-entry forms. As you already know, views don't work unless they are defined in a URL configuration file, so that the Django framework knows what view function to call when it receives a request from a user.

The Add Function

First, we need to know whether we can provide the "Add DHCP network" functionality for a subnet. The easiest and most logical way to do this is to check, in the network display view, whether there are any address entries that do not have the subnet size set to 32 bits. If there are entries with a subnet size other than 32 bits, then this subnet cannot be DHCP enabled; otherwise, we can provide a link to the DHCP add function. So, the network view is going to perform the check and pass a Boolean variable that we will query in the template, and the template will either display a message or provide a link. Here's the quick check in the view code:

for address in addr_list:
    if address.network_size != 32:
        has_subnets = True

And the additions to the template:

Add new subnet or node IP
{% if has_subnets %}
    DHCP support cannot be enabled for networks with subnets
{% else %}
    Enable DHCP support
{% endif %}

Can you see what the resulting URL is going to be? The structure follows the same convention for the URL that we defined earlier:

http://www.example.com/<application>/<model>/<object>/<action>/

So far, the objects have been a pair of an IP address and its network size, which uniquely identified each object in the database. The object now is a DHCP network within a physical network. The DHCP network as such has nothing that uniquely identifies it. So, let's add /dhcp/ to the IP/network size pair, which tells us that this is the DHCP object for this particular network. Assuming the new view for adding a DHCP network is called add_dhcp, this is what needs to be added to the URL mapping file:

url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2})/dhcp/add/$', 'add_dhcp'),


This view will follow the same form-processing pattern, and it looks very similar to the one we used to add new networks. It also requires a form class, so that Django can generate the form for the template automatically:

class DHCPNetworkAddForm(ModelForm):
    class Meta:
        model = DHCPNetwork
        exclude = ('physical_net',)

We exclude the physical network field because it is already known at the time of creation; it is supplied as a URL argument. Listing 4-2 shows our add_dhcp function, in which we even reuse the template that was used previously.

Listing 4-2.  A View to Handle the Add Function for DHCP Networks

def add_dhcp(request, address=None):
    if request.method == 'POST':
        network_addr = None
        if address:
            ip, net_size = address.split('/')
            network_addr = NetworkAddress.objects.get(address=ip, network_size=int(net_size))
        dhcp_net = DHCPNetwork(physical_net=network_addr)
        form = DHCPNetworkAddForm(request.POST, instance=dhcp_net)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect("../..")
    else:
        form = DHCPNetworkAddForm()
    return render_to_response('add.html',
                              {'form': form,},
                              context_instance=RequestContext(request))

You may wonder what happens to the DNS server and domain name fields; they are foreign keys in the model definition, so what are users supposed to enter there? In Figure 4-1 you can see what Django is going to display.

[Figure 4-1: the add form for a DHCP network, served at http://www.example.com/ip_addresses/networkaddress/192.168.0.0/25/dhcp/add/]


The logic of this function is very simple:

- Set the required number of bits in an integer variable to one. This can be expressed as 2^(number of bits) - 1.
- Shift the result to the left, filling in the remaining bits with 0. The total number of bits in a netmask is always 32.
- For every 8 bits (4 sets in total), convert them to a decimal number string.
- Join all the numbers, using the dot symbol to separate the individual numbers.

A sketch of a function implementing these steps follows this list.
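This is a minimal, self-contained sketch of such a conversion; the function name is an assumption, not necessarily the one used in the book's code:

def convert_net_size_to_netmask(net_size):
    # Start with <net_size> one-bits and shift them into the high end of a
    # 32-bit value, so that, for example, 24 becomes 0xFFFFFF00.
    mask = ((2 ** net_size) - 1) << (32 - net_size)
    nmask_array = []
    for _ in range(4):
        # Take the lowest 8 bits, convert them to a decimal string and
        # prepend them, so the octets end up in the correct order.
        dec = mask & 255
        nmask_array.insert(0, str(dec))
        mask = mask >> 8
    return ".".join(nmask_array)

For example, convert_net_size_to_netmask(25) returns '255.255.255.128', which matches the netmasks in the generated configuration shown below.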

Finally, we need to add an additional URL pattern that calls this view:

url(r'^dhcpd.conf/$', views.dhcpd_conf_generate, name='dhcp-conf-generate')

Following is an example of the DHCP configuration file that was generated from some sample data I entered into my database:

ignore client-updates;
ddns-update-style interim;
# class rule 1
class "class_rule_1" {
    match if substring (option host-name, 0, 6) = "server";;
}
# test rule (gen form)
class "class_rule_2" {
    test rule - gen form;
}

shared-network network_4 {
    subnet 192.168.0.128 netmask 255.255.255.128 {
        option routers 192.168.0.130;
        option domain-name-servers 208.67.222.222;
        option domain-name domain1.example.com;
    }
}
shared-network network_5 {
    subnet 192.168.0.0 netmask 255.255.255.128 {
        option routers 192.168.0.113;
        option domain-name-servers 208.67.220.220;
        option domain-name domain2.example.com;
        pool {
            allow members of "class_rule_1";
            range 192.168.0.1 192.168.0.20;
        }
    }
}


Other Modifications

The majority of the work has been done, but we still need to add a couple of things to fulfill the initial set of requirements: hostname resolution for node IPs and a status check.

Resolving IPs to Hostnames

To get further information about the IP addresses, let's do a reverse name resolution and print a fully qualified domain name next to each address entry. There are two places where we could implement this lookup: we could modify the display view, do the host lookup there, and pass the information to the template; or we could extend the model class with an additional method that returns the hostname for the IP address, or an empty string if the hostname cannot be resolved. Let's go with the second option, as it is more elegant and does not require changing the interface between the view and the template.
Here's the additional method for the model class, which uses the gethostbyaddr function from Python's socket library to perform the reverse lookup. The result is a tuple of the form (<hostname>, <alias list>, <IP address list>), and we use the first element (the hostname) as the result.

import socket

...
class NetworkAddress(models.Model):
    ...
    def get_hostname(self):
        try:
            fqdn = socket.gethostbyaddr(str(self.address))[0]
        except:
            fqdn = ''
        return fqdn

And a minor change in the template to display the additional property (if available):

{% for address in addresses_list %}
    {{ address.address }}/{{ address.network_size }}
    {% ifequal address.network_size 32 %}(host){% else %}(network){% endifequal %}
    {{ address.description }}
    {% if address.get_hostname %} ({{ address.get_hostname }}) {% endif %}
    (delete | modify)
{% endfor %}

Checking Whether the Address Is in Use

Let's implement a simple function that checks whether an IP address is in use. To do so, we send an ICMP ECHO message to the IP address and wait for the response. Strictly speaking, this is not a conclusive test of whether an address is in use, because there are scenarios where the IP address is used but does not respond to a ping request. For example, firewalls might be blocking ICMP traffic, or that traffic might be blocked at the server level. In most cases, however, this simple test is very effective; just bear in mind that a failure indicated by this test does not necessarily mean an actual failure of the server or that the address is not used.


The implementation follows the usual pattern of defining a view and adding a new URL pattern to the URLConfig file. Because of the relative complexity of implementing ICMP using the Python socket library (it requires using sockets in raw mode, which in turn requires the application to run as the root user), we will call the system ping utility and make a decision based on its return code, as shown in Listing 4-20.

Listing 4-20.  A View that Does an ICMP Check for an IP Address

def networkaddress_ping(request, address=None):
    if responding_to_ping(address):
        msg = "Ping OK"
    else:
        msg = "No response"
    return HttpResponse(msg)

def responding_to_ping(address, timeout=1):
    import subprocess
    rc = subprocess.call("ping -c 1 -W %d %s" % (timeout, address),
                         shell=True,
                         stdout=open('/dev/null', 'w'),
                         stderr=subprocess.STDOUT)
    if rc == 0:
        return True
    else:
        return False

Here we force ping to send only one packet, and the timeout is set to 1 second. Although this may reduce accuracy, the response will be much quicker. Most local networks should operate within these constraints, but if you need more accuracy, you can increase the default timeout and instruct ping to send more than one probe packet. You also need to add two additional URL patterns:

url(r'^networkaddress/(?P<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/ping/$', views.networkaddress_ping, name='networkaddress-ping'),
url(r'^networkaddress/$', views.networkaddress_ping, name='networkaddress-ping-url'),

The first pattern catches an IP address and also the method (/ping/) that needs to be performed on the given address. The second line is simply for housekeeping; you will find out later why it is required.
Why did we implement this check as a separate call to the web server? Wouldn't it be easier to generate the list of IP addresses to be displayed, ping each one individually, and then pass the ping results along with the IP addresses to the template? Yes, we could have done that, but there is one major problem with that approach: the application response time. In a real-life situation, you may have really large networks and may need to perform ping checks on hundreds of servers. Even if you implement this check in a multithreaded manner (in other words, attempt to call the ping function for many hosts simultaneously), you're still going to spend 1, 2, or even more seconds completing the request. From a usability point of view, this is not acceptable; if the system is slow to respond, users are not going to like it.
So what we are going to do here is display the list of all the addresses in a subnet and then asynchronously call the ping URL using JavaScript. Users will not get the status report immediately, but at least the page with the other information and the links to actions will be displayed immediately. Another benefit of this approach is that you don't need to make any changes to the display view at all; there are only some minor modifications to the display template (adding a placeholder to hold the status information). The JavaScript will be placed in the base template, so all pages automatically get this functionality.


Since this book isn't about JavaScript, I'll limit myself to a brief explanation and an example of how it is used. Listing 4-21 uses the jQuery library to perform asynchronous AJAX calls to obtain the results and update the web page accordingly.

Listing 4-21.  Modified Address List Loop Code

{% for address in addresses_list %}
    {{ address.address }}/{{ address.network_size }}
    {% ifequal address.network_size 32 %}(host){% else %}(network){% endifequal %}
    {{ address.description }}
    {% if address.get_hostname %} ({{ address.get_hostname }}) {% endif %}
    (delete | modify)
    {% ifequal address.network_size 32 %}
        [Status: <span class="address" id="ip_{{ address.get_formated_address }}">Unknown</span>]
    {% endifequal %}
{% endfor %}

The additional lines check whether the address is likely to be a node IP and then insert an HTML <span> tag, which will be used to update the information at this location in the document. This tag has two properties: class and id. We will use the class property to identify which tags contain the IP addresses that need checking, and the id property to hold the value of the IP address. You may wonder what this get_formated_address method is and why we're not using the address directly. The reason is that jQuery expects HTML tag IDs not to contain dots, and the ID also needs to start with a letter; therefore, we add the ip_ prefix to it. This method simply replaces all occurrences of the dot (.) with an underscore (_) in the address field.
Lastly, we add some JavaScript that traverses all the tags belonging to the address class and performs an asynchronous AJAX call to the web server. The result is then used as the HTML content of the tag. The code in Listing 4-22 has been added to the base template, from which all other templates inherit.

Listing 4-22.  JavaScript that Performs Asynchronous Calls and Updates the Status Page

$(document).ready(function(){
    $(".address").each(function () {
        var curId = $(this).attr('id');
        updateStatus(curId);
    });
});
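The get_formated_address method itself is not shown in the chapter text; based on the description above, a minimal sketch of it as an additional NetworkAddress model method could be (the exact implementation is an assumption):

def get_formated_address(self):
    # jQuery selectors cannot use IDs containing dots, so 192.168.0.1
    # becomes 192_168_0_1; the ip_ prefix is added in the template.
    return str(self.address).replace('.', '_')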


function updateStatus(attrId) {
    address = attrId.replace('ip_', '');
    address = address.replace(/_/g, '.');
    $.ajax({
        url: '{% url networkaddress-ping-url %}' + address + '/ping/',
        success: function(response) {
            $('#' + attrId).html(response);
        }
    });
}

Now you see why we needed the placeholder URL pattern. The JavaScript is also partially generated by Django: we insert the network address URL using the reverse URL lookup. Because we cannot generate a full URL (with the address part in it), this is a generic URL that will be modified by the JavaScript. We only use the first part of it; therefore, we needed it defined in the URLConfig. So, the logic of this JavaScript code is as follows:

- Remove the ip_ prefix.
- Replace underscores with dots.
- Perform an asynchronous AJAX call.
- Update the web page when the results come back.

Now when the user navigates to the listing page, it will be displayed immediately and then gradually updated with the status reports for each IP address as the results become available.

Dynamic DHCP Lease Management

So far our project was based on the assumption that we need to generate a static DHCP configuration file. The application we have built thus far allows us to input the required data, and the output is a configuration file that can be used by the ISC DHCP server. This approach is usually good enough, especially in environments where the DHCP configuration is reasonably static. The problem arises when you need to add and remove static allocations frequently. With every modification you have to deploy the new configuration file and restart the DHCP server process. When you restart the DHCP process, there is a brief period when the DHCP server is not available. All requests for DHCP addresses that come in while the service is down will fail, and you may end up with clients that lack IP addresses.
The solution to this problem is dynamic lease management using OMAPI (Object Management Application Programming Interface). OMAPI is an API to the ISC DHCP server, and it lets you manipulate the internal data structures of the running instance of the service.
In this section I will show how to manipulate DHCP allocations using OMAPI. We are not going to change the application written so far; this is just to give you an idea of how to manage DHCP leases dynamically.

Employ Python Interface to OMAPI

We are going to use Dr. Torge Szczepanek's pypureomapi library for accessing the ISC DHCP OMAPI interface. If you want to install it from source, the project is available at the following URL: https://github.com/CygnusNetworks/pypureomapi.


The package is also available from the PyPI package repository and can be installed with the pip tool:

# pip install pypureomapi
Downloading/unpacking pypureomapi
  Downloading pypureomapi-0.3.tar.gz
  Running setup.py egg_info for package pypureomapi

Installing collected packages: pypureomapi
  Running setup.py install for pypureomapi

Successfully installed pypureomapi
Cleaning up...

Set up the ISC DHCP Server

The default configuration of the ISC DHCP server does not allow management over OMAPI. If you want to dynamically manage the service, you must add configuration that defines the authentication and connection details.

Let’s start with the most basic ISC DHCP server configuration file (on RedHat-based systems this file is /etc/dhcp/dhcpd.conf), which contains the following configuration:

subnet 192.168.0.0 netmask 255.255.255.0 {
}

■ Note  Keep in mind that in this section we are working with a minimal DHCP server configuration, which is good enough to illustrate OMAPI functionality. In a real-life situation, you may want to extend this configuration so that it suits your environment requirements.

First, we need to generate an HMAC-MD5 key for the user that will be connecting to the ISC DHCP server:

# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 256 -n USER omapikey
Komapikey.+157+08556
#

This creates two files, one with the key and one with the metadata information:

# ls -l Komapikey*
-rw------- 1 root root  70 Jul  6 14:49 Komapikey.+157+08556.key
-rw------- 1 root root 185 Jul  6 14:49 Komapikey.+157+08556.private
#

Both files contain the key, which in my example is "QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ=":

# awk '/Key/ {print $2}' Komapikey.+157+08556.private
QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ=
#


Let’s take the key and the username we used (omapikey), and update the ISC DHCP configuration file:

key omapikey {
    algorithm hmac-md5;
    secret QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ=;
}

omapi-key omapikey;
omapi-port 7911;

subnet 192.168.0.0 netmask 255.255.255.0 {
}

When you restart the DHCP service, you will see that it is now listening on the defined port for OMAPI commands:

# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
tcp        0      0 0.0.0.0:22      0.0.0.0:*         LISTEN   1330/sshd
tcp        0      0 0.0.0.0:7911    0.0.0.0:*         LISTEN   8994/dhcpd
#

Add a New Host Lease Record

Before we start modifying the DHCP lease records, let’s make sure there are no existing lease records in the lease file. The file on RedHat-based systems is /var/lib/dhcpd/dhcpd.leases, and in this case its contents are displayed below. As you can see, there are no host records there:

# cat /var/lib/dhcpd/dhcpd.leases
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-4.2.6

server-duid "\000\001\000\001\033L\014\234\010\000'\037\337\302";
#

Let’s connect to the ISC DHCP server using OMAPI, and create a new lease record:

>>> import pypureomapi
>>> USER = 'omapikey'
>>> KEY = 'QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ='
>>> omapi = pypureomapi.Omapi('127.0.0.1', 7911, USER, KEY)
>>> omapi.add_host('192.168.0.100', '00:11:22:33:44:55')
>>>


Now if you take a look at the leases file, you will find that a new record has been inserted:

# cat /var/lib/dhcpd/dhcpd.leases
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-4.2.6

server-duid "\000\001\000\001\033L\014\234\010\000'\037\337\302";

host nh53b963617fbd63378fe0 {
  dynamic;
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.100;
}

Delete a Host Lease Record

Similarly, you can delete host records from the leases database:

>>> import pypureomapi
>>> USER = 'omapikey'
>>> KEY = 'QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ='
>>> omapi = pypureomapi.Omapi('127.0.0.1', 7911, USER, KEY)
>>> omapi.del_host('00:11:22:33:44:55')
>>>

You will see that the lease record is not removed from the database; instead, it is marked as deleted:

# cat dhcpd.leases
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-4.2.6

host nh53b963617fbd63378fe0 {
  dynamic;
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.100;
}
server-duid "\000\001\000\001\033L\014\234\010\000'\037\337\302";

host nh53b963617fbd63378fe0 {
  dynamic;
  deleted;
}
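In a management script you would typically wrap these two calls together, for example to move an existing static allocation to a different IP address. The following is a minimal sketch of that idea, reusing only the add_host() and del_host() calls shown above; the broad exception handling is an assumption, since the exact exception classes raised by pypureomapi are not covered here:

import pypureomapi

USER = 'omapikey'
KEY = 'QKHRF1laxE4cNUAa2t/GOa0VBFeUb5ROS+53gEw2BzQ='

def move_static_allocation(mac, new_ip, server='127.0.0.1', port=7911):
    """Re-point an existing static allocation to a new IP address."""
    omapi = pypureomapi.Omapi(server, port, USER, KEY)
    try:
        # remove the old host record first (the record is only marked as deleted)
        omapi.del_host(mac)
    except Exception:
        # the MAC may not have an allocation yet; in that case just add it
        pass
    omapi.add_host(new_ip, mac)

move_static_allocation('00:11:22:33:44:55', '192.168.0.101')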


Summary

In this chapter we expanded the functionality of the network address management application by adding support for DHCP, and by performing additional checks, such as DNS lookups and ICMP pings, to make sure an address is in use.

•	Generic views help reduce the amount of code you need to write; use them to perform generic tasks such as displaying object information and basic manipulations such as Delete, Modify, and Add.

•	You can modify the response MIME type, allowing Django to generate a wide variety of content: HTML, XML, text, and even binary documents.

•	Think about the user experience and whether your application will perform its tasks as quickly when the amount of data grows. If you need to, use JavaScript to postpone content loading.

•	You don’t always need a library, or to write your own functionality, to perform certain tasks. If you need to, you can call system utilities such as ping to do the job.

•	You can use the OMAPI interface to dynamically update the running configuration of the ISC DHCP server.


CHAPTER 5

Maintaining a List of Virtual Hosts in an Apache Configuration File

We examined the Django web framework in great detail in Chapters 3 and 4. In this chapter, we’ll continue exploring the Django framework, and in particular its administration application. Instead of writing the views and the forms ourselves, we are going to use the built-in object management application, but we’ll customize it to fit our needs and requirements. The application we will create in this chapter is a web-based tool that generates the virtual host configuration for the Apache web server.

Specifying the Design and Requirements for the Application

Why would you want an application that generates the Apache configuration files for you? There are pros and cons to this approach. Let me start with the advantages of generating configuration files automatically.

First, although you cannot eliminate it completely, you greatly reduce the error factor. When you automatically generate configuration files, the settings are either available as a selection, so you cannot make any typos, or they can be validated. So you have a system that does the basic error checking, and silly mistakes such as “ServreName” are eliminated.

Second, this approach to some degree enforces a backup policy. If you accidentally destroy the application configuration, you can always re-create it.

Third, and this is the most important aspect in my opinion, you can have a central place from which to configure multiple clients. For example, let’s assume that you have a web farm of ten identical web servers all sitting behind a load balancer. All servers run the Apache web server and all should be configured identically. With an automated configuration system, you generate the configuration file once (or, even better, create the configuration on demand) and then upload it to all servers.

There are some drawbacks as well. Any configuration utility, unless it is natively written for the system that you are configuring, adds another layer between you and the application. Any change to the configuration structure immediately affects the configuration tool. New configuration items need to be provisioned in the configuration system. Even the slightest change in the syntax needs to be accounted for. If you want to make the best of your configuration tool, you have to revalidate it against every new software release to make sure that your tool still produces a valid configuration file.

The choice is obviously yours. For a standard configuration, I suggest automating as much as possible, and if you are creating your own tools, you can always account for the extra configuration that is specific to your environment.

Functional Requirements

Let’s go back to the Apache web server configuration tool. First, this tool should generate only the name-based virtual host configuration. We don’t expect this tool to generate the server-specific configuration, only the blocks that are responsible for defining virtual hosts.


In the virtual host definition section, you can use the configuration directives from various installed modules. Typically the Apache core module is always available; therefore, the tool should provide you with the list of all configuration directives from the core module. It should be possible to add new configuration directives.

Some configuration directives may be nested inside each other, as in the following example, where the SetHandler directive is encapsulated in the Location directive section. The tool should allow you to define the relationships between the configuration directives, where one is encapsulated by the other:

<Location /server-status>
    SetHandler server-status
</Location>

There might be situations where multiple virtual host definition sections have very similar configurations. The application that we’re going to build should allow you to clone any existing virtual host definition together with all its configuration directives. In addition to the clone operation, the application should allow you to mark any virtual host section as a template. The template virtual host block should not become a functional part of the configuration file, although it can be included in the form of a comment block.

The most important part of any virtual host definition is the server domain name and its aliases. The list of all domain names that the virtual host responds to should be made easily available, and links to the appropriate web locations should be provided.

The configuration file should be made available as a web resource and served as a plain-text document.

High-Level Design

As discussed, we will be using the Django web framework to build our application. However, instead of writing all forms manually, we will reuse Django’s built-in data administration application, which we’ll configure to our needs.

It is unlikely that the application will be maintaining the configuration for a great number of virtual hosts, so we are going to use an SQLite3 database as the data store for our configuration. We are going to store two types of data in the database: the virtual host objects and the configuration directives. This allows for expansion and further modification of the application; for example, we could extend the configuration directives model and add an “allowed values” field.

Setting Up the Environment

We’ve already discussed the Django application structure in great detail in Chapters 3 and 4, so you should be comfortable creating the environment settings for the new application. I’ll briefly mention here the key configuration items, so it will be easier for you to follow the examples and code snippets later in the chapter.

Apache Configuration

First, we need to instruct the Apache web server how to handle the requests sent to our application. This is a fairly standard configuration that assumes our working directory to be /srv/app/, and the Django project name to be www_example_com. The document root is set to /srv/www/www.example.com, and it’s used only to contain a link to the administration web site static files. We’ll come to creating the link a bit later. Listing 5-1 shows the configuration.


Listing 5-1.  The Apache Web Server Configuration

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /srv/www/www.example.com
    ErrorLog /var/log/httpd/www.example.com-error.log
    CustomLog /var/log/httpd/www.example.com-access.log combined
    <Location "/">
        SetHandler mod_python
        PythonHandler django.core.handlers.modpython
        PythonPath sys.path+['/srv/app/']
        SetEnv DJANGO_SETTINGS_MODULE www_example_com.settings
        SetEnv PYTHON_EGG_CACHE /tmp
    </Location>
    <Location "/static/">
        SetHandler None
    </Location>
</VirtualHost>

After creating the configuration, we make sure that all directories mentioned in the configuration file (/srv/www/www.example.com/ and /srv/app/) exist. Also, we make sure that these directories are owned by the user running the Apache daemon. Typically it is the user named apache or httpd. When we have finished, we restart the Apache web server, so it reads in the new configuration.

Creating a Django Project and Application

We’ll start off by creating a new Django project called www_example_com. As you already know from Chapters 3 and 4, the project in fact becomes a Python module with its init methods and possibly submodules (the applications within the project). Therefore, the project name has to follow the Python variable naming conventions and cannot contain dots or start with a number. We start a new project first:

$ cd /srv/app/
$ django-admin.py startproject www_example_com

At this point, you should be able to navigate to the web site URL that you defined earlier (in our example, it’s http://www.example.com) and you should see the standard Django welcome page.

The next step is to create a new application within the project. You must follow the same naming rules as with the project name when you choose a name for your application. I’ll simply call it httpconfig:

$ django-admin.py startapp httpconfig

Configuring the Application

Now we need to specify some details about the project, such as the database engine type, and also tell the project about the new application. Even though we have created its skeleton files, the application is not automatically included in the project configuration.

First, you change the database configuration in the settings.py file in the project directory. Don’t worry about the database file, as it will be created automatically:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}


Second, you change the default administration media location; you’re going to link to it within the existing media directory. In the same settings.py file, make sure to have this setting:

ADMIN_MEDIA_PREFIX = '/static/admin/'

Third, you add two new applications to the enabled applications list. You’re going to enable the administration application that is part of the standard Django installation, and you’ll also add your application to the list:

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'httpconfig',
)

Fourth, you have to run the database synchronization script, which will create the database file for us and also create all required database tables as defined in the application model files. To be sure, you don’t have any yet in the httpconfig application, but you need to do this step so that the administration and other applications have their database tables created. Run the following command to create the database:

$ python manage.py syncdb
Creating tables ...
Creating table django_admin_log
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'rytis'):
Email address: [email protected]
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)


Defining the URL Structure

You’ve got the application and the database set up, but you still cannot navigate to any of the pages, not even the administration interface. This is because the project does not know how to respond to the request URLs and how to map them to the appropriate application views. You need to do two things in the urls.py configuration file: enable the URL routing to the administration interface objects and point to the application-specific urls.py configuration.

The project-specific urls.py file is located in the project directory at /srv/app/www_example_com/www_example_com/. Its contents after enabling both settings are shown in Listing 5-2.

Listing 5-2.  Project- (or Site-) Specific URL Mapping

from django.conf.urls import patterns, include, url

# this is required by the administration application
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # route requests to the administration application
    url(r'^admin/', include(admin.site.urls)),
    # delegate all other requests to the application-specific
    # URL dispatcher
    url(r'', include('httpconfig.urls')),
)

You have not created any views in this application yet, but you can already define the URL mapping in the application-specific urls.py, which needs to be created in the application directory httpconfig. The majority of the work is going to be done in the administration interface, so the application’s interaction with the outside world is fairly limited. It responds to two requests only: if nothing is specified on the URL path, the view should return all virtual hosts in a plain-text format. If an integer is specified, it returns only the section of the configuration file for that particular virtual host. This will be used in the administration interface. In the httpconfig directory, you create the urls.py file shown in Listing 5-3.

Listing 5-3.  The Application-Specific URL Mapping

from django.conf.urls import patterns, include, url

urlpatterns = patterns('httpconfig.views',
    url(r'^$', 'full_config'),
    url(r'^(?P<object_id>\d+)/$', 'full_config'),
)

This configuration means that there is no application-specific part in the URL: all requests to the root location will be forwarded to our application. If you need to “hide” this application behind a certain path in the URL, please refer back to Chapters 3 and 4 for details on how to do that.

In addition to this configuration, you also have to define the view method; otherwise, the Django URL parser may complain about the undefined view. You create the following method in the views.py file in the application directory:

from django.http import HttpResponse

def full_config(request):
    return HttpResponse('Hello!')


■ Tip  If you get any errors when you navigate to the newly created web site, make sure that all files and directories in the project directory, and the project directory itself, are owned by the Apache or httpd user. Also note that if you make any changes to the Python files in your project directory, you will need to restart the Apache daemon, so that the requests are served by the new code rather than the old code, which may still be cached in memory.

The Data Model

As we discussed in the requirements and design section, the database model for our application is fairly simple and contains only two entities: the virtual host definition and the configuration directive definition. For the implementation, however, we also need to add a third element to the schema that ties the virtual host and the configuration directive elements together. The reason for adding yet another table is that each configuration directive can be part of one or more virtual hosts, and there might be one or more directives in each virtual host. Therefore, we have a many-to-many relationship between the objects, and in order to resolve it we need to insert an intermediate table that has a one-to-many relationship with each of the other tables.

We can represent this relationship model in the entity relationship (ER) diagram shown in Figure 5-1, where you can see the properties of each entity and the relationships between them. ER diagrams are really helpful when coding, and they sometimes save you from writing complex code just to find information that can be easily obtained with a simple SQL statement, provided you know the relations between the different tables. We’ll use this technique again in later chapters.

[Figure 5-1 shows an ER diagram with three entities: ConfigDirective (id, name, documentation, is_container), VirtualHost (id, description, bind_address, is_default, is_template), and VHostDirective (id, value), with the directive, vhost, and parent keys on VHostDirective linking the tables.]

Figure 5-1.  An entity relationship diagram


■ Note  The diagram in Figure 5-1 was produced using the MySQL Workbench tool. It follows the convention and structure used to represent the data tables and also the relationships between them (one-to-many links, and so on). The description of those details is beyond the scope of this book, but if you want to learn more about the subject, I recommend Beginning Database Design: From Novice to Professional, 2nd ed., by Clare Churcher (New York: Apress, 2012), which is a good introduction to database design. A much shorter description of some of the symbols used in the diagram can be found on the Wikipedia page http://en.wikipedia.org/wiki/Entity-relationship_model.

You can see that the ConfigDirective and the VirtualHost tables each have a one-to-many relationship with the VHostDirective table. This table also holds the value for the configuration directive, which is specific to the particular virtual host. You may also have noticed that VHostDirective has a loop-back relationship to itself. This is used to implement the directive encapsulation, where some directives can be the “parent” directives for others.

The Basic Model Structure

We’ll go through several iterations while creating the data model. We’ll start with the basic model that contains only the object properties and then gradually add functionality as we go along with the administration interface improvements. Listing 5-4 shows the initial code.

Listing 5-4.  The Basic Model Structure

from django.db import models

class ConfigDirective(models.Model):
    name = models.CharField(max_length=200)
    is_container = models.BooleanField(default=False)
    documentation = models.URLField(
        default='http://httpd.apache.org/docs/2.0/mod/core.html')

    def __unicode__(self):
        return self.name

class VirtualHost(models.Model):
    is_default = models.BooleanField(default=False)
    is_template = models.BooleanField(default=False,
        help_text="""Template virtual hosts are commented out in the configuration
                     and can be reused as templates""")
    description = models.CharField(max_length=200)
    bind_address = models.CharField(max_length=200)
    directives = models.ManyToManyField(ConfigDirective, through='VHostDirective')

    def __unicode__(self):
        default_mark = ' (*)' if self.is_default else ''
        return self.description + default_mark


class VHostDirective(models.Model):
    directive = models.ForeignKey(ConfigDirective)
    vhost = models.ForeignKey(VirtualHost)
    parent = models.ForeignKey('self', blank=True, null=True,
        limit_choices_to={'directive__is_container': True})
    value = models.CharField(max_length=200)

    def __unicode__(self):
        fmt_str = "<%s %s>" if self.directive.is_container else "%s %s"
        directive_name = self.directive.name.strip('<>')
        return fmt_str % (directive_name, self.value)

If you followed the examples and explanation in Chapters 3 and 4, this model should be reasonably familiar to you. You define the basic properties of each element along with the ForeignKey fields that define the relationships between the classes. There is one thing, though, that may not look familiar to you: the many-to-many relationship declaration in the VirtualHost class:

directives = models.ManyToManyField(ConfigDirective, through='VHostDirective')

Why do you have to define this relationship explicitly if you have already defined the VHostDirective class that joins the two entities? The reason is that this allows you to find the corresponding ConfigDirective objects directly from the VirtualHost, without having to get to the VHostDirective objects first.

We could create the database structure from this model, but it would be empty at this point and therefore not very useful without at least the list of the core Apache module directives. I have created an initial data JSON file that contains the entries for all core module directives. Here’s an example of a few entries; you can get the full set from the book’s web page at http://apress.com.

[
  {
    "model": "httpconfig.configdirective",
    "pk": 1,
    "fields": {
      "name": "AcceptPathInfo",
      "documentation": "http://httpd.apache.org/docs/2.0/mod/core.html#AcceptPathInfo",
      "is_container": "False"
    }
  },
  {
    "model": "httpconfig.configdirective",
    "pk": 2,
    "fields": {
      "name": "AccessFileName",
      "documentation": "http://httpd.apache.org/docs/2.0/mod/core.html#AccessFileName",
      "is_container": "False"
    }
  }
]
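To see what the explicit through declaration buys us, a quick interactive session (run with python manage.py shell) illustrates the two access paths; the object values shown below are hypothetical:

>>> from httpconfig.models import VirtualHost
>>> vhost = VirtualHost.objects.get(pk=1)
>>> # direct access to the related ConfigDirective objects, thanks to the
>>> # ManyToManyField(..., through='VHostDirective') declaration
>>> vhost.directives.all()
[<ConfigDirective: ServerName>, <ConfigDirective: ServerAlias>]
>>> # the long way round: go through the intermediate VHostDirective objects
>>> [vhd.directive for vhd in vhost.vhostdirective_set.all()]
[<ConfigDirective: ServerName>, <ConfigDirective: ServerAlias>]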


If you copy the JSON fixture file to the project directory (in our example, this will be www_example_com/httpconfig/fixtures/) and name it initial_data.json, the data from the file will be loaded every time you run the syncdb command. Now, delete all application-related tables if you have created any in the database, and re-create the database tables again with the new model and the initial dataset:

$ sqlite3 database.db
SQLite version 3.7.13 2012-07-17 17:46:21
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
auth_group                  django_admin_log
auth_group_permissions      django_content_type
auth_message                django_session
auth_permission             django_site
auth_user                   httpconfig_configdirective
auth_user_groups            httpconfig_vhostdirective
auth_user_user_permissions  httpconfig_virtualhost
sqlite> drop table httpconfig_configdirective;
sqlite> drop table httpconfig_vhostdirective;
sqlite> drop table httpconfig_virtualhost;
sqlite> .exit
$ ./manage.py syncdb
Creating table httpconfig_configdirective
Creating table httpconfig_virtualhost
Creating table httpconfig_vhostdirective
Installing index for httpconfig.VHostDirective model
Installing json fixture 'initial_data' from absolute path.
Installed 62 object(s) from 1 fixture(s)

You’re nearly ready to start managing the objects in the administration application; you just need to register all the model classes with the administration interface and restart the Apache web server. As you already know, you do this by creating the admin.py file in the application directory with contents similar to Listing 5-5.

Listing 5-5.  Basic Administration Hooks

from django.contrib import admin
from www_example_com.httpconfig.models import *

class VirtualHostAdmin(admin.ModelAdmin):
    pass

class VHostDirectiveAdmin(admin.ModelAdmin):
    pass

class ConfigDirectiveAdmin(admin.ModelAdmin):
    pass

admin.site.register(VirtualHost, VirtualHostAdmin)
admin.site.register(ConfigDirective, ConfigDirectiveAdmin)
admin.site.register(VHostDirective, VHostDirectiveAdmin)


If you navigate to the administration console, which you can find at http://www.example.com/admin/, you will be provided with the login screen. You can log in with the user account that you created during the first syncdb call. Once logged in, you’ll be presented with the standard administration interface, which lists all model classes and allows you to create the individual entries. Now, you must appreciate how much work this has already spared you—you don’t need to deal with user management, model object discovery, or any other housekeeping tasks. However, the administration interface is generic and has absolutely no knowledge of the purpose behind your data models and what fields are important to you. Let’s take our model as an example. The main entity for you is the virtual host. However, if you navigate to it in the administration interface, you’ll only see one column in the listing view. If you have added any entries, you’ll see that it’s the description field that is displayed. Click the Add button to add a new virtual host. All property fields are displayed, but what about the configuration directives? These need to be created separately on a different screen, and then you have to link each directive to the appropriate virtual host. That’s not very useful, is it? Luckily, the Django administration module is very flexible and can be customized to accommodate most of the requirements that you can think of. We’ll improve the look and feel of the administration interface and add more functionality to it in the next sections.

Modifying the Administration Interface

Most of the administration interface tuning is done in the models.py and admin.py files. The Python community is attempting to separate all the model definition files from the administration customization files, and a lot of work has already been done to achieve this separation. However, as of this writing, some items affecting the administration interface can still be found in the models.py file. In either case, I will always indicate which file you need to make changes in, but unless instructed otherwise, always assume the application directory: /srv/app/www_example_com/httpconfig/.

Improving the Class and Object Lists

There’s only so much that the administration application can guess about your data model, its properties, and the information you’d like to be presented with. Therefore, if you don’t make any modifications or adjustments, you’ll just get the standard object representation strings displayed, just as they are returned by the __unicode__() method of each class. In the following sections, I’ll show you how to change the default layout.

Customizing the Class Names

By default, Django attempts to guess the name of the class. Usually, the administration framework gets reasonably close results, but sometimes you may end up with strange names. For example, our three classes will be listed as:

•	Config directives
•	V host directives
•	Virtual hosts

The “V host directives” name may look a bit cryptic in this situation. Another issue is the plural form of the class name. Our examples have resolved quite nicely, but if we had a class called “Host Entry,” for example, we’d end up with the automatically generated plural form “Host Entrys,” which obviously isn’t the correct spelling. In situations like this, you may want to set the class name and the plural form of the name yourself. You don’t need to set both, just the one that you want to modify. This setting is done in the model definition file, models.py. Listing 5-6 shows the additions to the class definitions we created earlier.


Listing 5-6.  Changing the Class Names

class ConfigDirective(models.Model):
    class Meta:
        verbose_name = 'Configuration Directive'
        verbose_name_plural = 'Configuration Directives'
    [...]

class VirtualHost(models.Model):
    class Meta:
        verbose_name = 'Virtual Host'
        verbose_name_plural = 'Virtual Hosts'
    [...]

class VHostDirective(models.Model):
    class Meta:
        verbose_name = 'Virtual Host Directive'
        verbose_name_plural = 'Virtual Host Directives'
    [...]

You make the modifications and reload the Apache web server. Now you will be presented with more readable options:

•	Configuration Directives
•	Virtual Host Directives
•	Virtual Hosts

Adding New Fields to the Object List

Let’s start by modifying the virtual hosts listing. If you haven’t created any virtual hosts yet, you can do that now. It doesn’t really matter what properties you’re going to use in the configuration; at this stage we’re only interested in getting the layout right. Also, assign some configuration directives to the virtual hosts that you’ve created.

One of the most important attributes of any virtual host is ServerName, which defines the hostname this particular virtual host responds to. As you know, the Apache web server identifies the virtual host by the Host HTTP header value. It takes that value from the HTTP request and tries to match it against all ServerName or ServerAlias fields in the configuration file. When it finds a match, it knows which virtual host is supposed to serve that particular request. So these two directives are the ones you would probably want to see in the virtual host listing.

How do you include this information in the list, where only the string representation of the object is displayed? You can use the ModelAdmin class property list_display to specify the properties you want to have displayed, but there is no such property as a list of server names in the VirtualHost class. Therefore, you’ll have to write your own method that returns every associated ServerName and ServerAlias. You extend your VirtualHost class with the method shown in Listing 5-7.


Listing 5-7.  Listing the Associated ServerNames and ServerAliases

def domain_names(self):
    result = ''
    primary_domains = self.vhostdirective_set.filter(directive__name='ServerName')
    if primary_domains:
        result = "<a href='http://%(d)s' target='_blank'>%(d)s</a>" % \
                                        {'d': primary_domains[0].value}
    else:
        result = 'No primary domain defined!'
    secondary_domains = self.vhostdirective_set.filter(directive__name='ServerAlias')
    if secondary_domains:
        result += ' ('
        for domain in secondary_domains:
            result += "<a href='http://%(d)s' target='_blank'>%(d)s</a>, " % \
                                        {'d': domain.value}
        result = result[:-2] + ')'
    return result
domain_names.allow_tags = True

This code fetches all VHostDirective objects that point to a ConfigDirective object whose name is either 'ServerName' or 'ServerAlias'. The value of each such VHostDirective object is then appended to the result string. In fact, the value is used to construct an HTML link, which should open in a new browser window when clicked. The intention here is that all the links of the virtual host are presented in the listing and are clickable, so you can immediately test them.

Let’s take a closer look at the instructions that retrieve the VHostDirective objects (the highlighted lines in Listing 5-7). As you know from the model definition, the VirtualHost class, which you’re modifying now, does not link to the VHostDirective class. The link is reversed; the VHostDirective class has a foreign key that points back to the VirtualHost class. Django allows you to perform reverse lookups as well, by using the special attribute whose name is the lowercased child model name followed by _set. In our case, the name is vhostdirective_set. This attribute implements the standard object selection methods, such as filter() and all(). Now, using this vhostdirective_set attribute, we’re actually accessing the instances of the VHostDirective class, and therefore we can specify a forward filter that matches the corresponding ConfigDirective object name against our search string: directive__name='ServerName'.

Let’s add another method that returns a link to the object representation URL. We are also going to display this in the listing, so that users can click on it and the code snippet just for this virtual host will appear in a new browser window. This VirtualHost class method is defined in the models.py file:

def code_snippet(self):
    return "<a href='/%i/' target='_blank'>View code snippet</a>" % self.id
code_snippet.allow_tags = True

Have you noticed that in both cases we set the method’s allow_tags property to True? This prevents Django from escaping the HTML code and replacing it with “safe” characters. With the tags enabled, you can place any HTML code in the object listing; for example, you can include links to external URLs or include images.

Finally, let’s list all the properties that we want to see in the object list. This includes the class attributes and the names of the two functions that we’ve just created. Add the following property to the ModelAdmin class definition in the admin.py file:

class VirtualHostAdmin(admin.ModelAdmin):
    list_display = ('description', 'is_default', 'is_template', 'bind_address',
                    'domain_names', 'code_snippet')
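If the reverse-lookup attribute is new to you, a short session in the Django shell shows the same queries outside of the admin code; the returned values below are hypothetical and depend on the data you have entered:

$ python manage.py shell
>>> from httpconfig.models import VirtualHost
>>> vhost = VirtualHost.objects.get(pk=1)
>>> # reverse lookup: all VHostDirective rows that point back to this virtual host
>>> vhost.vhostdirective_set.all()
[<VHostDirective: ServerName www.apress.com>]
>>> # forward filter across the foreign key, as used in domain_names()
>>> vhost.vhostdirective_set.filter(directive__name='ServerName')
[<VHostDirective: ServerName www.apress.com>]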


Now when you navigate to the VirtualHost object list, you should see something similar to Figure 5-2. It may not be obvious, but the listed domain names and the code snippet text are clickable and should open the URL in a new browser window.

[Figure 5-2 shows the Django administration change list for Virtual Hosts, with the columns Description, Is default, Is template, Bind address, Domain names, and Code snippet, and two entries listed: "Another test server" and "My test server 1".]

Figure 5-2.  The modified object list view

Reorganizing the Form Fields

If you tried adding virtual host instances using the current administration interface, you probably noticed how unfriendly and confusing the process is. First, you have to create a new VirtualHost object; then, you have to navigate away from it and create one or more VHostDirective objects, picking the newly created VirtualHost object for each. Wouldn’t it be nicer if you could create all that from one form? Luckily, this is a very easy thing to do. In Django terms, this is called an inline formset, and it allows you to edit related models on the same page as the parent model. In our case, the parent model is VirtualHost, and we want to edit the instances of VHostDirective inline.

This can be accomplished in only two steps. First, you create a new administration class that inherits from the admin.TabularInline class. You add the following code to the admin.py file. The properties of this class indicate which child model you want to include and how many extra empty rows you want to have in the formset:

class VHostDirectiveInLine(admin.TabularInline):
    model = VHostDirective
    extra = 1

The second step is to instruct the administration class that you want to have this inline form included in the main model edit form:

class VirtualHostAdmin(admin.ModelAdmin):
    inlines = (VHostDirectiveInLine,)
    [...]

This simple manipulation results in a rather nice-looking formset that includes the entry fields for both the parent and the child models, as shown in Figure 5-3.


[Figure 5-3 shows the Virtual Host change form with the fields Is default, Is template, Description, and Bind address, followed by the inline Virtual Host Directives formset with Directive, Value, Parent, and Delete columns, populated with ServerName www.google.com and ServerAlias www.1e100.net entries.]

Figure 5-3.  Including the child model editing form



If you don’t like the way the fields are organized in the form, you can change their order and also arrange them into logical groups. You group the fields by defining fieldsets. Each fieldset is a tuple of two elements: a fieldset name and a dictionary of fieldset properties. One dictionary key, fields, is required. The other two keys, classes and description, are optional. Following is an example of the ConfigDirective model administration form, which has two fieldset groups defined:

class ConfigDirectiveAdmin(admin.ModelAdmin):
    fieldsets = [
        (None, {'fields': ['name']}),
        ('Details', {'fields': ['is_container', 'documentation'],
                     'classes': ['collapse'],
                     'description': 'Specify the config directive details'})
    ]


The first group contains only one field and has no name. The second group is labeled Details. It has a short description below the label, contains two fields, and has a show/hide capability. The classes property defines the CSS class names, and its effect depends on the class definitions. The standard Django administration CSS defines two useful classes: the collapse class allows you to show/hide the whole group, and the wide class adds some extra space for the form fields.

Adding Custom Object Actions

We’re nearly done with the application, but there are two more functions that we need to implement. In the virtual host model, we have a Boolean flag that indicates whether the host is the default one. This information is also conveniently displayed in the listing. However, if we want to change it, we have to navigate to the object’s edit page and change the setting there. It would be nice if this could be done from the object-list screen, by just selecting the appropriate object and using an action from the drop-down menu in the top-left corner of the list. However, the only action that is currently available there is “Delete selected virtual hosts.”

Django allows us to define our own action functions and add them to the administration screen menu. There are two steps to get a new function into the actions list. First, we define a method in the administration class; and next, we identify the administration class in whose actions list this method should be listed as an action.

The custom action method is passed three parameters when called. The first is the instance of the ModelAdmin class that called the method. We can define the custom methods outside of the ModelAdmin class, in which case multiple ModelAdmin classes can reuse them. If you define the method within a particular ModelAdmin class, the first parameter will always be the instance of that class; in other words, this is the typical class method self parameter. The second parameter is the HTTP request object. It can be used to pass a message back to the user once the action is complete. The third parameter is the query set that contains all objects that have been selected by the user. This is the list of objects you will be operating on.

Because there can be only one default virtual host, you have to check whether multiple objects have been selected and, if so, return an error indicating that. Listing 5-8 shows the modifications to the model administration class that create a new custom object action.

Listing 5-8.  A Custom Action to Set the Default Virtual Host Flag

class VirtualHostAdmin(admin.ModelAdmin):
    [...]
    actions = ('make_default',)

    def make_default(self, request, queryset):
        if len(queryset) == 1:
            VirtualHost.objects.all().update(is_default=False)
            queryset.update(is_default=True)
            self.message_user(request,
                "Virtual host '%s' has been made the default virtual host" % queryset[0])
        else:
            self.message_user(request, 'ERROR: Only one host can be set as the default!')
    make_default.short_description = 'Make selected Virtual Host default'

The next custom action that we’re going to define is object duplication. This action takes the selected objects and “clones” them. The clones will have the same settings and the same set of configuration directives with the same values, with the following exceptions:

•	The virtual host description will get the “(Copy)” string appended to it.

•	The new virtual host will not be the default.

•	The new virtual host will not be a template.


The challenge here is to correctly resolve all parent-child dependencies of the VHostDirective objects. In the Apache virtual host definition, we can have only one level of encapsulation, so we don’t need to do any recursive discovery of the related objects. The duplication method can be split into the following logical steps:

1.	Create a new instance of the VirtualHost class and clone all its properties.

2.	Clone all directives that do not have any parents and are not containers themselves.

3.	Clone all directives that are containers and therefore may potentially contain child directives.

4.	For each container directive, find all its child directives and clone them.

Listing 5-9 shows the duplication function code.

Listing 5-9.  The Action to Duplicate the Virtual Host Objects

def duplicate(self, request, queryset):
    msg = ''
    for vhost in queryset:
        new_vhost = VirtualHost()
        new_vhost.description = "%s (Copy)" % vhost.description
        new_vhost.bind_address = vhost.bind_address
        new_vhost.is_template = False
        new_vhost.is_default = False
        new_vhost.save()
        # recreate all 'orphan' directives that aren't parents
        o = vhost.vhostdirective_set.filter(parent=None).filter(directive__is_container=False)
        for vhd in o:
            new_vhd = VHostDirective()
            new_vhd.directive = vhd.directive
            new_vhd.value = vhd.value
            new_vhd.vhost = new_vhost
            new_vhd.save()
        # recreate all parent directives
        for vhd in vhost.vhostdirective_set.filter(directive__is_container=True):
            new_vhd = VHostDirective()
            new_vhd.directive = vhd.directive
            new_vhd.value = vhd.value
            new_vhd.vhost = new_vhost
            new_vhd.save()
            # and all their children
            for child_vhd in vhost.vhostdirective_set.filter(parent=vhd):
                msg += str(child_vhd)
                new_child_vhd = VHostDirective()
                new_child_vhd.directive = child_vhd.directive
                new_child_vhd.value = child_vhd.value
                new_child_vhd.vhost = new_vhost
                new_child_vhd.parent = new_vhd
                new_child_vhd.save()
    self.message_user(request, msg)
duplicate.short_description = 'Duplicate selected Virtual Hosts'


Generating the Configuration File

We’ve finished adjusting the administration interface, so it is now ready for adding new virtual hosts and managing the existing database entries. We still need to finish writing the view method that will display the information. There is one issue, though: the “parent” directives mimic the XML syntax, that is, they have opening and closing elements. The default string representation that we’ve written for the VHostDirective model class takes care of the opening element, but we also need a function that generates the XML-like closing tag. These two tags will be used to enclose the “child” configuration directives. We add the following method to the VHostDirective class in the models.py file. It converts an opening tag such as <Location> into the matching closing tag </Location> if the directive is marked as a container directive:

def close_tag(self):
    return "</%s>" % self.directive.name.strip('<>') if self.directive.is_container else ""

Once we’ve done that, we extend the previously created empty view method with the code from Listing 5-10. This code iterates through all available objects if no arguments were supplied. If an integer is supplied as an argument, it selects only the object with the matching ID. For each object in the list, a dictionary structure is created. This structure contains the VirtualHost object and the corresponding directive objects. The orphan directives and the containers are stored separately, so it’s easier to distinguish between them in the template. The return object sets the MIME type of the response to text/plain, which allows us to download the URL directly to the configuration file.

Listing 5-10.  The View Method

from httpconfig.models import *
from django.http import HttpResponse, HttpResponseRedirect
from django.shortcuts import render_to_response, get_object_or_404

# Create your views here.

def full_config(request, object_id=None):
    if not object_id:
        vhosts = VirtualHost.objects.all()
    else:
        vhosts = VirtualHost.objects.filter(id=object_id)
    vhosts_list = []
    for vhost in vhosts:
        vhost_struct = {}
        vhost_struct['vhost_data'] = vhost
        vhost_struct['orphan_directives'] = \
            vhost.vhostdirective_set.filter(directive__is_container=False).filter(parent=None)
        vhost_struct['containers'] = []
        for container_directive in \
                vhost.vhostdirective_set.filter(directive__is_container=True):
            vhost_struct['containers'].append({'parent': container_directive,
                                               'children': \
                vhost.vhostdirective_set.filter(parent=container_directive),
                                              })
        vhosts_list.append(vhost_struct)
    return render_to_response('full_config.txt',
                              {'vhosts': vhosts_list},
                              mimetype='text/plain')


■ Note  The backslash character in the examples is used to wrap long lines of code. This is valid Python syntax that allows you to format your code for greater readability. If you are retyping these examples, please maintain the same code structure and layout. Do not confuse the backslash character with the line-wrapping symbol («), which indicates that a line was too long to fit on the page and has been wrapped; you must rejoin lines split by that symbol when reusing the examples.

As you know from Chapters 3 and 4, the templates are stored in the templates subdirectory in the application folder. Listing 5-11 shows the full_config.txt template.

Listing 5-11.  The Virtual Host View Template

# Virtual host configuration section
# automatically generated - do not edit

{% for vhost in vhosts %}

##
## {{ vhost.vhost_data.description }}
##
{% if vhost.vhost_data.is_template %}#{% endif %}<VirtualHost {{ vhost.vhost_data.bind_address }}>
{% for orphan_directive in vhost.orphan_directives %}
{% if vhost.vhost_data.is_template %}#{% endif %}  {{ orphan_directive }}
{% endfor %}
{% for container in vhost.containers %}
{% if vhost.vhost_data.is_template %}#{% endif %}  {{ container.parent|safe }}
{% for child_dir in container.children %}
{% if vhost.vhost_data.is_template %}#{% endif %}    {{ child_dir }}
{% endfor %}
{% if vhost.vhost_data.is_template %}#{% endif %}  {{ container.parent.close_tag|safe }}
{% endfor %}
{% if vhost.vhost_data.is_template %}#{% endif %}</VirtualHost>
{% endfor %}

After you’ve made all the modifications, you should be able to navigate to the website URL (in our example, this would be http://www.example.com/), and the result should be a section of the automatically generated Apache configuration file that contains the virtual host definitions, as shown in Listing 5-12. Note that the templates are also included, but are commented out and thus will be ignored by the web server.

Listing 5-12.  A Sample Configuration File

# Virtual host configuration section
# automatically generated - do not edit
##
## My test server 1
##


<VirtualHost *:80>
  ServerName www.apress.com
  AcceptPathInfo Off
  AddDefaultCharset Off
</VirtualHost>
##
## Another test server
##
#<VirtualHost *:8080>
#  ServerName www.google.com
#  ServerAlias www.1e100.net
#</VirtualHost>

Summary

In this chapter we discussed how to modify the default Django administration application to make it more user-friendly and to suit your object models. Key points to remember:

•	The object listing can include any model properties as well as custom-defined functions.

•	The custom-defined functions in the object list can also generate HTML output.

•	You can add custom actions to the object list administration page.

•	If your model has many fields, they can be rearranged into logical groups.

•	You can include a child model in the parent edit page as an inline formset.


CHAPTER 6

Gathering and Presenting Statistical Data from Apache Log Files

This chapter covers the architecture and implementation of plug-in based applications. As an example, we’re going to build a framework for analyzing Apache log files. Rather than creating a monolithic application, we’ll use the modular approach. Once we have a base framework, we’ll create a plug-in for it that performs the analysis based on the geographical location of the requestor.

Application Structure and Functionality

In the data mining and statistics gathering area, it is difficult to come up with a single application that suits the requirements of multiple users. Let’s take the analysis of Apache web server logs as an example. Each request that is received by the web server is written to a log file. There are several different data fields written in each log line, along with the timestamp of when the request came in.

Let’s imagine you’ve been asked to write an application that analyzes the log files and produces a report. This is the extent of a typical request that comes from users who are interested in the statistical information. Obviously, there is not much you can do with this request, so you ask for more information, such as what exactly the users want to see in their report. Now the hypothetical users are getting more involved in the design phase, and they tell you that they want to see the total number of downloads for a particular file. Well, that’s easy to do. But then you get another request that asks for per-hour statistics of the site hits. You script that in. Then there’s a request to correlate the time of the day with the browser type. And the list goes on and on. Even if you’re writing the tools for one particular organization, the requirements are too diverse and impossible to capture at the requirement-gathering phase.

So what should you do in this situation? Wouldn’t it be nice to have a generic application that could be extended with modules that specialize in extracting and processing the information? Each module would be responsible for performing specific calculations and producing its own reports. These modules could be added and removed as and when required, without affecting the functionality of other modules and, more important, without requiring any changes to the main application.

This type of modular structure is often referred to as the plug-in architecture. A plug-in is a small piece of software that extends the functionality of the main application. This technique is very popular and is used in many different applications. A good example is the web browser. Most of the popular web browsers on the market support plug-ins. A web page may contain an embedded Adobe Flash movie, but the browser itself doesn’t know (and doesn’t need to know) how to handle this type of file. So it looks for a plug-in that has the capability to process and display the Adobe Flash file. If it finds such a plug-in, it passes the file object to it for processing. If it can’t find one, the object is simply not displayed to the end user. The absence of the appropriate plug-in does not prevent the web page from being displayed.

We’ll use this approach to build the application for analyzing Apache logs. Let’s begin with the requirements for the particular statistical analysis tasks for the application.


Application Requirements

We need to implement two main requirements in our application:

•	The main application will be responsible for parsing the Apache log files and extracting the fields from each log line. The log line format may differ between web server installations, so the application should be configurable to match the log file format. (A small parsing sketch follows this list.)

•	The application should be able to discover the installed plug-in modules and pass the extracted fields to the appropriate plug-in module for further processing. Adding new plug-in modules should not have any effect on the functionality of the existing modules and the functionality of the main application.
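To make the first requirement more concrete, here is a minimal sketch of how a single log line could be broken into named fields. This is not the parser developed in this chapter: it hard-codes the common Apache "combined" log format as a regular expression, and only the remote_host key is deliberately chosen to match the field name used by the GeoIP plug-in later on; the other group names are illustrative.

import re

# One named group per field of the "combined" log format
LOG_RE = re.compile(
    r'(?P<remote_host>\S+) (?P<remote_logname>\S+) (?P<remote_user>\S+) '
    r'\[(?P<time_stamp>[^\]]+)\] "(?P<request_line>[^"]*)" '
    r'(?P<status>\d+) (?P<response_size>\S+) "(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"')

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /index.html HTTP/1.0" 200 2326 "-" "Mozilla/4.08"')
fields = LOG_RE.match(line).groupdict()
print fields['remote_host'], fields['status']    # prints: 127.0.0.1 200

A configurable version would generate such an expression from the log format string instead of hard-coding it.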

Application Design

The requirements imply that the application should be split into two parts:

Main application: The application will parse the log files from the list of directories supplied as a command-line argument to it. Each log file will be processed one line at a time. The application does not guarantee that the files are processed in chronological order. Each log line is split on word boundaries, and the field separator is the space character. It is possible that some fields will have space characters in their contents; such fields must be enclosed in double quotes. For ease of use, the fields will be identified by the corresponding log format field codes, as described in the Apache documentation.

Plug-in manager component: The plug-in manager is responsible for discovering and registering the available plug-in modules. Only the special Python classes will be treated as plug-in modules. Each plug-in exposes the log fields it's interested in. When the main application parses the log files, it will check the subscribed plug-in table and pass the required information to the relevant plug-ins.

Next, let's look at how we can implement the plug-in framework in Python.

Plug-in Framework Implementation in Python

There's good and bad news when it comes to the plug-in framework implementation in Python. The bad news is that there is no standard approach to implementing the plug-in architecture. There are several different techniques, as well as commercial and open-source products to use, but each approaches the problem differently. Some are better in one area, but may fall short in other areas. The way you choose to implement this architecture largely depends on what you want to achieve.

The good news is that there is no de facto standard for implementing the plug-in framework, so we get to write our own! As we write the implementation, you'll learn several new things about the Python language and programming techniques, such as class type inspection, duck typing, and dynamic module loading. Before we dive into the technical details, though, let's establish exactly what a plug-in is and how it is related to the main, or host, application.


The Mechanics of a Plug-in Framework

The host application processes the data it receives—whether it's a log file for the log-parsing engine, an HTML file for a web browser, or another type of file. Its work is completely unaffected by the presence of the plug-ins or their functionality. However, the host application provides a service to the plug-in modules. In the case of the log-processing application, its sole responsibility is reading the data from the files, recognizing the log format, and converting that data to the appropriate data structures. This is the service that it provides to the plug-in modules. The main application does not care whether the data it has produced is used by any of the modules or how it is used.

The plug-in modules largely depend on the host application. As an example, let's take the plug-in that counts the number of requests. This plug-in cannot perform any counting unless it receives the data. So the plug-in is rarely useful without the main application.

You may wonder why you should bother with this separation at all. Why can't the plug-in modules read the data files and do whatever they need to do with the data? As we discussed, there might be many different applications performing different calculations with the same data. Having each of those modules implement the same data reading and parsing functionality would be inefficient from the development perspective—it takes time to redevelop the same process again and again. Obviously, this is a rather simplistic example.

Quite often, the end user does not notice this separation between the main application and the plug-in modules. The user experiences the application as the combined result of the application and the plug-in. Let's consider the web browser example again. The HTML page is rendered by the browser engine and is presented to the user. The plug-in modules render the various components within the page. For example, the Adobe Flash movie is rendered by the Flash plug-in, and the Windows Media files are rendered by the Windows Media plug-in module. The user sees only the end result: the rendered web page. Adding new plug-ins to the system simply extends the functionality of the application. After deploying a new plug-in, users can start visiting websites that did not display correctly (or at all) before the plug-in was installed.

Another great example of an application that is plug-in based is the Eclipse project (http://eclipse.org/). It started as a Java development environment, but it has grown into a platform that supports multiple languages, integrates with various version control systems, and provides for modeling and reporting—all thanks to its plug-in architecture. The basic application doesn't do a lot, but you can extend it and tailor it to your needs by installing the appropriate plug-ins. So the same "application" might do completely different things. To me, it's a Python development platform; to someone else, it's a UML modeling tool.

Interface Model

As you might have already guessed, the host application and the plug-in modules are typically very loosely coupled entities. Therefore, a protocol must be defined for the interaction between those two entities. Usually, the host application exposes the well-documented service interfaces, such as function names. The plug-in methods call them whenever they need anything from the host application.

Similarly, the plug-ins expose their interface, so that the host application can send the data to them or notify them about some occurring events. This is where matters get slightly more complicated. The plug-in modules usually implement functionality that the host application may not be aware of. Therefore, the plug-ins may announce their capabilities, such as a capability to display a Flash movie file. The capability type is usually associated with the module function name, so that the main application knows which method implements the capability.

As an example, let's consider a simplistic browser model. We have a basic host application that receives the HTML page and also downloads all linked-in resources. Each resource has a MIME type associated with it. The Flash objects have the application/x-shockwave-flash type. When the browser comes across such an object, it will look in its plug-in registry and search for a plug-in that claims to have a capability to process this type of file. Once the plug-in and the method name are found, the host application calls that method and passes the file object to it.
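The capability lookup just described can be sketched in a few lines of Python. The registry layout, the class, and the method names here are made up for illustration; they are not part of the framework we are about to build.

class FlashPlugin(object):
    """A hypothetical plug-in that claims the Flash rendering capability."""
    def render_flash(self, file_object):
        print 'rendering', file_object

# The registry maps a capability (here a MIME type) to a plug-in instance
# and the name of the method that implements that capability
plugin_registry = {'application/x-shockwave-flash': (FlashPlugin(), 'render_flash')}

def dispatch(mime_type, file_object):
    entry = plugin_registry.get(mime_type)
    if entry is None:
        return    # no plug-in claims this capability; the object is simply not displayed
    plugin, method_name = entry
    getattr(plugin, method_name)(file_object)

dispatch('application/x-shockwave-flash', 'movie.swf')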


Plug-in Registration and Discovery

So what exactly is this plug-in registry that the host application checks? In simple terms, it's a list of all the plug-in modules that have been found and loaded together with the main application. This list usually contains the object instances, their capabilities, and the functions that implement these capabilities. The registry is a central location to store all plug-in instances, so that the host application can find them during runtime.

The plug-in registry is created during the plug-in discovery process. The discovery process varies among the different implementations, but usually involves finding the appropriate application files and loading them into memory. Typically, there is a separate process within the host application that deals with the plug-in management tasks, such as the discovery, registration, and control. Figure 6-1 shows an overview of all the components and their relationships.

[Figure 6-1 shows the host application (exposing service_X and service_Y), the plug-in framework, and the plug-in manager, which discovers the plug-ins (Plugin 1 with function_A, Plugin 2 with function_B), initializes them, and adds them to the plug-in registry together with their capabilities.]
Figure 6-1.  Typical plug-in architecture

Creating the Plug-in Framework

As I've mentioned, there are several ways of implementing the plug-in–based architecture in Python. Here, I'm going to discuss one of the simplest methods, which is flexible enough to suit the needs of most small applications.

■ Note Dr André Roberge made a descriptive presentation at PyCon 2009 comparing several different plug-in mechanisms. You can find his presentation, titled "Plugins and monkeypatching: increasing flexibility, dealing with inflexibility," at http://blip.tv/file/1949302/. If you decide that you need a more sophisticated implementation, take a look at the implementations provided by the Zope (http://zope.org/), Grok (http://grok.zope.org/), and Envisage (http://code.enthought.com/projects/envisage/) frameworks. These products are enterprise-grade plug-in frameworks that will allow you to build extensible applications. The downside of using them is that they are usually too big and complicated for simple applications.


Discovery and Registration

The discovery process is based on the fact that the base class can find all its child classes. Here's a simple example:

>>> class Plugin(object):
...     pass
...
>>> class MyPlugin1(Plugin):
...     def __init__(self):
...         print 'plugin 1'
...
>>> class MyPlugin2(Plugin):
...     def __init__(self):
...         print 'plugin 2'
...
>>> Plugin.__subclasses__()
[<class '__main__.MyPlugin1'>, <class '__main__.MyPlugin2'>]
>>>

This code creates a base class and then defines two more classes that inherit from the base class. We now can find all classes that have inherited from the main class by calling the base class built-in method __subclasses__(). This is a very powerful mechanism for finding classes without knowing their names, or even the names of the module from which they have been loaded.

Once the classes have been discovered, we can create the instances of each class and add them to a list. This is the registration process. After all the objects have been registered, the main program can start calling their methods:

>>> plugins = []
>>> for cls in Plugin.__subclasses__():
...     obj = cls()
...     plugins.append(obj)
...
plugin 1
plugin 2
>>> plugins
[<__main__.MyPlugin1 object at 0x...>, <__main__.MyPlugin2 object at 0x...>]
>>>

So the discovery and registration process flow is as follows:

1.	All plug-in classes inherit from one base class that is known to the plug-in manager.

2.	The plug-in manager imports one or more modules that contain the plug-in class definitions.

3.	The plug-in manager calls the base class method __subclasses__() and discovers all loaded plug-in classes.

4.	The plug-in manager creates instances.

We now have several problems to resolve. First, the plug-in classes need to be stored in a separate location, preferably in separate files. This allows for deploying new plug-ins and removing obsolete ones without worrying that the application files might accidentally be overwritten. So we need a mechanism to import arbitrary Python modules that contain the plug-in class definitions. You can use the Python built-in function __import__ to load any module by its name at runtime, but the module file needs to be in the system search path.
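As a rough illustration of how __import__ can be used for this, the following sketch loads every module from a plug-in directory. The plugins/ directory name and the plugin_*.py naming convention match the layout used later in this chapter; the helper function itself is only an example, not the plug-in manager implementation.

import os
import sys

def load_plugin_modules(directory='plugins'):
    # Make sure the plug-in directory is on the module search path
    sys.path.insert(0, directory)
    modules = []
    for file_name in os.listdir(directory):
        if file_name.startswith('plugin_') and file_name.endswith('.py'):
            # Importing the module is enough for its plug-in classes to become
            # visible to Plugin.__subclasses__()
            modules.append(__import__(file_name[:-3]))
    return modules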


For the sample application, we'll use the following directory and file structure:

http_log_parser.py

>>> class Cow():
...     def moo(self):
...         print 'moo..'
...
>>> class Duck():
...     def quack(self):
...         print 'quack!'
...
>>> animal1 = Cow()
>>> animal2 = Duck()
>>>
>>> for animal in [animal1, animal2]:
...     if hasattr(animal, 'quack'):
...         animal.quack()
...     else:
...         print animal, 'cannot quack'
...
<__main__.Cow instance at 0x...> cannot quack
quack!
>>>


>>> for animal in [animal1, animal2]:
...     try:
...         animal.quack()
...     except AttributeError:
...         print animal, 'cannot quack'
...
<__main__.Cow instance at 0x...> cannot quack
quack!
>>>

In the first iteration, we explicitly check for the availability of the method (we ask for permission) before we call the method. In the second iteration, we call the method without checking if it’s available. We then catch the possible exception (we ask for forgiveness) and handle the absence of the method accordingly, if at all.

Plug-in Modules

We're now in a position to start writing the plug-in modules and using the scripts to analyze the Apache web server log files. In this section, we'll create a script that counts all requests and sorts them by the country from which they originated. We will use the GeoIP Python library to perform the IP-to-country-name mappings.

■ Note The GeoIP data is produced by the MaxMind company, which provides the databases for individual (free) and commercial (paid-for) use. You can find more information about MaxMind's products and services at http://maxmind.com/app/ip-location. The GeoIP database attempts to provide the geographical information (such as country, city, and coordinates) associated with an IP address. This is useful for various purposes. For example, it allows a business to provide a localized ad service, where it can display advertisements to users depending on their location.

Installing the Required Libraries

The GeoIP database libraries are written in C, but there are Python bindings available as well. The packages are available on most Linux platforms. For example, on a Fedora system, run the following command to install these libraries:

$ sudo yum install GeoIP GeoIP-python

This will install the C libraries along with the helper tools and the Python bindings. The package may include the initial database that contains the IP-to-country mapping data, but most likely this data will be out of date, as the database is normally updated every three to four weeks. There are two databases that are free for personal use: the Countries database and the Cities database. I suggest updating those two databases regularly if you want to have up-to-date information. The tools that can fetch the latest version of the database are provided in the base package. Here's how you fetch the databases after you install the packages:

$ sudo touch /usr/share/GeoIP/GeoIP.dat
$ sudo touch /usr/share/GeoIP/GeoLiteCity.dat
$ sudo perl /usr/share/doc/GeoIP-1.4.7/fetch-geoipdata.pl


Fetching GeoIP.dat from http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
GeoIP database updated. Old copy is at GeoIP.dat.20100521
$ sudo perl /usr/share/doc/GeoIP-1.4.7/fetch-geoipdata-city.pl
Fetching GeoLiteCity.dat from http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
GeoIP database updated. Old copy is at GeoLiteCity.dat.20100521

The reason for the touch command at the beginning is that if the .dat files are not present, the tools will fail to download the new version, so those files must be created first.

Using the GeoIP Python Bindings

When the libraries are installed, they will look for the data files in the standard location (typically /usr/share/GeoIP/), so we don't need to specify the location. We need only specify the access method:

import GeoIP

# the data is read from the disk every time it's accessed
# this is the slowest access method
gi = GeoIP.new(GeoIP.GEOIP_STANDARD)
# the data is cached in memory
gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)

Once we initialize the data access object, we can start looking up the information:

>>> import GeoIP
>>> gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
>>> gi.country_name_by_name('www.apress.com')
'United States'
>>> gi.country_code_by_name('www.apress.com')
'US'
>>> gi.country_name_by_addr('4.4.4.4')
'United States'
>>> gi.country_code_by_addr('4.4.4.4')
'US'
>>>

If we want to retrieve the city information, we need to open the specific data file. We then can perform the city data lookups as well:

>>> import GeoIP
>>> gi = GeoIP.open('/usr/share/GeoIP/GeoLiteCity.dat', GeoIP.GEOIP_MEMORY_CACHE)
>>> gir = gi.record_by_name('www.apress.com')
>>> for k, v in gir.iteritems():
...     print "%20s: %s" % (k, v)
...
                city: Emeryville
         region_name: California
              region: CA
           area_code: 510
           time_zone: America/Los_Angeles
           longitude: -122.289703369


          metro_code: 807
       country_code3: USA
            latitude: 37.8342018127
         postal_code: 94608
            dma_code: 807
        country_code: US
        country_name: United States
>>>

Writing the Plug-in Code

We need to decide which methods we're going to implement. We need to receive the information about each log line that is being processed. Therefore, the plug-in must implement the process() method, which will perform the country lookup and increase the appropriate counters. At the end of the loop, we need to print a simple report that lists all the countries and sorts the list by the number of requests. As shown in Listing 6-3, we use only one field from the data structure and just ignore the rest of the data.

Listing 6-3.  The GeoIP Lookup Plug-in

#!/usr/bin/env python

from manager import Plugin
from operator import itemgetter
import GeoIP

class GeoIPStats(Plugin):

    def __init__(self, **kwargs):
        self.gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
        self.countries = {}

    def process(self, **kwargs):
        if 'remote_host' in kwargs:
            country = self.gi.country_name_by_addr(kwargs['remote_host'])
            if country in self.countries:
                self.countries[country] += 1
            else:
                self.countries[country] = 1

    def report(self, **kwargs):
        print "== Requests by country =="
        for (country, count) in sorted(self.countries.iteritems(),
                                       key=itemgetter(1), reverse=True):
            print " %10d: %s" % (count, country)

We save this file as plugin_geoiplookup.py in the plugins/ directory. (Actually, any name with the plugin_ prefix and .py suffix will be recognized as a valid plug-in module.) Now if we run the main application, we'll get a result similar to the one in the following example, provided that we have a sample log file in the logs/ directory.


$ ./http_log_parser.py
== Requests by country ==
        382: United States
        258: Sweden
        103: France
         42: China
         31: Russian Federation
          9: India
          8: Italy
          7: United Kingdom
          7: Anonymous Proxy
          6: Philippines
          6: Switzerland
          2: Tunisia
          2: Japan
          1: Croatia

Visualizing the Data

This simple report functionality is sufficient for data analysis purposes, but sometimes you might want to get a quick visual overview of the results. Extending the previous example, we are going to create a heat map image as part of the report-generation process. The heat map will represent all the countries, and the intensity of the color will be proportional to the number of hits we find in the log files.

We are going to use the matplotlib library and its Basemap extension to draw the world map. Matplotlib comes with a basic world map shape definition; however, we will need more detailed shapes for each country. These can be obtained for free from various resources on the Internet. You can find more information and detailed installation instructions for numpy and matplotlib in Chapter 11; therefore, I will only discuss the mapping capabilities of matplotlib and Basemap in this chapter.

Installing Required Libraries and Data Files

The following installation instructions assume that you are running a Fedora system. You might need to modify them to suit your specific OS, but the package names are typically identical. We are going to use a couple of helper functions from the numpy package, so we need to install it first:

$ yum install numpy

The plotting functionality is provided by the matplotlib library, which can be installed by running the following command:

$ yum install matplotlib

The map manipulation functions are available from the matplotlib extension called Basemap. Basemap does not do any plotting by itself; it uses matplotlib to make the actual drawings. Basemap provides functionality to transform coordinates to one of the available map projections. It depends on the geos (Geometry Engine, Open Source) library, so we will need to install that as well:

$ yum install geos
$ yum install basemap


And finally, we will need to parse custom ESRI shape files, so we are going to use the PyShp (shapefile) library for the task. Unfortunately, it was not available as an RPM package at the time of writing, so we will use the pip command to install it:

$ pip install pyshp

The ESRI shapefile format is a popular Geographic Information System (GIS) vector data format, and a lot of geographical information is made available in this format. We are going to download shape data for all countries from http://thematicmapping.org/downloads/world_borders.php. In the directory where our plug-in code is located, we download the file http://thematicmapping.org/downloads/TM_WORLD_BORDERS-0.3.zip and unzip it into a separate directory:

$ mkdir world_borders && cd world_borders
$ curl -O http://thematicmapping.org/downloads/TM_WORLD_BORDERS-0.3.zip
$ unzip TM_WORLD_BORDERS-0.3.zip

This will create a number of ESRI shape files:

$ ls -1 world_borders/
Readme.txt
TM_WORLD_BORDERS-0.3.dbf
TM_WORLD_BORDERS-0.3.prj
TM_WORLD_BORDERS-0.3.shp
TM_WORLD_BORDERS-0.3.shx

■ Note ESRI stands for Environmental Systems Research Institute. The company specializes in producing Geographic Information System (GIS) software and geodatabase management applications. The shapefile is a geospatial vector data format used to store and exchange data between GIS applications. The data stored in a shapefile is a set of geometric data primitives, such as points, lines, and polygons, along with the associated attributes that describe what those primitives represent. As an example, the shapefile we are going to use contains polygons (geometric data primitives) that represent countries of the world (associated attributes); in other words, each polygon has a name associated with it. The term shapefile may imply that it is a single file, but in fact a shapefile is a set of multiple files: the SHP file contains the geometric shape data, SHX is the geometric data index file, which is used to locate the relevant data, DBF is the attribute database, and PRJ defines the coordinate system.

Working with Shapefile

The PyShp library makes it easy to read and extract geospatial information from a shapefile. You can find the full PyShp documentation at https://code.google.com/p/pyshp/wiki/PyShpDocs.

First, we need to create and initialize the reader object, as all data access will be done through this object. When we initialize a new reader object, we need to tell it where to find the file that contains the shape objects. It will automatically open the attribute and other files:

>>> import shapefile
>>> r = shapefile.Reader('world_borders/TM_WORLD_BORDERS-0.3.shp')


Once the file reader object is created, we can start working with the data contained in the shapefile. Let's read all stored shapes first. Shapes are made of one or more points, and they represent a physical location on a map.

>>> countries = r.shapes()
>>> len(countries)
246

As you can see, there are 246 shapes stored in our shapefile, each representing one of the world's countries. Each shape is a set of points and contains one or more parts. For example, if a mainland country has a single island that belongs to it, the shape representing such a country will contain two parts: one for defining the mainland country's boundaries, and one for the island. Let's have a closer look at one of the countries from our list. We will pick the first one:

>>> country = countries[0]

The shape that defines the boundaries of this country is made of 48 points:

>>> len(country.points)
48

It is also not a contiguous border; rather, it has two distinct parts:

>>> len(country.parts)
2
>>> country.parts
[0, 23]

The numbers in the parts list are the indexes of the first point of the referenced part. So, in our example, the first part starts with the point at index 0 and the second part starts with the point at index 23. Thus, the first part has 23 points (points 0 to 22), and the second part has 25 points (points 23 to 47). Each point is just a coordinate on the world map:

>>> country.points[0]
[-61.686668, 17.024441000000138]

Now we know how to read geometric data, but that data is mostly meaningless if we do not know what it represents. As we already know, each shapefile also contains attributes for each shape. These attributes can be read by calling the records() method of the reader object, just as we did to read the shapes information:

>>> records = r.records()
>>> len(records)
246

You can see that we have a matching number of attributes—one attribute for each shape. Let's see what the attributes are for the first country in the list:

>>> country_rec = records[0]
>>> country_rec
['AC', 'AG', 'ATG', 28, 'Antigua and Barbuda', 44, 83039, 19, 29, -61.783, 17.078]

This explains why we see two parts in the country shape: Antigua and Barbuda is a twin-island nation that lies on the eastern edge of the Caribbean Sea.
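To make the relationship between points and parts concrete, here is a small interactive sketch that slices the points list of the country we just read into its individual parts; the boundaries helper list is mine and is not part of PyShp:

>>> boundaries = list(country.parts) + [len(country.points)]
>>> [len(country.points[boundaries[i]:boundaries[i + 1]]) for i in range(len(country.parts))]
[23, 25]

The two groups of points correspond to the two islands that make up the country.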


Although we may be able to guess what some of the fields are (like the country name and the code), other fields are not self-explanatory. To see what each field means, we need to check the fields property of the reader object:

>>> r.fields
[('DeletionFlag', 'C', 1, 0), ['FIPS', 'C', 2, 0], ['ISO2', 'C', 2, 0], ['ISO3', 'C', 3, 0],
 ['UN', 'N', 3, 0], ['NAME', 'C', 50, 0], ['AREA', 'N', 7, 0], ['POP2005', 'N', 10, 0],
 ['REGION', 'N', 3, 0], ['SUBREGION', 'N', 3, 0], ['LON', 'N', 8, 3], ['LAT', 'N', 7, 3]]

Each field in the list is another list and contains the information shown in Table 6-2.

Table 6-2.  Attributes in the Fields Description List

Index  Description
0      Field name, which describes the data in this column index.
1      Field type contained in this column index. Possible types are: [C]haracter, [N]umber, [L]ong, [D]ate, and [M]emo.
2      Field length, which defines the length of data found at this column index.
3      Decimal length, which describes the number of decimal places in "number" fields.

Displaying the Requests Data on the World Map

We are now ready to generate a world map with the countries. Each country that has generated any number of requests will be colored in, and the color intensity is proportional to the number of requests generated. We add the following code to the plugin_geoiplookup.py plug-in. The comments in Listing 6-4 explain what each section of the code is doing.

Listing 6-4.  Adding a Map Generator to the GeoIP Lookup Plug-in

#!/usr/bin/env python

[...]

import shapefile
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import LineCollection
from matplotlib import cm

[...]


    def report(self, **kwargs):
        print "== Requests by country =="
        for (country, count) in sorted(self.countries.iteritems(),
                                       key=itemgetter(1), reverse=True):
            print " %10d: %s" % (count, country)
        generate_map(self.countries)

def generate_map(countries):
    # Initialize plotting area, set the boundaries and add a sub-plot on which
    # we are going to plot the map
    fig = plt.figure(figsize=(11.7, 8.3))
    plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1, wspace=0.15, hspace=0.05)
    ax = plt.subplot(111)

    # Initialize the basemap, set the resolution, projection type and the viewport
    bm = Basemap(resolution='i', projection='robin', lon_0=0)

    # Tell basemap how to draw the countries (built-in shapes), draw parallels and meridians
    # and color in the water
    bm.drawcountries(linewidth=0.5)
    bm.drawparallels(np.arange(-90., 120., 30.))
    bm.drawmeridians(np.arange(0., 360., 60.))
    bm.drawmapboundary(fill_color='aqua')

    # Open the countries shapefile and read the shape and attribute information
    r = shapefile.Reader('world_borders/TM_WORLD_BORDERS-0.3.shp')
    shapes = r.shapes()
    records = r.records()

    # Iterate through all records (attributes) and shapes (countries)
    for record, shape in zip(records, shapes):

        # Extract longitude and latitude values into two separate arrays then
        # project the coordinates onto the map projection and transpose the array, so that
        # the data variable contains (lon, lat) pairs in the list.
        # Basically, the following two lines convert the initial data
        # [ [lon_original_1, lat_original_1], [lon_original_2, lat_original_2], ... ]
        # into projected data
        # [ [lon_projected_1, lat_projected_1], [lon_projected_2, lat_projected_2], ... ]
        #
        # Note: Calling the basemap object with the coordinates as an argument returns the
        # projection of those coordinates
        lon_array, lat_array = zip(*shape.points)
        data = np.array(bm(lon_array, lat_array)).T

        # Next we will create groups of points by splitting the shape.points according to
        # the indices provided in shape.parts

        if len(shape.parts) == 1:
            # If the shape has only one part, then we have only one group. Easy.
            groups = [data,]


        else:
            # If we have more than one part ...
            groups = []
            for i in range(1, len(shape.parts)):
                # We iterate through all parts and find their start and end positions
                index_start = shape.parts[i-1]
                index_end = shape.parts[i]
                # Then we copy all points between the two indices into their own group
                # and append that group to the list
                groups.append(data[index_start:index_end])
            # The last group starts at the last index and finishes at the end of the points list
            groups.append(data[index_end:])

        # Create a collection of lines from the groups of points. Each group represents a line.
        lines = LineCollection(groups, antialiaseds=(1,))
        # We then select a color from a color map (in this instance all Reds).
        # The intensity of the selected color is proportional to the number of requests.
        # The color map accepts values from 0 to 1, therefore we need to normalize our request
        # count figures, so that the max number of requests is 1, and the rest is proportionally
        # spread in the range from 0 to 1.
        max_value = float(max(countries.values()))
        country_name = record[4]

        requests = countries.get(country_name, 0)
        requests_norm = requests / max_value

        lines.set_facecolors(cm.Reds(requests_norm))

        # Finally we set the border color to be black and add the shape to the sub-plot
        lines.set_edgecolors('k')
        lines.set_linewidth(0.1)
        ax.add_collection(lines)

    # Once we are ready, we save the resulting picture
    plt.savefig('requests_per_country.png', dpi=300)

When the plug-in report method is called, it will display the results as text and also generate a map similar to the one shown in Figure 6-2.


Figure 6-2.  Example map generated by the Geo-IP lookup plug-in

Summary

In this chapter, we wrote a simple yet extensible and powerful plug-in framework in Python. We also implemented a simple Apache web server log parser, and wrote a plug-in to count the number of requests received and group them by the country from which they originated. Key points to remember:

•	Plug-ins allow decoupling of the main application from its extensions—the plug-in modules.

•	The plug-in architecture typically consists of three components: the host application, the plug-in framework, and the plug-in modules.

•	The plug-in framework is responsible for finding and registering the plug-in modules.

•	Any Python class can find the classes that inherit from it via __subclasses__(); this mechanism can be used to discover all plug-in classes.

•	You can use the MaxMind GeoIP database to find the physical location of an IP address.

•	Matplotlib, in combination with PyShp (shapefile) and Basemap, can be used to plot data on a map.


CHAPTER 7

Performing Complex Searches and Reporting on Application Log Files

System administration duties often include installing and supporting various applications. These may be either produced by open-source communities or developed in-house. There is also a wide variety of languages used to develop those applications; the common ones found these days are Java, PHP, Python, Ruby, and (yes, some are still using it) Perl. In this chapter I talk about applications developed in Java, as this seems to be the most common language selected by large enterprises for their web applications. Java applications commonly run within an application server container, such as Tomcat, Jetty, WebSphere, or JBoss.

You, as a system administrator, need to know whether the application is running correctly. Every well-organized and structured application is supposed to write its status to one or more log files; in the Java world, this is usually done via the log4j adapter. By observing the log file, a system administrator can detect any faults and failures within the application, which are commonly logged as exception stack traces. The logging of a full exception stack trace usually indicates an unrecoverable error—an error that the application was not able to handle itself.

If you do not happen to have many requests, and the application isn't doing much, catching these exceptions and analyzing them can be done by hand. However, if you need to manage hundreds of servers and there are tens of gigabytes of information produced, you surely need some automated tools to gather and analyze the data for you. In this chapter, I explain how I developed the open-source tool called Exctractor (no, this is not a typo—the name is constructed by joining two words, exception and extractor) and how it functions.

Defining the Problem

Before proceeding, let's review the problem that this application will be trying to resolve. Every program writes its running status to a log file. What exactly is being logged is up to the developer who created the application. There are no enforced standards on what to log, and even the format of the logging is somewhat undefined. Although it's not required, most log entries have timestamps and include a severity level to indicate the importance of the message, along with the actual text of the status message. This is not enforced, and you may find that the log files you are dealing with have more attributes—or maybe even less. For example, some applications that I have come across don't even bother logging a timestamp.

Generally, well-developed Java applications follow more or less the same standard when logging their status messages. Normally the messages are status reports written by the application that indicate what the application is doing at the moment. In situations where the application runs into an undefined state, it will generate an exception, which will normally be logged with full execution status information: the call stack.

I have created a simple web application that I'm going to use throughout this chapter to illustrate various aspects of exception raising and for analyzing different types of exceptions. Listing 7-1 is the source code for this application. You can compile it with the javac tool and run it from within the Tomcat application container. Be aware that this application is to be used only as an example, as it potentially allows any user to access any file on your system; the only limitation is your file system's access rights mechanism.


Listing 7-1.  A Java Program to Illustrate Application Behavior

import java.io.*;
import java.util.*;
import java.text.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class FileServer extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        String fileName = request.getParameter("fn");
        if (fileName != null) {
            out.println(readFile(fileName));
        } else {
            out.println("No file specified");
        }
    }

    private String readFile(String file) throws IOException {
        StringBuilder stringBuilder = new StringBuilder();
        Scanner scanner = new Scanner(new BufferedReader(new FileReader(file)));

        try {
            while (scanner.hasNextLine()) {
                stringBuilder.append(scanner.nextLine() + "\n");
            }
        } finally {
            scanner.close();
        }
        return stringBuilder.toString();
    }
}

Listing 7-2 is an example of a Java stack trace, which has been generated by the web application running in the Tomcat application container.

Listing 7-2.  An Example of a Java Stack Trace

Jan 18, 2010 8:08:49 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet FileServer threw exception
java.io.FileNotFoundException: /etc/this_does_not_exist_1061 (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:137)
        at java.io.FileInputStream.<init>(FileInputStream.java:96)
        at java.io.FileReader.<init>(FileReader.java:58)


        at FileServer.readFile(FileServer.java:30)
        at FileServer.doGet(FileServer.java:21)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:870)
        at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
        at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
        at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
        at java.lang.Thread.run(Thread.java:636)

If you take a closer look at the exception, you may notice that the application code tried to open a file, but the file did not exist. Obviously, a well-written application should handle a simple case like a missing file more gracefully than throwing an exception, but sometimes it is not feasible to build into the application logic a check for every possible scenario. In the case of more complex applications, this may not be possible at all.

Why We Use Exceptions

Language constructs such as events and signals are part of a normal program flow. Exceptions, by contrast, indicate that something has gone wrong while executing the program, such as a function called with wrong parameters, so that the result cannot be calculated. For example, suppose we have a function that divides two numbers and accepts them as parameters. Naturally, division by zero is not possible, and should such a function receive an instruction to divide by zero, it would have no idea what to do. So a seemingly simple function becomes rather complicated; it has to check whether it can divide the two numbers it is given and also return two values instead of one: one value is to indicate whether the operation completed successfully, and the other holds the actual result. Alternatively, it could return a number if the operation was successful or otherwise a null object. In either case, the code that called this function now has to be able to handle both numbers and null objects as a result, rendering simple arithmetic constructs into more complicated "if ... else" logic flows.

This is where exceptions come in. Instead of returning a special code that indicates the error, functions that cannot complete their normal operation will simply raise an exception. At the moment when an exception is raised, the program execution stops and the Java environment proceeds with the exception-handling procedure. Such exceptions can be "caught" by the application. Going back to the division example, the whole calculation code can be wrapped in Java's "try ... catch" construct. Then, regardless of the point at which the code failed, and the specific function (such as division), the code would catch any arithmetical exceptions and would know that the calculation couldn't have been completed.
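Although the chapter deals with Java applications, the same idea is easy to see in a few lines of Python (the language used throughout this book). The sketch below is mine and simply contrasts the two styles described above:

def divide_with_error_code(a, b):
    # Without exceptions, every caller has to inspect a status flag
    if b == 0:
        return (False, None)
    return (True, a / b)

def divide(a, b):
    # With exceptions, the function just does its job; the error case is
    # handled once, wherever the caller decides to catch it
    return a / b

ok, result = divide_with_error_code(10, 0)
if not ok:
    print 'calculation could not be completed'

try:
    print divide(10, 0)
except ZeroDivisionError:
    print 'calculation could not be completed'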


Are Exceptions Always a Bad Sign?

The short answer is "no" and the slightly longer answer is "it depends." The reason for an exception to be raised is that something unexpected happened. Let's say we have a web application that serves files from our server. All files are linked from external pages, and the general assumption is that whoever creates the listing would only list files that do exist. But, being human, we all make mistakes, and the operator of the listing may have made a typo, so the resulting link would point to a file that does not exist. Now, if a user clicks on the link, the application tries to do exactly what it is told to—retrieve the file. But the file does not exist, and so the code that is responsible for reading in the file would fail and throw an exception saying that the file does not exist.

Should the application check for the missing files and react appropriately? In this example, probably yes; but in more complicated situations, it is not always possible to predict and write code for every possible outcome. Even with simple applications like my file retriever service, it's not always possible to think about every possible thing that can go wrong. As an example, let's run the application as a Tomcat user and assume that all files being written to the file system have permissions set such that Tomcat users have read access to them. It's been like that for a long time, and the application works flawlessly. One day, a new system administrator joins the crew and, without knowing this, deploys a file with different user permissions. Suddenly there's a file access error. The file is not missing, but the process that runs with Tomcat user permissions cannot read it. The developer has not thought of this situation, and so there's no code to handle it. This is where exception handling is really helpful; the application encounters a situation that is different from the normal program flow and cannot handle it, so the code raises an exception and either the system administrator or the developer can examine why things have gone wrong.

Why We Should Analyze Exceptions

Now that we know exceptions in the logs aren't always a bad sign, does that mean we should leave them unhandled? My general view is that the log files should contain as few exceptions as possible. An occasional exception means that something exceptional has happened and we should investigate; but if there are similar exceptions over a period of time, that means the event is not exceptional anymore and it is something commonplace. Therefore, the application needs to be changed so that handling such events becomes part of the normal program flow rather than an exceptional event.

Going back to my file reader example, we see that the developer initially thought that there might be one possible error that he needs to check for, and that was a missing parameter, so he built the check into the application logic:

if (fileName != null) {
    out.println(readFile(fileName));
} else {
    out.println("No file specified");
}

That's a good strategy, as it may sometimes happen that the external references do not specify any file name, but the application happily handles the situation. Now, let's pretend for a moment that this has been running for a long time and no one has reported any issues. But one day you decide to have a look at the application log file and you notice some unusual exception stack traces that have never been logged before:

Jan 18, 2010 8:08:35 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet FileServer threw exception
java.io.FileNotFoundException: /etc/this_does_not_exist_2 (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        ...


This indicates that users are trying to reach a file that does not exist. You know that the only link to your web service is from another page, so you go and fix it. But how do you prevent this from happening again? There's nothing wrong with your application, but you might want to check and improve the process of adding the new links to the external page so that it only points to the files that do exist. Whether to build in a case for handling files that do not exist is entirely up to you, as there are no hard-and-fast rules for when and what should be handled in the application logic. My view is that if the exception is highly unlikely to happen, it's best to keep the application logic as simple as possible. Now, some time later you encounter yet another exception:

Jan 18, 2010 8:07:59 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet FileServer threw exception
java.io.FileNotFoundException: /etc/shadow (Permission denied)
        at java.io.FileInputStream.open(Native Method)
        ...

This time, it's indicating that the file is present but with the wrong permissions. Again, it's time to investigate why this is happening and to fix the root cause of the issue—which isn't always the application but might well be something external to it. In this situation, a new system administrator changed the file permissions, and that broke the application.

As you can see from this simple real-life scenario, exceptions in the application log files do not necessarily mean issues with the application that is generating them. To find the root cause of the issues that are either directly or indirectly indicated by the exception logs, you as a system administrator need to know as much as possible about the various indicators. Having the exception stack trace is very useful, but you also want to know when the exception first started to appear in the log files. What is the extent of the problem? How many of the exceptions are you getting? If you are receiving a large number of them, it is probably not really an exceptional situation, and the application needs to be modified to handle it as part of the application logic.

Parsing Complex Log Files

Parsing log files (or any other unstructured set of data) is a rather challenging task. Unlike structured data files such as XML or JSON, plain-text log files do not follow any strict rules and may change without any warning. It is completely up to the person who developed the application to decide what gets logged and in what format. The format of the log entries might even change between different releases of the software. As a system administrator, you may need to negotiate some sort of approval procedure, so that if you automate log parsing, you will not get caught by surprise when the format of the file changes. It is best to engage the developers as well, so that they use the same tools as you do. If they are using the same tools, they are less likely to break them.

To illustrate, I use the catalina.out file generated by the Tomcat application server. As you can see, the application itself is not writing any log messages at all, so the only log entries you will find there are from the JVM and Tomcat. Obviously, if you are using a different application container, such as Jetty or JBoss, your log entries may look different. Even if you are using Tomcat, you can override the default behavior and the way messages are formatted, so look at the log files that you are dealing with and adjust the examples in this chapter accordingly so that they match your environment.

What Can We Find in a Typical Log File?

Before proceeding with writing the analyzer code or changing any configuration for it, take a look and identify the types of messages you have in the log files, and determine how you can unambiguously identify them. Look for common attributes that make them distinguishable.

Typically, you will see standard messages generated by either the application itself or the application container. These messages are meant to inform you about the state of the application. Because these messages are generated by the application, they most likely indicate expected behavior, and each state they inform you about is part of the normal application flow. As I'm going to investigate exceptions, I'm not really interested in that type of message. Listing 7-3 is a snippet from Tomcat's log file that shows what "normal" log messages look like.


Listing 7-3.  Standard Logging Messages in catalina.out

Jan 17, 2010 8:18:24 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre/lib/i386/client:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre/lib/i386:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
Jan 17, 2010 8:18:24 AM org.apache.coyote.http11.Http11BaseProtocol init
INFO: Initializing Coyote HTTP/1.1 on http-8081
Jan 17, 2010 8:18:24 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 673 ms
Jan 17, 2010 8:18:24 AM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jan 17, 2010 8:18:24 AM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/5.5.23
Jan 17, 2010 8:18:24 AM org.apache.catalina.core.StandardHost start
INFO: XML validation disabled
Jan 17, 2010 8:18:25 AM org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
Jan 17, 2010 8:18:25 AM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
Jan 17, 2010 8:18:25 AM org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
Jan 17, 2010 8:18:25 AM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
Jan 17, 2010 8:18:25 AM org.apache.catalina.core.ApplicationContext log
INFO: org.apache.webapp.balancer.BalancerFilter: init(): ruleChain: [org.apache.webapp.balancer.RuleChain: [org.apache.webapp.balancer.rules.URLStringMatchRule: Target string: News / Redirect URL: http://www.cnn.com], [org.apache.webapp.balancer.rules.RequestParameterRule: Target param name: paramName / Target param value: paramValue / Redirect URL: http://www.yahoo.com], [org.apache.webapp.balancer.rules.AcceptEverythingRule: Redirect URL: http://jakarta.apache.org]]

You can see that all log entries start with a timestamp. This is one of the attributes I will use to detect the log entry. Also notice that log entries may span multiple lines. So each log entry starts with a line that begins with a timestamp and finishes when another line with a timestamp is detected. Write this down, as this is going to become one of the design decisions for your application.
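As a quick illustration of that design decision, the following sketch (mine, not the Exctractor code) treats any line that starts with a catalina.out style timestamp as the beginning of a new log entry and groups the lines in between:

import re

# Matches the "Jan 17, 2010 8:18:24 AM" prefix used by the JVM logger
TIMESTAMP_RE = re.compile(r'^[A-Z][a-z]{2} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M ')

def group_log_entries(lines):
    entry = []
    for line in lines:
        if TIMESTAMP_RE.match(line) and entry:
            yield entry       # a new timestamp closes the previous entry
            entry = []
        entry.append(line)
    if entry:
        yield entry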

The Structure of an Exception Stack Trace Log

Listing 7-4 is an example of a stack trace that has been generated by the JVM. This stack trace is from the Tomcat application server, which failed to load my web application because of a malformed web.xml. As you can see, such things cannot be predicted; hence, they are exceptions to normal operation.


Listing 7-4.  An Example of an Exception Stack Trace

Jan 17, 2010 10:07:04 AM org.apache.catalina.startup.ContextConfig applicationWebConfig
SEVERE: Parse error in application web.xml file at jndi:/localhost/test/WEB-INF/web.xml
org.xml.sax.SAXParseException: The element type "servlet-class" must be terminated by the matching end-tag "</servlet-class>".
        at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
        at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1562)
        at org.apache.catalina.startup.ContextConfig.applicationWebConfig(ContextConfig.java:348)
        at org.apache.catalina.startup.ContextConfig.start(ContextConfig.java:1043)
        at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:261)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4144)
        at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1105)
        at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1203)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:293)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120)
        at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1306)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1570)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1579)
        at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1559)
        at java.lang.Thread.run(Thread.java:636)

Like "normal" log entries, this starts with a timestamp showing when the entry was created. It also spans a few lines; in fact, most stack traces are rather lengthy and may contain over a hundred lines, depending on the application structure. A stack trace effectively is a call stack, and it prints out the entire function hierarchy, down to the one that has encountered the exceptional situation.

The structure of a Java exception stack trace log is not formal in any way; I'm just splitting it up for my own convenience, as this will help me organize these log entries later in the parser code. You should be able to apply the same structure without much trouble.

The first line of the log entry I'm going to call the "logline." This line contains a timestamp of when the log entry was created and also the module name and the function where the exception occurred:

Jan 17, 2010 10:07:04 AM org.apache.catalina.startup.ContextConfig applicationWebConfig

The following line I'm going to call the "headerline." This line is not really part of the actual stack trace, but it is printed out by the application code that "caught" the exception:

SEVERE: Parse error in application web.xml file at jndi:/localhost/test/WEB-INF/web.xml


And finally, the third section contains the “body” of the exception. This includes all the following lines and is the last part of the log entry. Usually the last line is a Java thread run method.   org.xml.sax.SAXParseException: The element type "servlet-class" must be terminated by the matching end-tag "". at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1562) ... at java.lang.Thread.run(Thread.java:636)   I’ve defined the structure of an exception log entry, but how do I know that this is an exception and not a normal log entry? So far they both look the same: they both have a timestamp, and they both span one or more lines. To a human it’s a rather obvious difference, and you immediately spot the exception, but are there any other fingerprints in the exception stack trace that I could use to identify it as a genuine exception and not the lengthy log entry? If you look and compare different exception stack traces, you’ll notice one commonality: each exception stack trace mentions the exception class name. Some examples include org.xml.sax.SAXParseException and java.io.FileNotFoundException. This occurs because each exception is effectively an instance of the exception class. Again, class name could be anything, but it is an accepted practice to append the word Exception to the class name. So I’m going to use this as one of my classifiers. Another classifier is the word java. Because I’m dealing with Java programs, in most cases I will have one or more methods from native Java libraries. So I’m going to work on the assumption that if my exception candidate contains these two words, it is likely to be an actual exception. But I don’t want to be limiting myself, so I have to make sure that my application structure allows me to change or plug in another validation method. Now I have something to operate on: I know how my log entries should look. I also know what the exception looks like, as well as what makes it different from the normal log entry. That should be enough to implement the log parser.
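The splitting itself is not shown at this point in the chapter, so here is a minimal sketch of how a multi-line entry could be cut into the three parts just described. The function and variable names are illustrative; the insert() method shown later in the chapter performs a similar split.

def split_entry(suspect_body):
    # "logline"    = first line (timestamp, module and function name)
    # "headerline" = second line (printed by the code that caught the exception)
    # "body"       = everything else (the actual stack trace)
    lines = suspect_body.strip().split("\n", 2)
    log_line = lines[0]
    header_line = lines[1] if len(lines) > 1 else ""
    body = lines[2] if len(lines) > 2 else ""
    return log_line, header_line, body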

Handling Multiple Files

Before diving into the actual parsing, I need to read the data in first. This may sound trivial, but if you want to do it efficiently, there are some tricks you might want to know about. First, you need to decide where you will get the data from. While this may seem obvious, remember that log files come in different shapes and sizes. I want the tool to be flexible enough that it can be applied to different situations. To make things simple and remove guesswork at the implementation phase, I'll start by listing some assumptions I'm going to make and some requirements I'm going to rely on:

• Log files can be either plain text or compressed with bzip2.
• Log files have the extension .log for a plain text file or .log.bz2 for a bzip2 file.
• I need to be able to process just a subset of log files based on their name. For example, I need to be able to use the file pattern web server; all files that match it will be processed, but not other files.
• The results from all processed files should be combined into one report.
• The tool should operate on all files found in a specified directory or list of different directories. Log files from all subdirectories should also be included.


Handling Multiple Files Given the requirements just stated, I define two variables that represent the patterns for file search calls:   LOG_PATTERN = ".log" BZLOG_PATTERN = ".log.bz2"   The filename pattern is stored in the global variable OPTIONS.file_pattern. By default this is set to an empty string and so it will match all filenames. This variable is controlled by the command-line parsing class, which I’m going to talk about later in the chapter. For the time being, just note that it can be set to any value by using the -p or --pattern option. I need to create a list of directories and all subdirectories recursively so that I can search for the log files in them. Users are going to supply me with a list of top-level directories, which I need to explode into a full tree of all sub- and sub-subdirectories. The list of arguments is going to be stored in the ARGS variable by the OptionParser class. There is a really handy function in Python’s OS library called walk. It recursively builds a list of files in each directory and all subdirectories. Let’s set up a simple directory structure and see how the os.walk function works:   $ mkdir -p top_dir_{1,2}/sub_dir_{1,2}/sub_sub_dir   This will produce a three-level directory structure:   $ ls -1R top_dir_1 top_dir_2   ./top_dir_1: sub_dir_1 sub_dir_2   ./top_dir_1/sub_dir_1: sub_sub_dir   ./top_dir_1/sub_dir_1/sub_sub_dir:   ./top_dir_1/sub_dir_2: sub_sub_dir   ./top_dir_1/sub_dir_2/sub_sub_dir:   ./top_dir_2: sub_dir_1 sub_dir_2   ./top_dir_2/sub_dir_1: sub_sub_dir   ./top_dir_2/sub_dir_1/sub_sub_dir:   ./top_dir_2/sub_dir_2: sub_sub_dir   ./top_dir_2/sub_dir_2/sub_sub_dir:  


Now we can use os.walk to generate the same output, as shown in Listing 7-5. Listing 7-5.  Recursively Retrieving a List of Directories with os.walk $ python Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> for d in os.walk('.'): ... print d ... ('.', ['top_dir_1', 'top_dir_2'], []) ('./top_dir_1', ['sub_dir_1', 'sub_dir_2'], []) ('./top_dir_1/sub_dir_1', ['sub_sub_dir'], []) ('./top_dir_1/sub_dir_1/sub_sub_dir', [], []) ('./top_dir_1/sub_dir_2', ['sub_sub_dir'], []) ('./top_dir_1/sub_dir_2/sub_sub_dir', [], []) ('./top_dir_2', ['sub_dir_1', 'sub_dir_2'], []) ('./top_dir_2/sub_dir_1', ['sub_sub_dir'], []) ('./top_dir_2/sub_dir_1/sub_sub_dir', [], []) ('./top_dir_2/sub_dir_2', ['sub_sub_dir'], []) ('./top_dir_2/sub_dir_2/sub_sub_dir', [], []) >>> os.walk('.') >>>  As you can see, a call to os.walk returns a generator object. I will talk about generators in more detail later in this chapter, but for now, note that they are objects that you can iterate through just as you would any normal Python list or tuple object. The return result is a three-tuple with the following entries: The directory path: The current directory whose contents are exposed in the next two variables. Directory names: A list of directory names in the directory path. This list excludes ‘. ’ And ‘ .. ’ directories. File names: A list of the file names in the directory path. By default, os.walk will not follow symbolic links that point to directories. To follow symbolic links, you can set the followlinks parameter to True, which will instruct os.walk to follow all symbolic links that it comes across while scanning the directory tree. I’m only interested in the directory listing, as I’m going to use a different function to filter out the files that will be processed and analyzed. Collecting only the first element of the three-tuple result, I can build the list of directories. So, to build a recursive list of all directories from the list of top-level directories that are supplied as an argument list, I would write the following:   DIRS = [] for dir in ARGS: for root, dirs, files in os.walk(dir): DIRS.append(root)  
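If your top-level directories contain symbolic links to other directories that should also be scanned, the same loop can be written with the followlinks flag enabled. This is a small variation on the code above rather than something the chapter's tool does by default:

DIRS = []
for dir in ARGS:
    # followlinks=True makes os.walk descend into directories reached
    # through symbolic links; beware of link loops when enabling it.
    for root, dirs, files in os.walk(dir, followlinks=True):
        DIRS.append(root)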


Now the DIRS list contains all directories that I will need to search for log files. I need to go through this list and search for all files whose names satisfy the search patterns: either LOG_PATTERN or BZLOG_PATTERN, and OPTIONS.file_pattern. I'm going to use one of the simplest ways of obtaining the list, which is to traverse the list of directories, create a simple listing of their contents, and then match the result against the search patterns and use only files that satisfy both. The following code does just that and opens matched files for reading:

for DIR in DIRS:
    for file in (DIR + "/" + f for f in os.listdir(DIR)
                 if f.find(LOG_PATTERN) != -1 and
                    f.find(OPTIONS.file_pattern) != -1):
        if file.find(BZLOG_PATTERN) != -1:
            fd = bz2.BZ2File(file, 'r')
        else:
            fd = open(file, 'r')

Take a closer look at the list construct, which is called a "list comprehension." This is a powerful mechanism for creating lists of objects that you want to iterate through. With list comprehension you can quickly and elegantly apply some validation or transformation to an existing list and get the new list immediately. For example, here's what you'd do to quickly generate the squares of all even numbers from 0 to 9:

>>> [x**2 for x in range(10) if x % 2 == 0]
[0, 4, 16, 36, 64]

The basic structure of a list comprehension is:

[ <element> /operation/ for <element> in <list> /if <condition>/ ]

where <element> is a variable used to generate the list, /operation/ is an optional operation that you might need to perform on each element of the resulting list, <list> is the list of items you're iterating through, and /<condition>/ is the optional validation filter that removes unwanted elements from the resulting list. With this in mind, if I dissect my file list construct, here's what I have:

• Each element of the resulting list is constructed as DIR + "/" + f, where DIR is the directory name and f is gathered from os.listdir().
• The variable f is assigned in sequence to all elements of the list returned by calling os.listdir().
• Only those values are accepted that satisfy the condition (f.find(LOG_PATTERN) != -1 and f.find(OPTIONS.file_pattern) != -1), which requires them to match both LOG_PATTERN and OPTIONS.file_pattern.

Also, note that you can use list comprehension to generate either a list object or a generator. If you create a generator, the next element value will be derived only when requested—for example, in a for loop. Depending on the use, this may be much quicker and more memory-efficient than generating and holding the whole list in memory.
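A quick illustration of the difference; both forms are standard Python, and the file-matching code above already uses the generator form (note the parentheses instead of square brackets):

squares_list = [x**2 for x in range(10) if x % 2 == 0]   # built immediately, held in memory
squares_gen  = (x**2 for x in range(10) if x % 2 == 0)   # computed lazily, one value at a time

print squares_list       # [0, 4, 16, 36, 64]
print list(squares_gen)  # same values, produced on demand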

Using the Built-in Bzip2 Library You may have noticed there are two statements that create a file descriptor object. One is for flat text log files and the other one is for files compressed with bzip2. The differentiator is the log file extension, which in the case of bzip2 compression is .bz2.


Python includes a bzip2-handling module as part of the standard set of packages. The most useful class in the module is BZ2File, which implements a full interface for handling compressed files. You can use it just as you would use the standard open() function. The returned object is a file descriptor object that implements the standard file-handling operations: read, readline, write, writelines, seek, and close. Since the only difference is in how the file descriptor object is created, even though I'm using a different function to get the object, the result is assigned to the same fd variable that will be used later in the code.
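A minimal sketch of the idea; the file names are made up for illustration, and both branches produce an object that can be read line by line in exactly the same way:

import bz2

for name in ('app.log', 'app.log.bz2'):      # hypothetical file names
    if name.endswith('.bz2'):
        fd = bz2.BZ2File(name, 'r')          # transparent decompression
    else:
        fd = open(name, 'r')
    for line in fd:
        pass                                  # process the line here
    fd.close()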

Traversing Large Data Files

If I have to read and process large amounts of data, I cannot use the simplistic approach of loading everything into memory and then processing it. And I will most definitely be dealing with large volumes of data here. Depending on your situation, this might be different, but busy systems are likely to generate gigabytes of log data on an hourly basis. Obviously, all this data cannot be loaded into memory at once. The solution to this problem is to use generators. A generator function allows you to produce output (reading the lines from a file) without actually loading the whole file into memory. If you just need to read the file line by line, you don't really need to encapsulate the readline() function, as you can simply write:

f = open('file.txt', 'r')
for line in f:
    print line

However, if you need to manipulate the file data and use the result, it might be a good idea to write your own generator function that performs the required calculations and produces the results. For example, you might want to write a generator that searches for a particular string in the file and prints the string plus a few lines before it and a few lines after it. This is where generators come in handy.
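A sketch of that idea, illustrative only and not part of the chapter's tool: a generator that yields each matching line together with a little surrounding context, while keeping only a small sliding window of lines in memory.

from collections import deque

def grep_context(fd, needle, before=2, after=2):
    prev = deque(maxlen=before)   # sliding window of the most recent lines
    pending = 0                   # trailing context lines still to emit
    for line in fd:
        if needle in line:
            for old in prev:
                yield old
            yield line
            pending = after
        elif pending:
            yield line
            pending -= 1
        prev.append(line)
        # Note: with overlapping matches a few context lines may repeat;
        # that is acceptable for a sketch like this.

for chunk in grep_context(open('app.log'), 'Exception'):   # 'app.log' is hypothetical
    print chunk,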

What Are Generators, and How Are They Used?

Simply put, a Python generator is a function that potentially can return many values, and it is also able to maintain its own state between the returns. This means that you can call the function multiple times and it will return a new result every time. Each time you call it subsequently, it knows its last location and will continue from that point. The following example function generates Fibonacci numbers:

def f():
    x, y = 0, 1
    while 1:
        yield y
        x, y = y, y+x

When this function is called for the first time, it will initialize x and y and enter the infinite loop. The first statement in the loop is to return the value of y (note that in generators, you must use the yield statement). The next time you call this function, it will start from the point where it stopped execution and returned a value—the yield statement. The next statement reassigns x and y with new values, where x becomes the old y and the new y is the sum of the old y and x. It is important to note that calling the generator function does not return the values the function is meant to calculate—it returns the actual generator object. You can then either iterate through it as you would normally do with a list, or call the next() method, which will get you the next value:

>>> g = f()
>>> for i in range(10):
...     g.next()


... 1 1 2 3 5 8 13 21 34 55 >>>   As you can see, generators are actually functions and not lists, but they can be used as lists. Sometimes, as in the Fibonacci example, the virtual lists can be infinite. When the generator has a limited set of results, such as lines in a file or rows in a database query, it must raise a StopIteration exception, which will signal the caller that there are no more results available. You can use generators to go through all lines in the file. This will effectively return the next line whenever you call the next() function, without actually loading the whole file into memory. Once it is defined as a generator, you can just iterate through it. In my code I have a get_suspect() function, which is effectively a generator that returns excerpts of text from the log file that potentially might be an exception stack trace. This function accepts a generator as its argument and iterates through it, thereby retrieving all the lines. Here’s how it’s done. First, I create a generator that returns all lines in the file:   g = (line for line in fd)   And then, I use this generator to retrieve the lines in my function:   def get_suspect(g): line = g.next() next_line = g.next() while 1: yield result try: line, next_line = next_line, g.next() except: raise StopIteration   I enclose the call to next() in the "try: ... except:" clause because when the last line of the file has been reached, the generator will raise an exception. Therefore, when the file cannot be read anymore, I simply raise the StopIteration exception, which acts as a signal to the iterator that the generator has exhausted all its values.
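Putting the two generators together is then just a matter of iteration. A minimal sketch, assuming get_suspect() is the full version shown later in Listing 7-6:

fd = open('catalina.out', 'r')                # or bz2.BZ2File(...) for compressed logs
for suspect in get_suspect(line for line in fd):
    print suspect                             # each value is a candidate multi-line log entry
fd.close()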

Detecting the Exceptions

The majority of the log entries contain only one line. So my approach in detecting exception log entries is this:

• Ignore all one-line entries. These are most likely to be from the application and will not have a stack trace, because it is simply not possible to put a full stack trace into one line.
• All log entries that have multiple lines are considered to contain an exception stack trace.
• An exception stack trace log entry must contain the words java and exception in the log text body.


The reason for having this two-phase detection is that a simple check like “does it have more than one line?” is very inexpensive and can eliminate a significant number of log entries.

Detecting the Potential Candidates

In abstract language, the algorithm for this function would look like the following:

• Read in two lines from the file.
• If the second line does not match the time stamp pattern, add it to the result string.
• Keep reading lines in and appending until the time stamp pattern is matched.
• Return the result.
• Repeat until there is no more data in the file.

As you can see in Listing 7-6, using a generator function here is an obvious choice, because I need to preserve the internal function state after the function returns the resulting string that contains a potential exception stack trace. The function itself accepts another generator function, which it uses to retrieve the lines of text. Using this approach, it is possible to replace a file-reading generator with any other generator that is capable of generating loglines. For example, this might be a database-reading function, or even a function that listens for and accepts syslog service messages.

Listing 7-6.  A Generator Function to Detect Potential Exceptions

def get_suspect(g):
    line = g.next()
    next_line = g.next()
    while 1:
        if not (TS_RE_1.search(next_line) or TS_RE_2.search(next_line)):
            suspect_body = line
            while not (TS_RE_1.search(next_line) or TS_RE_2.search(next_line)):
                suspect_body += next_line
                next_line = g.next()
            yield suspect_body
        else:
            try:
                line, next_line = next_line, g.next()
            except:
                raise StopIteration

Obviously this can be replaced with a function that has more advanced logic and a better hit-to-miss ratio, but it is equally effective and lightweight. Here are a couple of ideas you might want to experiment with:

• Instead of using two predefined patterns for timestamp detection, try defining a list of precompiled patterns that match the majority of popular formats. Then, as the function runs, it would count successful matches and rearrange the list on the fly so that the most popular pattern is tried first (see the sketch after this list).
• If you have a large number of multiple-line log entries, this simple approach will fail. Try generating hashes of the first line in the log body and storing them in a separate data structure. The real exception-validator function would update this table with True/False values, depending on whether the guess was correct. This function can then check hashes against this table, so it will know which repeating log entries are not really exceptions although they may look like ones.
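A minimal sketch of the first idea; the pattern list and the counting scheme are illustrative assumptions, not part of the chapter's tool:

import re

# A few candidate timestamp formats; extend the list as needed.
TS_PATTERNS = [
    [re.compile(r'[A-Z][a-z]{2} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M'), 0],  # JDK default
    [re.compile(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'), 0],                     # ISO-like
]

def is_timestamped(line):
    for entry in TS_PATTERNS:
        if entry[0].search(line):
            entry[1] += 1
            # Keep the most frequently matched pattern at the front of the list,
            # so the common case is checked first on the next call.
            TS_PATTERNS.sort(key=lambda e: e[1], reverse=True)
            return True
    return False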


Filtering the Legitimate Exception Traces Up until now all the code is in standard functions. This is mostly because the code was dealing with selecting the files and doing some initial validation. Neither of those tasks has anything to do with the actual exception-handling code. Now, for the exception parsing and analyzing tasks, I am going to define a class with appropriate methods. This way I can distribute and use it as a completely independent module. For example, let’s say I wanted to implement a web-based application where my users could submit their exception logs and get some statistics, and I want to be able to reuse this code. Functions that open files and deal with file patterns become obsolete because there are simply no files to deal with—all data comes from a web server. Similarly, you may want to analyze data stored in a database, in which case you would have to write an interface to retrieve this data; however, you still can reuse the code that deals with exception stack trace text. So always try to keep your code logically separated. As I have mentioned, my exception-detection mechanism (Listing 7-7) is somewhat naïve—I check for the words exception and java in the stack trace body. Listing 7-7.  Validating Exceptions def is_exception(self, strace): if strace.lower().find('exception') != -1 and \ strace.lower().find('java') != -1: return True else: return False   This is easy to change; should you need anything more sophisticated than this simple test, you can rewrite the function to use a more appropriate algorithm for your situation. Listing 7-8 shows how this detection mechanism fits together with other parts of the class. Listing 7-8.  The Basic Structure of the Exception Container Class class ExceptionContainer: def __init__(self):   def insert(self, suspect_body, f_name=""): lines = suspect_body.strip().split("\n", 1) log_l = lines[0] if self.is_exception(lines[1]):   For every suspect logline detected, the insert method will be called. That method will then call the validation function, which checks whether the text supplied is actually a stack trace and should be counted.

Storing Data in Data Structures The main goal of my application is to gather statistics about the exceptions that occur in the log file; therefore, I need to think about how and where to store this data. There are two choices: I can either hold this data in memory, or I can dump it into a database. When choosing between these two, I need to ask whether I have to do either of the following: u

Maintain this data in the same structure after the program has terminated?

u

Hold lots of records for a long period of time and access them from any other tools?


If the answer to either question were positive, I would probably need to use an external database to hold the statistical data. However, I do not anticipate that the log files are going to contain a large number of different types of exceptions. There might be hundreds of thousands of exceptions, but most likely there will be just a few hundred types of exceptions. It is really hard to think of an application that would generate the whole range of possible exceptions. Additionally, storing statistical data is not part of the lifecycle of this application. It is up to an external process to gather and analyze this data, so for the purposes of this application, the data needs to be “live” only during the calculation phase. Therefore, I am going to use Python's built-in dictionary data structure to keep the data in memory and use it later for reporting, but that data will all be lost when the application finishes its execution.

The Structure of Exception Stack Trace Data

There is no need to hold every single exception that I come across; I need only keep a counter of all occurrences of each particular type of exception, along with details for that type. As previously discussed, an exception stack trace can be dissected into these parts:

1. The logline (the line with the timestamp)
2. The exception headline (the first line of the exception stack trace)
3. The exception body (the stack trace)

In addition to this information, I also need to have the following:

• A counter to count the number of occurrences of each particular type.
• A description for a quick reference.
• A group that can be used to organize different types of exceptions. For example, you might want to have a group that counts all exceptions related to missing files; but because they might be generated by different parts of the application, or even different libraries, you may need to use different rules altogether to match them. Grouping here is the only convenient way to maintain the same counter for all those exceptions.
• A filename, so that users know in which file the exception has been found. This is useful if you are analyzing numerous files that are stored in a single directory.

So, every time I insert a new exception, the following dictionary will be created and stored:

{ 'count'    : # counter
  'log_line' : # logline
  'header'   : # header line
  'body'     : # body text with stack trace
  'f_name'   : # file name
  'desc'     : # description
  'group'    : # group
}

Generating an Exception Fingerprint for Unknown Exceptions Assuming that I haven’t provided any classification rules yet, the application needs to be able to recognize similar exceptions and group them accordingly. One possibility would be to store an exception body text and compare others against it. If the next exception matches the stored one, I increase the counter; otherwise, I store that one as well and use it for future comparisons. Figure 7-1 is a flowchart of this process.



Figure 7-1.  Counting the exceptions (flowchart: get the body text of the next exception from the log file and compare it against the list of stored exceptions; if there is a matching entry, increase its counter by one, otherwise append a new entry to the list)

This would work, but it would be very slow, as string-compare operations are really slow and expensive in terms of computing power. So when possible, try to avoid them, especially if you need to compare long strings, such as long fragments of text. A much more efficient way to perform a quick text-blob comparison is to generate some unique attribute for each text fragment and then compare those attributes. (By “unique” I mean unique within that particular piece of text.) Such an attribute can be an MD5 hash of the data stream. As you may already know, a cryptographic hash function (of which MD5 is a widely used example) is a procedure that accepts any block of data and returns a bit string of a predefined size. This string is generated in such a way that if the original data is modified, it will change. By definition, the output string may be much smaller than the input string, so obviously information is lost and cannot be restored; but the algorithm guarantees that if the hash values of two strings are the same, there is a very high probability that the original strings are the same, too. Python has a built-in MD5 library that can be used to generate MD5 sums for any input data. So I'm going to use this function to generate MD5 hashes for all exceptions that I encounter, and then compare those hash strings instead of comparing the full stack traces.
Listing 7-9 is an excerpt from the insert method. The following variables are defined at the beginning of the function:

log_l: The exception logline
hd_l: The exception header line
bd_l: The exception body text
f_name: The filename where the exception has been found
self.exceptions: The dictionary where the key is the MD5 sum of the exception body text and the value is another dictionary that holds the details about the exception stack trace


Listing 7-9.  Generating MD5 and Comparing it Against Stored Values

01: m = md5.new()
02: m.update(log_l.split(" ", 3)[2])
03: m.update(hd_l)
04: for ml in bd_l.strip().split("\n"):
05:     if ml:
06:         ml = re.sub("\(.*\)", "", ml)
07:         ml = re.sub("\$Proxy", "", ml)
08:         m.update(ml)
09: if m.hexdigest() in self.exceptions:
10:     self.exceptions[m.hexdigest()]['count'] += 1
11: else:
12:     self.exceptions[m.hexdigest()] = { 'count'    : 1,
13:                                         'log_line' : log_l,
14:                                         'header'   : hd_l,
15:                                         'body'     : bd_l,
16:                                         'f_name'   : f_name,
17:                                         'desc'     : 'NOT IDENTIFIED',
18:                                         'group'    : 'unrecognised_'+m.hexdigest(), }

Here is a detailed explanation of what's actually happening in this function:

Lines 1–3: Initialize the md5 object and feed it the third field of the exception logline and the whole exception header line. The reason I'm picking only the last field of the exception logline is that the first two fields are going to contain date and time strings, which are constantly changing, so I don't want them to change the MD5 hash I'm going to generate.
Lines 4–5: Iterate over all lines of the exception body, one at a time.
Lines 6–8: Strip all text between brackets and remove all references to automatically generated Java Proxy objects. If the line numbers are different but otherwise the exception stack traces look identical, there is a high chance that in fact they are the same. Proxy objects are assigned sequential numbers, so they will never have the same name; therefore, I need to remove them as well, so that the MD5 hash doesn't change.
Line 9: Call the hexdigest method, which will generate an MD5 hash for the text that has been stored using the update function, and compare the result against all stored keys.
Line 10: If there is a match, increase its counter.
Lines 11–18: Otherwise, insert a new record.


Detecting Known Exceptions

So far my application can detect unique exceptions and categorize them appropriately. This is quite useful, but there are some issues:

• As with any heuristic algorithm, the current implementation is really naïve in the way it detects and compares exceptions. It does a decent job but may struggle even with really simple cases, such as a File Not Found exception. If the exception is raised in different parts of your Java application, it will produce completely different output, and essentially the same type of exception will be logged multiple times. One might argue that this is expected behavior and you really need to know where the exception has been raised, and that would be a valid point. In other situations, you don't really care about these details and would like to combine all File Not Found error messages into one group. At present this is not possible.
• The naming convention is really confusing; all your exception groups are going to have unreadable names, such as unrecognized_6c2dc65d7c0bfb0768ddff8cabaccf68.
• If the exception details contain time- or request-specific information, this algorithm is going to see those exceptions as different, because there is no way of knowing that “File Not Found: file1.txt” and “File Not Found: file2.txt” are effectively the same exception. To verify this behavior, I generated over a thousand exceptions in which the requested file name is the same, and a similar number of error messages with unique filenames. Running the application against this sample log file produced one group with over a thousand instances and over a thousand different groups with one or two instances in them. In reality, all these exceptions are of the same type.
• Although I am not comparing large pieces of text, calculating an MD5 hash and then comparing hash strings is still relatively slow.

In light of those results, I am going to modify the application so that it allows me to define how I want my exceptions detected and categorized. As you already know, each exception is split into three parts: logline, header, and stack trace body. I am going to allow users to define a regular expression for any of those fields and then use those regular expressions to detect exceptions. If any of the defined regular expressions is a match, the exception will be categorized accordingly; otherwise, it will go for further processing by the heuristic algorithm I implemented earlier. I am also going to allow users to define any grouping name that they like, so it will be more meaningful than the unrecognized_6c2dc65d7c0bfb0768ddff8cabaccf68 strings.

The Configuration File

There are many ways of storing configuration data for your applications. I prefer to use XML documents, for the following reasons:

1. Python has built-in libraries for parsing XML and, as such, accessing configuration data is simple.
2. Syntax validation happens automatically when the configuration file is fed to the XML parser, so I need not worry about checking the syntax of the configuration file.
3. XML documents have a clearly defined, unambiguous structure that allows me to implement hierarchical structures should I need to.

There is also a practical downside to using XML: it's not really human-friendly. However, by using appropriate editors that can do syntax highlighting, we can mitigate this. Nowadays, most editors support this functionality. The ViM editor, which is available on nearly all Linux distributions, is also able to highlight XML syntax.


Listing 7-10 is a simple configuration file to catch majority of the File Not Found exceptions. Listing 7-10.  A Configuration File with Two Rules   The configuration file starts with a document identification string that tells the parsers it is an XML version 1.0 document. For basic processing, this information isn’t strictly required and can be omitted, but for completeness it’s best to adhere to the specification. The root element of the XML configuration files is the tag, which encompasses all other configuration items. Now I have the option of putting exception declarations directly within the tags, and since I have not planned to have anything else in my configuration file, it would just be fine. However, if I later added any new type of configuration items—for example, something that affects reporting—it would not logically fit. So it is always a good idea to create a branch tag and place all elements of a given type within it. Therefore, I define a new domain element, which I’m going to call . All declarations for each, individual exception type are going to be defined here. As you can see, the actual exception declaration is pretty straightforward. I have three placeholders for regular expressions, followed by a description and group name fields.

Parsing XML Files with Python There are two ways of parsing an XML document. One method is called SAX, or Simple API for XML. Before you process XML with SAX, though, you need to define a callback function for each tag that you are interested in. You then call a SAX method to parse the XML. The parser will read the XML file one line at a time and call a registered method for each recognized element. Another method, which I’m going to use in my example, is called the Document Object Model (DOM). Unlike SAX, the DOM parser reads the whole XML document into memory, parses it, and builds an internal representation of that document. By nature, XML documents represent a tree-like structure, with node elements that contain child or branch elements, and so on. So the DOM parser builds a tree-like linked data structure and provides you with methods of traversing this tree structure. There are three basic steps in finding the information in an XML document: parse the XML document, find a tree node that contains the elements that interest you, and read their values or contents.


The first step, parsing an XML document, is really simple and only takes one line of code (two, if you count the import statement). The following code reads in the whole configuration file and creates an XML parser object that can later be used to find information:

from xml.dom import minidom
config = minidom.parse(CONFIG_FILE)

The next step is to find all <exception> elements. I know that their “parent” node is the <exception_types> element, so I need to get a list of those first. This can be done with the getElementsByTagName method, which is available for any XML object. The method accepts one argument—the name of the element you're trying to find. The result is a list of Element objects that have the name you searched for. The search performed by the method is recursive, so if I start at the top level (which in my instance is the document object), it will return all elements that have this particular name. In that case, I might as well search for the <exception> tag directly. With this simplistic configuration file, that method would work as well, but the word exception is much too generic and therefore may be used outside <exception_types> sections. Another important thing to note is that each Element object is also searchable and has the same method available to use. So, I can go through the list of <exception_types> elements and drill down further, searching for <exception> tags in each:

for et in config.getElementsByTagName('exception_types'):
    for e in et.getElementsByTagName('exception'):

N■Note The following text might seem slightly confusing because there is an overlap in terminology. XML elements can have attributes, as in this example: element value. Similarly, Python objects or classes can have attributes that you access like this: python_object.attribute. When XML is parsed and the representing Python object is built for your document, you would use Python class attributes to access XML document attributes. Now, I’ve reached the elements that I am interested in and the third step is to extract their values. As you can see from the configuration file example, I chose to store data as element attributes. Attributes in each Element object can be accessed using an attribute called “attributes.” This attribute is an object that acts as a dictionary. Each element of the dictionary has two values: name contains the name of the XML element attribute, and value holds the actual text value of the attribute. It may sound confusing, but it should become clear if you look at the example in Listing 7-11. Listing 7-11.  Accessing Configuration Data in the XML Document for et in config.getElementsByTagName('exception_types'): for e in et.getElementsByTagName('exception'): print e.attributes['logline'].value print e.attributes['headline'].value print e.attributes['body'].value print e.attributes['group'].value print e.attributes['desc'].value   As you can see from this example, searching for and accessing attributes of XML document elements is really a trivial task.


Storing and Applying Filters All exception-detection and classification rules are going to be stored in an array. Each array element is a dictionary that contains precompiled regular expressions, both group and description fields, and an ID string, which is just an MD5 hash of regular expression strings. This ID can be used later in referencing particular exception groups, and it will remain unique as long as the rules are not changed. Using precompiled regular expressions increases search speed significantly, because they are already validated and converted to byte code ready for execution. Configuration parsing and importing are done during the class initialization, as you can see from the example in Listing 7-12. Listing 7-12.  Class Initialization and Configuration Import class ExceptionContainer: def __init__(self): self.filters = [] config = minidom.parse(CONFIG_FILE) for et in config.getElementsByTagName('exception_types'): for e in et.getElementsByTagName('exception'): m = md5.new() m.update(e.attributes['logline'].value) m.update(e.attributes['headline'].value) m.update(e.attributes['body'].value) self.filters.append({ 'id' : m.hexdigest(), 'll_re': re.compile(e.attributes['logline'].value), 'hl_re': re.compile(e.attributes['headline'].value), 'bl_re': re.compile(e.attributes['body'].value), 'group': e.attributes['group'].value, 'desc' : e.attributes['desc'].value, })   When the insert method (described in detail earlier) is called, it will loop through the list of filters and attempt to search for matching strings. When such a string is found, the exception details are either stored or the running counter for the group is increased, depending on whether this exception has already been encountered in the log file. If no matches were found, the heuristic categorization method will be executed, as shown in Listing 7-13. Listing 7-13.  Code to Match Custom Categorization Rules def insert(self, suspect_body, f_name=""): ...   if self.is_exception(lines[1]): self.count += 1 ...   logged = False   for f in self.filters: if f['ll_re'].search(log_l) and f['hl_re'].search(hd_l) and f['bl_re'].search(bd_l):


            logged = True
            if f['id'] in self.exceptions:
                self.exceptions[f['id']]['count'] += 1
            else:
                self.exceptions[f['id']] = { 'count'    : 1,
                                             'log_line' : log_l,
                                             'header'   : hd_l,
                                             'body'     : bd_l,
                                             'f_name'   : f_name,
                                             'desc'     : f['desc'],
                                             'group'    : f['group'],
                                           }
            break

    if not logged:
        # ... unknown exception, try to automatically categorise

The Benefits of a Precompiled Search over a Plain-Text Search

I've mentioned that MD5 hash calculation, and then string comparison, can be slow compared to a precompiled regular expression search, but is that really true? Let me do some experiments and test the theory. First, I am going to run the application against the log file with over 4000 different exceptions and measure the execution time. There are four types of exception in the file: several exceptions generated by the Tomcat engine, a few hundred Permission Denied exceptions, over a thousand File Not Found with the same file name, and over a thousand File Not Found with different filenames. The first number in the result indicates the total number of exceptions, and the second is the total number of identified groups:

$ time ./exctractor.py .
4098, 1070

real    0m1.759s
user    0m1.699s
sys     0m0.047s

As you can see, it took nearly two seconds to crawl through the file and count all the exceptions. Now, let's try with two simple rules that detect both types of File Not Found and the Permission Denied exceptions:

$ time ./exctractor.py .
4098, 6

real    0m0.789s
user    0m0.746s
sys     0m0.037s

So, the execution time has been improved significantly and the application finishes its job in half the time. Provided that the dataset is relatively small and some of the execution time is spent loading libraries and reading in configuration files, the actual savings can be even greater when applied to larger log files. Also, notice that what had been over a thousand exception groups became just 6. This is much more manageable and informative.
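If you want to sanity-check the claim on your own machine, a rough micro-benchmark along these lines illustrates the difference between the two approaches. It is not from the chapter, and the sample string and pattern are made up:

import timeit

setup = """
import re, md5
text = 'java.io.FileNotFoundException: /tmp/file1.txt (No such file or directory)' * 20
pattern = re.compile('FileNotFoundException.*No such file')
known_hash = md5.new(text).hexdigest()
"""

# Precompiled regular expression search vs. MD5 hashing plus string comparison.
print timeit.Timer('pattern.search(text)', setup).timeit(number=100000)
print timeit.Timer('md5.new(text).hexdigest() == known_hash', setup).timeit(number=100000)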


Producing Reports

I now have a fully functioning application that reads in log files, parses them, searches for exceptions, and counts similar exceptions, based either on automatic groups or on categories defined by the user. All that is very well and good, but unless someone can read and analyze this data, it's still pretty useless. Let's write a simple reporting function so that people who are going to use this application can benefit from it.

Grouping the Exceptions If you paid close attention to the previous sections discussing exception grouping, you may have noticed that exceptions are not grouped based on a group field. And, if the exception is not categorized in the configuration file, it would have been grouped based on its MD5 hash value; however, in that case, the group name and exception ID would have one-to-one mapping anyway, because the group name is generated from the hash value:   if m.hexdigest() in self.exceptions: self.exceptions[m.hexdigest()]['count'] += 1 else: self.exceptions[m.hexdigest()] = { 'count' : 1, 'log_line' : log_l, 'header' : hd_l, 'body' : bd_l, 'f_name' : f_name, 'desc' : 'NOT IDENTIFIED', 'group' : 'unrecognized_'+m.hexdigest(), }   However, if the exception has been “caught” using one of the filters from the configuration file, it would have been categorized based on the filter MD5 hash value and not the 'group' string:   if f['ll_re'].search(log_l) and f['hl_re'].search(hd_l) and f['bl_re'].search(bd_l): if f['id'] in self.exceptions: self.exceptions[f['id']]['count'] += 1 else: self.exceptions[f['id']] = { 'count' : 1, 'log_line' : log_l, 'header' : hd_l, 'body' : bd_l, 'f_name' : f_name, 'desc' : f['desc'], 'group' : f['group'], }   This approach allows you to find out how many times each individual filter has been hit, and also to group the counters based on the 'group' field. So first, I need to go through the list of all logged exceptions and create distinct categories. The categories dictionary is only going to store the group name and the total count of exceptions in that group. I also use the option key –v (for verbose) to tell whether or not to print the exception details. Listing 7-14 shows the code.


Listing 7-14.  Grouping Exception IDs into Categories def print_status(self): categories = {} for e in self.exceptions: if self.exceptions[e]['group'] in categories: categories[self.exceptions[e]['group']] += self.exceptions[e]['count'] else: categories[self.exceptions[e]['group']] = self.exceptions[e]['count'] if OPTIONS.verbose: print '-' * 80 print "Filter ID :", e print "Exception description :", self.exceptions[e]['desc'] print "Exception group :", self.exceptions[e]['group'] print "Exception count :", self.exceptions[e]['count'] print "First file :", self.exceptions[e]['f_name'] print "First occurrence logline :", self.exceptions[e]['log_line'] print "Stack trace headline :", self.exceptions[e]['header'] print "Stack trace :" print self.exceptions[e]['body']

Producing Differently Formatted Outputs for the Same Dataset

If there are no options supplied for detailed reporting, the application only prints two numbers, which indicate the total number of exceptions found and the total number of different groups. You can use this information to quickly check the current status, and you can also accumulate the records over a period of time and import that data into Excel or other tools to draw pretty graphs. If you're planning to import report data into some other application, it needs to comply with a format accepted by that application. If you're using Excel to create graphs, the most convenient file type for import would be Comma Separated Values (CSV); but if you just want to display this information on the screen, you most likely want it to be more informative than just a pair of numbers separated by a comma. So, I introduced an option that allows users to set the format they want to get the result in: either CSV or plain text. I then created two template strings that reference the same variables but provide different formatting:

TPL_SUMMARY['csv'] = "%(total)s, %(groups)s"
TPL_SUMMARY['text'] = "="*80 + "\nTotal exceptions: %(total)s\nDifferent groups: %(groups)s"

Then, depending on the format key supplied by the user, the print statement will select the appropriate formatting string and pass the variables to it:

print TPL_SUMMARY[OPTIONS.format.lower()] % {'total': self.count, 'groups': len(categories)}

Note how you can pass variables to a format string by referencing them by name. This technique is really useful when you need to produce differently formatted outputs using the same set of variables.


Calculating Group Statistics Finally, I wanted to produce a more detailed report on how many different groups were found and the number of exceptions in each, both relative (as a percentage) and absolute (the total number of occurrences). I already have all the details in the dictionary, including the group names and total number of exceptions in each group. But the dictionaries are not sorted, and it would be nice to have a list presented in descending order, where the worst “offenders” are at the top. Python has a very useful built-in function for sorting any iterable objects: sorted(). This function accepts any iterable object, such as a list or dictionary, and returns a new sorted list. The tricky part is that when iterating through a dictionary, you are only iterating though its keys, so when calling sorted() with a dictionary as its parameter, you’ll only get a list of sorted keys!   >>> d = {'a': 10, 'b': 5, 'c': 20, 'd': 15} >>> for i in d: ... print i ... a c b d >>> sorted(d) ['a', 'b', 'c', 'd'] >>>   Obviously this isn’t really what you want; you need both values in your result. Dictionaries have a built-in method that returns key/value pairs as iterable objects—iteritems(). If you use this instead, you’ll get a slightly better result, showing both the key and the value of each pair, but these are still sorted on the key value, which isn’t what you want, either:   >>> for i in d.iteritems(): ... print i ... ('a', 10) ('c', 20) ('b', 5) ('d', 15) >>> sorted(d.iteritems()) [('a', 10), ('b', 5), ('c', 20), ('d', 15)] >>>   The sorted() function accepts an argument that allows you specify a function to be used in extracting a comparison key from the list elements when the elements are composite, such as value pairs. In other words, this function should return a second value from each pair. You need a special function from the operator library: itemgetter() . I will use this function to extract the second value from each pair, and this value will be used by the sorted() function to sort the list:   >>> from operator import itemgetter >>> t = ('a', 20) >>> itemgetter(1)(t) 20 >>> sorted(d.iteritems(), key=itemgetter(1)) [('b', 5), ('a', 10), ('d', 15), ('c', 20)] >>>  


And the final touch is telling sorted() to sort the list in reverse order, so that the list starts with the item that has the largest value:   >>> sorted(d.iteritems(), key=itemgetter(1), reverse=True) [('c', 20), ('d', 15), ('a', 10), ('b', 5)] >>>   Similarly, I am generating and printing the list of exception groups. I add a statistical calculation, just to show the relative size of each group:   for i in sorted(categories.iteritems(), key=operator.itemgetter(1), reverse=True): print "%8s (%6.2f%%) : %s" % (i[1], 100 * float(i[1]) / float(self.count), i[0] )

Summary

In this chapter I explained in detail how the open-source tool Exctractor was written and what each functional part is doing. This chapter shows how to apply your Python knowledge to build a relatively complex command-line tool to analyze large text files. Although Python is not a text-processing language as such, it can be successfully used for this purpose. Important points to keep in mind:

• Start by defining a problem and what you want your application to achieve.
• Analyze the data structures you will be working with and make design decisions based on that information.
• If you're dealing with large datasets, try to minimize the amount of memory required, by using generators—Python functions that generate values on the fly.
• If you need to read and search information in large data files, use generator constructs to read them one line at a time.
• Python has built-in support for reading and writing compressed files such as bzip2 archives.
• Keep configuration in a structured format such as XML, especially if it tends to contain many items.


CHAPTER 8

A Website Availability Check Script for Nagios

In this chapter we are going to build a custom check script for one of the standard network monitoring systems (NMS) available today: Nagios. We will be monitoring a simple website by using an HTML parsing library, which allows us to check the operational side of the site. The check script attempts to navigate through unsecured pages and then reach some protected pages, too, by simulating a user login action. All actions will be recorded and fed back into the Nagios system, which can be configured to do reporting and alerting, if required.

Requirements for the Check System The main requirement for the system we are going to implement is the ability to monitor a remote website. However, the check should go beyond a simple HTTP GET or POST request, and it must allow the user to specify a navigation path. For example, it should be able to perform some action that simulates the standard user behavior: get to the main website page and then browse to the products list or navigate to the news website and select the top story. As a variation on that scenario, the system also needs to be able to simulate a login process whereby the check submits the user details to the remote website. These details are then validated by the system and the security token is returned (usually in the form of a browser cookie value). Unlike a simple HTTP check, which is readily available with the default Nagios distribution, this mechanism actually triggers the web application logic and acts as a more sophisticated check. When it’s combined with the timing parameters, it is possible to implement sophisticated checks that monitor the user logon time and alert if the login process is successful but is taking too long. We are going to use Python’s standard urllib and urllib2 libraries for accessing the websites. As a web page parser, we are going to use the Beautiful Soup HTML parsing library. Every website is unique, or at least is trying to stand out from the crowd. Therefore, making a universal check system may be a complicated task; for the sake of simplicity, I am going to set some constraints on the system that we’ll build in this chapter. For instance: u

The navigation (or user journey) path will be coded in the script and not available as a configuration.

u

The login check works only on sites that use cookie-based authentication mechanisms.


The Nagios Monitoring System

Nagios is one of the most popular network monitoring systems. It is used to monitor a wide variety of network-attached components using different access protocols, such as HTTP, SNMP, FTP, and SSH. Its capabilities are endless, because Nagios has a plug-in based architecture that allows you to extend the base functionality to meet your monitoring needs. You can also run the checks remotely by using the Nagios Remote Plugin Executor (NRPE) utility. In addition to the monitoring tasks, Nagios is capable of graphing the collected data, such as system response times or CPU utilization. When problems occur, Nagios has the ability to alert via email or SMS notifications.
Nagios packages (the base application and plug-ins) are available on most Linux platforms, so check your Linux distribution documentation for the installation details. Alternatively, you can download the source code from the Nagios web site at www.nagios.org/download. On a typical Fedora system, you can install the Nagios base system along with the basic set of plug-ins (or checks) using the following command:

$ sudo yum install nagios nagios-plugins-all

On a Debian system, you would run the following command:

$ sudo apt-get install nagios3

To proceed with this chapter, you should have some experience in managing Nagios. If you need more information, refer to the official documentation, which you can find online at www.nagios.org/documentation/.

Nagios Plug-In Architecture The power of Nagios NMS is in its plug-in architecture. All check commands are external utilities that can be written in any language—C, Python, Ruby, Perl, and so on. The plug-ins communicate with the Nagios system by means of OS return codes and the standard input/output mechanism. In other words, Nagios has a predefined set of return codes that the check scripts must return. The return code dictates what the new service state should be set to. All return codes and the corresponding service states are listed in Table 8-1. Table 8-1.  Nagios Plug-in Return Codes

Return Code    Service State
0              OK. The service is in a perfectly healthy condition.
1              WARNING. The service is available but is dangerously close to the critical condition.
2              CRITICAL. The service is not available.
3              UNKNOWN. It's not possible to determine the state of the service.

In addition to the return code, a plug-in should print at least one line to the standard output. This printed string should contain a mandatory status text followed by the optional performance data string. So, a simple one-line report example can be:

WebSite OK

This text will be appended to the status report message in the Nagios GUI. Similarly, with the performance data appended, it would look like this:

WebSite OK | response_time=1.2
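To make the return-code and output conventions concrete, here is a minimal plug-in sketch; the check_website.py name, the URL argument handling, and the 5-second threshold are illustrative assumptions rather than the script we develop later in this chapter:

#!/usr/bin/env python
# Minimal Nagios plug-in sketch: fetch a URL, time the request, and map the
# outcome to the Nagios return codes listed in Table 8-1.
import sys
import time
import urllib2

def main():
    url = sys.argv[1] if len(sys.argv) > 1 else 'http://www.example.com'
    start = time.time()
    try:
        urllib2.urlopen(url).read()
    except Exception, e:
        print "WebSite CRITICAL: %s" % e   # status text goes to standard output
        sys.exit(2)                        # 2 = CRITICAL
    elapsed = time.time() - start
    if elapsed > 5:
        print "WebSite WARNING | response_time=%.2f" % elapsed
        sys.exit(1)                        # 1 = WARNING
    print "WebSite OK | response_time=%.2f" % elapsed
    sys.exit(0)                            # 0 = OK

if __name__ == '__main__':
    main()

The status text before the pipe symbol is what appears in the Nagios GUI; everything after it is treated as performance data.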


The performance data part then is available through the built-in Nagios macros and can be used to plot the graphs. More information about using the performance data parameter is available at http://nagios.sourceforge.net/docs/3_0/perfdata.html. When you write a new plug-in, you must provision it first in the configuration files, so that Nagios knows where to find it. Conventionally, all plug-ins are stored in /usr/lib/nagios/plugins. Once you've written a check script, you must define it in the command.cfg configuration file, which can be found in /etc/nagios/objects/. The actual location may be different, depending on how you installed Nagios. Here is an example of a check definition:

define command {
    command_name    check_local_disk
    command_line    $USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
}

When you define a service or a host, you can now refer to this check by the check_local_disk name. The actual executable is $USER1$/check_disk, and it accepts three arguments. Following is an example of the service definition that uses this check and passes all three parameters to it:

define service {
    use                     local-service
    host_name               localhost
    service_description     Root Partition
    check_command           check_local_disk!10%!5%!/
}

The $USER1$ macro that you've seen earlier in the command-line definition simply refers to the plug-ins directory and is defined in /etc/nagios/private/resource.cfg as $USER1$=/usr/lib/nagios/plugins. If you want, you can define a new macro and use it with your check scripts. This way, you'll separate the packaged scripts from your own, and maintenance becomes easier. I recommend doing this for check scripts that have a complicated structure, with external configuration files or other dependencies.
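As a rough illustration of that approach, you might add a second macro to resource.cfg and reference it from a command definition; the $USER2$ name, the custom-plugins directory, and the check_website command are assumptions made for this sketch, not values required by Nagios:

# /etc/nagios/private/resource.cfg -- extra macro pointing at our own scripts
$USER2$=/usr/lib/nagios/custom-plugins

# /etc/nagios/objects/command.cfg -- command definition using the new macro
define command {
    command_name    check_website
    command_line    $USER2$/check_website.py $ARG1$
}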

The Site Navigation Check As you know, each website is unique, and although usually the same navigation and implementation principles apply, you still have to do a lot of manual work to reverse-engineer it so that you can successfully simulate the user actions. Matters get much simpler if you know how the site is built and don’t have to guess. In this example check, we’re going to build a script that navigates to the BBC UK website at http://news.bbc.co.uk/, selects the top front-page story, and follows that link. This check is a good example of a monitor that simulates one of the user behavior patterns and also tests the internal website logic for at least two functions: the ability to generate the front page and the ability to generate the top story content. We’ll also monitor the execution time, and if it exceeds the preconfigured threshold, we’ll alert on that as well.

Installing the Beautiful Soup HTML Parsing Library Before proceeding, we need to install the Beautiful Soup library. Beautiful Soup is a Python module designed to parse HTML and XML documents and extract information from them. This library is ideal for processing the real-world HTML pages, as it ignores the malformed HTML syntax with missing end tags and other errors that a web page may potentially contain.


Because Beautiful Soup is a really popular library, its packages are available for the majority of Linux distributions. For example, on a Fedora system we can install this library with the following command:   $ sudo yum install python-BeautifulSoup   We can also install it from the Python Package Index (PyPI):   $ sudo pip install BeautifulSoup   Alternatively, the source code is available for downloading from the application website at www.crummy.com/software/BeautifulSoup/.

Retrieving a Web Page

In its simplest form, the page-retrieving function can be implemented with only two function calls, and in most cases where we're not submitting any information and just retrieving, this is sufficient. The following example uses the urlopen() function, which performs an HTTP GET request if no additional form data is supplied. We'll look at different methods of submitting data to the web applications later in the chapter.

>>> import urllib2
>>> r = urllib2.urlopen('http://news.bbc.co.uk')
>>> html = r.read()
>>> len(html)
143484

The result of the read() call is a string containing the web page as it is served by the server. This string, however, is not a full response and does not include extra information such as HTTP protocol headers. The result object returned by the urlopen() call has the info() method, which we can use to retrieve the HTTP headers as they are returned by the server. We need to remember that the object returned by the info() call is an instance of the httplib.HTTPMessage class, which implements the same protocol as the dictionary class, but in fact is not a dictionary itself:

>>> r.info()
>>> print r.info()
Server: Apache
X-Cache-Action: HIT
X-Cache-Hits: 133
Vary: X-CDN
X-Cache-Age: 8
Cache-Control: private, max-age=0, must-revalidate
Content-Type: text/html
Date: Tue, 13 May 2014 18:02:06 GMT
Expires: Tue, 13 May 2014 18:01:58 GMT
Content-Language: en-GB
X-LB-NoCache: true
Connection: close
Set-Cookie: BBC-UID=758377d205ee016ea1140d3d4136ec148f4d29ac7484e1befa21640e52188e380Pythonurllib/2.7; expires=Sat, 12-May-18 18:02:06 GMT; path=/; domain=.bbc.co.uk
Content-Length: 143484

>>> r.info()['Server']
'Apache'
>>>


N■Tip  You can find out more about the HTTP headers, including a short description and a link to the appropriate RFC specification document, at www.cs.tut.fi/~jkorpela/http.html.

Another useful method of the response object is geturl(). This method returns the actual URL of the retrieved document. It is possible that the initial URL will respond with an HTTP redirect, and we'll actually end up retrieving a page from a completely different URL. In that case, we may want to check the origin of the page. One possible cause of a redirect is that we are trying to access restricted content without prior authentication. In this case, we will most likely be redirected to the login page of the website.

>>> r.geturl()
'http://www.bbc.co.uk/news/'

The resulting contents can then be passed to Beautiful Soup for the HTML interpretation and parsing. The result is an HTML document object that implements various methods for searching and extracting data from the document. The argument supplied to the Beautiful Soup constructor is just a string, which means we can use any string as an argument, not only the one that we've just retrieved from the website.

>>> from BeautifulSoup import BeautifulSoup
>>> soup = BeautifulSoup(html)
>>> type(soup)
>>>
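Before moving on to parsing, recall that the finished check also has to report response times and cope with unreachable sites. The following is a small sketch of how a retrieval can be wrapped with timing and error handling; the fetch_page() helper name and its return values are mine, not part of the script developed later in this chapter:

import time
import urllib2

def fetch_page(url):
    # Return a (html, elapsed_seconds) tuple; html is None if the request failed.
    start = time.time()
    try:
        html = urllib2.urlopen(url).read()
    except urllib2.URLError:
        # URLError also covers HTTPError (4xx/5xx responses), DNS failures,
        # and refused connections.
        html = None
    return html, time.time() - start

html, elapsed = fetch_page('http://www.bbc.co.uk/news/')
print "Fetched %d bytes in %.2f seconds" % (len(html or ''), elapsed)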

Parsing the HTML Pages with Beautiful Soup Once the contents are loaded into the BeautifulSoup object, we can start dissecting the BBC front web page. To give you an idea of what we need to find on the page, Figure 8-1 illustrates a sample screenshot of the front page, where we can clearly see the position of the top story. The top story on the BBC News UK when I captured the screen was titled “Court to probe UK Iraq abuse claims.” The title obviously changes from article to article, but the layout of the website rarely changes, and the top story is always displayed in the same location on the web page.


Figure 8-1.  The BBC News UK front page

We now need to find the corresponding HTML code in the web page. Let's look at the web page source, shown in Listing 8-1. (I did a bit of formatting, so you will probably see a slightly different layout of the code if you view it from your web browser.)

Listing 8-1.  HTML Source Code for the BBC News UK Front Page

[...]

Court to probe UK Iraq abuse claims


An initial investigation into claims that UK forces abused Iraqi detainees is to be opened by the International Criminal Court. Hague rejects Iraq 'abuse' complaint Military 'must aid Iraq inquiries' Q&A: Al-Sweady inquiry 'Troops killed' in Ukraine ambush Seven Ukrainian soldiers and one pro-Russian insurgent have reportedly been killed in an ambush in the eastern Donetsk region, reports say. Ukrainian speakers leave Donetsk Russia deadline on Ukraine gas debt Rosenberg: What will Putin do next? Media mull Ukraine vote significance

[...]


We can immediately spot three distinct marks that potentially may lead us to the top-story URL link. The first one is the tag with its ID set to top-story. The second mark is the tag that belongs to the top-story-header class. And the last mark could be the tag that belongs to the tshsplash class. The first mark is a good identifier because HTML requires element IDs to be unique, so we can be pretty sure that there is only one tag with the top-story ID. There are several ways to access the tags in a Beautiful Soup document. If we know exactly what we are looking for and the exact structure of the website, we can simply use the tag names as properties of the soup object:

>>> import urllib2
>>> from BeautifulSoup import BeautifulSoup
>>> WEBSITE = 'http://www.bbc.co.uk/news/'
>>> result = urllib2.urlopen(WEBSITE)
>>> html = result.read()
>>> soup = BeautifulSoup(html)
>>> print soup.html.head.title

This code will print the title HTML string:

BBC News - Home

This is a convenient and quick method of accessing individual tags, but it does not work very well if the tag encapsulation structure is complicated. For example, the first tag we're trying to get to is already nine levels deeper than the tag that encapsulates it (see Figure 8-2), and we don't even know where that particular tag is in relation to the document's root element.
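Because the element ID is unique, the more robust route is to search by it rather than walk the tag hierarchy. The following sketch shows the idea; the assumption that the top-story element contains a single link whose href points at the article is mine, and the exact markup on the live site may differ:

# Locate the top story by its unique element ID and pull out the link target
# (reusing the soup object created in the previous snippet).
top_story = soup.find(id='top-story')   # search by the unique element ID
story_link = top_story.find('a')        # first link inside that element
print story_link['href']                # URL of the top story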








",) )   plt.show()   Notice how the text string has been formatted. Listing 11-3 uses the Python raw string notation (just a reminder that it is defined as r'anystring') and encloses the whole expression within $ characters. This instructs the matplotlib text-rendering engine that the text will contain the subset of TeX markup instructions. Figure 11-6 shows the plot generated by Listing 11-3.



Figure 11-6.  Adding text to a graph

Saving Plots to a File So far, we’ve looked at the various aspects of plot generation. You’ve seen that the plots that you generate are displayed in an interactive window within your GUI. This is perfectly acceptable if you just need to quickly check the results, but it also means that you need to perform the full calculation every time you want to see the graph. You have an option of saving the graph from the plot display window, but this is a manual process and not suitable for automated reporting systems. matplotlib uses imaging back-end processes that generate the images. For the majority of us who just want to use the most popular formats—such as PNG, PDF, SVG, PS, and EPS—matplotlib offers the Anti-Grain Geometry (Agg) back end, which uses the C++ anti-grain image-rendering engine behind the scenes. By default, matplotlib uses one of the GUI engines (for example, wxPython) when you import the pyplot module. To change this behavior, you must first instruct it to use the Agg back end, and then import pyplot. Listing 11-4 shows an example of how to initialize matplotlib with the Agg back end and generate two files in different formats. Listing 11-4.  Saving Images to Files #!/usr/bin/env python   import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import numpy as np   fig = plt.figure() ax = fig.add_subplot(1, 1, 1) x = np.arange(100)


y = np.sin(2 * np.pi * x / 100) ax.plot(y) plt.savefig('sin-wave.png') plt.savefig('sin-wave.pdf')   Notice that you don’t need to specifically tell the Agg engine the file type. It is smart enough to figure that out from the file name extension. If you must use a nonstandard extension, or no extension at all, you can use an optional keyword argument to force the file type:   plt.savefig('sin-wave', format='png')

Graphing Statistical Data We have spent a great deal of time discussing various statistical methods of data analysis. You know how to check if there are any trends in the dataset and whether the trend is positive or negative. You also know how to calculate the average value of the dataset and the likelihood of the data fitting within the predefined boundaries (standard deviation). Now let’s see how to apply this knowledge. We’ll build a simple application that runs periodically and generates status pages. These pages are static pages that will be served by the Apache web server.

Collating Data from the Database Chapter 9 provided the details of the various database tables we’re using for our monitoring system and how they are related to each other. Because we’re interested in reporting on the probe readings for this chapter’s example, of most interest to us here is the probereading table, which contains the raw data obtained from the sensors. The values for this table need to be filtered before processing, so we need to know to which sensor—or to be more precise, which probe—this reading belongs. We also need to group the probe readings by the host from which they have been read. In other words, we need to iterate through all entries in the host table; then for each host we find, we need to check which probes are running on it. Once we establish the entire host-to-probe combinations, we need to obtain the sensor readings over the time. In the test database that I am using for this example, I have two hosts (called My laptop and My server) in the host table and two probes (called Used CPU % and HTTP requests). Both hosts are reporting their CPU usage figures, but only the server is serving the web pages and therefore is monitoring the number of incoming HTTP requests. You can download the database file with the data along with the rest of the source code for this book. The database is preloaded with sample performance data that has been randomly generated, but it attempts to follow real-world usage patterns. Before we proceed with the implementation, let’s quickly outline the basic structure of the site generator script.

Displaying Available Hosts First, we need to find all the hosts that are present in the database. Once we have that list, we’ll use the host ID to search for all probes associated with this host. We need to gather the probe name, the warning and error threshold values, and the host probe ID, which we’ll use to search for the probe readings. Listing 11-5 shows the code used to gather this information. Listing 11-5.  Retrieving All Hosts and the Associated Probes class SiteGenerator: def __init__(self, db_name): self.db_name = db_name self.conn = sqlite3.connect(self.db_name) self.hosts = [] self._get_all_hosts()  


def _get_all_hosts(self): for h in self.conn.execute("SELECT * FROM host"): host_entry = list(h) query_str = """ SELECT hostprobe.id, probe.name, COALESCE(hostprobe.warning, probe.warning), COALESCE(hostprobe.error, probe.error) FROM probe, hostprobe WHERE probe.id = hostprobe.probe_id AND hostprobe.host_id = ? """ probes = self.conn.execute(query_str, (h[0],)).fetchall() host_entry.append(probes) self.hosts.append(host_entry)   In this code, notice the COALESCE() function, which returns the first non-null result from the list. Remember that we can define the site-wide threshold in the probe table, but we also allow overruling this setting in the hostprobe table. This allows us to set thresholds on a per-host basis. So the logic is to check whether the host-specific threshold setting is not set to NULL and fall back to the default if it is. Here is a simplified example to illustrate the behavior of this function:   sqlite> select coalesce(1, 2); 1 sqlite> select coalesce(NULL, 2); 2 sqlite> select coalesce(NULL, NULL);   sqlite>

Drawing Timescale Graphs Now we have all the information required for further data processing: the hosts and the associated host probes. There are many different ways to represent the statistical information that we’ve gathered. In this example, we’ll sort the information by one of the two parameters: the probe names and the timescale. To simplify the implementation, we’ll use the predefined list of available timescales: 1 day, 7 days, and 30 days. I find it easier to develop the templates and the corresponding code if I visualize the website structure that I’m developing. Figure 11-7 represents the structure of our website, along with the sample HTML file names (IDs to be replaced with the actual values) and the corresponding Jinja2 templates.


- Main page of the website (web page: /index.html, template: index.template). Contains the list of links to the detailed information pages for each available host.

- Host information (web page: /host_<host ID>_details.html, template: host.template). Contains links to the timescale details and probe details pages, and also displays the probe availability information.

- Timescale (X days) details (web page: /hsd_<host ID>_<timescale>.html, template: host_scale_details.template). Links to all graphs for the given host at the specified timescale.

- Probe details (web page: /hpd_<host probe ID>.html, template: host_probe_details.template). Links to all timescale graphs of the specified host probe.

Figure 11-7.  The website structure
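In the listings that follow, the predefined set of timescales is referenced through a TIMESCALES constant; a minimal sketch of that definition, assuming the values are expressed in days, is:

# Timescales (in days) for which graphs and availability figures are produced
TIMESCALES = [1, 7, 30]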

The Index Page



The index page is the simplest page on our website. It requires a minimum amount of code to generate because we don’t need to do any calculations. We just pass in the list of hosts, which we’ve already generated in the class initialization method. The private class method loads the template and passes the host list to it:   def _generate_hosts_view(self): t = self.tpl_env.get_template('index.template') f = open("%s/index.html" % self.location, 'w') f.write(t.render({'hosts': self.hosts})) f.close()   The template iterates through the list of hosts and generates links to the host details page:   Hosts {% for host in hosts %} {{ host[1] }} ({{ host[2] }}:{{ host[3] }}) {% endfor %}
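The snippets above use self.tpl_env and self.location without showing how they are set up. A minimal sketch of that initialization, assuming the Jinja2 templates live in a templates/ subdirectory and the generated pages are written to an output directory (both directory names are assumptions for illustration), might look like this:

from jinja2 import Environment, FileSystemLoader

class SiteGenerator:
    def __init__(self, db_name, location='static'):
        # ... database initialization as shown in Listing 11-5 ...
        self.location = location                              # where generated HTML files go
        self.tpl_env = Environment(loader=FileSystemLoader('templates'))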


We’ll use the host list and the host probe list quite a lot in this example. Table 11-3 shows the details of each field, so you don’t need to memorize what each field contains. Table 11-3.  The Host and Probe List Fields

Element       Element Field   Description
self.hosts    0               Host ID
self.hosts    1               Name of the host
self.hosts    2               Address of the host
self.hosts    3               Port number of the monitoring agent
self.hosts    4               List of the probe elements (following fields)
host[4]       0               Host probe ID
host[4]       1               Name of the probe
host[4]       2               Warning threshold (or None if not defined)
host[4]       3               Error threshold (or None if not defined)

Host Details Page For the host details page, we need to calculate the service availability figures and display them on a web page for each host. Each page will have two sections: one to display the service availability statistics and the other to list the links to the pages containing graphs for each timescale/host probe combination. Listing 11-6 shows the two private methods that perform the calculations and also generate the website pages. Listing 11-6.  Generating the Host Details Web Page def _generate_host_toc(self, host): probe_sa = {} for probe in host[4]: probe_sa[probe[1]] = {} for scale in TIMESCALES: probe_sa[probe[1]][scale] = self._calculate_service_availability(probe, scale) t = self.tpl_env.get_template('host.template') f = open("%s/host_%s_details.html" % (self.location, host[0]), 'w') f.write(t.render({ 'host': host, 'timescales': TIMESCALES, 'probe_sa': probe_sa, })) f.close()   def _calculate_service_availability(self, probe, scale): sa_warn = None sa_err = None sampling_rate = self.conn.execute("""SELECT probeinterval FROM probingschedule WHERE hostprobe_id=?""", (probe[0],)).fetchone()[0]


records_to_read = int(24 * 60 * scale / sampling_rate) query_str = """SELECT count(*) FROM (SELECT probe_value FROM probereading WHERE hostprobe_id=? LIMIT ?) WHERE probe_value > ?""" if probe[2]: warning_hits = self.conn.execute(query_str, (probe[0], records_to_read, probe[2],) ).fetchone()[0] sa_warn = float(warning_hits) / records_to_read if probe[3]: error_hits = self.conn.execute(query_str, (probe[0], records_to_read, probe[3],) ).fetchone()[0] sa_err = float(error_hits) / records_to_read return (sa_warn, sa_err)   The first function, _generate_host_toc(), will be called for every host found in the list. As a parameter, the _generate_host_toc() function receives a host structure, which also contains the list of all probes associated with it (see Table 11-3). The function then iterates through all host entries and all timescale values, calling the second function, _calculate_service_availability(). The _calculate_service_availability() function calculates the number of times each threshold has been breached for each host probe in a given timescale. To do that, it needs to figure out how many records to analyze. This depends on the sampling rate. For example, if we’re reading a probe every minute, we’ll have 24 * 60 = 1440 records made every day. However, if we are performing a check every 5 minutes, that will be 24 * (60/5) = 288 records. The sampling rate is stored in the database, so we’ll just need to fetch that value and calculate the number of records to analyze. The next step is to count the number of records whose value is above the threshold settings. The database query we are going to use is the same for both value checks. So we construct it once and then use it when needed in the connection.execute() calls, with the appropriate threshold setting. Let’s look at the SQL query:   SELECT count(*) FROM (SELECT probe_value FROM probereading WHERE hostprobe_id=? LIMIT ?) WHERE probe_value > ?   This is actually two nested queries. The first query that will be executed by the SQLite3 engine is the inner SELECT statement, which selects the last x records from the list for a specified host probe. The outer SELECT statement counts the number of records from the list that have a probe_value above the specified threshold value. You may notice that we don’t order the list in any way in the inner SELECT statement. So how sure are we that we’re actually going to get the last records, and not a random or semi-random selection of records from the database? In SQLite, each row has an associated ROWID value, and all rows are sorted by their row IDs. If we don’t specify the order in our SELECT statement, it will automatically be sorted by the row IDs. Since we’re only adding rows into the database, all our row IDs will be in the sequence. Therefore, a simple LIMIT SQL statement guarantees that we’ll get the last rows selected.
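If you prefer to make the selection of the most recent rows explicit, rather than relying on the insertion order, the inner query can carry its own ORDER BY clause. This is a sketch of a variant, not the query used by the script:

# Hypothetical variant of the nested query with explicit ordering:
# take the most recently inserted readings, then count threshold breaches.
query_str = """SELECT count(*) FROM
                   (SELECT probe_value FROM probereading
                     WHERE hostprobe_id=?
                     ORDER BY ROWID DESC
                     LIMIT ?)
               WHERE probe_value > ?"""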


N■Note  You can find more information about the row ID field in the official SQLite3 documentation, located at http://sqlite.org/lang_createtable.html#rowid. Note that other database engines, such as PostgreSQL and MySQL, may behave differently.

The SQL query will be executed only if the threshold value is available; otherwise, the function returns None as a result. Once the calculations have been performed, we load the template and pass the variables to it. The template is responsible for displaying the availability statistics and also for generating links to the pages containing the graphs. Listing 11-7 shows the host details template.

Listing 11-7.  Host Details Template

Host details: {{ host[1] }}

Views grouped by the timescales
Here you'll find all available probes for this host on the same timescale.
{% for scale in timescales %}
    {{ scale }} day(s) view
{% endfor %}

Views grouped by the probes
Here you'll find all available time scale views of the same probe
{% for probe in host[4] %}
    {{ probe[1] }}
{% endfor %}

Host statistics
Service availability details
{% for probe in probe_sa %}
    Availability of the "{{ probe }}" check
    {% for scale in probe_sa[probe] %}
        On a {{ scale }} day(s) scale:
        Warning: {{ probe_sa[probe][scale][0]|round(3) }}%
        Error: {{ probe_sa[probe][scale][1]|round(3) }}%
    {% endfor %}
{% endfor %}
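In the generated HTML, the timescale and probe entries are rendered as links to the graph collection pages produced by Listing 11-8. As a rough sketch, the anchor markup being an assumption while the file name pattern matches the one used in Listing 11-8, the timescale loop could emit links like this:

{% for scale in timescales %}
    <a href="hsd_{{ host[0] }}_{{ scale }}.html">{{ scale }} day(s) view</a>
{% endfor %}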

Graph Collection Pages The graph collection pages are linked from the detailed host information pages. As you can see from the diagram in Figure 11-7, we’ll have two types of graph collection pages: ones that contain the graphs with the same timescale but plotting data from different probes, and those that plot all available timescale graphs for a single host probe.


Although these functions are quite similar, I have separated them into two function calls, mostly to keep the modular structure of the code. Listing 11-8 shows both functions. Listing 11-8.  Generating the Graph Collection Pages def _generate_host_probe_details(self, host_struct, probe_struct): t = self.tpl_env.get_template('host_probe_details.template') f = open("%s/hpd_%s.html" % (self.location, probe_struct[0]), 'w') images = [] for scale in TIMESCALES: images.append([ scale, "plot_%s_%s.png" % (probe_struct[0], scale), ]) f.write(t.render({'host': host_struct, 'probe': probe_struct, 'images': images, })) f.close()   def _generate_host_scale_details(self, host_struct, scale): t = self.tpl_env.get_template('host_scale_details.template') f = open("%s/hsd_%s_%s.html" % (self.location, host_struct[0], scale), 'w') images = [] for probe in host_struct[4]: images.append([ probe[1], "plot_%s_%s.png" % (probe[0], scale), ]) f.write(t.render({'host': host_struct, 'scale': scale, 'images': images, })) f.close()   The _generate_host_probe_details() function is responsible for linking to all host probe images for all available timescales. The following is the template code for this function:   Host: {{ host[1] }} Probe: {{ probe[1] }} {% for image in images %} Time scale: {{ image[0] }} day(s) {% endfor %}   The template simply iterates through the dataset generated by the function. The dataset includes the image file names. The _generate_host_scale_details() function links to all host probes from a specified timescale. Similar to the first function, this function generates the image file names, and this list is used from within the template. The following is the template code for this function:   Host: {{ host[1] }} Scale: {{ scale }} day(s) {% for image in images %}


{{ image[0] }} {% endfor %}

Plotting Performance Graphs We’ve been referencing the images, but we haven’t created any graphs yet. In this section, we’ll look at the function that reads the data from our database and generates individual images for every possible host probe/timescale combination. As you’ve seen, these images can be combined by multiple criteria. In this example, we group them by their timescale value and probe name. In addition to the simple data plotting, our function will also calculate some statistical parameters for the dataset: the trend function for the given data and the standard deviation value, which will give us the suggestion for the new warning and error threshold values. This can be especially useful when you are just starting to monitor a new entity and have no idea what these values should be. Listing 11-9 shows the function for plotting the performance data. You should recognize the numerical and plotting functions from the earlier discussions of the NumPy and matplotlib modules. Listing 11-9.  Plotting the Performance Data def _plot_time_graph(self, hostprobe_id, time_window, sampling_rate, plot_title, plot_file_name, warn=None, err=None): records_to_read = int(time_window / sampling_rate) records = self.conn.execute("""SELECT timestamp, probe_value FROM probereading WHERE hostprobe_id=? LIMIT ?""", (hostprobe_id, records_to_read)).fetchall() time_array, val_array = zip(*records)   mean = np.mean(val_array) std = np.std(val_array) warning_val = mean + 3 * std error_val = mean + 4 * std   data_y = np.array(val_array) data_x = np.arange(len(data_y)) data_time = [dateutil.parser.parse(s) for s in time_array] data_xtime = matplotlib.dates.date2num(data_time) a, b = np.polyfit(data_x, data_y, 1) matplotlib.rcParams['font.size'] = 10 fig = plt.figure(figsize=(8,4)) ax = fig.add_subplot(1, 1, 1) ax.set_title(plot_title + "\nMean: %.2f, Std Dev: %.2f, Warn Lvl: %.2f, Err Lvl: %.2f" % (mean, std, warning_val, error_val)) ax.plot_date(data_xtime, data_y, 'b')   ax.plot_date(data_xtime, data_x * a + b, color='black', linewidth=3, marker='None', linestyle='-', alpha=0.5)


fig.autofmt_xdate()
    if warn:
        ax.axhline(warn, color='orange', linestyle='--', linewidth=2, alpha=0.7)
    if err:
        ax.axhline(err, color='red', linestyle='--', linewidth=2, alpha=0.7)
    ax.grid(True)
    plt.savefig("%s/%s" % (self.location, plot_file_name))

The _plot_time_graph() function starts with a SQL query that selects the timestamp and probe_value fields that belong to an appropriate host probe. Once again, here we are using the LIMIT statement to retrieve the latest results from the table. Bear in mind that this is guaranteed to work only if you're using the SQLite3 database, as the records are automatically ordered by their ROWID value. Other databases may behave differently. Also, this assumption relies on the fact that we never delete any records from the database; therefore, the row IDs are guaranteed to be sequential. If you're using a different database engine, or if you're updating any of the records in this table and you suspect that the row ID may change and the ordering may be altered, you can force the ordering by the timestamp field. This ensures that all records will be sorted by their timestamp before the LIMIT instruction chops off the last section from the results list. However, this may have a significant impact on the performance, which can be improved by adding an index on the required field:

sqlite> .timer ON
sqlite> SELECT timestamp, probe_value FROM probereading WHERE hostprobe_id=1 LIMIT 5;
2009-12-16T21:30:20|0.0
2009-12-16T21:31:20|0.000431470294632392
2009-12-16T21:32:20|0.000311748085651205
2009-12-16T21:33:20|0.000777994331440024
2009-12-16T21:34:20|0.00475251893721452
CPU Time: user 0.000139 sys 0.000072
sqlite> SELECT timestamp, probe_value FROM probereading WHERE hostprobe_id=1 ORDER BY timestamp LIMIT 5;
2009-12-16T21:30:20|0.0
2009-12-16T21:31:20|0.000431470294632392
2009-12-16T21:32:20|0.000311748085651205
2009-12-16T21:33:20|0.000777994331440024
2009-12-16T21:34:20|0.00475251893721452
CPU Time: user 0.192693 sys 0.018909
sqlite> CREATE INDEX idx_ts ON probereading (timestamp);
CPU Time: user 0.849272 sys 0.105697
sqlite> SELECT timestamp, probe_value FROM probereading WHERE hostprobe_id=1 ORDER BY timestamp LIMIT 5;
2009-12-16T21:30:20|0.0
2009-12-16T21:31:20|0.000431470294632392
2009-12-16T21:32:20|0.000311748085651205
2009-12-16T21:33:20|0.000777994331440024
2009-12-16T21:34:20|0.00475251893721452
CPU Time: user 0.000169 sys 0.000136
sqlite>

The data we are plotting is time-sensitive, so it makes more sense to plot it against the corresponding timestamp values on the x axis. matplotlib has a function to plot timed data, called plot_date(). Its syntax is identical to that of the plot() function, but the data argument (either only X, or both X and Y data) must


be floating-point numbers representing the number of days since 0001-01-01, with the fractional part defining hours, minutes, and seconds. To achieve this, we need to perform two operations: convert the text strings to the Python datetime type, and then convert that into the floating-point numbers. This is done by the following piece of code:

import dateutil
...
data_time = [dateutil.parser.parse(s) for s in time_array]
data_xtime = matplotlib.dates.date2num(data_time)

If available, we also plot the warning and error threshold lines. Each plot title includes the statistical parameters of the dataset, along with the suggested values for the warning and error thresholds. Figure 11-8 shows a sample plot.


Figure 11-8.  A plot of performance data

Summary

In this chapter, we looked at basic statistical analysis using the NumPy library. The statistical functions in this library can provide better insight into the systems you are monitoring, especially if you remember these key points:

- Most real-life data, although seemingly random, follows the normal distribution pattern.

- The standard deviation tells you how far, on average, each value is from the mean value of the dataset.

- You can use the standard deviation to determine the optimum values for the warning and error thresholds.

- The first-degree polynomial function parameters can be used to identify the general trend of a dataset.

- Using the data trend function, you can predict the future behavior of the system.


CHAPTER 12

Distributed Message Processing System

In the previous three chapters we built a distributed monitoring system based on the XML-RPC messaging protocol. Although it works pretty well, it lacks some features, such as message prioritization and task scheduling. We could add the extra functionality to the code that we have already written, but instead I am going to show you how to replace the custom messaging platform with a more robust and feature-rich system based on a distributed task queue called Celery.

Quick Introduction to Message and Task Queues

Task queuing is a powerful mechanism that allows you to chop work into smaller chunks, send those pieces of work to a large number of machines, and then collect the results. Depending on the number of machines at your disposal, you can significantly reduce the overall processing time.

Task Queuing Systems

At its heart, the task queuing mechanism is relatively simple. A master process generates one or more tasks that need to be processed and then pushes the instructions onto a task queue. One or more worker processes watch the queue, and as soon as a new task appears, a worker grabs it from the queue. When the task is finished, the results (if any) are sent back to the master process. This process is illustrated in Figure 12-1.



Figure 12-1.  Task queue

This mechanism can be used to distribute tasks to multiple processes running on a single machine or on multiple machines. It is important to understand that the task queue is a method of distributing tasks among multiple processes; it is not a particular implementation or a product. This distribution of tasks can be applied at multiple levels. For example, you can create an application-level task queue using one thread as a master process and multiple threads as worker processes. The threads can then use shared variables to distribute tasks among themselves. Another example is host-level task distribution. One host can be a master host that generates tasks and pushes them down to the worker hosts, which process those tasks. A practical example would be a web mail system where a front-end node (the master node) accepts user input and sends it to mail-processing nodes (worker nodes), which then act as mail relays and send the email out. Furthermore, the task queue doesn't even need to be related to computers at all! For example, a team leader can write down a day's tasks on Post-it notes and stick them to a whiteboard, and the team members then pick them up during the day and do what's written on the notes. Task queues are extremely useful if you want to execute long-running tasks asynchronously. You need the task to be processed, but you don't need the results right away. A good example is sending email from a web form. The email may take a while to send, especially if the remote mail server is unavailable and sending has to be retried multiple times. At the same time, you don't want the user to wait until the email is sent. So the web front end takes the web form data from the user, sends it to the mail relay for further processing, and tells the user that the email is being sent. An example of a highly distributed task queue is Google App Engine's Task Queue. You can read more about its implementation at https://developers.google.com/appengine/docs/python/taskqueue. Other examples of task queues are Resque (https://github.com/resque/resque), which is a task queue library for Ruby applications; Jesque (https://github.com/gresrun/jesque), which is a Resque implementation in the Java language; and Celery (http://www.celeryproject.org), which we are going to discuss in this chapter. It is very important to understand that a task queue is an entire system of components acting in unison: a master process, which typically combines multiple subsystems such as a task execution scheduler; a message queue, which is used for communication purposes; and worker processes, which implement the task execution algorithms.
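To illustrate the application-level variant mentioned above, here is a minimal sketch of a master thread distributing tasks to worker threads through Python's standard Queue module; it is only a toy illustration of the pattern, not part of the Celery-based system built in this chapter:

import threading
import Queue   # named 'queue' in Python 3

task_queue = Queue.Queue()

def worker():
    # Each worker repeatedly picks a task from the queue and processes it.
    while True:
        task = task_queue.get()
        if task is None:          # sentinel value: no more work
            break
        print "processing task %s" % task
        task_queue.task_done()

# Start three worker threads.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# The master pushes tasks 1-5 onto the queue, then signals shutdown.
for task in range(1, 6):
    task_queue.put(task)
for _ in workers:
    task_queue.put(None)
for w in workers:
    w.join()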

Message Queuing Systems

One of the core components of a task queue is the message queue. The message queue is a mechanism for sharing information between processes and systems. The task queue uses the message queue to communicate between the different components of the task queue system. For example, when the master process needs to send a task to one of the


worker processes, it uses the message queue to pass on the message. In the earlier example of a manual task queue, the whiteboard performs the role of the message queue: the team leader (master process) sends a message (handwritten text on a Post-it note) to a team member (worker process) using the whiteboard (message queue). Sometimes you will find message queue systems referred to as message brokers. There are many different message queues; some popular examples are:

- ActiveMQ (http://activemq.apache.org)

- RabbitMQ (http://www.rabbitmq.com)

- ZeroMQ (http://zeromq.org)

Typically, the message queue and the task queue are very loosely coupled. For example, Celery can use one of the following message brokers:

- RabbitMQ (http://www.rabbitmq.com), the message queue recommended for Celery

- Redis (http://redis.io), a distributed key-value store

- MongoDB (http://www.mongodb.com), a popular NoSQL distributed database

- Beanstalk (http://kr.github.io/beanstalkd), a task queue

- Amazon SQS (http://aws.amazon.com/sqs/), a message queue provided as a service by Amazon

- CouchDB (http://couchdb.apache.org), a JSON document database

- ZooKeeper (http://zookeeper.apache.org/), a distributed coordination service that provides services such as naming, configuration management, and synchronization

- Django DB (https://www.djangoproject.com), the Django ORM (object-relational mapper), which is an abstraction layer on top of any database supported by Django

- SQLAlchemy (http://www.sqlalchemy.org), another ORM toolkit that provides object abstraction on top of the most popular SQL databases

- IronMQ (http://www.iron.io/mq), a message queue

As you can see, you are not limited to a dedicated message queue; you can choose from a wide variety of specialized tools and common databases.

Setting up the Celery Server and Client In this section we look at how to install and configure Celery and all its requirements. We also look at some basic task queue usage patterns.

Installing and Setting up RabbitMQ

I will use the recommended message queue, RabbitMQ, which is the most feature-rich and stable of the supported platforms. Unless you have a really good reason to use a different platform, use RabbitMQ in your deployments as well. Be careful if you are building large systems, as scaling RabbitMQ with many different queues might be problematic; you may have to do some performance testing first.


RabbitMQ is available as a package on most of the popular Linux distributions. If you are using a RedHat-based system, you can install RabbitMQ with the following command:

$ sudo yum install rabbitmq-server

Once the package is installed, you start it and check that it is running correctly:

$ sudo systemctl start rabbitmq-server
$ sudo systemctl status rabbitmq-server
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled)
   Active: active (running) since Sun 2014-07-20 12:30:22 BST; 25s ago
  Process: 304 ExecStartPost=/usr/lib/rabbitmq/bin/rabbitmqctl wait /var/run/rabbitmq/pid (code=exited, status=0/SUCCESS)
  Process: 32746 ExecStartPre=/bin/sh -c /usr/lib/rabbitmq/bin/rabbitmqctl status > /dev/null 2>&1 (code=exited, status=2)
 Main PID: 303 (beam)
   CGroup: /system.slice/rabbitmq-server.service
           ├─303 /usr/lib64/erlang/erts-5.10.4/bin/beam -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.1.5/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@fedora -boot start...
           ├─334 /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon
           ├─403 inet_gethost 4
           └─404 inet_gethost 4

Jul 20 12:30:19 fedora.local rabbitmq-server[303]: RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc.
Jul 20 12:30:19 fedora.local rabbitmq-server[303]: ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
Jul 20 12:30:19 fedora.local rabbitmq-server[303]: ##  ##
Jul 20 12:30:19 fedora.local rabbitmq-server[303]: ##########  Logs: /var/log/rabbitmq/rabbit@fedora.log
Jul 20 12:30:19 fedora.local rabbitmq-server[303]: ######  ##        /var/log/rabbitmq/[email protected]
Jul 20 12:30:19 fedora.local rabbitmq-server[303]: ##########
Jul 20 12:30:22 fedora.local rabbitmq-server[303]: Starting broker... completed with 0 plugins.
Jul 20 12:30:22 fedora.local rabbitmqctl[304]: ...done.
Jul 20 12:30:22 fedora.local systemd[1]: Started RabbitMQ broker.
Jul 20 12:30:46 fedora.local systemd[1]: Started RabbitMQ broker.

If you do not see any error messages, that means that RabbitMQ has been successfully installed and started. It really is that simple! If you need to make any changes in the server configuration (for example, to change the port that the server binds to), you create a configuration file called /etc/rabbitmq/rabbitmq.conf. More details on the configuration parameters can be found in the official RabbitMQ documentation at http://www.rabbitmq.com/configure.html.

Installing and Setting up Celery Once the RabbitMQ server is installed on both master and worker nodes, you can proceed with the Celery installation and configuration. Bear in mind that the master and worker do not necessarily need to be on different hosts; both can be set up on the same host for testing purposes.


Celery is available on most popular Linux distributions as a package. On a RedHat-based system, you can install Celery by running the following command:   $ sudo yum install python-celery   On a Debian-based system, the package name is the same:   $ sudo apt-get install python-celery   However, I recommend installing the Celery package from PyPI repository, as it is going to contain the latest stable release:   $ sudo pip install celery

Create Celery System User and Group To begin, you should make sure that the Celery process is started automatically on all worker nodes. This is not required, and you could start the Celery process manually every time you need it, but this approach is not scalable. The following examples assume that you are using a RedHat-based system with systemd controlling system services. You will have to adjust the examples if you are using a different Linux distribution. For security reasons it is not recommended you run Celery as a root user. Better is to create a dedicated user and group, and run the Celery daemon under those credentials. To create a new user and check its UID and GID, you run the following command:   # useradd --system -s /sbin/nologin celery # id celery uid=987(celery) gid=984(celery) groups=984(celery)

Create Celery Project Directory and Sample Application

All Celery applications must be located in a project directory, which can be created as follows:

# mkdir /opt/celery_project
# chown celery:celery /opt/celery_project

You also need a sample application to test your configuration. We will look at the application development details later in this chapter, but right now let's create a file called /opt/celery_project/tasks.py with the following contents:

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//', backend='amqp')

@app.task
def hello(name='Anonymous'):
    return "Hello, %s" % name


N■Note  If you are using an earlier version of Celery than 3.1, use the ‘celery’ object name instead of ‘app’; earlier versions expect to find this specific name. In versions 3.1 and above, the name does not matter anymore.

Create Required System Directories

Two directories will be required: one for storing the log files and another for storing the temporary PID file. These directories are created automatically by systemd if you create a configuration file called /usr/lib/tmpfiles.d/celery.conf with the following contents:

d /run/celery 0755 celery celery -
d /var/log/celery 0755 celery celery -

Create Systemd Configuration Files

The systemd process management daemon needs a service definition file called /etc/systemd/system/celery.service with the following contents:

[Unit]
Description=Celery workers
After=network.target

[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=-/etc/conf.d/celery
WorkingDirectory=/opt/celery_project
ExecStart=/bin/celery multi start -A "${CELERY_APP}" "${CELERYD_NODES}" \
    --pidfile="${CELERYD_PID_FILE}" \
    --logfile="${CELERYD_LOG_FILE}" --loglevel="${CELERYD_LOG_LEVEL}"
ExecStop=/bin/celery multi stopwait -A "${CELERY_APP}" "${CELERYD_NODES}" \
    --pidfile="${CELERYD_PID_FILE}"
ExecReload=/bin/celery multi restart -A "${CELERY_APP}" "${CELERYD_NODES}" \
    --pidfile="${CELERYD_PID_FILE}" \
    --logfile="${CELERYD_LOG_FILE}" --loglevel="${CELERYD_LOG_LEVEL}"

[Install]
WantedBy=multi-user.target

And it needs the environment configuration file called /etc/conf.d/celery with the following:

CELERY_APP="tasks"
CELERYD_NODES="worker"
CELERY_BIN="/bin/celery"
CELERYD_PID_FILE="/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_LOG_LEVEL="DEBUG"
CELERYD_USER="celery"
CELERYD_GROUP="celery"


Once you have created these files, you enable and start the service with the following commands:   # systemctl enable celery # systemctl start celery   If everything works fine, you should see the following output in the log file /var/log/celery/celery.log:   [2014-07-20 19:00:52,594: WARNING/MainProcess] /usr/lib/python2.7/site-packages/celery/apps/worker. py:161: CDeprecationWarning: Starting from version 3.2 Celery will refuse to accept pickle by default.   The pickle serializer is a security concern as it may give attackers the ability to execute any command. It's important to secure your broker from unauthorized access when using pickle, so we think that enabling pickle should require a deliberate action and not be the default choice.   If you depend on pickle then you should set a setting to disable this warning and to be sure that everything will continue working when you upgrade to Celery 3.2::   CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']   You must only enable the serializers that you will actually use.   warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED)) [2014-07-20 19:00:52,612: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672// [2014-07-20 19:00:52,620: INFO/MainProcess] mingle: searching for neighbors [2014-07-20 19:00:53,628: INFO/MainProcess] mingle: all alone [2014-07-20 19:00:53,638: WARNING/MainProcess] [email protected] ready.   There is a warning message that we will address shortly, but other than that, the startup process looks fine. You can also check the health of the system by using command line tool:   # celery status [email protected]: OK   1 node online. # celery inspect ping -> [email protected]: OK pong


Test Access to the Celery Server

Before we continue, let's make sure that everything is working fine by connecting to the Celery server from a simple Python application. You will be importing the application code that we have written earlier, so make sure that you run the following commands in the /opt/celery_project/ directory:

# cd /opt/celery_project/
# python
Python 2.7.5 (default, Feb 19 2014, 13:47:28)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from tasks import hello
>>> result = hello.delay('World')
>>> result
>>> result.id
'926aabc8-6b1b-424e-be06-b15bcf92137e'
>>> result.ready()
True
>>> result.result
'Hello, World'
>>> result.status
'SUCCESS'
>>>
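When the result is not ready immediately, you can also block until it arrives; here is a short sketch of that pattern (the 10-second timeout is an arbitrary choice for illustration):

>>> from tasks import hello
>>> result = hello.delay('World')
>>> # get() waits for the worker to finish and returns the task's return value;
>>> # it raises celery.exceptions.TimeoutError if the timeout expires first.
>>> result.get(timeout=10)
'Hello, World'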

Celery Basics In this section we look at the basics of using Celery.

Layout of a Typical Celery Application In fact, each Celery-based application is a system that consists of at least two components: a master process that generates work and consumes results, and a worker process that performs the work requests. Typically you will have more than one worker process, but you need to have at least one. As you discovered earlier, the communication between these processes is done over a message queue, so the processes do not need to reside on the same physical operating system. It is important to understand that both the master and the worker processes need to have access to the code that is being executed. For example, if you write a web mail processing system, you would write a library that deals with sending mail. The same library needs to be available on the master machine and all the worker machines. Only the worker machines are going to execute the code, but the master machine needs to be able to inspect the code so that it can construct the work request appropriately by sending the task name and correct arguments. The distribution of these modules is entirely up to you. Celery does not provide such functionality. You could use Celery to distribute these modules, just as we did with our custom-built XML-RPC system, but generally that is not advisable. It’s better to look at configuration management tools such as Ansible (http://www.ansible.com/), SaltStack (http://www.saltstack.com), Puppet (http://www.puppetlabs.com) or Chef (http://www.getchef.com) if you want to automate a module deployment process.


Creating a Tasks Module In the previous section, when we did basic testing of the Celery setup, we created a file called tasks.py with one method in it. When you are writing larger applications, you are going to have more than one task method available for consumption by the master process. You could keep all the tasks in one file, but that file may soon become unmanageable. Therefore, it is advisable to create a dedicated module to encapsulate all tasks.

Worker and Master Process Application Files

First, remove the tasks.py file that you created earlier, and then create the following directory structure in /opt/celery_project. Create empty files for now; you are going to fill them in as you go along in this example. Here, you create a simple Celery module that has two sets of tasks—one for arithmetic operations and one for geometric operations:

# pwd
/opt/celery_project
# tree
.
├── calculator.py
└── celery_app
    ├── celeryconfig.py
    ├── celery.py
    ├── __init__.py
    └── tasks
        ├── arithmetics.py
        ├── geometry.py
        └── __init__.py

2 directories, 7 files

Let's discuss how each file and directory is going to be used:

- The first file, calculator.py, is the actual running application, which submits tasks for processing. This is the master process.

- The directory called celery_app/ is a Python module that is going to contain all files related to background task processing with Celery.

- celery.py is the main Celery application file; it initializes the Celery application and sets its configuration (a minimal sketch of this file appears after this list).

- celeryconfig.py is a configuration file. It is imported and used by celery.py.

- __init__.py is an empty file whose sole purpose is to indicate that this directory is a Python module.

- The subdirectory tasks/ is a submodule that contains specific submodules grouped by a common property—for example, all arithmetic operations are placed in the arithmetics.py submodule.

This may look like an unnecessarily complex layout, but in reality it is not difficult to set up and it gives a lot of flexibility if your application grows larger.


Overview of the Celery Configuration File Typically, the Celery application does not require a lot of configuration. You have to tell it where to look for new jobs (this is typically a message queue running on the same host), where to store the results (usually the same message queue), and perhaps provide a few environment-specific settings. The easiest way to manage the Celery configuration is to put all configuration items into a separate file and import it from the main application. So, in our example, the configuration file is called celeryconfig.py and it contains the settings shown in Listing 12-1.

Listing 12-1.  Celery Configuration Settings

CELERY_TASK_SERIALIZER = 'json'          # Only allow object serialization using JSON
CELERY_RESULT_SERIALIZER = 'json'        # Previously the default was Python pickle objects,
CELERY_ACCEPT_CONTENT = [ 'json', ]      # but they are not secure, and will be discontinued
BROKER_URL = 'amqp://guest@localhost//'  # Where to look for new jobs
CELERY_RESULT_BACKEND = 'amqp'           # Where to send job results
CELERY_IMPORTS = ('celery_app.tasks.geometry',     # Modules that contain Celery tasks
                  'celery_app.tasks.arithmetics',)
CELERY_TASK_RESULT_EXPIRES = 3600        # How long to keep task results before purging them

You can find a full list of configuration options on the official Celery documentation web page, http://docs.celeryproject.org/en/latest/configuration.html; however, the most used items are listed in Table 12-1.

Table 12-1.  Some of the Most Used Celery Configuration Items

Configuration item

Description

CELERY_TIMEZONE

By default, Celery assumes UTC time zone; if you need to set location-specific time zone in message time stamps, you can modify this setting. For a reasonably up-to-date time zone name list, check http://en.wikipedia.org/wiki/List_of_tz_database_time_zones.

CELERYD_CONCURRENCY

This setting allows you to specify how many concurrent processes or threads Celery workers are allowed to run. By default, this setting is set to the number of available CPUs; however, this is very conservative. Unless you are doing a lot of computation, and the processes are really CPU bound, you should at least double this number. If your processes are mostly I/O bound, then you can usually go 5 to 10 times above the number of CPUs (see the sketch following this table).

CELERY_RESULT_BACKEND

By default, Celery does not use any back end to store task results. In most cases this might be acceptable behavior. For example, if you are running background tasks that send emails, you may want the worker process to update multiple tables in your database indicating the task status, etc. You can think of the master process as a dispatcher that dispatches the tasks and doesn't really care what happens to them; it is up to the task processes to update the system. This is a valid approach, especially when the worker processes need to update many aspects of the running system. Another approach, though, is for the master process to handle the data. In this scenario, the master process sends out instructions to the long-running worker processes, then gets the results back and either updates the system status itself or hands that task to another worker process. Which option you choose depends on the system and your preferences. My recommendation is that the tasks be clearly separated into two categories: those that interact with external systems (mail servers, file servers, web servers, etc.) and those that work with the internal system state (update internal databases, increase counters, etc.). The chosen back-end system does not need to be the same platform as the one used for communication. Therefore, you can have the message queue running on RabbitMQ (setting value 'amqp') and store the results in a Redis database (setting value 'redis').


CELERY_RESULT_SERIALIZER

When the results are generated by the worker process, they need to be stored in whatever back-end media you selected with the previous setting. The default is Python’s pickle serialization method. In a nutshell, pickle serializes objects using byte code, and then pretty much evaluates this byte code on the receiving end without validating it for safety first. Therefore, if someone manages to send malicious data, pretending to come from the worker process, your master process might execute the received code; this can be used to break into your system. Because of this, the default is going to change, and Celery warns you about it on startup if you do not specify a different serialization method. One of the most convenient and safe options is to use JSON data structures.

CELERY_ACCEPT_CONTENT

This setting is a list of allowed serializers. As you know, the master process prepares data for remote execution and sends it to the worker processes. To do this, the master process serializes the data using the serializer specified by the CELERY_TASK_SERIALIZER setting. When the worker node finishes processing the data, the result (if any) is serialized before it is sent back; how it is serialized is defined by the CELERY_RESULT_SERIALIZER setting. CELERY_ACCEPT_CONTENT does not say how to serialize task parameters or results; it only lists the allowed serializers. This permits you to have some worker nodes generating results in JSON and some generating results in YAML; if you list both methods here, they will both be accepted.

CELERY_TASK_RESULT_EXPIRES

If you store the results, this setting tells Celery how long to keep them (in seconds) before removal. The default setting is to keep all results for one day; if you set it to zero, then the results will not be removed.

CELERY_TASK_SERIALIZER

Similar to CELERY_RESULT_SERIALIZER, but this setting specifies which serialization method to use when sending task data to the remote worker processes.

CELERY_IMPORTS

This is a list of modules that the Celery worker needs to import when starting. The Celery worker will search these modules for Celery task-compatible functions (those decorated with the Celery task decorator).
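Overriding any of these defaults is just a matter of adding more assignments to celeryconfig.py. The following two lines are a sketch only; the concurrency level and time zone shown are made-up illustrative values, not recommendations for your environment:

CELERYD_CONCURRENCY = 8            # e.g., a few times the CPU count for mostly I/O-bound tasks
CELERY_TIMEZONE = 'Europe/London'  # only needed if you do not want UTC time stamps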

The Main Celery Application File This file is used to initialize the Celery application and load the configuration settings from the configuration module. The contents of the file are pretty self-explanatory, as shown in Listing 12-2.

Listing 12-2.  Main Celery Application File

from __future__ import absolute_import

from celery import Celery
from celery_app import celeryconfig

app = Celery()
app.config_from_object(celeryconfig)

if __name__ == '__main__':
    app.start()


You might be wondering what the first import statement is all about. It is needed because we named our module celery.py, but there is also a system-wide package with the same name, so the Python interpreter is confused when it sees that you want to "import celery": do you want the local file, or do you want the official package? To resolve this ambiguity, you tell the Python interpreter to give priority to the modules available via sys.path whenever there is a name conflict. This allows you to have a convenient name for your Celery application file and still import the official package.

Celery Tasks As you have already seen from the directory structure, we moved all background tasks that Celery is going to manage into a separate submodule directory. Within that module we have two files for different sets of tasks: arithmetic operations, as shown in Listing 12-3.

Listing 12-3.  Arithmetic Operations Tasks File

from __future__ import absolute_import

from celery_app.celery import app

@app.task
def add(a, b):
    return a + b

@app.task
def sub(a, b):
    return a - b

And geometric operations, shown in Listing 12-4.

Listing 12-4.  Geometric Operations Tasks File

from __future__ import absolute_import

from celery_app.celery import app

@app.task
def rect_area(h, w):
    return h * w

@app.task
def circle_area(r):
    import math
    return math.pi * r ** 2

Systemd Configuration We need to adjust the systemd configuration files so that they match our current project layout. You modify the /etc/conf.d/celery file so that it now points to your new tasks module:

CELERY_APP="celery_app.celery"
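For reference, a complete /etc/conf.d/celery might look something like the following. This is a hypothetical sketch: apart from CELERY_APP, the variable names simply match the placeholders used by the celery.service unit shown later in this chapter, and the node name, paths, and log level are examples only:

CELERY_APP="celery_app.celery"
CELERYD_NODES="worker"
CELERYD_LOG_LEVEL="INFO"
CELERYD_PID_FILE="/run/celery/worker.pid"
CELERYD_LOG_FILE="/var/log/celery/worker.log"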


Now, when you restart the Celery daemon with the following command, it should pick up the new Celery application files:   # systemctl restart celery

Celery Master Application Finally, you create the Celery master application and test that the Celery process is running, and that the tasks are available. The simple test application code is in the calculator.py file:

#!/usr/bin/env python

from celery_app import tasks

def test_tasks():
    print 'Submitting job...'
    r = tasks.geometry.rect_area.delay(2, 2)
    print r.info
    print 'Job completed'

if __name__ == '__main__':
    test_tasks()

If you run it, you should see the following result:

# ./calculator.py
Submitting job...
4
Job completed
#

Routing Tasks One of the key features that message queue systems provide is the ability to route messages sent to the queue. Since task queuing systems such as Celery are usually built on top of a message queue system such as RabbitMQ, they inherit the same functionality. In smaller and simpler systems a single queue is often sufficient, but in larger systems you will need to be able to route groups of tasks to specific sets of workers, and this is what queues are designed for.

Inside a Message Queue System Figure 12-2 is a high-level overview of how a typical message queue system based on AMQP works. Celery hides most of this complexity, but it is good to have at least a general idea of how things fit together before considering the message routing.


[Figure 12-2 diagram: a producer (your application, the master process) publishes to an exchange inside the broker (RabbitMQ); the exchange routes messages into queues A, B, and C, from which consumers X and Y (worker processes) pick them up]

Figure 12-2.  Typical message queuing system architecture

Figure 12-2 illustrates the main actors in the message distribution mechanism. This is a simplified workflow:

•   Your application (the producer) calls a background task (in our example application, it was tasks.geometry.rect_area.delay(2, 2)).

•   The task details, such as the name, the arguments, and the destination queue (if specified), are serialized and submitted to the exchange.

•   The exchange, which is part of RabbitMQ, then forwards the message to one of the available queues. The standard defines four different exchange types: direct (sends a task to the queue whose binding matches the message's routing key), fanout (sends tasks to all queues that are bound to it), topic (sends messages to all queues with a matching routing key pattern), and headers (distributes messages based on the message headers). A routing key is a tag that each message can be tagged with. When queues are bound to an exchange, they are also assigned a routing key (or a routing key pattern); this allows the exchange to route messages accordingly (see the short kombu sketch after this list).

•   The message reaches a consumer process (worker process), where the message is processed and removed from the queue.
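The following is a rough sketch of those moving parts expressed directly in kombu, the messaging library that Celery is built on. It is not something this chapter's Celery application needs to do itself, because Celery declares the exchanges, queues, and bindings for you, but it shows how a direct exchange, a queue binding with a routing key, and a published message relate to each other. The exchange name and the payload are made up for illustration:

from kombu import Connection, Exchange, Queue

tasks_exchange = Exchange('tasks', type='direct')               # direct: route by exact routing key
calc_queue = Queue('calc', tasks_exchange, routing_key='calc')  # bind queue 'calc' with routing key 'calc'

with Connection('amqp://guest@localhost//') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish({'task': 'rect_area', 'args': [2, 2]},     # illustrative payload, not Celery's wire format
                     exchange=tasks_exchange,
                     routing_key='calc',                        # matched against the queue bindings
                     declare=[calc_queue])                      # declare the queue/binding if missing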

If you are interested in the details of the AMQP protocol, you can find more information on the official AMQP model description page: http://www.openamq.org/tutorial:the-amq-model. Discussing message queue systems in depth is beyond the scope of this book, especially as the subject is so broad. If you are interested in message queue systems, I recommend RabbitMQ in Action: Distributed Messaging for Everyone by Alvaro Videla and Jason J. W. Williams.

Binding Worker Node to Specific Queues For basic applications, you need to do two things to effectively use queues: first, you need to instruct the worker node to bind to a particular queue; then, you need to tag the tasks so that they are routed correctly. You bind the worker node to specific queues at start time. By default, all untagged tasks are sent to the default queue, which has the name (tag) "celery". If you do not specify any queues to bind to, the worker process will automatically bind to this queue. Let's create a new queue and call it "calc", so that all calculation-related tasks are sent only to the workers bound to this queue.


First, you need to add a new setting to /etc/conf.d/celery:

CELERY_QUEUES="calc"

Then, you make sure that when the Celery daemon starts, it uses this parameter. You need to modify the system service definition file /usr/lib/systemd/system/celery.service:

ExecStart=/bin/celery multi start "${CELERYD_NODES}" -A "${CELERY_APP}" -Q "${CELERY_QUEUES}" --pidfile="${CELERYD_PID_FILE}" --logfile="${CELERYD_LOG_FILE}" --loglevel="${CELERYD_LOG_LEVEL}"

If you now restart the Celery process, you will see that the process is bound to only the new queue:

# ps auxww | grep celery
celery 4760 0.7 1.0 246404 21472 ? S 14:27 0:00 /usr/bin/python -m celery worker -n [email protected] -A celery_app.celery --loglevel=INFO -Q calc --logfile=/var/log/celery/worker.log --pidfile=/run/celery/worker.pid
celery 4773 0.0 0.8 245600 17680 ? S 14:27 0:00 /usr/bin/python -m celery worker -n [email protected] -A celery_app.celery --loglevel=INFO -Q calc --logfile=/var/log/celery/worker.log --pidfile=/run/celery/worker.pid

So, now that the worker is no longer bound to the default queue, what is going to happen to your calculator.py application if you run it? Remember, the tasks by default are not tagged, and thus they all go to the default "celery" queue, but now nothing listens to it. Let's try and run this a couple of times:

# ./calculator.py
Submitting job...
None
Job completed
#
# ./calculator.py
Submitting job...
None
Job completed
#

Not much of a surprise, is it? There are no workers working on the "celery" queue, so the task is not processed. But what actually happened to the tasks that were submitted? You need to ask RabbitMQ directly, using the rabbitmqctl command:

# rabbitmqctl list_queues
Listing queues ...
1159cf27f68247da9885495e63c7dd1c 0
calc 0
celery 2
celeryev.601d558c-6354-4265-9704-a225948bb052 0
e289f4c20f754489944f75e1ee7c8ac6 0
[email protected] 0
...done.

You can see that there are two queues, one named "celery" and one named "calc". There are no messages in the "calc" queue, but there are two messages in the "celery" queue.


To make sure that your requests are not sent into a black hole, you need to tag them. This is as simple as specifying the name of the queue where you want to send your task. You modify the calculator.py file so that the task call has a queue name in it (unfortunately, we cannot use the "delay" shortcut):

r = tasks.geometry.rect_area.apply_async((2, 2), queue="calc")

If you run calculator.py again, you will see that the task is now processed, as expected:

# ./calculator.py
Submitting job...
4
Job completed
#

Another method of specifying the queues is in the Celery configuration file (in our examples, this is celeryconfig.py):

from kombu import Queue

CELERY_QUEUES = (
    Queue("calc"),
)

This way you can keep the same systemd configuration file on all processing machines, even if they bind to different queues.
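If you would rather not scatter queue names throughout the application code, Celery can also route tasks by name. The following is a sketch of an optional addition to celeryconfig.py, not used in this chapter's example; with a route map like this in place, the master process can keep using the plain delay() shortcut and the geometry tasks still end up in the "calc" queue (by default, a task's name is its module path plus the function name):

CELERY_ROUTES = {
    'celery_app.tasks.geometry.rect_area': {'queue': 'calc'},
    'celery_app.tasks.geometry.circle_area': {'queue': 'calc'},
}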

Sending Broadcast Messages The default message queue behavior is such that one message reaches only one recipient. This is fine for tasks such as sending an email (you want only one email to be sent!) or performing calculations (there is no need to calculate the same thing on all available servers). However, sometimes you need to send messages to all available servers; one example is a monitoring system, where you want to tell all servers to run their checks and update the status accordingly. To achieve this goal, Celery has a mechanism for sending broadcast messages. This means that a message sent to a queue will be routed to all workers listening on that queue. Again, this can be defined in the celeryconfig.py file; in the example, you define two queues, one for "normal" calculations and one for "broadcast" calculations:

from kombu import Queue
from kombu.common import Broadcast

CELERY_QUEUES = (
    Queue("calc"),
    Broadcast("broadcast_calc"),
)

Also modify the calculator.py application, so that you submit tasks to two different queues:

def test_tasks():
    print "Submitting job..."
    r = tasks.geometry.rect_area.apply_async((2, 2), queue="calc")
    print r.info
    print "Job completed"
    print "Submitting broadcast job..."
    r = tasks.arithmetics.add.apply_async((1, 1), queue="broadcast_calc")
    print r.info
    print "Job completed"


If you run the example code again, you should get two results:

# ./calculator.py
Submitting job...
4
Job completed
Submitting broadcast job...
2
Job completed

Okay, this is what we expected, but let's think about it for a moment. If you submit one task, then there is only one task in the queue, but it is forwarded to all workers. When the workers reply with their results, they will send a message that can be translated into something like this in plain English: "the result for the task with ID A is XYZ." If you have multiple task IDs, that's fine, as you can relate task results to the task ID numbers; but if you broadcast, then there will be multiple results and only one task ID! There is no easy way to resolve this, other than to ignore the results on the submitting side (the master process) and make sure that the worker processes submit their results somewhere central, for example in a shared database.
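One way to express that "ignore the results, let the workers report centrally" approach in code is to declare the broadcast tasks with ignore_result=True, so that no AsyncResult is stored at all, and have each task record its outcome wherever your system keeps its state. The following is a sketch only; run_check() and record_status() are hypothetical helpers standing in for the local check logic and for an update to a shared database or monitoring store:

@app.task(ignore_result=True)
def update_host_status(check_name):
    status = run_check(check_name)        # hypothetical: run the local check on this worker
    record_status(check_name, status)     # hypothetical: write the outcome to a shared database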

Summary In this chapter we briefly discussed task and message queue systems. You can use this knowledge to rewrite the distributed monitoring application that we wrote in the previous three chapters.

•   Task queue systems are used to distribute tasks to worker nodes so that they can be processed in the background.

•   Task queue systems typically use an underlying message queue system to distribute messages between worker nodes.

•   Celery in combination with RabbitMQ can be used to call remote Python functions.

•   Tasks can be routed to dedicated queues, and worker processes can listen to a predefined set of queues. This allows you to have specialized worker processes and group them according to this specialization.

•   It is possible to create a broadcast queue where all subscribers will receive the same message.


CHAPTER 13

Automatic MySQL Database Performance Tuning In this chapter, we are going to extend the plug-in framework that we built in Chapter 6. As you may remember, the plug-in framework allows us to extend an application’s functionality by implementing new methods outside the main application code. The new framework will allow for the plug-ins to generate data and submit it back to the application, so the other plug-ins are able to use it as well. Based on the new framework, we will build an application that inspects the MySQL database configuration and live statistics and makes performance-tuning suggestions. We’ll look at some of the tuning parameters and write a few plug-ins.

Requirements Specification and Design As a system administrator, you probably have been asked to improve the performance of a MySQL database server. This is a creative and challenging task, but at the same time it can be quite daunting. The database software in itself is a complex piece of software, and you also must account for external factors such as the running environment—the number of CPU cores and the amount of memory. On top of that, the actual table layout and the SQL statement structure play very important roles. You may have already developed your own strategy for how to approach this problem. The reason I mention “your own strategy” is that, unfortunately, there is no universal solution to tuning the MySQL database. Each installation is unique and requires an individual approach. Various solutions are available to help you identify the most common issues within the database, including commercial options such as MySQL Enterprise Monitor (http://mysql.com/products/enterprise/monitor.html) and open-source tools such as MySQLTuner (http://blog.mysqltuner.com/). The main purpose of such tools is to automate the tuning process by providing insight into the system configuration and behavior. Assuming that SQL statement tuning is a job for the software developers, as a system administrator, you are effectively juggling two parameters: the database configuration and the operating environment configuration. The feedback is provided in the form of internal database counters, such as the number of slow queries or the number of connections. To put all this into perspective, MySQL Community Server 5.6.19 has 443 status variables and 602 configuration variables. I do not even consider listing the operating environment variables because that would be nearly impossible. It is humanly impossible to correlate all the variables and make meaningful observations on the larger scale. The available tools attempt to inspect the configuration and, based on the observed status variables, make some suggestions for how to improve the configuration. This works well for basic tuning, but as you dig deeper, you probably will find that you need to modify the tool so that it is tuned to your needs, rather than being based on some generic observations. This is where you need a tool that is extensible and easy to adjust.


Basic Application Requirements In Chapter 6, we discussed the advantages of the architecture based on plug-ins. In this architecture, the main (host) application provides some generic service to the plug-ins, which either extend the functionality of the main application or actually provide the services. From the user perspective, the system acts as one entity. This brings us to the basic requirements list for the application that we're going to build in this chapter:

•   The application should be easy to extend, modify, and enhance with new functionality.

•   The application should focus on collecting and processing the performance observations from the MySQL database.

•   The performance-tuning rules should be easy to transfer and exchange between different instances of the application.

System Design As a basis for the application, we'll use the plug-in framework we created in Chapter 6. We could take it as is, replace the log-line-reading part with the MySQL data collection function, and start writing the plug-in modules that consume the data. This approach would serve us well in the short term, but it may not be the most extensible solution in the long run. The problem is that, although we could immediately identify the MySQL configuration parameters and status variables, we would struggle with the operating system status parameters. This is because there is no definitive source for this information. Each system is different and may require different tools to report the status. The solution to this problem is to move the task of producing the information from the host application to the plug-in modules. In other words, some plug-ins will produce the data, which other plug-ins will rely on for their calculations and, ultimately, their suggestions for performance improvements. In this scenario, the host application acts merely as a dispatcher, and the only service it provides is the connectivity to the database server. The rest of the functionality is provided by the plug-ins. Figure 13-1 shows a schematic diagram of the producer/consumer plug-in architecture.

Figure 13-1.  The producer/consumer plug-in framework [diagram: the host application runs the producer plug-ins first and the consumer plug-ins second by dispatching commands through the plug-in manager, which forwards them to the plug-ins held in the plug-in registry; the producer plug-ins generate the shared information, which is then supplied to the consumer plug-ins]


As you can see, the host application still issues the commands via the plug-in manager object. The result is also passed back through the plug-in manager, but for clarity, the figure shows a direct link back to the host application. Once the data is collected from the producer plug-in modules, it is then passed back to the consumer modules. So the host application is responsible for providing the connectivity details to the plug-ins and also maintaining the correct order of producer-first, consumer-last calls. In addition to these changes to the plug-in framework, we're going to provide three basic producer plug-ins:

•   A plug-in to provide the MySQL system variables

•   A plug-in to provide the configuration details

•   A plug-in to provide the details of the physical and virtual memory available on the system, as well as the number of CPU cores

This will be the basic set of information upon which we’ll build our advisor plug-ins. The advisor plug-ins will perform some calculations based on the results received and provide suggestions on how to improve the server performance.

N■Note MySQL tuning is a very broad topic. If you would like to learn more, I recommend starting with the MySQL Performance Blog (http://mysqlperformanceblog.com/), which includes a wealth of performance-tuning tips and articles. Other useful resources are http://dev.mysql.com/doc/refman/5.7/en/server-parameters.html and http://www.mysql.com/why-mysql/performance/.

Modifying the Plug-in Framework The information sharing between different components can quickly become complicated. The following are some potential problems you may need to resolve:

•   Which plug-ins have access to which information? You may want to hide some information from a certain set of plug-ins.

•   What if the producer plug-ins are also consumers? Some plug-ins may require information produced by other plug-ins to finish their tasks.

•   How do you share large amounts of data between the plug-ins? For example, when the amount of data produced does not fit into physical memory and needs to be stored on disk.

For the sake of simplicity, we are going to have a flat-access model, where the consumer modules can access all the information generated by the producer plug-ins. We will not implement the hierarchical-producer layout, and we will assume that the producers are self-sufficient.

Changes to the Host Application The responsibilities of the host application are limited to the following three tasks:

•   Reading the MySQL database credentials from a configuration file

•   Establishing the initial connection to the server

•   Running the plug-in modules in three stages: run the producers and collect the data, run the consumers' process methods, and then run the report methods


We will use Python's ConfigParser library to access the configuration from the Windows INI-style configuration file, which has the following contents (obviously, you will need to adjust the settings to match your database details):

[main]
user=root
passwd=password
host=localhost

Listing 13-1 shows the full listing of the host application. As you can see, the code is straightforward. It is logically divided into the three main phases and the three plug-in processing stages. Notice that we use keywords to distinguish between the producer and the consumer modules.

Listing 13-1.  The Host Application

#!/usr/bin/env python

import re
import os, sys
from ConfigParser import SafeConfigParser
import MySQLdb
from plugin_manager import PluginManager

def main():
    cfg = SafeConfigParser()
    cfg.read('mysql_db.cfg')

    plugin_manager = PluginManager()
    connection = MySQLdb.connect(user=cfg.get('main', 'user'),
                                 passwd=cfg.get('main', 'passwd'),
                                 host=cfg.get('main', 'host'))

    env_vars = plugin_manager.call_method('generate', keywords=['provider'],
                                          args={'connection': connection})
    plugin_manager.call_method('process', keywords=['consumer'],
                               args={'connection': connection, 'env_vars': env_vars})
    plugin_manager.call_method('report')

if __name__ == '__main__':
    main()

If you compare this listing to the examples in Chapter 6, you'll notice that this time we actually expect something back from the call_method function. This function returns the results generated by the producer plug-in modules and stores them in a single variable. This variable is then passed to the consumer plug-ins as a keyword argument called env_vars. The consumer plug-ins expect this argument to be present. We'll look into the structure of this variable in the next section.


Modifying the Plug-in Manager The host application just handles a single call to the call_method function, because it doesn't know, and doesn't need to know, the exact number and names of the plug-ins. It is the plug-in manager's responsibility to route the request to the appropriate plug-in modules. However, this approach brings up a problem: if a single call to a function actually yields multiple answers from multiple functions, how do we store the result? To complicate matters even more, we don't know exactly what the plug-in is going to return. It may be a dictionary, a list, or even a custom object. And we shouldn't need to know this. It's up to the consumer to interpret this information. The people who write the producer plug-ins are expected to provide extensive documentation about the data structures produced by their modules. In our case, the plug-in manager component will handle the results in a very simple manner: it's going to store them as separate entries in a dictionary. The dictionary keys will be the plug-in class names, and the key values will be whatever objects are returned by the plug-in module calls. This dictionary is then passed as an argument to the consumer plug-in call. This will result in a flat information store, where all information is accessible by all plug-ins. This may bring some security concerns, but for a simple application like the one we're building here, simplicity plays an important role. The only modification to the plug-in manager code is the call_method() function, as shown in Listing 13-2.

Listing 13-2.  The Plug-in Manager Method Dispatcher Function

def call_method(self, method, args={}, keywords=[]):
    result = {}
    for plugin in self.plugins:
        if not keywords or (set(keywords) & set(self.plugins[plugin])):
            try:
                name_space = plugin.__class__.__name__
                result[name_space] = getattr(plugin, method)(**args)
            except AttributeError:
                pass
    return result

We now have a plug-in framework that is capable of passing information between the modules. If you really need to have a multistage producer architecture, just for a few levels, you could use keywords to implement it. For example, you may have the keywords producer1, producer2, and producer3. You can then call the generate() method three times, passing a different keyword each time and supplying the intermediate results to the producer2 and producer3 instances.
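To make the flow concrete, here is an illustration (not part of the book's code) of what env_vars might look like after the 'generate' pass. The values shown are made up, loosely based on the sample outputs later in the chapter; each producer class name becomes a key, and whatever that plug-in returned is stored underneath it:

env_vars = {
    'ServerSystemVariables': {'key_buffer_size': '8388608', 'version': '5.6.19'},
    'ServerStatusVariables': {'Slow_queries': '0', 'Questions': '15'},
    'HostProperties': {'mem_phys_total': 1051463680, 'cpu_cores': 2},
}

# which is why a consumer plug-in can simply do:
key_buffer = int(env_vars['ServerSystemVariables']['key_buffer_size'])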

Writing the Producer Plug-ins We need to produce some data for the advisor plug-ins. We’ll start by querying the MySQL internal status and configuration tables. First, let’s look at how to access the MySQL database from Python applications.

Accessing the MySQL Database from Python Applications The support for MySQL databases is provided by the MySQLdb Python module, which is available as a prebuilt package on most Linux distributions. For example, on a Fedora system, you can install this module with the following command:   $ sudo yum install MySQL-python  


Alternatively, you can download the latest source package from the project's home page at http://sourceforge.net/projects/mysql-python/ and build the library from the source code. Keep in mind that MySQLdb uses modules written in C, so you will also need to install a C compiler (the package is typically called gcc), the MySQL development headers (typical package name mysql-devel), and the Python development headers (typical package name python-devel). Once you have installed the library, check that it is loading correctly:

$ python
Python 2.6.2 (r262:71600, Jan 25 2010, 18:46:45)
[GCC 4.4.2 20091222 (Red Hat 4.4.2-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
>>> MySQLdb.__version__
'1.2.3c1'
>>>

The MySQLdb library is compatible with the Python DB-API Specification version 2. This specification defines the interface, objects, variables, and error-handling rules that the compliant library must implement. This is an attempt to unify the interface of all database access modules. The advantage of this unification is that, as a developer, you don't need to worry much about the specifics of the database module calls, because they are very similar. The code that you wrote to connect to SQLite 3 should work with the MySQL database without major modifications. The main difference between the libraries is perhaps the connect() method, which is used to connect to the database and therefore is very specific to the database software that you're using. Regardless of which database module you are using, the first method you'll invoke is usually connect(). This method returns an instance of the connect object, which you will use to access the database. The parameters are database-specific. Since we're discussing the MySQL database in this chapter, here's how you establish a connection to the database server:

>>> connection = MySQLdb.connect( host='localhost',
...                               user='root',
...                               passwd='password',
...                               db='test')
>>>

These four parameters, the hostname, username, password, and database name, are the ones you'll find yourself using most of the time. However, the MySQL server also supports multiple connection options, which you may need to modify. Table 13-1 lists the most important ones. For a full list of parameters, refer to the MySQLdb documentation (http://mysql-python.sourceforge.net/MySQLdb.html).

Table 13-1.  Commonly Used MySQL Connect Options

Parameter

Description

host

Name of the host to connect to—either a fully qualified domain name or an IP address of the host.

user

Username you use to authenticate to the database server.

passwd

Password you use for the authentication.

db

Name of the database you’re connecting to. If omitted, no default database will be selected, and you will need to use the USE DATABASE SQL command to connect to a database.

port

Port number on which the MySQL server is running. The default value is 3306.


unix_socket

Location of the UNIX socket of the MySQL server instance. The default location varies between the distributions, but typically is /var/lib/mysql/mysql.sock.

compress

Flag indicating whether the protocol compression should be enabled or disabled. It is disabled by default.

connect_timeout

Number of seconds to wait for the connect operation to complete. If it is not finished within the specified time frame, the operation raises an error.

init_command

Initialization command that the server must execute immediately after the connection has been established.

use_unicode

If this flag is set to true, the CHAR, VARCHAR, and TEXT fields are returned as Unicode strings. Otherwise, the return results are just the normal strings. Regardless of this setting, you can always write as Unicode to the database.

charset

Connection character set will be set to the character set specified as the value for this argument.

The returned connect object implements four basic methods for managing the connection status. These methods are listed in Table 13-2. Table 13-2.  The Connect Object Methods

Method name

Description

.close()

Closes the established connection, which will not be usable from the moment this method is called. All cursor objects derived from this connection will be unusable, too. Bear in mind that all transactions or changes will be rolled back if you close the connection without committing the changes first.

.commit()

Forces the database engine to commit all outstanding transactions.

.rollback()

Rolls back the last noncommitted transaction, if you’re using a MySQL database engine that does support transactions (such as InnoDB).

.cursor()

Returns a cursor object, which you will use to execute the SQL commands and read the results. The MySQL database does not support the cursors, but the MySQLdb library provides this wrapper object, which emulates the cursor functionality.

The real work in the database is done using the cursor objects. A cursor object acts as a context for the query execution and, more important, the data-fetching operations. You can have multiple cursors created by a single connection object. The changes made by any cursor will be seen immediately by the other cursors, as long as they belong to the same connection. Table 13-3 lists the most commonly used cursor methods; a combined usage example follows the table. The connection context used in the examples in the table is created as follows:

>>> connection = MySQLdb.connect( host='localhost',
...                               user='root',
...                               passwd='password',
...                               db='zm' )
>>>
>>> cursor = connection.cursor()


Table 13-3.  Commonly Used Database Cursor Methods

Method

Description

Example

.execute()

Prepares and executes the SQL query. It accepts two parameters: the SQL statement that needs to be executed (required) and an optional list of parameters. The variables in the SQL string are specified using the %s string only. The second optional argument must be a tuple, even if it is just a single value.

The following two queries are functionally identical: >>> cursor.execute("SELECT type« FROM ZonePresets WHERE id=1") 1L >>> cursor.execute("SELECT type« FROM ZonePresets WHERE id=%s", (1,)) 1L >>>

.executemany()

Similar to the .execute() method; accepts a list of options and iterates through them. The results are combined and accessible using the cursor data-fetching methods. The list elements must be tuples, even if they contain just a single value.

The following example runs two SELECT queries in one command: >>> cursor.executemany("""SELECT type« FROM ZonePresets WHERE« id=%s AND type=%s""", [ (1, 'Active'), (2, 'Active') ] ) 2L >>>

.rowcount

A read-only attribute (not a method) that indicates the number of rows the last .execute() statement generated.

.fetchone()

Returns the next row from the result set. If no more data is available, it will return the None object. The result is always a tuple. Elements are in the same order as specified in the query.

>>> cursor.execute("SELECT id,« type FROM ZonePresets") 6L >>> cursor.fetchone() (1L, 'Active') [...] >>> cursor.fetchone() (6L, 'Active') >>> cursor.fetchone() >>>

.fetchall()

Returns all rows returned by the query in the form of a tuple of tuples.

>>> cursor.execute("SELECT id,« type FROM ZonePresets") 6L >>> cursor.fetchall() ((1L, 'Active'), (2L, 'Active'),« (3L, 'Active'), (4L, 'Active'),« (5L, 'Active'), (6L, 'Active')) >>>

.fetchmany()

Returns the number of rows specified by its argument. If no argument is supplied, the number of rows read depends on the .arraysize setting.

>>> cursor.execute("SELECT id, type« FROM ZonePresets") 6L >>> cursor.fetchmany(2) ((1L, 'Active'), (2L, 'Active')) >>>


.arraysize

A read/write attribute that controls the number of rows the .fetchmany() method must return.

>>> cursor.execute("SELECT id, type« FROM ZonePresets") 6L >>> cursor.arraysize 1 >>> cursor.arraysize=3 >>> cursor.fetchmany() ((1L, 'Active'), (2L, 'Active'),« (3L, 'Active')) >>>
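Putting the methods from Table 13-3 together, a typical read-modify-commit cycle looks roughly like the following sketch. It reuses the illustrative ZonePresets table and the zm database from the connection example above; the query values themselves are made up:

import MySQLdb

connection = MySQLdb.connect(host='localhost', user='root', passwd='password', db='zm')
cursor = connection.cursor()
try:
    cursor.execute("SELECT id, type FROM ZonePresets WHERE type=%s", ('Active',))
    for row in cursor.fetchall():
        print row
    cursor.execute("UPDATE ZonePresets SET type=%s WHERE id=%s", ('Inactive', 1))
    connection.commit()          # make the change permanent
except MySQLdb.Error:
    connection.rollback()        # undo the uncommitted transaction on error
finally:
    cursor.close()
    connection.close()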

Querying the Configuration Variables You don't really need to connect to any of the databases if you want to retrieve the server configuration or the system status variables. It's enough to establish a connection to the database server. To get the MySQL variables, we will need to use the MySQL SHOW statement. Its syntax is similar to the SELECT statement, where you are allowed to use the LIKE and WHERE modifiers to limit the query set. (Remember that there are 287 configuration settings and 291 status variables!) We'll start with the configuration variables. These variables indicate how the server is configured. There are three ways to alter these variables:

•   Set them at the server start time using the command-line parameters.

•   Set them at the server start time using the options file (usually my.cnf).

•   Set them while the server is running using the MySQL SET statement.

N■Note You can find detailed descriptions of all MySQL variables and how they affect the functionality of the server in the official MySQL documentation at http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html.

The basic syntax of the command is SHOW VARIABLES. The default behavior of this command is to show the settings that are applied to the current session, and it is equivalent to the extended syntax of the same command: SHOW LOCAL VARIABLES. If you want to find out which settings will be applied to the new connections, you need to use the SHOW GLOBAL VARIABLES command. The result set can be further modified with the LIKE and WHERE clauses, as shown in the following example:

>>> connection = MySQLdb.connect( host='localhost',
...                               user='root',
...                               passwd='password' )
>>> cursor = connection.cursor()
>>> cursor.execute("SHOW GLOBAL VARIABLES LIKE '%innodb%'")
37L
>>> for r in cursor.fetchmany(10): print r
...


('have_innodb', 'YES')
('ignore_builtin_innodb', 'OFF')
('innodb_adaptive_hash_index', 'ON')
('innodb_additional_mem_pool_size', '1048576')
('innodb_autoextend_increment', '8')
('innodb_autoinc_lock_mode', '1')
('innodb_buffer_pool_size', '8388608')
('innodb_checksums', 'ON')
('innodb_commit_concurrency', '0')
('innodb_concurrency_tickets', '500')
>>>
>>> cursor.execute("SHOW GLOBAL VARIABLES WHERE variable_name LIKE '%innodb%' AND value > 0")
18L
>>> for r in cursor.fetchmany(10): print r
...
('innodb_additional_mem_pool_size', '1048576')
('innodb_autoextend_increment', '8')
('innodb_autoinc_lock_mode', '1')
('innodb_buffer_pool_size', '8388608')
('innodb_concurrency_tickets', '500')
('innodb_fast_shutdown', '1')
('innodb_file_io_threads', '4')
('innodb_flush_log_at_trx_commit', '1')
('innodb_lock_wait_timeout', '50')
('innodb_log_buffer_size', '1048576')
>>>

N■Tip The columns of the system configuration table are named variable_name and value. You can use these names in the SHOW command along with the LIKE and WHERE statements.

Let's write a plug-in class that retrieves all the variables from the database and returns the data to the plug-in manager. As you know, by default, the result is a tuple of tuples. To make it more useful, we'll convert it to the dictionary object, where the variable names are the dictionary keys and the variable values are dictionary values, as shown in Listing 13-3.

Listing 13-3.  Plug-in to Retrieve the MySQL Server Variables

class ServerSystemVariables(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['provider']
        print self.__class__.__name__, 'initialising...'

    def generate(self, **kwargs):
        cursor = kwargs['connection'].cursor()
        cursor.execute('SHOW GLOBAL VARIABLES')
        result = {}

N■Tip The columns of the system configuration table are named variable_name and value. You can use these names in the SHOW command along with the LIKE and WHERE statements. Let’s write a plug-in class that retrieves all the variables from the database and returns the data to the plug-in manager. As you know, by default, the result is a tuple of tuples. To make it more useful, we’ll convert it to the dictionary object, where the variable names are the dictionary keys and the variable values are dictionary values, as shown in Listing 13-3. Listing 13-3.  Plug-in to Retrieve the MySQL Server Variables class ServerSystemVariables(Plugin):   def __init__(self, **kwargs): self.keywords = ['provider'] print self.__class__.__name__, 'initialising...'   def generate(self, **kwargs): cursor = kwargs['connection'].cursor() cursor.execute('SHOW GLOBAL VARIABLES') result = {}


        for k, v in cursor.fetchall():
            result[k] = v
        cursor.close()
        return result

Querying the Server Status Variables The server status variables provide insight into the server operation by presenting the internal counters. All variables are read-only and cannot be modified.

N■Note You can find detailed information about each of the MySQL server status variables in the MySQL documentation, which is available at http://dev.mysql.com/doc/refman/5.7/en/server-status-variables.html.

The SHOW command syntax is SHOW STATUS. Similar to the SHOW VARIABLES command without the modifier, this command returns the status applicable to the current session and is equivalent to the SHOW LOCAL STATUS command. If you want to retrieve the server-wide status, use the SHOW GLOBAL STATUS command. This behavior applies only to versions 5.0 and later of the MySQL server. The versions prior to this release had the opposite behavior, where SHOW STATUS assumed the global status, and you needed to explicitly run SHOW LOCAL STATUS if you wanted to retrieve the session-specific counters. This might present a problem if you're developing a plug-in that may be executed on various versions of the MySQL server. There is a simple solution to this problem, though: specify the version selector in your SHOW statement. The following query correctly uses an appropriate command modifier and can be used across all versions of MySQL server:

SHOW /*!50000 GLOBAL */ STATUS

You can use the LIKE and WHERE dataset modifiers with this command, too, as in the following example:

>>> cursor.execute("SHOW GLOBAL STATUS WHERE variable_name LIKE '%innodb%' AND value > 0")
16L
>>> for r in cursor.fetchmany(10): print r
...
('Innodb_buffer_pool_pages_data', '19')
('Innodb_buffer_pool_pages_free', '493')
('Innodb_buffer_pool_pages_total', '512')
('Innodb_buffer_pool_read_ahead_rnd', '1')
('Innodb_buffer_pool_read_requests', '77')
('Innodb_buffer_pool_reads', '12')
('Innodb_data_fsyncs', '3')
('Innodb_data_read', '2494464')
('Innodb_data_reads', '25')
('Innodb_data_writes', '3')
>>>

Listing 13-4 shows the plug-in to retrieve the system status variables. This plug-in class is similar to the one that queries the system configuration settings.


Listing 13-4.  The Plug-in to Retrieve the System Status Variables

class ServerStatusVariables(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['provider']
        print self.__class__.__name__, 'initialising...'

    def generate(self, **kwargs):
        cursor = kwargs['connection'].cursor()
        cursor.execute('SHOW /*!50000 GLOBAL */ STATUS')
        result = {}
        for k, v in cursor.fetchall():
            result[k] = v
        cursor.close()
        return result

Collecting the Host Configuration Data It’s all very well and good that we were able to retrieve the MySQL configuration and status data, but we still need to put that data into the context of the operating environment to actually make any use of it. Let’s take the key_buffer_size variable from the system configuration list as an example. This variable sets the amount of memory dedicated to the MyISAM table indexes. The setting can have a significant impact on the performance of the MySQL server. If you set it too small, the indexes will not be cached in memory, and for every lookup, the server will be performing the disk-read operation, which is significantly slower than the read-from-memory operation. If you allocate too much memory to this buffer, you’ll limit the memory available for other operations, such as the file system cache. If the file system cache is too small, all read and write operations will not be cached, and thus the disk I/O will be negatively impacted. The standard recommendation for this buffer variable is to use 30 to 40% of the total memory available on the server. So, to make this deduction, you actually need to know the amount of physical memory on the system! There are many different aspects you must consider, but the most significant ones are the amount of physical memory, the amount of virtual memory (or the swap size on Linux systems), and the number of CPU cores. We’re going to use the psutil library, which provides the API to query the system memory readings. This library is designed to get the information about the running processes and perform some basic process manipulations. It is not included in the basic Python module set, but it is widely available on most Linux distributions. For example, on a Fedora system, you can install this library with the following command:   $ sudo yum install python-psutil   The source code along with the complete documentation is available on the project website at https://github.com/giampaolo/psutil. Unfortunately, this library does not provide the information about the number of available CPU cores. We’ll need to query the Linux /proc/ file system to get the report about the available CPUs. This is quite easy to do. We just need to count the lines in the /proc/cpuinfo file that start with the keyword processor. Listing 13-5 shows the plug-in code that collects the system memory readings and the CPU information.


Listing 13-5.  The Plug-in to Retrieve the System Information

import psutil

[...]

class HostProperties(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['provider']
        print self.__class__.__name__, 'initialising...'

    def _get_total_cores(self):
        f = open('/proc/cpuinfo', 'r')
        c_cpus = 0
        for line in f.readlines():
            if line.startswith('processor'):
                c_cpus += 1
        f.close()
        return c_cpus

    def generate(self, **kwargs):
        result = { 'mem_phys_total': psutil.TOTAL_PHYMEM,
                   'mem_phys_avail': psutil.avail_phymem(),
                   'mem_phys_used' : psutil.used_phymem(),
                   'mem_virt_total': psutil.total_virtmem(),
                   'mem_virt_avail': psutil.avail_virtmem(),
                   'mem_virt_used' : psutil.used_virtmem(),
                   'cpu_cores'     : self._get_total_cores(),
                 }
        return result
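Note that the module-level psutil calls in Listing 13-5 come from the old 0.x API. If your distribution ships psutil 2.0 or later, those helpers are no longer available; a rough equivalent of the generate() method, sketched under that assumption, would be:

    def generate(self, **kwargs):
        mem = psutil.virtual_memory()
        swap = psutil.swap_memory()   # the "virtual memory" readings above map to swap on Linux
        return { 'mem_phys_total': mem.total,
                 'mem_phys_avail': mem.available,
                 'mem_phys_used' : mem.used,
                 'mem_virt_total': swap.total,
                 'mem_virt_avail': swap.free,
                 'mem_virt_used' : swap.used,
                 'cpu_cores'     : psutil.cpu_count(),  # removes the need for _get_total_cores()
               }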

Writing the Consumer Plug-ins Now we are ready to start writing the advisor plug-ins. These plug-ins will make suggestions based on the information they receive from the information producer modules. So far, we have collected the base information about the database settings and status, as well as some information about the physical hardware and the operating system. Although the information set is not exhaustive, it includes the crucial details needed to make some educated conclusions. Here, we’ll look at three examples that should be sufficient to get you up to speed so you can start developing your own advisor plug-ins.

Checking the MySQL Version The very first check you may need to perform is the MySQL version number. It’s quite important to keep your server installation up to date. Every new release fixes server software bugs and potentially introduces performance improvements.


The plug-in that checks the current MySQL version bases its decision on the latest generally available (GA) release version number, which is available on the MySQL download page at http://mysql.com/downloads/mysql/. To extract this information from the web page, we'll use the Beautiful Soup HTML parsing library. The page structure is relatively simple, and the data we require is included in the last occurrence of the <h1> tag:

[...]
<h1>Download MySQL Community Server</h1>
[...]
<h1>MySQL Community Server 5.6.19</h1>
[...]

The plug-in code will extract this information and compare it against the information reported by the ServerSystemVariables module. Four states can be reported:

•   If the major version numbers don't match, it might be a serious issue, and therefore the finding is marked as critical.

•   If the current major version matches the latest, but the current minor version number is lower than the latest, the issue is marked as a warning.

•   If the major and minor versions are up to date but the release number is lower than the latest, it's just a note that a patch might be beneficial.

•   If none of the above, we'll conclude that the current installation is up to date.

N■Note Another possible check is for versions newer than the current GA release, which may potentially cause problems because the development versions cannot be thoroughly tested. For the sake of code simplicity, we'll exclude this case in our example. Such an additional check should be relatively easy to include in the module.

The full listing of the plug-in that checks the current MySQL version is shown in Listing 13-6.

Listing 13-6.  The Module to Check the Current Version Against the Latest GA Release

class MySQLVersionAdvisor(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['consumer']
        self.advices = []
        self.installed_release = None
        self.latest_release = None

    def _check_latest_ga_release(self):
        html = urllib2.urlopen('http://www.mysql.com/downloads/mysql/')
        soup = BeautifulSoup(html)
        tags = soup.findAll('h1')


        version_str = tags[1].string.split()[-1]
        (major, minor, release) = [int(i) for i in version_str.split('.')]
        return (major, minor, release)

    def process(self, **kwargs):
        version = kwargs['env_vars']['ServerSystemVariables']['version'].split('-')[0]
        (major, minor, release) = [int(i) for i in version.split('.')]
        latest_major, latest_minor, latest_rel = self._check_latest_ga_release()
        self.installed_release = (major, minor, release)
        self.latest_release = (latest_major, latest_minor, latest_rel)
        if major < latest_major:
            self.advices.append(('CRITICAL',
                                 'There is a newer major release available, you should upgrade'))
        elif major == latest_major and minor < latest_minor:
            self.advices.append(('WARNING',
                                 'There is a newer minor release available, consider an upgrade'))
        elif major == latest_major and minor == latest_minor and release < latest_rel:
            self.advices.append(('NOTE',
                                 'There is a newer update release available, consider a patch'))
        else:
            self.advices.append(('OK', 'Your installation is up to date'))

    def report(self, **kwargs):
        print self.__class__.__name__, 'reporting...'
        print "The running server version is: %d.%d.%d" % self.installed_release
        print "The latest available GA release is: %d.%d.%d" % self.latest_release
        for rec in self.advices:
            print "%10s: %s" % (rec[0], rec[1])

The following is the output of the report function performed on a system that is running a slightly older version of the server than is currently available:

MySQLVersionAdvisor reporting...
The running server version is: 5.6.19
The latest available GA release is: 5.6.19
NOTE: There is a newer update release available, consider a patch
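Listing 13-6 assumes that the plug-in file imports the HTML-fetching and parsing libraries at the top (they are not shown in the listing). A minimal sketch, with the exact Beautiful Soup import depending on which version you have installed:

import urllib2
from bs4 import BeautifulSoup                # Beautiful Soup 4
# from BeautifulSoup import BeautifulSoup    # or, for Beautiful Soup 3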

Checking the Key Buffer Size Setting We've already discussed the meaning of the key_buffer_size configuration parameter and the impact that this setting can have on the MySQL database server performance. The plug-in module, shown in Listing 13-7, assumes that the optimal setting is 40% of the total available physical memory.

Listing 13-7.  Checking the Optimal Setting of the Key Buffer Size

class KeyBufferSizeAdvisor(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['consumer']
        self.physical_mem = 0
        self.key_buffer = 0

363

||||||||||||||||||||||||||||||||||||||||||||||||| www.it-ebooks.info

CHAPTER 13 N AUTOMATIc MYSQL DATABAsE PERFORMANcE TUNING

self.ratio = 0.0 self.recommended_buffer = 0 self.recommended_ratio = 0.4   def process(self, **kwargs): self.key_buffer = \ int(kwargs['env_vars']['ServerSystemVariables']['key_buffer_size']) self.physical_mem = int(kwargs['env_vars']['HostProperties']['mem_phys_total']) self.ratio = float(self.key_buffer) / self.physical_mem self.recommended_buffer = int(self.physical_mem * self.recommended_ratio)   def report(self, **kwargs): print self.__class__.__name__, 'reporting...' print "The key buffer size currently is %d" % self.key_buffer if self.ratio < self.recommended_ratio: print "This setting seems to be too small for the amount of memory \ installed: %d" % self.physical_mem else: print "You may have allocated too much memory for the key buffer" print "You currently have %d, you must free up some memory" print "Consider setting key_buffer_size to %d, if the difference is \ too high" % self.recommended_buffer   The following is sample output of the report:   KeyBufferSizeAdvisor reporting... The key buffer size currently is 8384512 This setting seems to be too small for the amount of memory installed: 1051463680 Consider setting key_buffer_size to 420585472, if the difference is too high

Checking the Slow Queries Counter

Some SQL queries may take a long time to execute, for various reasons. If you have a large dataset, it may be perfectly normal that most of the queries take a considerably long time to finish. In that case, you may need to increase the long_query_time setting. Another possibility is that your tables are not correctly indexed. In that case, you should revisit the table structure and settings.

Our last plug-in module reads two status variables: the total number of requests your database server has received and the total number of queries that took longer to execute than specified by long_query_time. If the ratio is larger than 0.0001% (more than one query in a million is a slow query), the report will flag it as an issue. Obviously, you may need to adjust this value to fit your specific database environment. Slow query tracking is not enabled by default on the MySQL server, so you need to set the log_slow_queries variable in the MySQL properties file /etc/my.cnf to ON before executing the plug-in code; a sample configuration fragment is shown after the listing and its output. The full module code is shown in Listing 13-8.

Listing 13-8.  The Plug-in to Check the Slow Query Ratio

class SlowQueriesAdvisor(Plugin):

    def __init__(self, **kwargs):
        self.keywords = ['consumer']
        self.log_slow = False
        self.long_query_time = 0
        self.total_slow_queries = 0
        self.total_requests = 0
        self.long_qry_ratio = 0.0   # in %
        self.threshold = 0.0001     # in %
        self.advise = ''

    def process(self, **kwargs):
        if kwargs['env_vars']['ServerSystemVariables']['log_slow_queries'] == 'ON':
            self.log_slow = True
        self.long_query_time = \
            float(kwargs['env_vars']['ServerSystemVariables']['long_query_time'])
        self.total_slow_queries = \
            int(kwargs['env_vars']['ServerStatusVariables']['Slow_queries'])
        self.total_requests = \
            int(kwargs['env_vars']['ServerStatusVariables']['Questions'])
        self.long_qry_ratio = (100. * self.total_slow_queries) / self.total_requests

    def report(self, **kwargs):
        print self.__class__.__name__, 'reporting...'
        if self.log_slow:
            print "There are %d slow requests out of total %d, which is %f%%" % \
                  (self.total_slow_queries, self.total_requests, self.long_qry_ratio)
            print "Currently all queries taking longer than %f are considered \
slow" % self.long_query_time
            if self.long_qry_ratio < self.threshold:
                print 'The current slow queries ratio seems to be reasonable'
            else:
                print 'You seem to have lots of slow queries, investigate them and \
possibly increase long_query_time'
        else:
            print 'The slow queries are not logged, set log_slow_queries to ON for tracking'

The following is sample output of this module:

SlowQueriesAdvisor reporting...
There are 0 slow requests out of total 15, which is 0.000000%
Currently all queries taking longer than 10.000000 are considered slow
The current slow queries ratio seems to be reasonable
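For reference, the following /etc/my.cnf fragment is one way to enable the slow query log; it is an illustrative example rather than a listing from the plug-in framework. On MySQL 5.1 and later the slow_query_log variable controls the log (the older log_slow_queries variable, which the plug-in inspects, mirrors its state), so check the variable names against your server version.

[mysqld]
# enable the slow query log; on older servers the equivalent setting is log_slow_queries
slow_query_log  = 1
# queries running longer than this many seconds are counted as slow
long_query_time = 10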


Summary

In this chapter, we've discussed how to inspect the MySQL database settings and the current running status. We also modified the plug-in framework we created in Chapter 6 so that it allows the exchange of information between the various plug-in modules.

- The MySQL server configuration items can be queried with the SHOW GLOBAL VARIABLES query.

- The database status variables can be checked with the SHOW GLOBAL STATUS command.

- You can use the psutil module to get information about the available system memory.
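As a quick recap of where that data comes from, the following minimal sketch (outside the plug-in framework) reads one configuration variable, one status counter, and the total physical memory; the connection credentials are placeholders, and the psutil call shown requires a reasonably recent psutil release.

import MySQLdb
import psutil

conn = MySQLdb.connect(host='localhost', user='root', passwd='password')
cur = conn.cursor()

cur.execute("SHOW GLOBAL VARIABLES LIKE 'key_buffer_size'")
print dict(cur.fetchall())              # e.g. {'key_buffer_size': '8384512'}

cur.execute("SHOW GLOBAL STATUS LIKE 'Slow_queries'")
print dict(cur.fetchall())              # e.g. {'Slow_queries': '0'}

print psutil.virtual_memory().total     # total physical memory, in bytes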


CHAPTER 14

Using Amazon EC2/S3 as a Data Warehouse Solution

Virtual computing, or cloud computing, is becoming increasingly popular. There are various reasons for that, but the main one is cost savings. Many large vendors provide cloud computing services, such as Amazon, IBM, HP, Google, Microsoft, and VMware. Most of these services provide an API that allows you to control the virtual machine instances and the virtual storage devices. In this chapter, we will investigate how to control the Amazon Elastic Compute Cloud (EC2) and the Amazon Simple Storage System (S3) from your Python applications.

Specifying the Problem and the Solution

First, we need to understand in what circumstances this solution is applicable. Although computing on demand is a convenient approach and can lead to great cost savings, it may not be applicable in all situations. In this section, we'll briefly discuss the situations in which computing on demand can be used successfully.

The Problem

Let's imagine a typical small web startup company. The company provides some services on the Internet. The user base is relatively small but steadily growing, and it is evenly spread geographically, which means that the system is busy 24 hours a day. The system is of a typical two-tier design and consists of two application nodes and two database nodes. The application servers run an in-house-built Java application deployed on an Apache Tomcat application server, and they use a MySQL database to store the data. The web application and the database servers are reasonably busy and therefore are deemed unsuitable to run on a virtualized platform. All four servers are rented from a server hosting company and hosted in a remote data center.

Now, this setup satisfies most of the present needs, and considering the slow user base growth, it should remain unchanged for a considerable amount of time. The expansion strategy for the company is to add more application and database nodes as needed. The application design allows for nearly linear horizontal scalability.

As the company grows, though, the owners decide to invest more in market research. To better understand user behavior and do more targeted sales, the company needs to analyze the data stored in the database. However, as we already know, the database servers are already quite busy, and running additional queries will definitely slow the whole application down. Adding new database servers just for the data analysis task is not cost effective, because it requires a considerable initial investment and adds to the constant monthly maintenance costs. Furthermore, the analysis will be performed very infrequently, and most of the time the new systems would be idle.

The second problem our startup company faces is the lack of a backup strategy. At the moment, all data is stored on the database servers, and although the servers are redundant, they are still located on the same premises. This data definitely should be backed up at a remote location.


Our Solution

One tactic is to use a computing-on-demand solution, such as Amazon EC2. Since the company needs the processing power only occasionally, it can create the virtual servers as and when necessary to perform the calculations. When the calculations are finished, the company can safely destroy the virtual server. In this case, the company only pays for the time when the server is active. At the time of this writing, the costs of these virtual instances vary from $0.02 to $6.82 per hour, depending on the amount of memory and the number of allocated virtual CPUs. If the data analysis is performed once a week and takes eight hours each time, the total monthly cost will not exceed $10 (assuming an extra-large high-memory instance currently priced at $0.28 per hour). This is a lot less than what a typical server would cost the company should it decide to rent one.

But remember that the second part of the problem the startup faces is the lack of a remote backup. Amazon provides a highly available and scalable storage solution: its Simple Storage System. Similarly to the EC2, you only pay for what you use, and there's no limit on how much you can store on the S3. At the time of this writing, the basic S3 pricing is $0.03 per gigabyte per month for the storage. Data transfer is free if you are uploading data to S3, but data going out (for example, if you want to restore from backup) will cost you $0.12 per GB. This is where you have to be careful, because the total price can add up to a considerable amount. One terabyte's worth of information would set you back $30 every month. This may sound like a lot of money considering the current storage prices (a 1TB external USB drive costs $60, and this is a one-off fee), but bear in mind that you get not only the storage device but also the data protection. Currently, the standard Amazon S3 provides "99.999999999% durability and 99.99% availability of objects over a given year" (http://aws.amazon.com/s3/).

Design Specifications

To accommodate all the requirements and constraints that we set out earlier, we are going to build an application that creates a new instance of a virtual machine in the EC2. The virtual machine will have an instance of the MySQL database server running and available to accept external connections. The database files are going to be stored on a separate, highly available volume, an Elastic Block Store volume.

The application will operate in three stages: initialization, processing, and de-initialization. During the initialization stage, the application creates a virtual machine, attaches the volume device to it, and starts up the MySQL server. The processing phase depends on your processing requirements; it typically contains the data transfer and data processing tasks. We are not going to discuss this phase in great detail, because it really depends on your own requirements. And finally, in the de-initialization phase, we shut down the remote MySQL instance, detach the volume, create a snapshot, and destroy the virtual machine.

The reason for creating a snapshot is to have a reference point to which you can revert, should you need to check the state of the data at that particular point in time. You can think of this as a simple version control system. Obviously, each snapshot increases the data usage and therefore your costs, so you'll have to manually control the number of snapshot images that you want to maintain.
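To make the three stages concrete, here is a rough outline of how a driver script could sequence them. The method names are hypothetical placeholders, not the actual BackupManager methods developed later in this chapter.

# A minimal sketch of the three-stage flow; every method name here is a placeholder.
def run_warehouse_job(mgr):
    # initialization: virtual machine, storage volume, database
    mgr.start_instance()
    mgr.attach_and_mount_volume()
    mgr.start_mysql()
    # processing: site-specific data transfer and number crunching
    mgr.copy_and_process_data()
    # de-initialization: tear everything down, keeping the volume and a snapshot
    mgr.stop_mysql()
    mgr.unmount_and_detach_volume()
    mgr.snapshot_volume()
    mgr.terminate_instance()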

The Amazon EC2 and S3 Crash Course

At the time of this writing, there aren't many up-to-date books dealing with Amazon EC2 and S3. The reason is that both technologies (especially EC2) are rapidly evolving, which makes them fast-moving targets. There are some good books, but unfortunately, they are already slightly outdated. One of the good manuals about the Amazon web services is Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB by James Murty (O'Reilly Media, 2008). This book has a good overview of the technologies along with the detailed API specification. Another text that focuses more on operational aspects is Cloud Application Architectures: Building Applications and Infrastructure in the Cloud by George Reese (O'Reilly Media, 2009).


You can also find a lot of information on the documentation pages for each web service:

- Amazon EC2: http://aws.amazon.com/documentation/ec2/

- Amazon S3: http://aws.amazon.com/documentation/s3/

It would be hard to fit all the necessary information about these web services into one chapter, so I'm going to describe only the basic concepts. Having said that, this chapter will give you enough information to start using the Amazon EC2 and S3 web services, and you can explore more as you get comfortable with the basic principles.

It is important to understand that both systems, the EC2 and S3, are primarily web services and are designed to be controlled using the standard web service protocols, such as SOAP and REST. Many tools provide a user-friendly interface to these services, but they all use the abovementioned protocols to interact with the AWS (Amazon Web Services). If you want to use any of those services, you must sign up for them at http://aws.amazon.com/. You don't have to create an account for each service; in fact, you can use your existing Amazon store account, but you have to sign up for each service individually.

Authentication and Security

When you use the EC2 and S3 services, you have to authenticate yourself to the AWS system. There are different methods of doing so, and different services require you to provide slightly different information. Sometimes, this may cause confusion as to which method has to be used where, and more importantly, where to obtain this information. So before exploring each individual service, I'll provide basic information about the security and authentication mechanisms used in AWS.

Account Identifier

Each account has a unique AWS account ID number, which consists of 12 digits and looks something like 1234-5678-9012. Each account also has an assigned canonical user ID, which is a string containing 64 alphanumeric characters.

The AWS account number is used to share objects between different accounts. For example, if you want to grant access to your virtual machine image to someone else, you'll have to know that person's AWS account ID. This ID is used across all AWS services except the S3. The canonical user ID is used only in the S3 service. Similar to the AWS account ID, its primary purpose is control of access.

You can obtain this information by going to http://aws.amazon.com/account/, clicking "Security credentials," and scrolling right to the bottom of the web page. The section containing the required information is called "Account Identifiers."

Access Credentials

The access credentials are used in every REST API call. These keys are also used in the Amazon S3 SOAP calls. The access credentials are split into two parts. The first part is the Access Key ID, which is used to identify the requestor. The second part is the secret access key, which is used to create a signature that is sent with every API request. When the AWS receives the request, it validates the signature by using the corresponding secret access key (which is known only to AWS). Only the valid secret access key can create a signature that can be validated by its AWS counterpart. This ensures that the request is sent by a valid requestor.

Both keys are long alphanumeric strings and can be found in the Access Credentials section, under the "Access Keys" tab. It is advisable to rotate the keys regularly. Also, make sure not to disclose the secret key to anyone.
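The exact string-to-sign layout is defined by the AWS signature specification and is normally produced for you by a client library such as Boto, but the general mechanism can be sketched as follows; the key and the request string below are purely illustrative.

import base64
import hashlib
import hmac

secret_key = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'      # example value only
# a canonicalized request, roughly: verb, content headers, date, and resource path
string_to_sign = 'GET\n\n\nThu, 01 Jan 2015 00:00:00 GMT\n/bucket1/objectA'

# the signature is an HMAC of the canonicalized request, keyed with the secret key
signature = base64.b64encode(hmac.new(secret_key, string_to_sign, hashlib.sha1).digest())
print signature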


Best practice is to create new users and generate Access Key IDs and corresponding secret access keys instead of relying on a root access key pair. This can be done by navigating to the user account management console, https://console.aws.amazon.com/iam/home?#users.

X.509 Certificates

The X.509 certificates are used primarily with the SOAP API requests. The certificate consists of two files. The first file is the X.509 certificate, which contains your public key and the related metadata. The public key is sent along with the request body and is used to decrypt the signature information contained in the request. The second part of the certificate is the private key file. This file is used to create the digital signature, which is included with every SOAP request. You must keep this key secret.

When you generate the X.509 certificate, you are provided with both files. However, the secret key is not stored on the Amazon systems; therefore, if you've lost your private key, you must regenerate the X.509 certificate. As with the access credential keys, it is good practice to rotate the certificates regularly. You can generate new certificates or upload your own certificates in the "Access Credentials" section, under the X.509 Certificates tab.

EC2 Key Pair

The EC2 key pair allows you to log on to a new virtual machine instance. Each key pair consists of three parts. The first part is the key pair name. When you create a new instance, you select the key pair that you want to use on this instance by selecting an appropriate key pair name. The second part is the private key file. This file is used to establish an SSH (Secure Shell) session to the new virtual machine instance. It is very important that this key is kept secret and safe at all times. Anyone possessing this key will be able to access any of your virtual machines. The last part is the public key, which is kept on the AWS system. You cannot download this key. When the virtual machine instance is started, the AWS will copy this key to the running system, which allows you to connect to it using your private key file.

You can generate as many key pairs as you like. Unlike the other credentials, the EC2 keys are accessible only from the EC2 management console, which is available at https://console.aws.amazon.com/ec2/home.

The Simple Storage System Concepts

From the user perspective, there are only two entities in the S3 architecture: the data objects and the buckets.

The most important entity is the data object. The data object is what is actually stored on the S3 infrastructure. Technically, each data object consists of two parts: the metadata and the data payload. The metadata part describes the object and consists of key-value pairs. As a developer, you can define any number of key-value pairs. This metadata is sent in the HTTP header of the request. The second part is the data payload, and it is what you actually want to store on the S3. The data payload size can be anything from 1 byte to 5 gigabytes. You can assign any name to the objects as long as it conforms to the URI naming standards. Basically, if you limit the name to alphanumeric characters, dots, forward slashes, and hyphens, you should be okay.

The second entity is the bucket object. The bucket is the entity that contains the data objects. The buckets cannot contain other buckets. The object name space is within each bucket; however, the bucket name is in the global name space. This means that your objects within a bucket must have unique names, but you can have two objects with the same name in different buckets. The buckets must have unique names on the S3 system, so there is a chance that you may try to use a bucket name that is already used by someone else. There is a limit of 100 buckets per account, but there is no limit on the size of the objects stored in each bucket. Figure 14-1 illustrates the relationship between the buckets and the objects, along with some example names for each.


Figure 14-1.  The Amazon S3 buckets and objects (the diagram shows two example buckets, "bucket1" containing "objectA" and "objectB", and "example.com" containing "index.html" and "data/file.dat")

These names can be mapped to the Amazon S3 resource URLs using the following naming scheme:

http://<bucket name>.s3.amazonaws.com/<object name>

Therefore, the objects from Figure 14-1 can be accessed via the following URLs (assuming the public access rights are enabled):

- http://bucket1.s3.amazonaws.com/objectA

- http://bucket1.s3.amazonaws.com/objectB

- http://example.com.s3.amazonaws.com/index.html

- http://example.com.s3.amazonaws.com/data/file.dat

I intentionally showed real web URLs in the second bucket. When you navigate to any website, your browser uses HTTP GET requests to fetch the pages. These are the same as the REST requests used to access the S3 system objects, so you can host complete websites (or the static parts of dynamic sites) on the S3.
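For example, with the Boto library (introduced later in this chapter) the basic bucket and object operations can be sketched as follows. Keep in mind that bucket names are global, so 'bucket1' is an illustration only and would almost certainly be taken already.

import boto

conn = boto.connect_s3()                        # credentials are read from ~/.boto
bucket = conn.create_bucket('bucket1')          # pick a globally unique name in practice

key = bucket.new_key('objectA')
key.set_metadata('generated-by', 'backup-job')  # optional key-value metadata
key.set_contents_from_string('some payload')    # the data payload
key.make_public()

print key.generate_url(0, query_auth=False)     # http://bucket1.s3.amazonaws.com/objectA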

The Elastic Compute Cloud Concepts

The Amazon EC2 web service is a sophisticated system that interacts with other services, such as Amazon S3, to provide you with a complete computing-on-demand solution. If you're familiar with any of the virtualization platforms, such as Xen, KVM, or VMware, you will find most of the concepts described here quite similar.

Amazon Machine Images and Instances

The Amazon Machine Image (AMI) is the image of an operating system that can be started. The image contains all the packages that are required to run your system. You can have as many AMIs as you need. For example, if you wanted to replicate the two-tier web system that we described earlier, you would create two types of AMIs: a web server AMI and a database AMI. The web server AMI would have the Apache web server and the Apache Tomcat application server packages installed. The database AMI would have a MySQL instance installed.

There are many different AMIs available publicly. Several are provided by Amazon and other companies. Some of the AMIs are available for free, but there are also AMIs for which you have to pay if you want to use them. The easiest way of creating your own AMI is to clone an existing AMI and make your own modifications. Make sure that you use an AMI from a trusted source!


N■Note Try not to base your operations on the publicly available AMIs. When the creator of such an AMI decides to destroy it, you will not be able to use it again. If you find an AMI that you think is suitable, make a copy of it and create a private AMI. Do this even if you don't plan to make any modifications to it. This ensures that you will always be able to find the same AMI every time you need to use it. The typical Linux AMI size on S3 is under 1GB. Assuming the standard $0.03/month fee for a gigabyte of data, maintaining your own AMI would set you back only $0.36 every year.

You cannot run the AMI itself; you must create an instance of the AMI you want to run. The instance is the actual virtual machine that runs the software installed in the AMI. An analogy is a Python class and the class instance (or object). The class defines the methods and properties (the software packages, in OS terms). When you want to run the defined methods, you create an object of that particular class. Similarly, an AMI is the contents of the virtual machine, and the instance is the actual running virtual machine.

You have two options for storing the AMIs: on the Amazon S3 storage or on an Amazon Elastic Block Store snapshot (we'll discuss that in the next section). The method of storing an AMI determines how it is created and affects its behavior. Table 14-1 summarizes the differences between these two methods of storing the AMIs.

Table 14-1.  Comparison of S3– and EBS–Backed AMIs

Size limit
    EBS-backed AMI: An EBS volume is limited to 1TB. This can be convenient for large installations.
    S3-backed AMI: The S3-backed root partition can be up to 10GB in size. If your root partition needs to be larger than that, you cannot use this method.

Stopping the running instance
    EBS-backed AMI: You can stop the instance, which means that the virtual machine is not running and you're not charged, but the root partition is not released and still persists as an EBS volume. You can then restart the instance from the same instance volume.
    S3-backed AMI: You cannot stop the instance. If you stop the instance, it is terminated and the root partition is destroyed, too; therefore, all information stored on that partition is lost.

Data persistence
    EBS-backed AMI: The local data storage is attached and can be used to store temporary data. When you stop the instance, the root partition is not detached, but the local storage is lost. You can attach any number of EBS volumes to store the data permanently.
    S3-backed AMI: The local data storage is attached and can be used to store temporary data. When you terminate the instance, the data from both the root partition and the local storage is lost. You can attach any number of EBS volumes to store the data permanently.

Boot time
    EBS-backed AMI: The boot time is faster because the data on the root partition is immediately available on the EBS volume. However, the virtual machine will perform slower at the beginning because the data is gradually fetched from the snapshot.
    S3-backed AMI: The boot time is slower because all data needs to be retrieved from the S3 before it is deployed to the root partition.

Creating a new image
    EBS-backed AMI: A single API call clones the existing running AMI to a new volume.
    S3-backed AMI: You have to create an operating system image with all required packages, then create an image bundle and upload it to the S3. You then register the AMI with the bundled image.

Charging
    EBS-backed AMI: You are charged for the volume snapshot (full volume size), for the volume space used while the instances are in the stopped state (full volume size), and for the running instance.
    S3-backed AMI: You are charged the S3 fee for storing the AMI image (compressed) and for the running instance.

The following figures represent the life cycles of the S3-backed and EBS-backed instances. Figure 14-2 shows the life cycle of a typical S3-backed instance. The instance is created from the AMI image stored on the S3. When the instance is terminated, the volumes are destroyed and all data is lost. You only pay for the S3 store and the running costs of the virtual machine.

Figure 14-2.  An S3–backed instance life cycle (the diagram shows the flow: start, copy the AMI contents from S3 to the instance root volume, run the instance with its root and local volumes, then destroy the instance together with the root and local volumes; storage charges apply to the AMI on the S3 store, and processing charges apply while the instance is running)

Figure 14-3 displays the typical life cycle of an EBS–backed instance. On initial start, the root volume is created from the EBS snapshot. The instance then can have two different states: running and stopped. When the instance is running, you pay for the processing power and the EBS volume. When the instance is stopped, you pay for only the EBS volume. If you resume the instance, it'll maintain all the data in its root volume; therefore, you pay for it. Finally, if you choose to destroy the instance, the volumes are destroyed too, and you do not pay for the volumes anymore.


Figure 14-3.  An EBS–backed instance life cycle (the diagram shows the flow: start, create the root volume from an EBS snapshot, then alternate between the running and stopped instance states with the root and local volumes attached, and finally destroy the instance together with the root and local volumes; storage charges apply to the snapshot and the volumes, and processing charges apply while the instance is running)

As you can see from the figures, regardless of the instance type, all instances get a local storage device attached to them. This storage is called ephemeral storage, and its lifetime is limited by the time the instance spends in the running state. It can survive operating system restarts (intentional or unintentional), but as soon as you stop the instance, all data on the ephemeral device is lost.

Elastic Block Store

The Elastic Block Store (EBS) is a block-level device that is available to use with the EC2 instances. The volumes are completely independent from the instances, and the data is not lost when the instance is terminated and destroyed. The EBS volumes are highly available and reliable storage devices. Each EBS volume can vary in size from 1GB to 1TB. You can attach multiple volumes to a single running EC2 instance. If you need volumes larger than 1TB, you can use operating system tools such as LVM (Logical Volume Manager) to combine multiple EBS volumes into a single larger volume. As I have mentioned, the EBS volumes are block devices, so you have to create a file system on them before you can use them. Alternatively, you can use them as raw devices in applications that support raw device access.

The Amazon WS also provides functionality to take volume snapshots. A volume snapshot is a point-in-time copy of the volume contents. The copy is backed up to the S3 storage. You can create as many snapshots as you need. The first snapshot is a full copy of the volume, but the subsequent snapshots only record the differences between the last snapshot and the current volume state. The operation of taking a snapshot of a volume can be reversed, and you can create a volume from an existing snapshot. This is useful if you have to provide the same data to multiple EC2 instances. You can also share the snapshots among Amazon WS accounts.
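With the Boto library, the volume and snapshot operations described above can be sketched roughly as follows; the region name, availability zone, and sizes are placeholders you would replace with your own values.

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

vol = conn.create_volume(100, 'us-east-1a')                          # a 100GB volume in one zone
snap = conn.create_snapshot(vol.id, 'weekly DB snapshot')            # point-in-time copy, stored on S3
restored = conn.create_volume(100, 'us-east-1a', snapshot=snap.id)   # new volume from the snapshot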


Other important and useful EBS features include:

- Provisioned IOPS, which allows you to predefine a specific level of I/O performance.

- EBS volume encryption, which can be used to encrypt volumes and secure sensitive data.

- Performance metrics monitoring, available through the AWS Management Console.

Security Groups

The network access to your instances is controlled using security groups. A security group is a set of network access rules, similar to an iptables rule set. You define the destination network address, the port number, and the communication protocol, such as TCP or UDP. When you start a new instance, you can assign one or more security groups to it. For example, you can have a database security group that allows TCP access to port 3306 (the MySQL service port). When you create a new database instance, you then select this security group, which allows external access to your MySQL server. Make sure you allow administrative SSH access to your instances; otherwise, you will not be able to connect to and manage them.
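With Boto, a security group such as the database group used later in this chapter could be created roughly like this; opening both ports to 0.0.0.0/0 matches the examples in this chapter, but you would normally restrict the source range.

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

sg = conn.create_security_group('database', 'MySQL and SSH access')
sg.authorize('tcp', 22, 22, '0.0.0.0/0')       # SSH for administration
sg.authorize('tcp', 3306, 3306, '0.0.0.0/0')   # MySQL service port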

Elastic IPs and Load Balancers

By default, each instance receives a dynamically allocated public IP address. Obviously, this is not suitable for servers serving web content or providing other publicly available services, because every time you restart an instance, you may potentially get a different IP address. You can request an elastic IP address, which is always attached to one EC2 instance. This allows you to create one DNS entry for your server, and that entry will not need to change over time. An additional benefit of the elastic IP is that you can assign a failover instance to it. This means that, should the primary instance fail, the IP will be relocated to another instance that is capable of serving requests. This method allows you to implement a simple active-standby system configuration.

You can also use the Amazon EC2 load-balancing capabilities, where the incoming requests are distributed between two or more instances. The virtual load balancer acts similarly to conventional hardware load balancers, such as the Cisco, Citrix NetScaler, or A10 range of load balancers. Creating a new load balancer instance is relatively simple. You have to select the externally available service port (for example, port 80 for HTTP traffic). Then, you select the service port on your instances. For example, let's say you are running a Tomcat instance on port 8080 on the EC2 instances, but you want to make this service available via the standard HTTP port 80. In this case, the external service port 80 will be mapped to the internal service port 8080. Last, you assign the EC2 instances to the load balancer.
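Both features can also be driven from Boto; a rough sketch follows, in which the instance ID, load balancer name, zone, and ports are purely illustrative placeholders.

import boto.ec2
import boto.ec2.elb

ec2 = boto.ec2.connect_to_region('us-east-1')

address = ec2.allocate_address()                          # obtain an elastic IP
ec2.associate_address('i-12345678', address.public_ip)    # attach it to an instance (example ID)

elb = boto.ec2.elb.connect_to_region('us-east-1')
lb = elb.create_load_balancer('www-lb', ['us-east-1a'],
                              [(80, 8080, 'http')])        # external port 80 -> instance port 8080
lb.register_instances(['i-12345678'])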

User Interfaces

You can manage all AWS services through the AWS Management Console, which is available at https://console.aws.amazon.com/console/home.

Creating a Custom EC2 Image

Now that you have a basic understanding of the EC2 and S3 services, let's put that knowledge to practical use. As you already know, we need to create an AMI, which we use to start our instances. I am going to show you how to create a custom AMI based on an existing image. We'll create an S3–backed AMI image because, in our case, it will be more cost-effective and we do not require the instance-stopping functionality. When the data is transferred and processed, we can destroy the instance.


Reusing Existing Images

Let's start by selecting an existing image from the list of available images. In this exercise I am going to use the standard Amazon AWS management console.

1. We start by selecting the EC2 management console from the main dashboard. We are presented with an overview of all our EC2 services.

2. On the left-hand side menu, we select "AMIs" under the "Images" section.

3. The default filter shows all images that we own, so we select "Public Images" from the dropdown menu instead. For this exercise, I am going to use a CentOS 6.5 image created by a company called RightScale. This is a well-known company that specializes in deploying mission-critical systems in cloud environments; therefore, the images it produces can be trusted. The AMI ID we are looking for is ami-2e32c646. We can find it by using the search field in the filter.

Figure 14-4 is a screenshot of the AWS management console with the AMI selected.

Figure 14-4.  Selecting the AMI to clone

4. When we have found the AMI, we right-click it and select "Launch" to initiate the instance launch process.


N■Note Make sure that you have created a security group with ports 3306 (MySQL) and 22 (SSH) enabled for access from all IPs. You will also need a key pair to be generated and the private key downloaded to your local machine. Save the private key safely, and note its name; we'll refer to this file in the text as <keypair name>.pem.

We can monitor the state of the EC2 instance by clicking the Instances link in the navigation menu. Once the instance is in the running state, we can use SSH to connect to it. Click the instance name, and the details will be displayed in a separate window. Note the instance's public DNS name. We connect to the instance with the following command:

$ ssh -i <keypair name>.pem root@<instance public DNS name>

Making Modifications

We're now ready to make modifications to the image. As you remember, our goal is to make this image a MySQL database instance that stores all the files on a dedicated persistent EBS volume.

Install the Additional Packages

First, we need to install the additional packages, the MySQL server in particular. The reason we do this step first is that mounting the new file system requires the MySQL user account to be present, and that account is created by the package we are going to install now. We use the Yum installer to install the additional packages:

# yum install mysql mysql-server

This command installs the default MySQL package version from the OS repository; in the case of CentOS 6.5, this is going to be MySQL 5.1. (If you need to use a later version of the MySQL server, you will have to use the MySQL repository provided by Oracle.) To add the repository configuration, we run the following command before installing the MySQL packages:

# yum install http://repo.mysql.com/mysql-community-release-el6-5.noarch.rpm

N■Note If you decide to build MySQL from the source code, make sure you set it up so that it starts automatically when your machine starts. You don’t need to worry about that if you are installing standard OS packages.

Create and Set Up an Elastic Block Store Volume

Next, we are going to set up a new EBS volume. We go back to the EC2 management console, click "Create Volume," and then use the pop-up window to specify the required options for the new volume. We make sure that we allocate enough space for our data. The availability zone must match the availability zone of our running instance. We can find out the instance's availability zone by clicking the instance name in the Instances section.


Depending on the volume size, it can take some time for the volume to become available. When the volume becomes available (as indicated by the volume status column), we can attach it to the running EC2 instance. We right-click the volume name and select the "Attach Volume" menu item. We will then be presented with the list of available running EC2 instances. We select the instance we created earlier. We will also be asked to specify the local device name for the new volume. When selecting a device name (such as /dev/sdf), we ensure that the name is not in use by any other device.

When the device becomes available (the device file /dev/xvdf is created on the instance's file system), we can create the file system on it with the following commands:

# mke2fs -F -j /dev/xvdf
...
# e2label /dev/xvdf mysqlvol

N■Note  You may wonder why we specified the /dev/sdf device but use /dev/xvdf on the actual OS. This is because newer kernels rename the device if it is a virtual disk.

Now, we create a new directory, which will be used to mount the newly created file system, and change the ownership so that the MySQL process is able to write to the volume:

# mkdir /mysql-db
# chown mysql.mysql /mysql-db
# mount LABEL=mysqlvol /mysql-db

Configure the MySQL Instance

Next, we will configure the MySQL instance. We have to change the contents of the MySQL configuration file (located in /etc/my.cnf), so that the socket file and all data files are stored on the EBS volume. This ensures that the data is not lost during the system restarts. The new contents of the MySQL configuration file are presented in Listing 14-1.

Listing 14-1.  Pointing the MySQL Database to the New Location

[mysqld]
datadir=/mysql-db
socket=/mysql-db/mysql.sock
user=mysql
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Now, let's start the MySQL daemon and set the default password.


N■Caution Obviously, you'll have to use something more secure and unpredictable than what I'm using in this example.

# chkconfig --levels 235 mysqld on
# service mysqld start
# mysqladmin -u root -S /mysql-db/mysql.sock password 'password'
# mysql -p -S /mysql-db/mysql.sock
[...]
mysql> grant all privileges on *.* to 'root'@'%' identified by 'password' with grant option;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Finally, we shut down the MySQL daemon and unmount the volume:

# service mysqld stop
# umount /mysql-db

Bundling the New AMI

Once we've made all the modifications and are happy with the running instance, we can create a new AMI from it by bundling it. First, we have to install AMI Tools on the Linux instance. The instructions might change slightly over time, so we follow the installation instructions as described at http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ami-tools.html to install the AMI toolkit and at http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html to install the CLI utilities.

We check that the installation was successful:

# ec2-ami-tools-version
1.5.3 20071010

Copyright 2008-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Licensed under the Amazon Software License (the "License"). You may not use this
file except in compliance with the License. A copy of the License is located at
http://aws.amazon.com/asl or in the "license" file accompanying this file. This
file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the specific language
governing permissions and limitations under the License.

# ec2-describe-regions
REGION  eu-west-1       ec2.eu-west-1.amazonaws.com
REGION  sa-east-1       ec2.sa-east-1.amazonaws.com
REGION  us-east-1       ec2.us-east-1.amazonaws.com
REGION  ap-northeast-1  ec2.ap-northeast-1.amazonaws.com
REGION  us-west-2       ec2.us-west-2.amazonaws.com
REGION  us-west-1       ec2.us-west-1.amazonaws.com
REGION  ap-southeast-1  ec2.ap-southeast-1.amazonaws.com
REGION  ap-southeast-2  ec2.ap-southeast-2.amazonaws.com


Then, we have to prepare our X.509 certificate files and set some environment variables with the access credentials. These will be used in the bundling commands, so we make sure we prepare them beforehand to avoid any issues when running the commands. From the account management console (https://console.aws.amazon.com/iam/home?#security_credential), we create a new X.509 certificate (the certificate file and the private key). We save the downloaded files locally as pk.pem (private key) and cert.pem (certificate). When we have both files, we copy them across to the running instance into the /mnt/ directory.

We go back to the shell prompt on the running EC2 instance and set the following environment variables to the appropriate values, which we can obtain from the account management web page:

export AWS_USER=
export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=

We're now ready to bundle the running instance. We issue the following command and wait until it finishes. This is a rather lengthy process and might take up to 10 minutes:

# ec2-bundle-vol -u $AWS_USER -k /mnt/pk.pem -c /mnt/cert.pem -p CentOS-6.5-x86_64-mysql -r x86_64
Setting partition type to bundle "/" with...
Auto-detecting partition type for "/"
Partition label detected using parted: "loop"
Using partition type "none"
Copying / into the image file /tmp/CentOS-6.5-x86_64-mysql...
Excluding:
         /proc/sys/fs/binfmt_misc
         /sys
         /proc
         /dev/pts
         /dev
         /media
         /mnt
         /proc
         /sys
         /tmp/CentOS-6.5-x86_64-mysql
         /mnt/img-mnt

N■Note  rsync seemed successful but exited with error code 23. This probably means that your version of rsync was built against a kernel with HAVE_LUTIMES defined, although the current kernel was not built with this option enabled. The bundling process will thus ignore the error and continue bundling. If bundling completes successfully, your image should be perfectly usable. We, however, recommend that you install a version of rsync that handles this situation more elegantly.

Image file created: /tmp/CentOS-6.5-x86_64-mysql
Volume cloning done.
Bundling image file...
Splitting /tmp/CentOS-6.5-x86_64-mysql.tar.gz.enc...
Created CentOS-6.5-x86_64-mysql.part.00
Created CentOS-6.5-x86_64-mysql.part.01
[ . . . ]


Created CentOS-6.5-x86_64-mysql.part.35
Generating digests for each part...
Digests generated.
Unable to read instance meta-data for ancestor-ami-ids
Unable to read instance meta-data for ramdisk-id
Unable to read instance meta-data for product-codes
Creating bundle manifest...
Bundle manifest is /tmp/CentOS-6.5-x86_64-mysql.manifest.xml
ec2-bundle-vol complete.

Once the image bundling is complete, we have to upload it to the S3 storage. The -b option in the following command indicates the bucket name. As you know, the bucket name must be unique on the whole S3 system, so we choose it carefully. We don't need to create the bucket beforehand; if the bucket does not exist, it will be created for us. The upload process is a little bit faster than the bundling process, but we expect it to take a considerable amount of time, too:

# ec2-upload-bundle -b pro-python-system-administration -m /tmp/CentOS-6.5-x86_64-mysql.manifest.xml -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"
Uploading bundled image parts to the S3 bucket pro-python-system-administration ...
Uploaded CentOS-6.5-x86_64-mysql.part.00
[ . . . ]
Uploaded CentOS-6.5-x86_64-mysql.part.35
Uploading manifest ...
Uploaded manifest.
Manifest uploaded to: pro-python-system-administration/CentOS-6.5-x86_64-mysql.manifest.xml
Bundle upload completed.

And finally, we have to register the newly created AMI. Once the command finishes executing, we'll be presented with the AMI ID string. We'll also see the new AMI in our private AMI selection screen:

# ec2-register --name 'pro-python-system-administration/CentOS-6.5-x86_64-mysql' \
    pro-python-system-administration/CentOS-6.5-x86_64-mysql.manifest.xml \
    -K /mnt/pk.pem
IMAGE   ami-2a58a342

Controlling the EC2 Using the Boto Python Module

We finally come to the stage of creating the code to automatically manage the EC2 instances. You can access these services using the SOAP or REST APIs, but you don't have to do all the heavy lifting yourself, as there are lots of different libraries available. Despite the lack of printed documentation, the subject is well documented on the Internet, and libraries are available for most of the popular programming languages, such as Java, Ruby, C#, Perl, and, obviously, Python.

One of the most popular Python libraries for accessing the Amazon web services is the Boto library. This library provides interfaces to the following AWS services:

- Simple Storage Service (S3)

- Simple Queue Service (SQS)

- Elastic Compute Cloud (EC2)

- Mechanical Turk

- SimpleDB (SDB)

- CloudFront

- Virtual Private Cloud (VPC)

The library is available on most Linux distributions. For example, on a Fedora system, you can install it with the following command:

$ sudo yum install python-boto

You can also download the source code from the project's home page, https://github.com/boto/boto. The official documentation is available at http://docs.pythonboto.org/en/latest/.

Setting Up the Configuration Variables

There will be two types of configuration data. The account-specific configuration (the REST API access keys) is not specific to our application and can be stored in the Boto configuration file, called .boto, in the user's home directory. This configuration file contains the Access Key ID and the secret access key:

[Credentials]
aws_access_key_id = <Access Key ID>
aws_secret_access_key = <Secret Access Key>

We're going to store the application-specific configuration in the backup.cfg file and access it by using the ConfigParser library. The contents of the file are described in the following code:

[main]
volume_id=vol-7556353c          # the EBS volume ID which we mount to the EC2 DB instances
vol_device=/dev/sdf             # the name of the device of the attached volume
mount_dir=/mysql-db             # the name of the mount directory
image_id=ami-2a58a342           # the name of the custom created AMI image
key_name=<keypair name>         # the name of the key pair (and the pem file)
key_location=/home/rytis/EC2/   # the location of the key pair file
security_grp=database           # the name of the security group (with SSH and MySQL ports)

Initializing the EC2 Instance Programmatically

First, let's create the skeleton application structure. In Listing 14-2, we start by creating the BackupManager class. This class will implement the methods for managing our custom EC2 instance. We also set up a logger object, which we'll use to log the application status.

Listing 14-2.  The Structure of the Application

#!/usr/bin/env python

import sys
import logging
import time
import subprocess
import boto
import boto.ec2
from ConfigParser import SafeConfigParser
import MySQLdb
from datetime import datetime

CFG_FILE = 'backup.cfg'

class BackupManager:

    def __init__(self, cfg_file=CFG_FILE, logger=None):
        self.logger = logger
        self.config = SafeConfigParser()
        self.config.read(cfg_file)
        self.aws_access_key = boto.config.get('Credentials', 'aws_access_key_id')
        self.aws_secret_key = boto.config.get('Credentials', 'aws_secret_access_key')
        self.ec2conn = boto.ec2.connection.EC2Connection(self.aws_access_key,
                                                         self.aws_secret_key)
        self.image = self.ec2conn.get_image(self.config.get('main', 'image_id'))
        self.volume = self.ec2conn.get_all_volumes([self.config.get('main', 'volume_id')])[0]
        self.reservation = None
        self.ssh_cmd = []

    [...]

def main():
    console = logging.StreamHandler()
    logger = logging.getLogger('DB_Backup')
    logger.addHandler(console)
    logger.setLevel(logging.DEBUG)
    bck = BackupManager(logger=logger)

if __name__ == '__main__':
    main()

As you can see, in the initialization process we're already making the connection to the AWS. The result returned by the EC2Connection() call is the connection object, which we'll use to access the AWS system:

self.ec2conn = boto.ec2.connection.EC2Connection(self.aws_access_key, self.aws_secret_key)

For example, the following two calls return the AMI image and the volume objects:

self.image = self.ec2conn.get_image(self.config.get('main', 'image_id'))
self.volume = self.ec2conn.get_all_volumes([self.config.get('main', 'volume_id')])[0]

Each of those objects exposes the methods that can be used to control it. For example, the volume object implements the attach method, which can be used to attach the specific volume to an EC2 instance. We'll explore the most frequently used methods in the following sections.


Launching the EC2 Instance

Our very first task is to start the instance. This can be accomplished with the run() method, which is available in the image object we created earlier. The result of this call is a reservation object, which lists all instances started with this call. At the moment, we're starting just one instance, but you can start multiple instances from the same AMI image.

The run() method requires two parameters to be set: the key pair name and the security group. I'm also specifying the optional placement zone parameter, which indicates in which EC2 zone the instance needs to be started. We don't really care what the zone will be as long as it is the same zone where the volume is created. You cannot attach the volumes from a different zone, so the instance must run in the same zone. You can discover the volume's zone by inspecting the zone attribute of the volume object.

As you know, the instance will not be available immediately; therefore, we have to implement a simple loop that periodically checks the status of the instance and waits until it changes the state to 'running' (see Listing 14-3).

Listing 14-3.  Starting the EC2 Instance

def _start_instance(self):
    self.logger.debug('Starting new instance...')
    self.reservation = self.image.run(key_name=self.config.get('main', 'key_name'),
                                      security_groups=[self.config.get('main', 'security_grp')],
                                      placement=self.volume.zone)
    instance = self.reservation.instances[0]
    while instance.state != u'running':
        time.sleep(60)
        instance.update()
        self.logger.debug("instance state: %s" % instance.state)
    self.logger.debug("Instance %s is running and available at %s" %
                      (instance.id, instance.public_dns_name))

Attaching the EBS Volume

Once the instance is running, we can attach the volume to it. As Listing 14-4 shows, the volume can be attached with just a single method call. However, there's a caveat. Even if you wait for the volume to change its state to indicate that it has been successfully 'attached', you still may find that the device is not ready. I found that an extra 5 seconds' wait is usually enough, but just to be on the safe side, we'll wait another 10 seconds.

Listing 14-4.  Attaching the EBS Volume

def _attach_volume(self, volume=None):
    if not volume:
        volume_to_attach = self.volume
    else:
        volume_to_attach = volume
    instance_id = self.reservation.instances[0].id
    self.logger.debug("Attaching volume %s to instance %s as %s" %
                      (volume_to_attach.id, instance_id,
                       self.config.get('main', 'vol_device')))
    volume_to_attach.attach(instance_id, self.config.get('main', 'vol_device'))
    while volume_to_attach.attachment_state() != u'attached':
        time.sleep(20)
        volume_to_attach.update()
        self.logger.debug("volume status: %s", volume_to_attach.attachment_state())
    time.sleep(10)  # give it some extra time
                    # aws sometimes is mis-reporting the volume state
    self.logger.debug("Finished attaching volume")

Mounting the EBS Device

The volume is attached, but the file system is not visible to the operating system yet. Unfortunately, there is no API call to mount the file system, because this is an operating system function and the Amazon WS cannot do anything about it. So we have to issue the mount command remotely using the ssh command. The ssh command that establishes a remote communication link is always the same, so we construct it once using the method in Listing 14-5, and we'll reuse it every time we need to issue an operating system command on the remote system.

Listing 14-5.  Constructing the ssh Command Parameters

def _init_remote_cmd_args(self):
    key_file = "%s/%s.pem" % (self.config.get('main', 'key_location'),
                              self.config.get('main', 'key_name'))
    remote_user = 'root'
    remote_host = self.reservation.instances[0].public_dns_name
    remote_resource = "%s@%s" % (remote_user, remote_host)
    self.ssh_cmd = ['ssh', '-o', 'StrictHostKeyChecking=no',
                    '-i', key_file,
                    remote_resource]

We have to use the OpenSSH option StrictHostKeyChecking=no because we will be making the connection to a new host, and by default OpenSSH will warn you that the host key it receives has never been seen before. It will also ask for a confirmation to accept the remote key, which is behavior you don't want to see in an automated system.

Once the default ssh argument string is constructed, we can issue the remote volume mount command to the running instance, as shown in Listing 14-6.

Listing 14-6.  Mounting the File System on the Remote Host

def _mount_volume(self):
    self.logger.debug("Mounting %s on %s" % (self.config.get('main', 'vol_device'),
                                             self.config.get('main', 'mount_dir')))
    remote_command = "mount %(dev)s %(mp)s && df -h %(mp)s" % \
                     {'dev': self.config.get('main', 'vol_device'),
                      'mp': self.config.get('main', 'mount_dir')}
    rc = subprocess.call(self.ssh_cmd + [remote_command])
    self.logger.debug('done')


Starting the MySQL Instance

As we did for the mount command, we'll use the same mechanism to start and stop the MySQL daemon on the remote server. We'll be using the standard Red Hat distribution's /sbin/service command to run the initialization scripts, as shown in Listing 14-7.

Listing 14-7.  Starting and Stopping the MySQL Daemon Remotely

def _control_mysql(self, command):
    self.logger.debug("Sending MySQL DB daemon command to: %s" % command)
    remote_command = "/sbin/service mysqld %s; pgrep mysqld" % command
    rc = subprocess.call(self.ssh_cmd + [remote_command])
    self.logger.debug('done')
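The pgrep mysqld appended to the command prints the daemon's process IDs, which gives a quick visual confirmation in the log output that mysqld is actually running. If you prefer a programmatic readiness check before transferring any data, a possible approach (not part of the book's code) is to poll the MySQL TCP port; this sketch assumes the instance's security group allows connections to port 3306 from the machine running the backup script:

import socket

def _wait_for_mysql(self, port=3306, timeout=120):
    # A minimal sketch: wait until mysqld on the instance accepts TCP connections.
    host = self.reservation.instances[0].public_dns_name
    waited = 0
    while waited < timeout:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect((host, port))
            return True
        except socket.error:
            time.sleep(5)
            waited += 5
        finally:
            s.close()
    return False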

Transferring the Data

At this point, we have the remote system ready to accept MySQL database connections. As we've discussed before, the actual data transfer and processing are specific to each deployment, and there are no generic recipes for them. Typically, the steps involved are as follows:

1.  Establish a connection to the local database.
2.  Establish a connection to the remote database running on the EC2 instance.
3.  Find out what local data does not exist on the remote database yet.
4.  Read the record set in from the local database and update the remote database accordingly.
5.  Delete the old data from the local database if it is no longer required.
6.  Perform any statistical calculations, using complex SQL queries or functions, on the remote EC2 instance.

But then again, the process largely depends on your requirements, so I will leave the implementation of this task to you. In our example application, we'll use a dummy function that just waits for a brief period of time:

def _copy_db(self):
    self.logger.debug('Backing up the DB...')
    time.sleep(60)
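To illustrate the steps listed earlier, here is a rough sketch of what a real _copy_db() implementation might look like, using the MySQLdb library. The table and column names (log_entries, id, logged_at, message) and the connection credentials are purely illustrative assumptions, and the sketch also assumes the remote MySQL port is reachable from the machine running the script (for example, through an ssh tunnel). Treat it as a starting point rather than a drop-in replacement:

import MySQLdb

def _copy_db(self):
    # A hypothetical incremental copy, not the book's implementation. Assumes both
    # databases contain a 'log_entries' table with an auto-incrementing 'id' column.
    local = MySQLdb.connect(host='localhost', user='backup', passwd='secret', db='appdata')
    remote = MySQLdb.connect(host=self.reservation.instances[0].public_dns_name,
                             user='backup', passwd='secret', db='appdata')
    local_cur = local.cursor()
    remote_cur = remote.cursor()
    # find the newest record that already exists on the remote side
    remote_cur.execute("SELECT COALESCE(MAX(id), 0) FROM log_entries")
    last_id = remote_cur.fetchone()[0]
    # copy across everything newer than that
    local_cur.execute("SELECT id, logged_at, message FROM log_entries WHERE id > %s", (last_id,))
    for row in local_cur.fetchall():
        remote_cur.execute("INSERT INTO log_entries (id, logged_at, message) "
                           "VALUES (%s, %s, %s)", row)
    remote.commit()
    local.close()
    remote.close()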

Destroying the EC2 Instance Programmatically

When we have finished updating the remote database and all the data-processing tasks are complete, we can destroy the EC2 instance. The instance will be destroyed, but the database volume will remain, along with the data files on it. As a secondary safety measure, we'll also create a snapshot of the volume.

Shutting Down the MySQL Instance

We start by shutting down the MySQL database server. You're already familiar with the code, which is shown in Listing 14-7. The only difference is that this time we'll pass the 'stop' argument to the method call.


Unmounting the File System

Once the MySQL server is no longer running, we can safely unmount the file system. Again, we do this by issuing the OS command over the ssh connection mechanism, as shown in Listing 14-8.

Listing 14-8.  Unmounting the File System

def _unmount_volume(self):
    self.logger.debug("Unmounting %s" % self.config.get('main', 'mount_dir'))
    remote_command = "sync; sync; umount %(mp)s; df -h %(mp)s" % \
                     {'mp': self.config.get('main', 'mount_dir')}
    rc = subprocess.call(self.ssh_cmd + [remote_command])
    self.logger.debug('done')

Detaching the EBS Volume

Technically, you don't need to detach the volume at this point; it'll be detached automatically once the EC2 instance is terminated. However, I advise you to detach the volume explicitly (as shown in Listing 14-9), because relying on the default behavior may cause unnecessary problems should the EC2 web service behavior ever change.

Listing 14-9.  Detaching the Volume

def _detach_volume(self, volume=None):
    if not volume:
        volume_to_detach = self.volume
    else:
        volume_to_detach = volume
    self.logger.debug("Detaching volume %s" % volume_to_detach.id)
    volume_to_detach.detach()
    while volume_to_detach.attachment_state() == u'attached':
        time.sleep(20)
        volume_to_detach.update()
        self.logger.debug("volume status: %s", volume_to_detach.attachment_state())
    self.logger.debug('done')

Taking a Snapshot of the Volume

Once the volume is detached, we take a snapshot of its current state. As before, it is just a single method call. We also populate the description field with the timestamp of when the snapshot was taken; see Listing 14-10.

Listing 14-10.  Taking a Volume Snapshot

def _create_snapshot(self, volume=None):
    if not volume:
        volume_to_snapshot = self.volume
    else:
        volume_to_snapshot = volume
    self.logger.debug("Taking a snapshot of %s" % volume_to_snapshot.id)
    volume_to_snapshot.create_snapshot(description="Snapshot created on %s" % \
                                       datetime.isoformat(datetime.now()))
    self.logger.debug('done')
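Snapshots accumulate over time and are billed for the storage they consume, so at some point you may want to prune the old ones. A possible housekeeping method is sketched below; it is not part of the book's class, and it assumes that self.conn holds the boto EC2 connection object created during initialization:

def _prune_snapshots(self, keep=7):
    # A minimal sketch: keep only the newest `keep` snapshots of the backup volume.
    snapshots = self.conn.get_all_snapshots(filters={'volume-id': self.volume.id})
    # start_time is an ISO 8601 string, so sorting it lexically sorts it chronologically
    snapshots.sort(key=lambda snap: snap.start_time, reverse=True)
    for snap in snapshots[keep:]:
        self.logger.debug("Deleting old snapshot %s (created %s)" % (snap.id, snap.start_time))
        snap.delete()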


Shutting Down the Instance

Finally, we terminate the EC2 instance. Although it is not strictly necessary, we'll wait for the instance to be fully terminated before we continue, as shown in Listing 14-11.

Listing 14-11.  Terminating the Running Instance

def _terminate_instance(self):
    instance = self.reservation.instances[0]
    self.logger.debug("Terminating instance %s" % instance.id)
    instance.terminate()
    while instance.state != u'terminated':
        time.sleep(60)
        instance.update()
        self.logger.debug("instance state: %s" % instance.state)
    self.logger.debug('done')

The Control Sequence

Although I described the methods in the order in which they should be called, for your convenience here is the sequence of method calls performed from the main application function:

def main():
    console = logging.StreamHandler()
    logger = logging.getLogger('DB_Backup')
    logger.addHandler(console)
    logger.setLevel(logging.DEBUG)
    bck = BackupManager(logger=logger)
    bck._start_instance()
    bck._init_remote_cmd_args()
    bck._attach_volume()
    bck._mount_volume()
    bck._control_mysql('start')
    bck._copy_db()
    bck._control_mysql('stop')
    bck._unmount_volume()
    bck._detach_volume()
    bck._create_snapshot()
    bck._terminate_instance()

The sample output from the running application follows. Please note that the output from the second df command shows a different mount point and a different device, because the file system on the EBS volume has been successfully unmounted.

# ./db_backup.py
Starting new instance...
instance state: running
Instance i-02139929 is running and available at ec2-54-90-194-188.compute-1.amazonaws.com
Attaching volume vol-7556353c to instance i-02139929 as /dev/xvdf
volume status: attached
Finished attaching volume
Mounting /dev/xvdf on /mysql-db
Warning: Permanently added 'ec2-54-90-194-188.compute-1.amazonaws.com,54.90.194.188' (RSA) to the list of known hosts.
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdf       9.9G  172M  9.2G   2% /mysql-db
done
Sending MySQL DB daemon command to: start
Starting mysqld: [ OK ]
1063
1165
done
Backing up the DB...
Sending MySQL DB daemon command to: stop
Stopping mysqld: [ OK ]
done
Unmounting /mysql-db
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda       9.9G  1.3G  8.2G  13% /
done
Detaching volume vol-7556353c
volume status: None
done
Taking a snapshot of vol-7556353c
done
Terminating instance i-02139929
instance state: terminated
done
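One practical caveat about this control sequence: if any of the intermediate steps fails and the script aborts, the instance keeps running (and accruing charges) until you terminate it manually. If you add error checking to the individual steps, a simple hardening of main(), sketched here as a possible variation rather than the book's code, is to wrap the body in try/finally so the instance is always terminated:

def main():
    console = logging.StreamHandler()
    logger = logging.getLogger('DB_Backup')
    logger.addHandler(console)
    logger.setLevel(logging.DEBUG)
    bck = BackupManager(logger=logger)
    bck._start_instance()
    try:
        bck._init_remote_cmd_args()
        bck._attach_volume()
        bck._mount_volume()
        bck._control_mysql('start')
        bck._copy_db()
        bck._control_mysql('stop')
        bck._unmount_volume()
        bck._detach_volume()
        bck._create_snapshot()
    finally:
        # terminate the instance even if one of the earlier steps raised an exception,
        # so a failed run does not leave a forgotten instance running
        bck._terminate_instance()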

Summary

In this chapter, we looked at Amazon Web Services (AWS) and how the Simple Storage Service (S3) and the Elastic Compute Cloud (EC2) can be used to perform temporary computational tasks. In addition to the computing-on-demand task, you discovered how to perform a remote backup of important data. The simple application we built in this chapter can serve as a foundation for building your own data warehouse on the virtual computing cloud. Remember these key points from this chapter:

-  EC2 and S3 are primarily web services, designed to be controlled programmatically.
-  The main S3 components are the data objects and the buckets that contain them.
-  Amazon Machine Images (AMIs) are used as the templates from which EC2 instances are started.
-  EC2 instances are the actual running virtual machines.
-  You can control most of the AWS services using the Python Boto library.


Index

„        A Administration interface modification class names, 152–153 form field reorganization child model, 156 Collapse class, 157 ConfigDirective model, 156 VHostDirective inline, 155 Wide class, 157 ModelAdmin class, 157 object list ModelAdmin class, 154 models.py file, 154 modification, 155 VHostDirective, 154 __unicode__() method, 152 VirtualHost class, 153 Virtual Host Flag, 157–158 Amazon EC2 and S3 services, 368 cloud concepts amazon machine image (AMI), 371 elastic block store (EBS), 374 elastic IP address, 375 load balancer, 375 security groups, 375 user interfaces, 375 security authentication access credentials, 369 account identifier, 369 EC2 key pair, 370 X.509 certificates, 370 storage systems, 370 Amazon machine image (AMI), 371 AWS management console, 376 bundling Linux instance, 379 running instance, 380

S3 storage, 381 X.509 certificate files, 380 EBS lifecycle, 374 EBS volume setup, 377 existing images, 376 MySQL configuration file, 378 MySQL server package, installation, 377 S3 lifecycle, 373 S3 vs. EBS, 372 Apache configuration file, 143, 160 administration interface modification (see Administration interface modification) container directive, 159 core module, 144 data model administration interface, 152 admin.py file, 151 entities, 148 ER diagrams, 148 structure, 149 syncdb command, 151 VirtualHost class, 150 Django web framework, 144 drawbacks, 143 environment setup configuration, 145 database creation, 146 Django project and application creation, 145 settings.py file, 145 URL structure, 147 SetHandler directive, 144 view method, 159 virtual host, 144 Virtual Host View Template, 160 Apache log files LogFormat directive, 170 Main application design, 164 Plug-in framework (see Plug-in framework) Plug-in manager component, 164


Apache log files (cont.) plug-in methods call mechanism, 176 image-processing, 175 problems, 175 tagging, 176 plug-in modules (see Plug-in modules) reader class, 174–175 CSV format files, 173 DictReader class, 173 generator function, 173 log strings, 172 mapping table, 172 requirements, 164 web browser, 163 Apache web server, 89

„        B Boto python module application structure, 382 configuration, 382 control sequence, 388 data transfer, 386 EBS instance remote host, 385 ssh command, 385 volume attached, 384 EC2 instance destroying, 386 run() method, 384 shut down, 388 snapshot, 387 volume detached, 387 initialization process, 383 interfaces, 381 MySQL instance shut down, 386 unmounted file system, 387

„        C Celery server and client installation and configuration, 334 logging files and temporary PID file, 336 project directory, 335 systemd process management, 336 test access, 338 UID and GID creation, 335 Layout of, 338 RabbitMQ, 333 routing tasks

broadcast messages, 346 calculator.py application, 345–346 message queue system, 343–344 queues, 344 rabbitmqctl command, 345 tasks module (see Tasks module) Classless Inter-Domain Routing (CIDR), 81 Class rules management, DHCP addition/modification, 128–129 Django’s built-in generic classes, 126–127 generic delete view, 129 generic views, 125 objects rule, 125–126 Comma-separated values (CSV) format files, 173 ConfigParser library class methods add_section() method, 281 arithmetical operations, 279 feadfp() method, 278 getboolean() method, 278 getfloat() method, 278 getint() method, 278 get() method, 278 has_option(), 280 has_section(), 280 options() method, 279 read() method, 281 remove_option() method, 281 remove_section() method, 281 sections() method, 279 set() method, 281 unexpected results, 280 write() function, 281 file format, 277–278 Consumer plug-ins key_buffer_size configuration, 363 My SQL version number, 362 ServerSystemVariables module, 362 slow query ratio, 364 Controller component, 86 Cron-Like Scheduler, 269 Cross Site Request Forgery (CSRF), 107

„        D Data model administration interface, 152 admin.py file, 151 entities, 148 ER diagrams, 148 structure, 149 syncdb command, 151 VirtualHost class, 150


Data model component, 85 Data normalization, 244 Data structures configuration host entries, 247 probe entries, 248 probe-to-host mapping, 248 sensor entries, 247 data normalization, 244 entity relationship (ER) diagram, 251 performance table, 249 scheduling data probe scheduling, 249 probe tickets queue, 250 site configuration, 250 Data warehouse solution computing-on-demand solution, 368 design specifications, 368 problem specification, 367 Distributed monitoring system, 241 communication flows (see XML-RPC method) data structures (see Data structures) data types configuration, 243 performance reading, 244 scheduling configuration, 244 site information, 244 monitoring agent, 243 monitoring server, 242 scheduling mechanism (see Scheduling mechanism) sensors, 243 server process, 255 configuration, 261 sensor code, 262 sensor data storage, 260 SQLite3 database (see SQLite3 database) system health check, 262 system components, 242 Django framework administration interface, 85 Apache web server CSS stylesheets and images, 89 greeting page, 91 handler statement, 90 VirtualServer definition, 89 installation, 86 Model-View-Controller (MVC) pattern, 85 object-to-relation database mapper, 85 open-source community support, 85

overview, 84 structure, 87 template system, 85 Dynamic host configuration protocol (DHCP) add function, 115 client classification class rules management (see Class rules management, DHCP) data model, 123 template inheritance, 124 configuration file add and delete functions, 120 address pool data model, 118 coding implementation, 133 data collection, 130 django template engine, 132 helper functions, 119 MIME type, 131 network size, 132–133 network view function, 118 template, 131 template parser, 119 data models, 114 delete function, 117 design requirements database schema, 113 linux distributions, 111 MAC address, 112 network management, 113 vendor-class-identifier, 112 Django framework, 115 DNSServer and DomainName, 114 IP address check, 134 modify view function, 117 name resolution, 134 OMAPI (see Object management application programming interface (OMAPI)) URL structure (see URL structure) workflow, 114

„        E Easier to Ask for Forgiveness Than Permission (EAFP), 177 Elastic block store (EBS) create and setup volume, 377 volume snapshot, 374 Exctractor, 189

„        F function_A() method, 176


„        G

„        J, K

Geographic Information System (GIS) vector format, 182 Global interpreter lock (GIL), 263

Jinja2 templating systems, 1 content-generation statements, 30 data flow framework, 29 language delimiters accessing variables, 31 flow control statements, 32 loader classes get_template method, 31 jinja2.Environment class, 30 jinja2.FileSystemLoader, 30 web page creation, 33

„        H HTML tag, 136 HTTP GET method, 106 HTTP POST method, 106

„        I init methods, 145 IP address accountancy add function, 105 application design add function, 83 broadcast IP, 82 database schema, 80 decision making, 80 delete function, 84 display function, 83 modify function, 84 name resolution function, 84 requirements, 79 search function, 83 system health check function, 84 working principles, 81–82 database model, 91 delete function, 104–105 django framework (see Django framework) management interface Django administration, 97–98 NetworkAddress model, 98 syncdb command, 96 urls.py Module, 96 object modification, 108 templates, 101 URL configuration Django framework, 95 mapping, 94 rules/guidelines, 94 viewing records IP network and address dataset, 100 JSON file, 100 URL dispatcher rules, 99 URL mapping rules, 99 view method, 108 ISC DHCP server, 137–138

„        L Load balancer tool, 42 Citrix Netscaler WSDL, 45 class inheritance, 47 code structure, 43 configuration file, 44 Configuration Web Service, 41 generic class, 47 locator object, 46 login method request and response messages, 52 request class, 49 return value, 51 requirements, 43 Statistics Web Service, 41 tcpdump command, 52 Log files data structures, storing data in, 203 detect unique exceptions, 207 exception fingerprint generation, 204 exception stack trace, 204 detecting exceptions, 201 generator function, 202 insert method, 203 parsing and analyzing tasks, 203 precompiled patterns, 202 real exception-validator function, 202 exceptions analysis, 192 uses, 191 Java stack trace, 190 multiple files handling (see Multiple files handling) parsing catalina.out file, 193 exception stack trace log, 194 logging messages, 193


reports production comma separated values, 213 grouping exception, 212 sorted() function, 214 status messages, 189 timestamps, 189 logical test function, 32

„        M matplotlib library installation, 313 plotting graphs annotate() function, 318 bar() method, 316–317 color shortcuts, 316 formatting characters, 315 plot() function, 314 saving images, 319 text strings, 318–319 text() method, 318 window, 315 responsibilites, 314 Message queuing systems, 332. See also Celery server and client Model-View-Controller (MVC) pattern, 85 Monitoring agents architecture, 275 ConfigParser library (see ConfigParser library) passive component, 275 running external process, 287 security model, 276–277 sensor code, 275 submit readings, 276 upgrading, 276 sensor code, automatically close() method, 298 mkstemp() method, 298 move() function, 299 open() command, 298 os.remove() function, 299 package update function, 297 sending and receiving, 296 sensor design, 286 wrapper class AttributeError exception, 284 built-in methods, 282 constructor method, 283 __dict__ method, 285–286 getattr() function, 283 __getattr__() method, 284–285 __init__() method, 284 requirements, 282

Section class, 283, 285 __setattr__() function, 285–286 setattr() function, 283 Multiple files handling BZip2 library, 199 dealing with large volumes, 200 DIRS list, 199 list comprehension, 199 OPTIONS.file_pattern, 197 os.walk function, 197–198 Python generator, 200 requirements, 196 tuple object, 198

„        N Nagios monitoring systems NRPE, 218 plug-in return codes, 218 Nagios Remote Plugin Executor (NRPE) utility, 218 NumPy library, 301 array append() operation, 303–304 constructor, 303 hstack() function, 304 indexing techniques, 305 iterator, 304 multidimensional arrays, 303 slicing and dicing, 305 vstack() function, 304 installation, 302 mathematical primitives, 305 polynomial function, 311 read and write files, 312 regression/curve fitting, 311 statistical functions average() function, 307 histogram calculation function, 310 mean() function, 306 normal distribution, 308–309 standard deviation, 307–309 variance, 307

„        O Object management application programming interface (OMAPI) host record deletion, leases database, 140 ISC DHCP server setting, 138 new lease record creation, 139 PyPI package repository, 138


„        P, Q Plug-in framework, 349 application requirements, 350 architecture, 166 consumer writings key_buffer_size configuration, 363 My SQL version number, 361 ServerSystemVariables module, 362 slow query ratio, 364 creation, 166 definition, 170 discovery and registration child classes, 167 directory and file structure, 168 __import__() method, 168 __init__() method, 168 PluginManager class, 169 process flow, 167–169 mechanics of HTML page, 165 interface model, 165 registration and discovery, 166 modifications host application, 351 plug-in manager, 353 producer writings configuration variables, 357 key_buffer_size variable, 360 My SQL database, 353 server status variables, 359 schematic diagram, 350 system design, 350 Plug-in modules coding implementation, 180–181 data visualization Basemap, 181 ESRI shape files, 182 GIS vector format, 182 matplotlib library, 181 numpy package, 181 pip command, 182 PyShp documentation, 182 world map, 184 GeoIP Python bindings, 179 required libraries installation, 178 process() method, 177 Producer plug-ins configuration variables retrieve variables, 358 SHOW GLOBAL VARIABLES command, 357 key_buffer_size variable, 360 MySQL database, 353 connect() method, 354 cursor methods, 356

MySQLdb library, 354 object methods, 355 parameters, 354 psutil library, 360 server status variables, 359 Python exceptions, 64 logging module configuration, 63 format string, 63 levels and scope, 62 logger instance, 64 Nitro API administration tasks, 77 login method, 73 module layout, 67 NetScaler device, 66, 75 systemcpu_stats class, 74 virtual server, 76 Python generator, 200 Python package manager (PIP), 13, 86 Python query configuration ConfigParser module, 11 management system, 11 reading and storing, 12 SnmpManager class, 11 Windows INI-style, 11 PySNMP library ASN.1 library, 13 CommandGenerator class, 13 GET command, 13 GETNEXT command, 15 pip command, 13 pysnmp package, 13 return objects, 14 SET command, 14 query_all_systems() method, 16 read functionality, 16

„        R RabbitMQ, 333 report() function, 177 Round robin database (RRD), 1 Routing tasks broadcast messages, 346 calculator.py application, 345–346 message queue system, 343–344 new queue creation, 344 rabbitmqctl command, 345 RRDTool consolidation function (CF), 18 creation ABSOLUTE type, 20 consolidation function, 20


COUNTER type, 20 dataset parameter, 21 data source (DS), 20 definition structure, 20 DERIVE type, 20 GAUGE type, 20 heartbeat value, 20 Python module, 19 samples, 21 XFiles factor, 21 database engine, 18 data storage, 18 monitoring application, integrate, 27 overview, 17 plotting graphs AREA keyword, 25 data-fetching command, 24 data plotting command, 25 LINE keyword, 25 measurements, 25 Reverse Polish Notation, 26 selector statement, 24–25 Python module, 19 read and write data command-line tool syntax, 24 dataset information, 23 datasources, 23 fetch interface, 21 resolution flag, 22 result array, 23 retrieve data, 23 RoundRobin Archive (RRA), 18 structure, 18 Running external process communication file descriptors, 293 file objects, 293 pipe objects, 294 standard error, 295 controlling process kill() command, 291 pid attribute, 290 signal and numeric values, 292 wait() method, 291 subprocess library cwd argument, 290 default shell, 288 dict function, 290 env argument, 290 fork() call, 289 os.execvp function, 288 Popen class, 287–288 Popen command, 289 shell variable, 288 split() method, 289

„        S Scheduling mechanism global interpreter lock (GIL), 263 multiprocessing creation process, 264 handling interrupts, 266 KeyboardInterrupt, 266 long-running process, 265 multithreading.active_children() function, 267 multithreading, 263 Ticket Dispatcher, 263 cmd_submit_reading, 271 ER diagram, 271 retrieving information, 272 self._send_request, 272 XML-RPC Call, 272 Ticket Scheduler, 263 (see also Ticket Scheduler) Sensor design, 286 Simple network management protocol (SNMP) design specifications Jinja2 templating library, 2 PySNMP library, 2 RRDTool, 2 OID tree, 3 requirements, 1 RRDTool, 1, 17 SIMPLE NETWORK MANAGEMENT PROTOCOL (SNMP) authentication, 7 command line, querying getnext operation, 7 Net-SNMP-Utils package, 7 snmpwalk command, 7–8 GET command, 13 GETNEXT command, 15 internet assigned numbers authority (IANA), 3 Jinja2 templating system (see Jinja2 templating systems) Management Information Base (MIB), 3 management system, 2–3 network components, 3 object identifiers (OIDs), 3–4 Python query (see Python query) SET command, 14 variables OIDs node system, 4–6 Simple Object Access Protocol (SOAP), 37 device configuration, 59 load balancer tool, 42 Citrix Netscaler WSDL, 45 class inheritance, 47 code structure, 43 configuration file, 44 generic class, 47 locator object, 46


Simple Object Access Protocol (SOAP) (cont.) login method, 48 requirements, 43 message, 38 Python, 41 requesting services, 38 request message, 39 response message, 39 service state, 60 statistics data, 53 return values, 54 service status, 56 system status, 55 WSDL, 40 SQLite3 database command-line tool, 255 initialization SQL commands, 256 Python application, 259 commit() function, 260 execute() statement, 260 Statistical analysis data analysis (see Statistical data) matplotlib bar() method, 316 installation, 313 plotting graphs, 314 responsibilites, 314 saving images, 319 NumPy library, 301 array, 302 installation, 302 mathematical primitives, 305 polynomial function, 311 read and write files, 312 regression/curve fitting, 310 statistical functions, 306 requirements and design, 301 Statistical data COALESCE() function, 321 data collection, 320 data performance, plotting, 327, 329 matplotlib function, 328 _plot_time_graph() function, 328 timestamp field, 328 graph collection pages, 325 _generate_host_probe_details() function, 326 _generate_host_scale_details() function, 326 host details page, 323 _calculate_service_availability() function, 324 connection.execute(), 324 _generate_host_toc() function, 324 template, 325

host retrieving, 320 index page host list, 323 host probe list, 323 private class method, 322 template, 322 website structure, 322 Storing data, data structures, 204 detect unique exceptions configuration file, 207 filters, 210 parsing XML files, 208 precompiled search, 211 exception fingerprint generation, 204 exception stack trace, 204

„        T Task queuing systems, 331–332 Tasks module arithmetic operations, 342 Celery master application, 343 geometric operations, 342 systemd configuration files, 342 worker and master process application file, 341 celeryconfig.py, 340 configuration items, 340 file and directory, 339 geometric operations, 339 Templating frameworks. See Jinja2 templating systems Ticket Scheduler Cron-Like scheduler, 269 multiprocessing process Manager class, 268 periodic events, 268 shared event object, 269 oscillator process code implementation, 268 time.sleep() function, 268 time.sleep() function, 267

„        U URL structure model class, 120 patterns, 121–122 resolution of, 121 rules, 120 templates, 122

„        V View Component, 85


„        W Web Services Description Language (WSDL), 40 Website Availability Check Script Beautiful Soup library, 219 find methods, 225 parsing HTML pages, 221 Site Navigation Script, 225 tag names, 224 geturl() function, 221 HTTP Client, Request Module requests library, 234 Site Logon/Logoff Check Script, 237 usage, 234 HTTP cookies, 230 Nagios, 218, 228 Nagios Host and Service Definitions, 227

requirements, 217 Site Logon/Logoff, 232 urlopen() function, 220 user login process, 229 WSDL to Python, 41

„        X, Y, Z XML-RPC method CherryPy, 254 Python, 253 structure data types, 252 multiple parameters, 252 procedure call message, 252 sensor update request, 252


Pro Python System Administration Second Edition

Rytis Sileika

Apress


Pro Python System Administration Copyright © 2014 by Rytis Sileika This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. ISBN-13 (pbk): 978-1-4842-0218-0 ISBN-13 (electronic): 978-1-4842-0217-3 Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein. Managing Director: Welmoed Spahr Lead Editor: Steve Anglin Development Editor: Matthew Moodie Technical Reviewer: Patrick Engebretson and Massimo Nardone Editorial Board: Steve Anglin, Mark Beckner, Gary Cornell, Louise Corrigan, Jim DeWolf, Jonathan Gennick, Jonathan Hassell, Robert Hutchinson, Michelle Lowman, James Markham, Matthew Moodie, Jeff Olson, Jeffrey Pepper, Douglas Pundick, Ben Renow-Clarke, Gwenan Spearing, Matt Wade, Steve Weiss Coordinating Editor: Melissa Maldonado Copy Editor: Carole Berglie Compositor: SPi Global Indexer: SPi Global Artist: SPi Global Cover Designer: Anna Ishchenko Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail [email protected], or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation. For information on translations, please e-mail [email protected], or visit www.apress.com. Apress and friends of ED books may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Special Bulk Sales–eBook Licensing web page at www.apress.com/bulk-sales. 
Any source code or other supplementary material referenced by the author in this text is available to readers at www.apress.com. For detailed information about how to locate your book’s source code, go to www.apress.com/source-code/.


I dedicate this book to my family—my wife, Evelina, and my daughters, Gabija and Milda


Contents About the Author.............................................................................................................. xvii About the Technical Reviewers......................................................................................... xix Acknowledgments............................................................................................................. xxi Introduction..................................................................................................................... xxiii N■Chapter 1: Reading and Collecting Performance Data Using SNMP..................................1 Application Requirements and Design..........................................................................................1 Specifying the Requirements................................................................................................................................ 1 High-Level Design Specification............................................................................................................................ 2

Introduction to SNMP....................................................................................................................2 The System SNMP Variables Node........................................................................................................................ 4 The Interfaces SNMP Variables Node.................................................................................................................... 5 Authentication in SNMP......................................................................................................................................... 7 Querying SNMP from the Command Line.............................................................................................................. 7

Querying SNMP Devices from Python.........................................................................................11 Configuring the Application................................................................................................................................. 11 Using the PySNMP Library................................................................................................................................... 13 Implementing the SNMP Read Functionality....................................................................................................... 16

Storing Data with RRDTool..........................................................................................................17 Introduction to RRDTool....................................................................................................................................... 17 Using RRDTool from a Python Program............................................................................................................... 19 Creating a Round Robin Database....................................................................................................................... 19


Writing and Reading Data from the Round Robin Database................................................................................ 21 Plotting Graphs with RRDTool.............................................................................................................................. 24 Integrating RRDTool with the Monitoring Solution............................................................................................... 27

Creating Web Pages with the Jinja2 Templating System............................................................29 Loading Template Files with Jinja2..................................................................................................................... 30 The Jinja2 Template Language............................................................................................................................ 31 Generating Website Pages................................................................................................................................... 33

Summary.....................................................................................................................................36 N■Chapter 2: Managing Devices Using the SOAP API..........................................................37 What Is the SOAP API?.................................................................................................................37 The Structure of a SOAP Message....................................................................................................................... 38 Requesting Services with SOAP.......................................................................................................................... 38 Finding Information About Available Services with WSDL................................................................................... 39

SOAP Support in Python..............................................................................................................41 Converting WSDL Schema to Python Helper Module...................................................................41 Defining Requirements for Our Load Balancer Tool.....................................................................42 Basic Requirements............................................................................................................................................. 43 Code Structure..................................................................................................................................................... 43 Configuration....................................................................................................................................................... 44

Accessing Citrix Netscaler Load Balancer with the SOAP API.....................................................45 Fixing Issues with Citrix Netscaler WSDL............................................................................................................ 45 Creating a Connection Object.............................................................................................................................. 46 Logging In: Our First SOAP Call............................................................................................................................ 48

Gathering Performance Statistics Data.......................................................................................53 SOAP Methods for Reading Statistical Data and its Return Values...................................................................... 54 Reading System Health Data............................................................................................................................... 55 Reading Service Status Data............................................................................................................................... 56

Automating Some Administration Tasks......................................................................................59 Device Configuration SOAP Methods................................................................................................................... 59 Setting a Service State........................................................................................................................................ 60


A Word About Logging and Error Handling..................................................................................62 Using the Python Logging Module....................................................................................................................... 62 Handling Exceptions............................................................................................................................................ 64

NetScaler NITRO API....................................................................................................................66 Download............................................................................................................................................................. 66 Using the Nitro-Python Module............................................................................................................................ 67

Summary.....................................................................................................................................78 N■Chapter 3: Creating a Web Application for IP Address Accountancy..............................79 Designing the Application............................................................................................................79 Setting out the Requirements.............................................................................................................................. 79 Making Design Decisions.................................................................................................................................... 80 Defining the Database Schema........................................................................................................................... 80 Creating the Application Workflow...................................................................................................................... 83

The Basic Concepts of the Django Framework...........................................................................84 What Is Django?................................................................................................................................................... 84 The Model/View/Controller Pattern..................................................................................................................... 85 Installing the Django Framework......................................................................................................................... 86 The Structure of a Django Application................................................................................................................. 87 Using Django with Apache Web Server................................................................................................................ 89

Implementing Basic Functionality...............................................................................................91 Defining the Database Model.............................................................................................................................. 91 URL Configuration................................................................................................................................................ 94 Using the Management Interface........................................................................................................................ 96 Viewing Records.................................................................................................................................................. 99 Using Templates................................................................................................................................................ 101 Deleting Records............................................................................................................................................... 104 Adding New Records......................................................................................................................................... 105 Modifying Existing Records............................................................................................................................... 108

Summary...................................................................................................................................109


N■Chapter 4: Integrating the IP Address Application with DHCP......................................111 Extending the Design and Requirements..................................................................................111 Extending the Database Schema....................................................................................................................... 113 Making Additions to the Workflow..................................................................................................................... 114

Adding DHCP Network Data......................................................................................................114 The Data Models................................................................................................................................................ 114 Additional Workflows......................................................................................................................................... 115 The Add Function............................................................................................................................................... 115 The Modify Function.......................................................................................................................................... 117 The Delete Function........................................................................................................................................... 117

Extending the DHCP Configuration with Address Pools.............................................................117 The Address Pool Data Model............................................................................................................................ 118 The DHCP Network Details................................................................................................................................ 118 The Add and Delete Functions........................................................................................................................... 120

Reworking the URL Structure....................................................................................................120 Generation of URLs in the Model Class.............................................................................................................. 120 Reverse Resolution of URLs............................................................................................................................... 121 Assignment of Names to URL Patterns.............................................................................................................. 121 Use of URL References in the Templates........................................................................................................... 122

Adding Client Classification.......................................................................................................123 Additions to the Data Model.............................................................................................................................. 123 Template Inheritance......................................................................................................................................... 124 Class Rules Management.................................................................................................................................. 125

Generating the DHCP Configuration File....................................................................................130 Other Modifications...................................................................................................................134 Resolving IPs to Hostnames.............................................................................................................................. 134 Checking Whether the Address Is in Use........................................................................................................... 134

Dynamic DHCP Lease Management..........................................................................................137 Employ Python Interface to OMAPI.................................................................................................................... 137 Set up the ISC DHCP Server............................................................................................................................... 138


Add a New Host Lease Record.......................................................................................................................... 139 Delete a Host Lease Record............................................................................................................................... 140

Summary...................................................................................................................................141 N■Chapter 5: Maintaining a List of Virtual Hosts in an Apache Configuration File............143 Specifying the Design and Requirements for the Application...................................................143 Functional Requirements................................................................................................................................... 143 High-Level Design............................................................................................................................................. 144

Setting Up the Environment......................................................................................................144 Apache Configuration........................................................................................................................................ 144 Creating a Django Project and Application........................................................................................................ 145 Configuring the Application............................................................................................................................... 145 Defining the URL Structure................................................................................................................................ 147

The Data Model.........................................................................................................................148 The Basic Model Structure................................................................................................................................ 149

Modifying the Administration Interface.....................................................................................152 Improving the Class and Object Lists................................................................................................................ 152 Adding Custom Object Actions.......................................................................................................................... 157

Generating the Configuration File..............................................................................................159 Summary...................................................................................................................................161 N■Chapter 6: Gathering and Presenting Statistical Data from Apache Log Files..............163 Application Structure and Functionality....................................................................................163 Application Requirements................................................................................................................................. 164 Application Design............................................................................................................................................. 164

Plug-in Framework Implementation in Python .........................................................................164 The Mechanics of a Plug-in Framework............................................................................................................ 165 Creating the Plug-in Framework....................................................................................................................... 166

Log-Parsing Application............................................................................................................170 Format of Apache Log Files............................................................................................................................... 170 Log File Reader.................................................................................................................................................. 172 Calling the Plug-in Methods.............................................................................................................................. 175 xi


Plug-in Modules........................................................................................................................178 Installing the Required Libraries........................................................................................................................ 178 Writing the Plug-in Code.................................................................................................................................... 180 Visualizing the Data........................................................................................................................................... 181

Summary...................................................................................................................................187 N■Chapter 7: Performing Complex Searches and Reporting on Application Log Files......189 Defining the Problem.................................................................................................................189 Why We Use Exceptions..................................................................................................................................... 191 Are Exceptions Always a Bad Sign?.................................................................................................................. 192 Why We Should Analyze Exceptions.................................................................................................................. 192

Parsing Complex Log Files........................................................................................................193 What Can We Find in a Typical Log File?........................................................................................................... 193 The Structure of an Exception Stack Trace Log................................................................................................. 194

Handling Multiple Files..............................................................................................................196 Handling Multiple Files...................................................................................................................................... 197 Using the Built-in Bzip2 Library......................................................................................................................... 199 Traversing Large Data Files............................................................................................................................... 200 What Are Generators, and How Are They Used?................................................................................................ 200

Detecting the Exceptions..........................................................................................................201 Detecting the Potential Candidates................................................................................................................... 202 Filtering the Legitimate Exception Traces.......................................................................................................... 203

Storing Data in Data Structures................................................................................................203 The Structure of Exception Stack Trace Data.................................................................................................... 204 Generating an Exception Fingerprint for Unknown Exceptions......................................................................... 204 Detecting Known Exceptions............................................................................................................................. 207

Producing Reports.....................................................................................................................212 Grouping the Exceptions.................................................................................................................................... 212 Producing Differently Formatted Outputs for the Same Dataset....................................................................... 213 Calculating Group Statistics.............................................................................................................................. 214

Summary...................................................................................................................................215 xii

Chapter 8: A Website Availability Check Script for Nagios.............................................217
Requirements for the Check System.........................................................................................217
The Nagios Monitoring System.................................................................................................218
Nagios Plug-In Architecture................................................................................................................................ 218

The Site Navigation Check........................................................................................................219
Installing the Beautiful Soup HTML Parsing Library.......................................................................................... 219
Retrieving a Web Page........................................................................................................................................ 220
Parsing the HTML Pages with Beautiful Soup.................................................................................................... 221
Adding the New Check to the Nagios System.................................................................................................... 227
Emulating the User Login Process...................................................................................................................... 229
Simplifying HTTP Client with Requests Module................................................................................................. 234

Summary...................................................................................................................................239

Chapter 9: Management and Monitoring Subsystem.....................................................241
Design.......................................................................................................................................241
The Components................................................................................................................................................. 241
The Data Objects................................................................................................................................................ 243

The Data Structures..................................................................................................................244
Introduction to Data Normalization.................................................................................................................... 244
Configuration Data.............................................................................................................................................. 247
Performance Data............................................................................................................................................... 249
Scheduling.......................................................................................................................................................... 249
Site Configuration............................................................................................................................................... 250

Communication Flows...............................................................................................................251
XML-RPC for Information Exchange................................................................................................................... 252
CherryPy.............................................................................................................................................................. 254

The Server Process...................................................................................................................255
Storing Data in a SQLite3 Database................................................................................................................... 255
Actions................................................................................................................................................................ 260

The Scheduler...........................................................................................................................262
Actions................................................................................................................................................................ 262
Running Multiple Processes............................................................................................................................... 263
Running Methods at Equal Intervals.................................................................................................................. 267
A Cron-Like Scheduler........................................................................................................................................ 269
Ticket Dispatcher................................................................................................................................................ 271

Summary...................................................................................................................................273

Chapter 10: Remote Monitoring Agents.........................................................................275
Design.......................................................................................................................................275
The Passive Component..................................................................................................................................... 275
Architecture........................................................................................................................................................ 275
Actions................................................................................................................................................................ 275

The Security Model...................................................................................................................276
Configuration.............................................................................................................................277
The ConfigParser Library.................................................................................................................................... 277
The Configuration Class Wrapper....................................................................................................................... 282

The Sensor Design....................................................................................................................286
Running External Processes......................................................................................................287
Using the Subprocess Library............................................................................................................................. 287
Controlling the Running Processes..................................................................................................................... 290
Communicating with External Processes........................................................................................................... 293

Automatically Updating Sensor Code........................................................................................296
Sending and Receiving Binary Data with XML-RPC........................................................................................... 296
Working with Files and Archives (TAR and BZip2).............................................................................................. 297

Summary...................................................................................................................................299

Chapter 11: Statistics Gathering and Reporting.............................................................301
Application Requirements and Design......................................................................................301
Using the NumPy Library...........................................................................................................301
Installing NumPy................................................................................................................................................. 302
NumPy Examples................................................................................................................................................ 302

Representing Data with matplotlib............................................................................................313
Installing matplotlib............................................................................................................................................ 313
Understanding the Library Structure.................................................................................................................. 314
Plotting Graphs................................................................................................................................................... 314
Saving Plots to a File.......................................................................................................................................... 319

Graphing Statistical Data...........................................................................................................320
Collating Data from the Database...................................................................................................................... 320
Drawing Timescale Graphs................................................................................................................................. 321

Summary...................................................................................................................................329

Chapter 12: Distributed Message Processing System....................................................331
Quick Introduction to Message and Task Queues......................................................................331
Task Queuing Systems........................................................................................................................................ 331
Message Queuing Systems................................................................................................................................ 332

Setting up the Celery Server and Client....................................................................................333
Installing and Setting up RabbitMQ.................................................................................................................... 333
Installing and Setting up Celery......................................................................................................................... 334

Celery Basics.............................................................................................................................338
Layout of a Typical Celery Application................................................................................................................ 338
Creating a Tasks Module.................................................................................................................................... 339
Routing Tasks...................................................................................................................................................... 343

Summary...................................................................................................................................347

Chapter 13: Automatic MySQL Database Performance Tuning.......................................349
Requirements Specification and Design...................................................................................349
Basic Application Requirements........................................................................................................................ 350
System Design.................................................................................................................................................... 350

Modifying the Plug-in Framework.............................................................................................351
Changes to the Host Application......................................................................................................................... 351
Modifying the Plug-in Manager.......................................................................................................................... 353

Writing the Producer Plug-ins...................................................................................................353
Accessing the MySQL Database from Python Applications............................................................................... 353
Querying the Configuration Variables................................................................................................................. 357

Querying the Server Status Variables................................................................................................................. 359
Collecting the Host Configuration Data.............................................................................................................. 360

Writing the Consumer Plug-ins.................................................................................................361
Checking the MySQL Version.............................................................................................................................. 361
Checking the Key Buffer Size Setting................................................................................................................. 363
Checking the Slow Queries Counter................................................................................................................... 364

Summary...................................................................................................................................366

Chapter 14: Using Amazon EC2/S3 as a Data Warehouse Solution................................367
Specifying the Problem and the Solution..................................................................................367
The Problem........................................................................................................................................................ 367
Our Solution........................................................................................................................................................ 368
Design Specifications......................................................................................................................................... 368

The Amazon EC2 and S3 Crash Course.....................................................................................368
Authentication and Security............................................................................................................................... 369
The Simple Storage System Concepts................................................................................................................ 370
The Elastic Computing Cloud Concepts.............................................................................................................. 371
User Interfaces................................................................................................................................................... 375

Creating a Custom EC2 Image...................................................................................................375
Reusing Existing Images..................................................................................................................................... 376
Making Modifications......................................................................................................................................... 377
Bundling the New AMI........................................................................................................................................ 379

Controlling the EC2 Using the Boto Python Module...................................................................381
Setting Up the Configuration Variables............................................................................................................... 382
Initializing the EC2 Instance Programmatically.................................................................................................. 382
Transferring the Data.......................................................................................................................................... 386
Destroying the EC2 Instance Programmatically................................................................................................. 386

Summary...................................................................................................................................389

Index.................................................................................................................................391

About the Author

Rytis Sileika has over sixteen years of experience in the system administration field. Since obtaining his bachelor of science degree in computer science from Kaunas University of Technology, Lithuania, he has been specializing in system integration and deployment automation. His areas of interest and expertise are UNIX-based operating systems, cloud computing platform management, and automation tool development. Rytis is also a Red Hat Certified Engineer. He lives with his wife and two daughters in London, England. His nonprofessional interests are traveling, hiking, and photography.

About the Technical Reviewers

Dr. Patrick Engebretson obtained his Doctor of Science degree with a specialization in Information Assurance from Dakota State University, South Dakota. He currently serves as an Assistant Professor of Computer and Network Security, teaching undergraduate and graduate courses in information security. His research interests include penetration testing, intrusion detection, exploitation, honeypots, and malware. Over the past several years, Dr. Engebretson has published numerous peer-reviewed conference papers and journal articles. He is also the author of two bestselling books on hacking and penetration testing. Dr. Engebretson has presented his research at the premier security conferences Black Hat and DEF CON in Las Vegas. He has also been invited by the Department of Homeland Security to share his research at the Software Assurance Forum in Washington, D.C. He regularly attends advanced exploitation and penetration testing trainings from industry-recognized professionals and also serves as a founding administrative member of the North Central Collegiate Cyber Defense Competition.

Massimo Nardone holds a Master of Science degree in Computing Science from the University of Salerno, Italy. He worked as a PCI QSA and Senior Lead IT Security/Cloud Architect for many years; currently he leads the Security Consulting Team at Hewlett Packard Finland. With more than 19 years of work experience in SCADA, cloud computing, IT infrastructure, mobile, security, and WWW technology areas for both national and international projects, Massimo has worked as a project manager, software engineer, research engineer, chief security architect, and software specialist. He has served as a visiting lecturer and exercise supervisor at the Networking Laboratory of the Helsinki University of Technology (TKK, which has since become part of Aalto University) for the Security of Communication Protocols course. He holds four international patents (in the PKI, SIP, SAML, and Proxy areas).

Acknowledgments

I would like to express my gratitude to everyone at Apress involved in the development and production of this book. I especially want to thank Steve Anglin, Melissa Maldonado, and Matthew Moodie for making sure the second edition of this book became a reality. I would also like to thank Dr. Patrick Engebretson and Massimo Nardone for making sure that the contents of the book are technically correct. Last but not least, I'd like to thank the Python development community and Guido van Rossum for creating such an elegant programming language.
