Web Application Architecture: Principles, Protocols and Practices


Leon Shklar and Richard Rosen
Dow Jones and Company

Copyright © 2003 by John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone: (+44) 1243 779777
Email (for orders and customer service enquiries): [email protected]
Visit our Home Page on www.wileyeurope.com or www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system for exclusive use by the purchaser of the publication. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to [email protected], or faxed to (+44) 1243 770620.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices:
• John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
• Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
• Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
• John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
• John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
• John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data
Shklar, Leon.
Web application architecture : principles, protocols, and practices / Leon Shklar, Richard Rosen.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-48656-6 (Paper : alk. paper)
1. Web sites—Design. 2. Application software—Development. I. Rosen, Richard. II. Title.
TK5105.888.S492 2003
005.7 2—dc21
2003011759

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 0-471-48656-6

Typeset in 10/12.5pt Times by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.


Contents

Acknowledgements

1. Introduction
   1.1 The Web in Perspective
   1.2 The Origins of the Web
   1.3 From Web Pages to Web Sites
   1.4 From Web Sites to Web Applications
   1.5 How to Build Web Applications in One Easy Lesson
       1.5.1 Web page design resources
       1.5.2 Web site design resources
       1.5.3 Web application design resources
       1.5.4 Principles of web application design
   1.6 What is Covered in this Book
   Bibliography

2. Before the Web: TCP/IP
   2.1 Historical Perspective
   2.2 TCP/IP
       2.2.1 Layers
       2.2.2 The client/server paradigm
   2.3 TCP/IP Application Services
       2.3.1 Telnet
       2.3.2 Electronic mail
       2.3.3 Message forums
       2.3.4 Live messaging
       2.3.5 File servers
   2.4 And Then Came the Web...
   2.5 Questions and Exercises
   Bibliography





3. Birth of the World Wide Web: HTTP
   3.1 Historical Perspective
   3.2 Building Blocks of the Web
   3.3 The Uniform Resource Locator
   3.4 Fundamentals of HTTP
       3.4.1 HTTP servers, browsers, and proxies
       3.4.2 Request/response paradigm
       3.4.3 Stateless protocol
       3.4.4 The structure of HTTP messages
       3.4.5 Request methods
       3.4.6 Status codes
   3.5 Better Information Through Headers
       3.5.1 Type support through content-type
       3.5.2 Caching control through Pragma and Cache-Control headers
       3.5.3 Security through WWW-Authenticate and Authorization headers
       3.5.4 Session support through Cookie and Set-Cookie headers
   3.6 Evolution
       3.6.1 Virtual hosting
       3.6.2 Caching support
       3.6.3 Persistent connections
   3.7 Summary
   3.8 Questions and Exercises
   Bibliography

4. Web Servers
   4.1 Basic Operation
       4.1.1 HTTP request processing
       4.1.2 Delivery of static content
       4.1.3 Delivery of dynamic content
   4.2 Advanced Mechanisms for Dynamic Content Delivery
       4.2.1 Beyond CGI and SSI
       4.2.2 Native APIs (ISAPI and NSAPI)
       4.2.3 FastCGI
       4.2.4 Template processing
       4.2.5 Servlets
       4.2.6 Java server pages
       4.2.7 Future directions
   4.3 Advanced Features
       4.3.1 Virtual hosting
       4.3.2 Chunked transfers
       4.3.3 Caching support
       4.3.4 Extensibility
   4.4 Server Configuration
       4.4.1 Directory structure
       4.4.2 Execution
       4.4.3 Address resolution
       4.4.4 MIME support
       4.4.5 Server extensions
   4.5 Server Security
       4.5.1 Securing the installation
       4.5.2 Dangerous practices
       4.5.3 Secure HTTP
       4.5.4 Firewalls and proxies
   4.6 Summary
   4.7 Questions and Exercises
   Bibliography

5. Web Browsers
   5.1 Architectural Considerations
   5.2 Processing Flow
   5.3 Processing HTTP Requests and Responses
       5.3.1 HTTP requests
       5.3.2 HTTP responses
   5.4 Complex HTTP Interactions
       5.4.1 Caching
       5.4.2 Cookie coordination
       5.4.3 Authorization: challenge and response
       5.4.4 Re-factoring: common mechanisms for storing persistent data
       5.4.5 Requesting supporting data items
       5.4.6 Multimedia support: helpers and plug-ins
   5.5 Review of Browser Architecture
   5.6 Summary
   5.7 Questions and Exercises
   Bibliography




6. HTML and its Roots
   6.1 Standard Generalized Markup Language
       6.1.1 The SGML declaration
       6.1.2 Document type definition
   6.2 HTML
       6.2.1 HTML evolution
       6.2.2 Structure and syntax
   6.3 HTML Rendering
       6.3.1 Cascading style sheets
       6.3.2 Associating styles with HTML documents
   6.4 JavaScript
   6.5 DHTML
       6.5.1 ‘Mouse-Over’ behaviors
       6.5.2 Form validation
       6.5.3 Layering techniques
   6.6 Summary
   6.7 Questions and Exercises
   Bibliography


7. XML Languages and Applications
   7.1 Core XML
       7.1.1 XML documents
       7.1.2 XML DTD
       7.1.3 XML schema
   7.2 XHTML
   7.3 WML
   7.4 XSL
       7.4.1 XSLT
       7.4.2 XSL formatting objects
       7.4.3 What is so important about XSL?
   7.5 Summary
   7.6 Questions and Exercises
   Bibliography


8. Dynamic Web Applications
   8.1 Historical Perspective
       8.1.1 Client-server applications
       8.1.2 Web applications
       8.1.3 Multi-tier web applications
   8.2 Application Architecture
       8.2.1 Interpreting and routing client requests
       8.2.2 Controlling user access to the application
       8.2.3 Enabling data access
       8.2.4 Accessing and modifying content
       8.2.5 Customizing content for presentation
       8.2.6 Transmitting the formatted response
       8.2.7 Logging and recording application activity
   8.3 Database Processing Issues
       8.3.1 Configuration
       8.3.2 Transactions
       8.3.3 Best practices
   8.4 Summary
   8.5 Questions and Exercises
   Bibliography

9. Approaches to Web Application Development
   9.1 Programmatic Approaches
       9.1.1 CGI
       9.1.2 Java Servlet API
   9.2 Template Approaches
       9.2.1 Server-Side Includes (SSI)
       9.2.2 Cold Fusion
       9.2.3 WebMacro/Velocity
   9.3 Hybrid Approaches
       9.3.1 PHP
       9.3.2 Active Server Pages (ASP)
       9.3.3 Java Server Pages
   9.4 Separation of Content from Presentation
       9.4.1 Application flexibility
       9.4.2 Division of responsibility for processing modules
   9.5 Frameworks: MVC Approaches
       9.5.1 JSP ‘Model 2’
       9.5.2 Struts
   9.6 Frameworks: XML-Based Approaches
   9.7 Summary
   9.8 Questions and Exercises
   Bibliography




10. Application Primer: Virtual Realty Listing Services
    10.1 Application Requirements
    10.2 Application Development Environment
    10.3 Anatomy of a Struts Application
    10.4 The Structure of the VRLS Application
        10.4.1 Controller: ActionServlet and custom actions
        10.4.2 View: JSP Pages and ActionForms
        10.4.3 Model: JavaBeans and auxiliary service classes
    10.5 Design Decisions
        10.5.1 Abstracting functionality into service classes
        10.5.2 Using embedded page inclusion to support co-branding
        10.5.3 A single task for creation and modification of customer profiles
    10.6 Enhancements
        10.6.1 Administrative interface
        10.6.2 Enhancing the signup process through e-mail authentication
        10.6.3 Improving partner recognition through a persistent cookie
        10.6.4 Adding caching functionality to the DomainService Class
        10.6.5 Paging through cached search results using the value list handler pattern
        10.6.6 Using XML and XSLT for view presentation
        10.6.7 Tracking user behavior
    10.7 Summary
    10.8 Questions and Exercises
    Bibliography

11. Emerging Technologies
    11.1 Web Services
        11.1.1 SOAP
        11.1.2 WSDL
        11.1.3 UDDI
    11.2 Resource Description Framework
        11.2.1 RDF and Dublin Core
        11.2.2 RDF Schema
    11.3 Composite Capability/Preference Profiles
    11.4 Semantic Web
    11.5 XML Query Language
    11.6 The Future of Web Application Frameworks
        11.6.1 One more time: separation of content from presentation
        11.6.2 The right tools for the job
        11.6.3 Simplicity
    11.7 Summary
    11.8 Questions and Exercises
    Bibliography

Index


Acknowledgements

I would like to thank my wife Rita and daughter Victoria for their insightful ideas about this project. I also wish to thank my mother and the rest of my family for their support and understanding.

Leon Shklar

Thanks to my wife, Celia, for tolerating and enduring all the insanity associated with the writing process, and to my parents and the rest of my family for all they have done, not only in helping me finish this book, but in enabling Celia and me to have the most fantastic wedding ever in the midst of all this.

Rich Rosen

We would both like to acknowledge the following people for their guidance and assistance:

• Karen Mosman and Jill Jeffries at John Wiley & Sons, Ltd for getting this book off the ground,
• Our editor, Gaynor Redvers-Mutton, and her assistant, Jonathan Shipley, for lighting the fire underneath us that finally got us to finish it.
• Nigel Chapman and Bruce Campbell for taking the time to review our work and provide us with valuable insights and advice.
• And finally, our friends and colleagues from the glory days of Pencom Web Works—especially Howard Fishman, Brad Lohnes, Dave Makower, and Evan Coyne Maloney—whose critiques, comments, and contributions were as thorough, methodical, and nitpicky (and we mean that in a good way!) as an author could ever hope for.



1.1 THE WEB IN PERSPECTIVE

A little more than a decade ago at CERN (the scientific research laboratory near Geneva, Switzerland), Tim Berners-Lee presented a proposal for an information management system that would enable the sharing of knowledge and resources over a computer network. The system he proposed has propagated itself into what can truly be called a World Wide Web, as people all over the world use it for a wide variety of purposes:

• Educational institutions and research laboratories were among the very first users of the Web, employing it for sharing documents and other resources across the Internet.

• Individuals today use the Web (and the underlying Internet technologies that support it) as an instantaneous international postal service, as a worldwide community bulletin board for posting virtual photo albums, and as a venue for holding global yard sales.

• Businesses engage in e-commerce, offering individuals a medium for buying and selling goods and services over the net. They also communicate with other businesses through B2B (business-to-business) data exchanges, where companies can provide product catalogues, inventories, and sales records to other companies.

The Web vs. the Internet

There is an often-overlooked distinction between the Web and the Internet. The line between the two is often blurred, partially because the Web is rooted in the fundamental protocols associated with the Internet. Today, the lines are even more blurred, as notions of ‘the Web’ go beyond the boundaries of pages delivered to Web browsers, into the realms of wireless devices, personal digital assistants, and the next generation of Internet appliances.

1.2 THE ORIGINS OF THE WEB

Tim Berners-Lee originally promoted the World Wide Web as a virtual library, a document control system for sharing information resources among researchers. Online documents could be accessed via a unique document address, a Universal Resource Locator (URL). These documents could be cross-referenced via hypertext links.
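The notion of a unique document address can be made concrete with a short sketch. The example below is our illustration, not the book's; it uses Python's standard urlsplit function to break a hypothetical URL into the components that together identify a single resource:

```python
from urllib.parse import urlsplit

# A hypothetical URL, split into the parts that give each Web
# resource a unique address: scheme, host, path, and query.
parts = urlsplit("http://www.example.com/docs/proposal.html?version=2")

print(parts.scheme)    # protocol used to retrieve the resource
print(parts.hostname)  # server holding the resource
print(parts.path)      # location of the document on that server
print(parts.query)     # optional parameters
```

The key point is that every piece of the address is meaningful: change any component and you are naming a different resource.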

Hypertext

Ted Nelson, father of the Xanadu Project, coined the term ‘hypertext’ over 30 years ago, as a way of describing ‘non-sequential writing—text that branches and allows choice to the reader.’ Unlike the static text of print media, hypertext is intended for use with an interactive computer screen. It is open, fluid and mutable, and can be connected to other pieces of hypertext by ‘links’. The term was extended under the name hypermedia to refer not only to text, but to other media as well, including graphics, audio, and video. However, the original term hypertext persists as the label for technology that connects documents and information resources through links.

From the very beginnings of Internet technology, there has been a dream of using the Internet as a universal medium for exchanging information over computer networks. Many people shared this dream. Ted Nelson’s Xanadu project aspired to make that dream a reality, but the goals were lofty and were never fully realized. Internet file sharing services (such as FTP and Gopher) and message forum services (such as Netnews) provided increasingly powerful mechanisms for this sort of information exchange, and certainly brought us closer to fulfilling those goals. However, it took Tim Berners-Lee to (in his own words) “marry together” the notion of hypertext with the power of the Internet, bringing those initial dreams to fruition in a way that the earliest developers of both hypertext and Internet technology might never have imagined. His vision was to connect literally everything together, in a uniform and universal way.



Internet Protocols are the Foundation of Web Technology

It should be noted that the Web did not come into existence in a vacuum. The Web is built on top of core Internet protocols that had been in existence for many years prior to the Web’s inception. Understanding the relationship between ‘Web technology’ and the underlying Internet protocols is fundamental to the design and implementation of true ‘Web applications’. In fact, it is the exploitation of that relationship that distinguishes a ‘Web page’ or ‘Web site’ from a ‘Web application’.

1.3 FROM WEB PAGES TO WEB SITES

The explosively exponential growth of the Web can at least partially be attributed to its grass-roots proliferation as a tool for personal publishing. The fundamental technology behind the Web is relatively simple: a computer connected to the Internet, running a Web server, was all that was necessary to serve documents. Both CERN and the National Center for Supercomputing Applications (NCSA) at the University of Illinois had developed freely available Web server software. A small amount of HTML knowledge (and the proper computing resources) got you something that could be called a Web site.

Primitive Web Sites from the Pre-Cambrian Era

Early Web sites were, in fact, just loosely connected sets of pages, branched off hierarchically from a home page. HTML lets you link one page to another, and a collection of pages linked together could be considered a ‘Web site’. But a Web site in this day and age is more than just a conglomeration of Web pages.

Granted, when the Web was in its infancy, the only computers connected to the Internet and capable of running server software were run by academic institutions and well-connected technology companies. Smaller computers, in any case, were hardly in abundance back then. In those days, a ‘personal’ computer sitting on your desktop was still a rarity. If you wanted access to any sort of computing power, you used a terminal that let you ‘log in’ to a large server or mainframe over a direct connection or dialup phone line.

Still, among those associated with such organizations, it quickly became a very simple process to create your own Web pages. Moreover, all that was needed was a simple text editor. The original HTML language was simple enough that, even without the more sophisticated tools we have at our disposal today, it was an easy task for someone to create a Web page. (Some would say too easy.)

“Welcome to My Home Page, Here Are Photos of My Cat and A Poem I Wrote”

In those pioneer days of the Web, academic and professional organizations used the Web to share information, knowledge, and resources. But once you got beyond those hallowed halls and cubicle walls, most people’s Web pages were personal showcases for publishing bad poetry and pictures of their pets. The thought of a company offering information to the outside world through the Web, or developing an intranet to provide information to its own employees, was no more than a gleam in even the most prophetic eyes.

There is a big difference between a Web page and a Web site. A Web site is more than just a group of Web pages that happen to be connected to each other through hypertext links.

At the lowest level, there are content-related concerns. Maintaining thematic consistency of content is important in giving a site some degree of identity.

There are also aesthetic concerns. In addition to having thematically related content, a Web site should also have a common look and feel across all of its pages, so that site visitors know they are looking at a particular Web site. This means utilizing a common style across the site: page layout, graphic design, and typographical elements should reflect that style.

Finally, there are architectural concerns. As a site grows in size and becomes more complex, it becomes critically important to organize its content properly. This includes not just the layout of content on individual pages, but also the interconnections between the pages themselves. Some of the symptoms of bad site design include links targeting the wrong frame (for frame-based Web sites), and links that take visitors to a particular page at an inappropriate time (e.g. at a point during the visit when it is impossible to deliver content to the visitors). If your site becomes so complex that visitors cannot navigate their way through it, even with the help of site maps and navigation bars, then it needs to be reorganized and restructured.

1.4 FROM WEB SITES TO WEB APPLICATIONS

Initially, what people shared over the Internet consisted mostly of static information found in files. They might edit these files and update their content, but there were few truly dynamic information services on the Internet. Granted, there were a few exceptions: search applications for finding files on FTP archives and Gopher servers, and services that provided dynamic information directly, like the weather, or the availability of cans from a soda dispensing machine. (One of the first Web applications that Tim Berners-Lee demonstrated at CERN was a gateway for looking up numbers from a phone book database using a Web browser.) However, for the most part, the information resources shared on the Web were static documents.

Dynamic information services—from search engines to CGI scripts to packages that connected the Web to relational databases—changed all that. With the advent of the dynamic web, the bar was raised even higher. No longer was it sufficient to say that you were designing a ‘Web site’ (as opposed to a motley collection of ‘Web pages’). It became necessary to design a Web application.

Definition of a Web Application

What is a ‘Web application’? By definition, it is something more than just a ‘Web site’. It is a client/server application that uses a Web browser as its client program, and performs an interactive service by connecting with servers over the Internet (or an intranet). A Web site simply delivers content from static files. A Web application presents dynamically tailored content based on request parameters, tracked user behaviors, and security considerations.
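The contrast drawn in this definition can be sketched in a few lines of code. The function below is a hypothetical illustration (ours, not the book's): it computes a different response body for each request based on a query-string parameter, which is precisely what a static file can never do:

```python
from urllib.parse import parse_qs

def dynamic_page(query_string: str) -> str:
    """Build a response body tailored to the request parameters.

    A static Web site returns the same file bytes for every request;
    a Web application computes its content per request.
    """
    params = parse_qs(query_string)
    # Fall back to a default when the parameter is absent.
    name = params.get("name", ["stranger"])[0]
    return f"<html><body>Hello, {name}!</body></html>"

print(dynamic_page("name=Alice"))  # content tailored to this request
print(dynamic_page(""))            # default content for a bare request
```

In a real application this per-request logic would also consult tracked user behavior and security state, but the principle is the same: the response is computed, not stored.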

1.5 HOW TO BUILD WEB APPLICATIONS IN ONE EASY LESSON

But what does it mean to design a Web application, as contrasted with a Web page or a Web site? Each level of Web design has its own techniques, and its own set of issues.

1.5.1 Web page design resources

For Web page design, there is a variety of books available. Beyond the tutorial books that purport to teach HTML, JavaScript, and CGI scripting overnight, there are some good books discussing the deeper issues associated with designing Web pages. One of the better choices is The Non-Designer’s Web Book by Robin Williams (not the comedian). Williams’ books are full of useful information and guidelines for those constructing Web pages, especially those not explicitly schooled in design or typography.

1.5.2 Web site design resources

When it comes to Web sites, there are far fewer resources available. Information Architecture for the World Wide Web, by Louis Rosenfeld and Peter Morville, was one of the rare books covering the issues of designing Web sites as opposed to Web pages. It is unfortunately out of print.

1.5.3 Web application design resources

When we examined the current literature available on the subject of Web application development, we found there were three main categories of books currently available.

• Technical Overviews. The first category is the technical overview. These books are usually at a very high level, describing terminology and technology in broad terms. They do not go into enough detail to enable the reader to design and build serious Web applications. They are most often intended for ‘managers’ and ‘executives’ who want a surface understanding of the terminology without going too deeply into specific application development issues. Frequently, they attempt to cover technology in huge brushstrokes, so that you see books whose focus is simply ‘Java’, ‘XML’, or ‘The Web’. Such books approach the spectrum of technology so broadly that the coverage of any specific area is too shallow to be significant. Serious application developers usually find these books far too superficial to be of any use to them.

• In-Depth Technical Resources. The second category comprises in-depth technical resources for developing Web applications using specific platforms. The books in this category provide in-depth coverage of very narrow areas, concentrating on the ‘how-to’s’ of using a particular language or platform without explaining what is going on ‘under the hood’. While such books may be useful in teaching programmers to develop applications for a specific platform, they provide little or no information about the underlying technologies, focusing instead on the platform-specific implementation of those technologies. Should developers be called upon to rewrite an application for another platform, the knowledge they acquired from reading these books would rarely be transferable to that new platform. Given the way Web technology changes so rapidly, today’s platform of choice is tomorrow’s outdated legacy system. When new development platforms emerge, developers who lack an understanding of first principles—of what the systems they wrote really did—have to learn each new platform from the ground up. Thus, the ability to apply fundamental technological knowledge across platforms is critical.

• Reference Books. These form a third category. Such books are useful, naturally, as references, but not for the purpose of learning about the technology.

What we found lacking was a book that provides an in-depth examination of the basic concepts and general principles of Web application development. Such a book would cover the core protocols and technologies of the Internet in depth, imparting the principles associated with writing applications for the Web. It would use examples from specific technologies (e.g. CGI scripts and servlets), but would not promote or endorse particular platforms.

Why is Such a Book Needed?

We see the need for such a book when interviewing job candidates for Web application development positions. Too many programmers have detailed knowledge of a particular API (Application Programming Interface), but they are lost when asked questions about the underlying technologies (e.g. the format and content of messages transmitted between the server and browser). Such knowledge is not purely academic—it is critical when designing and debugging complex systems.

Too often, developers with proficiency only within a specific application development platform (like Active Server Pages, Cold Fusion, PHP, or Perl CGI scripting) are not capable of transferring that proficiency directly to another platform. Only through a fundamental understanding of the core technology can developers be expected to grow with the rapid technological changes associated with Web application development.
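As a concrete example of the kind of ‘under the hood’ knowledge being described, the sketch below (our illustration, with a hypothetical host and path) assembles a raw HTTP/1.0 request by hand and picks apart a canned response, showing the plain-text message format that every platform-specific API ultimately produces and consumes:

```python
# A raw HTTP request is just lines of text separated by CRLF,
# terminated by a blank line.
request = (
    "GET /index.html HTTP/1.0\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)

# A response carries a status line, headers, a blank line, and a body.
response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Parsing the response is simple text processing: split the head
# from the body at the blank line, then split the head into lines.
head, body = response.split("\r\n\r\n", 1)
status_line, *header_lines = head.split("\r\n")
version, status_code, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_code)              # the numeric status as text
print(headers["Content-Type"])  # a parsed header value
print(body)                     # the HTML payload
```

A developer who can read and reason about these messages directly can debug any Web platform, because every platform reduces to this exchange on the wire.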

1.5.4 Principles of web application design

What do we mean when we discuss the general principles that need to be understood to properly design and develop Web applications? We mean the core set of protocols and languages associated with Web applications. This includes, of course, HTTP (HyperText Transfer Protocol) and HTML (HyperText Markup Language), which are fundamental to the creation and transmission of Web pages. It also includes the older Internet protocols like Telnet and FTP, protocols used for message transfer like SMTP and IMAP, plus advanced protocols and languages like XML. Additionally, it includes knowledge of databases and multimedia presentation, since many sophisticated Web applications make use of these technologies extensively.

The ideal Web application architect must in some sense be a ‘jack of all trades’. People who design Web applications must understand not only HTTP and HTML, but the other underlying Internet protocols as well. They must be familiar with JavaScript, XML, relational databases, graphic design and multimedia. They must be well versed in application server technology, and have a strong background in information architecture. If you find people with all these qualifications, please let us know—we would love to hire them! Rare is the person who can not only architect a Web site, but also design the graphics, create the database schema, produce the multimedia programs, and configure the e-commerce transactions.



In the absence of such a Web application superhero/guru/demigod, the best you can hope for is a person who at least understands the issues associated with designing Web applications: someone who understands the underlying languages and protocols supporting such applications, and who understands the mechanisms for providing access to database and multimedia information through a Web application.

We hope that, by reading this book, you can acquire the skills needed to design and build complex applications for the World Wide Web. No, there is no ‘one easy lesson’ for learning the ins and outs of designing Web applications. However, this book will hopefully enable you to design and build sophisticated Web applications that are scalable, maintainable, extensible, and reusable. We examine various approaches to the process of Web application development—starting with the CGI approach, looking at template languages like Cold Fusion and ASP, and working our way up to the Java Enterprise (J2EE) approach. At each level, however, we concentrate not on the particular development platform, but on the considerations associated with designing and building Web applications regardless of the underlying platform.

1.6 WHAT IS COVERED IN THIS BOOK

The organization of this book is as follows:

• Chapter 2: TCP/IP—This chapter examines the underlying Internet protocols that form the basis of the Web. It offers some perspectives on the history of TCP/IP, as well as some details about using several of these protocols in Web applications.

• Chapter 3: HTTP—The HTTP protocol is covered in detail, with explanations of how requests and responses are transmitted and processed.

• Chapter 4: Web Servers—The operational intricacies of Web servers are the topic here, with an in-depth discussion of what Web servers must do to support interactions with clients such as Web browsers and HTTP proxies.

• Chapter 5: Web Browsers—As the previous chapter dug deep into the inner workings of Web servers, this chapter provides similar coverage of the inner workings of Web browsers.

• Chapter 6: HTML and Its Roots—In the first of our two chapters about markup languages, we go back to SGML to learn more about the roots of HTML (and of XML as well).

• Chapter 7: XML—This chapter covers XML and related specifications, including XML Schema, XSLT, and XSL FO, as well as XML applications like XHTML and WML.

• Chapter 8: Dynamic Web Applications—After covering Web servers and Web browsers in depth, we move on to Web applications, describing their structure and the best practices for building them so that they will be both extensible and maintainable. In providing this information, we refer to a sample application that will be designed and implemented in a later chapter.

• Chapter 9: Approaches to Web Application Development—This chapter contains a survey of available Web application approaches, including CGI, Servlets, PHP, Cold Fusion, ASP, JSP, and frameworks like Jakarta Struts. It classifies and compares these approaches to help readers make informed decisions when choosing an approach for their project, emphasizing the benefits of using the Model-View-Controller (MVC) design pattern in implementing an application.

• Chapter 10: Sample Application—Having examined the landscape of available application development approaches, we decide on Jakarta Struts along with the Java Standard Tag Library (JSTL). We give the reasons for our decisions, and build the Virtual Realty Listing Services application (originally described in Chapter 8) employing the principles we have been learning in previous chapters. We then suggest enhancements to the application as exercises to be performed by the reader.

• Chapter 11: Emerging Technologies—Finally, we look to the future, providing coverage of the most promising developments in Web technology, including Web Services, RDF, and XML Query, as well as speculations about the evolution of Web application frameworks.



Before the Web: TCP/IP

As mentioned in the previous chapter, Tim Berners-Lee did not come up with the World Wide Web in a vacuum. The Web as we know it is built on top of core Internet protocols that had been in existence for many years before. Understanding those underlying protocols is fundamental to the discipline of building robust Web applications.

In this chapter, we examine the core Internet protocols that make up the TCP/IP protocol suite, which is the foundation for Web protocols, discussed in the next chapter. We begin with a brief historical overview of the forces that led to the creation of TCP/IP. We then go over the layers of the TCP/IP stack, and show where various protocols fit into it. Our description of the client-server paradigm used by TCP/IP applications is followed by a discussion of the various TCP/IP application services, including Telnet, electronic mail, message forums, live messaging, and file servers.

2.1 HISTORICAL PERSPECTIVE

The roots of Web technology can be found in the original Internet protocols (known collectively as TCP/IP), developed in the 1980s. These protocols were an outgrowth of work done for the United States Defense Department to design a network called the ARPANET.

The ARPANET was named for ARPA, the Advanced Research Projects Agency of the United States Department of Defense. It came into being as a result of efforts funded by the Department of Defense in the 1970s to develop an open, common, distributed, and decentralized computer networking architecture. There were a number of problems with existing network architectures that the Defense Department wanted to resolve. First and foremost was the centralized nature of existing networks. At that time, the typical network topology was centralized. A computer network had a single point of control directing communication between all the systems belonging to that network. From a military perspective, such a



topology had a critical flaw: Destroy that central point of control, and all possibility of communication was lost.

Another issue was the proprietary nature of existing network architectures. Most were developed and controlled by private corporations, who had a vested interest both in pushing their own products and in keeping their technology to themselves. Further, the proprietary nature of the technology limited the interoperability between different systems. It was important, even then, to ensure that the mechanisms for communicating across computer networks were not proprietary, or controlled in any way by private interests, lest the entire network become dependent on the whims of a single corporation.

Thus, the Defense Department funded an endeavor to design the protocols for the next generation of computer communications networking architectures. Establishing a decentralized, distributed network topology was foremost among the design goals for the new networking architecture. Such a topology would allow communications to continue, for the most part undisrupted, even if any one system was damaged or destroyed. In such a topology, the network ‘intelligence’ would not reside in a single point of control. Instead, it would be distributed among many systems throughout the network. To facilitate this (and to accommodate other network reliability considerations), they employed a packet-switching technology, whereby a network ‘message’ could be split into packets, each of which might take a different route over the network, arrive in completely mixed-up order, and still be reassembled and understood by the intended recipient.

To promote interoperability, the protocols needed to be open: readily available to anyone who wanted to connect their system to the network. An infrastructure was needed to design the set of agreed-upon protocols, and to formulate new protocols for new technologies that might be added to the network in the future.
An Internet Working Group (INWG) was formed to examine the issues associated with connecting heterogeneous networks together in an open, uniform manner. This group provided an open platform for proposing, debating, and approving protocols.

The Internet Working Group evolved over time into other bodies, like the IAB (Internet Activities Board, later renamed the Internet Architecture Board), the IANA (Internet Assigned Numbers Authority), and later, the IETF (Internet Engineering Task Force) and IESG (Internet Engineering Steering Group). These bodies defined the standards that ‘govern’ the Internet. They established the formal processes for proposing new protocols, discussing and debating the merits of these proposals, and ultimately approving them as accepted Internet standards.

Proposals for new protocols (or updated versions of existing protocols) are provided in the form of Requests for Comments, also known as RFCs. Once approved, the RFCs are treated as the standard documentation for the new or updated protocol.



2.2 TCP/IP

The original ARPANET was the first fruit borne of this endeavor. The protocols behind the ARPANET evolved over time into the TCP/IP Protocol Suite, a layered taxonomy of data communications protocols. The name TCP/IP refers to two of the most important protocols within the suite: TCP (Transmission Control Protocol) and IP (Internet Protocol), but the suite comprises many other significant protocols and services.

2.2.1 Layers

The protocol layers associated with TCP/IP (above the ‘layer’ of physical interconnection) are:

1. the Network Interface layer,
2. the Internet layer,
3. the Transport layer, and
4. the Application layer.

Because this protocol taxonomy contains layers, implementations of these protocols are often known as a protocol stack.

The Network Interface layer is the layer responsible for the lowest level of data transmission within TCP/IP, facilitating communication with the underlying physical network.

The Internet layer provides the mechanisms for intersystem communications, controlling message routing, validity checking, and message header composition/decomposition. The protocol known as IP (which stands, oddly enough, for Internet Protocol) operates on this layer, as does ICMP (the Internet Control Message Protocol). ICMP handles the transmission of control and error messages between systems. Ping is an Internet service that operates through ICMP.

The Transport layer provides message transport services between applications running on remote systems. This is the layer in which TCP (the Transmission Control Protocol) operates. TCP provides reliable, connection-oriented message transport. Most of the well-known Internet services make use of TCP as their foundation. However, some services that do not require the reliability (and overhead) associated with TCP make use of UDP (which stands for User Datagram Protocol). For instance, streaming audio and video services would gladly sacrifice a few lost packets to get faster performance out of their data streams, so these services often operate over UDP, which trades reliability for performance.

The Application layer is the highest level within the TCP/IP protocol stack. It is within this layer that most of the services we associate with ‘the Internet’ operate.



These Internet services provided some degree of information exchange, but it took the birth of the Web to bring those initial dreams to fruition, in a way that the earliest developers of these services might never have imagined.

OSI

During the period that TCP/IP was being developed, the International Standards Organization (ISO) was also working on a layered protocol scheme, called ‘Open Systems Interconnection’, or OSI. While the TCP/IP taxonomy consisted of five layers (if you included the lowest physical connectivity medium as a layer), OSI had seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.

There is some parallelism between the two models. TCP/IP’s Network Interface layer is sometimes called the Data Link layer to mimic the OSI Reference Model, while the Internet layer corresponds to OSI’s Network layer. Both models share the notion of a Transport layer, which serves roughly the same functions in each model. And the Application layer in TCP/IP combines the functions of the Session, Presentation, and Application layers of OSI. But OSI never caught on, and while some people waited patiently for its adoption and propagation, it was TCP/IP that became the ubiquitous foundation of the Internet as we know it today.

2.2.2 The client/server paradigm

TCP/IP applications tend to operate according to the client/server paradigm. This simply means that, in these applications, servers (also called services and daemons, depending on the language of the underlying operating system) execute by (1) waiting for requests from client programs to arrive, and then (2) processing those requests. Client programs can be applications used by human beings, or they could be servers that need to make their own requests that can only be fulfilled by other servers. More often than not, the client and server run on separate machines, and communicate via a connection across a network.

Command Line vs. GUI

Over the years, the client programs used by people have evolved from command-line programs to GUI programs. Command-line programs have their origins in the limitations of the oldest human interfaces to computer systems: the teletype keyboard. In the earliest days of computing, they didn’t have simple text-based CRT terminals—let alone today’s more sophisticated monitors with enhanced graphics capabilities! The only way to enter data interactively was through a teletypewriter interface, one line at a time.

As the name implies, these programs are invoked from a command line. The command line prompts users for the entry of a ‘command’ (the name of a program) and its ‘arguments’ (the parameters passed to the program). The original DOS operating



system on a PC, as well as the ‘shell’ associated with UNIX systems, are examples of command-line interfaces.

Screen mode programs allowed users to manipulate the data on an entire CRT screen, rather than on just one line. This meant that arrow keys could be used to move a ‘cursor’ around the screen, or to scroll through pages of a text document. However, these screen mode programs were restricted to character-based interfaces.

GUI stands for ‘Graphical User Interface’. As the name implies, GUI programs make use of a visually oriented paradigm that offers users a plethora of choices. For most, this is a welcome alternative to manually typing in the names of files, programs, and command options. The graphics, however, are not limited to just textual characters, as they are in screen mode programs. The GUI paradigm relies on WIMPS (Windows, Icons, Mouse, Pointers, and Scrollbars) to graphically display the set of files and applications users can access.

Whether command-line or GUI-based, client programs provide the interface by which end users communicate with servers to make use of TCP/IP services.

Early implementations of client/server architectures did not make use of open protocols. What this meant was that client programs needed to be as ‘heavy’ as the server programs. A ‘lightweight’ client (also called a thin client) could only exist in a framework where common protocols and application controls were associated with the client machine’s operating system. Without such a framework, many of the connectivity features had to be included directly into the client program, adding to its weight.

One advantage of using TCP/IP for client/server applications was that the protocol stack was installed on the client machine as part of the operating system, and the client program itself could be more of a thin client. Web applications are a prime example of the employment of thin clients in applications. Rather than building a custom program to perform desired application tasks, web applications use the web browser, a program that is already installed on most users’ systems. You cannot create a client much thinner than a program that users have already installed on their desktops!

How Do TCP/IP Clients and Servers Communicate with Each Other?

To talk to servers, TCP/IP client programs open a socket, which is simply a TCP connection between the client machine and the server machine. Servers listen for connection requests that come in through specific ports. A port is not an actual physical interface between the computer and the network, but simply a numeric reference within a request that indicates which server program is its intended recipient. There are established conventions for matching port numbers with specific TCP/IP services. Servers listen for requests on well-known port numbers. For example, Telnet servers normally listen for connection requests on port 23, SMTP servers listen to port 25, and web servers listen to port 80.
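As a rough illustration of these socket-and-port mechanics, the following Python sketch maps service names to the well-known port numbers mentioned above and opens a TCP connection. The host name in the demonstration function is a placeholder, and the function is a minimal sketch rather than a production client.

```python
import socket

# Conventional well-known ports for some common TCP/IP services.
WELL_KNOWN_PORTS = {"telnet": 23, "smtp": 25, "http": 80, "pop3": 110, "imap": 143}

def open_service_socket(host, service="smtp", timeout=10):
    """Open a TCP socket to a server's well-known port and return it.

    This is the essence of what every TCP/IP client does before it can
    'talk' to a server: resolve the port number, then connect.
    """
    return socket.create_connection((host, WELL_KNOWN_PORTS[service]),
                                    timeout=timeout)

def demo():
    # Not invoked here: requires network access to a real server.
    # The host name is hypothetical.
    sock = open_service_socket("mail.example.com", "smtp")
    print(sock.recv(1024).decode())   # the server's greeting line
    sock.close()
```

The same `open_service_socket` call, pointed at port 23 or port 80, would reach a Telnet or Web server instead; only the conversation conducted over the socket differs.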



2.3 TCP/IP APPLICATION SERVICES

In this section, we discuss some of the common TCP/IP application services, including Telnet, electronic mail, message forums, live messaging, and file servers.

2.3.1 Telnet

The Telnet protocol operates within the Application layer. It was developed to support Network Virtual Terminal functionality, which means the ability to ‘log in’ to a remote machine over the Internet. The latest specification for the Telnet protocol is defined in Internet RFC 854.

Remember that before the advent of personal computers, access to computing power was limited to those who could connect to a larger server or mainframe computer, either through a phone dialup line or through a direct local connection. Whether you phoned in remotely or sat down at a terminal directly connected to the server, you used a command-line interface to log in. You connected to a single system and your interactions were limited to that system. With the arrival of Internet services, you could use the Telnet protocol to log in remotely to other systems that were accessible over the Internet.

As mentioned earlier, Telnet clients are configured by default to connect to port 23 on the server machine, but the target port number can be over-ridden in most client programs. This means you can use a Telnet client program to connect and ‘talk’ to any TCP server by knowing its address and its port number.

2.3.2 Electronic mail

Electronic mail, or e-mail, was probably the first ‘killer app’ in what we now call cyberspace. Since the net had its roots in military interests, naturally the tone of electronic mail started out being formal, rigid, and business-like. But once the body of people using e-mail expanded, and once these people realized what it could be used for, things lightened up quite a bit.

Electronic mailing lists provided communities where people with like interests could exchange messages. These lists were closed systems, in the sense that only subscribers could post messages to the list, or view messages posted by other subscribers. Obviously, lists grew, and list managers had to maintain them. Over time, automated mechanisms were developed to allow people to subscribe (and, just as importantly, to unsubscribe) without human intervention. These mailing lists evolved into message forums, where people could publicly post messages, on an electronic bulletin board, for everyone to read.

These services certainly existed before there was an Internet. Yet in those days, users read and sent their e-mail by logging in to a system directly (usually via telephone dialup or direct local connection) and running programs on that system



(usually with a command-line interface) to access e-mail services. The methods for using these services varied greatly from system to system, and e-mail connectivity between disparate systems was hard to come by. With the advent of TCP/IP, the mechanisms for providing these services became more consistent, and e-mail became uniform and ubiquitous. The transmission of electronic mail is performed through the SMTP protocol. The reading of electronic mail is usually performed through either POP or IMAP.

SMTP

SMTP stands for Simple Mail Transfer Protocol. As an application layer protocol, SMTP normally runs on top of TCP, though it can theoretically use any underlying transport protocol. The application called ‘sendmail’ is an implementation of the SMTP protocol for UNIX systems. The latest specification for the SMTP protocol is defined in Internet RFC 821, and the structure of SMTP messages is defined in Internet RFC 822.

SMTP, like other TCP/IP services, runs as a server, service, or daemon. In a TCP/IP environment, SMTP servers usually run on port 25. They wait for requests to send electronic mail messages, which can come from local system users or from across the network. They are also responsible for evaluating the recipient addresses found in e-mail messages and determining whether they are valid, and/or whether their final destination is another recipient (e.g. a forwarding address, or the set of individual recipients subscribed to a mailing list).

If the message embedded in the request is intended for a user with an account on the local system, then the SMTP server will deliver the message to that user by appending it to their mailbox. Depending on the implementation, the mailbox can be anything from a simple text file to a complex database of e-mail messages. If the message is intended for a user on another system, then the server must figure out how to transmit the message to the appropriate system. This may involve direct connection to the remote system, or it may involve connection to a gateway system. A gateway is responsible for passing the message on to other gateways and/or sending it directly to its ultimate destination.

Before the advent of SMTP, the underlying mechanisms for sending mail varied from system to system. Once SMTP became ubiquitous as the mechanism for electronic mail transmission, these mechanisms became more uniform.
The applications responsible for transmitting e-mail messages, such as SMTP servers, are known as MTAs (Mail Transfer Agents). Likewise, the applications responsible for retrieving messages from a mailbox, including POP servers and IMAP servers, are known as MRAs (Mail Retrieval Agents). E-mail client programs have generally been engineered to allow users to both read mail and send mail. Such programs are known as MUAs (Mail User Agents). MUAs talk to MRAs to read mail, and to MTAs to send mail.

In a typical e-mail client, the process by which a message is sent works as follows. Once the user has composed



a message, the client program directs it to the SMTP server. First, it must connect to the server. It does this by opening a TCP socket to port 25 (the SMTP port) of the server. (This is true even if the server is running on the user’s machine.)

Client/Server Communications

Requests transmitted between client and server programs take the form of command-line interactions. The imposition of this constraint on Internet communication protocols means that even the most primitive command-line oriented interface can make use of TCP/IP services. More sophisticated GUI-based client programs often hide their command-line details from their users, employing point-and-click and drag-and-drop functionality to support underlying command-line directives.

After the server acknowledges the success of the connection, the client sends commands on a line-by-line basis. There are single-line and block commands. A block command begins with a line indicating the start of the command (e.g., a line containing only the word ‘DATA’) and terminates with a line indicating its end (e.g., a line containing only a period). The server then responds to each command, usually with a line containing a response code.

A stateful protocol allows a request to contain a sequence of commands. The server is required to maintain the “state” of the connection throughout the transmission of successive commands, until the connection is terminated. The sequence of transmitted and executed commands is often called a session. Most Internet services (including SMTP) are session-based, and make use of stateful protocols. HTTP, however, is a stateless protocol. An HTTP request usually consists of a single block command and a single response. On the surface, there is no need to maintain state between transmitted commands. We will discuss the stateless nature of the HTTP protocol in a later chapter.

As shown in Figure 2.1, the client program identifies itself (and the system on which it is running) to the server via the ‘HELO’ command. The server decides (based on this identification information) whether to accept or reject the request. If the server accepts the request, it waits for the client to send further information. One line at a time, the client transmits commands to the server, sending information about the originator of the message (using the ‘MAIL’ command) and each of the recipients (using a series of ‘RCPT’ commands).

Once all this is done, the client tells the server it is about to send the actual data: the message itself. It does this by sending a command line consisting of only the word ‘DATA’. Every line that follows, until the server encounters a line containing only a period, is considered part of the message body. Once it has sent the body of the message, the client signals the server that it is done, and the server transmits the message to its destination (either directly or through gateways). Having received confirmation that the server has transmitted the message, the client closes the socket connection using the ‘QUIT’ command. An example of an interaction between a client and an SMTP server can be found in Figure 2.1.



220 mail.hoboken.company.com ESMTP xxxx 3.21 #1 Fri, 23 Feb 2001 13:41:09 -0500
HELO ubizmo.com
250 mail.hoboken.company.com Hello neurozen.com [xxx.xxx.xxx.xxx]
MAIL FROM:
250 is syntactically correct
RCPT TO:
250 is syntactically correct
RCPT TO:
250 is syntactically correct
DATA
354 Enter message, ending with "." on a line by itself
From: Rich Rosen
To: [email protected]
Cc: [email protected]
Subject: Demonstrating SMTP

Leon,
Please ignore this note. I am demonstrating the art of connecting
to an SMTP server for the book. :-)
Rich
.
250 OK id=xxxxxxxx
QUIT

Figure 2.1  Example of command line interaction with an SMTP server
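The client half of a session like the one in Figure 2.1 is just a sequence of command lines. The Python sketch below assembles that sequence; the host name and addresses in the test values are made up, and in practice a library such as Python's standard smtplib (shown in the second, never-invoked function) would conduct the dialogue and check each response code for you.

```python
def smtp_session(helo_host, sender, recipients, message_lines):
    """Build the client half of an SMTP session, one command per line,
    mirroring Figure 2.1: HELO, MAIL, one RCPT per recipient, DATA,
    the message body terminated by a lone period, and QUIT."""
    lines = ["HELO " + helo_host, "MAIL FROM:<%s>" % sender]
    lines += ["RCPT TO:<%s>" % r for r in recipients]
    lines.append("DATA")
    lines += list(message_lines)
    lines.append(".")     # a line containing only a period ends the DATA block
    lines.append("QUIT")
    return lines

def send_for_real(host, sender, recipients, text):
    # Not invoked here: requires a reachable SMTP server (the host is a
    # run-time parameter, per the advice below). smtplib performs the same
    # dialogue over port 25 and raises an error on any refused command.
    import smtplib
    with smtplib.SMTP(host, 25) as server:
        server.sendmail(sender, recipients, text)
```

Note that the message body here is opaque to the protocol: SMTP transmits whatever lines it is given until it sees the terminating period.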

Originally, SMTP servers executed in a very open fashion: anyone knowing the address of an SMTP server could connect to it and send messages. In an effort to discourage spamming (the sending of indiscriminate mass e-mails in a semi-anonymous fashion), many SMTP server implementations allow the system administrator to configure the server so that it only accepts connections from a discrete set of systems, perhaps only those within their local domain.

When building web applications that include e-mail functionality (specifically the sending of e-mail), make sure your configuration includes the specification of a working SMTP server system, which will accept your requests to transmit messages. To maximize application flexibility, the address of the SMTP server should be a parameter that can be modified at run-time by an application administrator.

MIME

Originally, e-mail systems transmitted messages in the form of standard ASCII text. If a user wanted to send a file in a non-text or ‘binary’ format (e.g. an image or sound



file), it had to be encoded before it could be placed into the body of the message. The sender had to communicate the nature of the binary data directly to the receiver, e.g., ‘The block of encoded binary text below is a GIF image.’

Multipurpose Internet Mail Extensions (MIME) provided uniform mechanisms for including encoded attachments within a multipart e-mail message. MIME supports the definition of boundaries separating the text portion of a message (the ‘body’) from its attachments, as well as the designation of attachment encoding methods, including ‘Base64’ and ‘quoted-printable’. MIME was originally defined in Internet RFC 1341, but the most recent specifications can be found in Internet RFCs 2045 through 2049.

MIME also supports the notion of content typing for attachments (and for the body of a message as well). MIME-types are standard naming conventions for defining what type of data is contained in an attachment. A MIME-type is constructed as a combination of a top-level data type and a subtype. There is a fixed set of top-level data types, including ‘text’, ‘image’, ‘audio’, ‘video’, and ‘application’. The subtypes describe the specific type of data, e.g. ‘text/html’, ‘text/plain’, ‘image/jpeg’, ‘audio/mp3’. The use of MIME content typing is discussed in greater detail in a later chapter.
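As a rough sketch of these mechanisms, the following Python fragment uses the standard email package to build a multipart message consisting of a text body and one Base64-encoded binary attachment. The subject line, file name, and payload bytes are invented for illustration; the library generates the MIME boundaries and performs the encoding automatically.

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_mime_message(body_text, attachment_bytes, filename):
    """Build a multipart/mixed message: a text/plain body plus one binary
    attachment, which the library Base64-encodes and separates from the
    body with an automatically generated boundary."""
    msg = MIMEMultipart("mixed")
    msg["Subject"] = "Demonstrating MIME"
    msg.attach(MIMEText(body_text, "plain"))
    part = MIMEApplication(attachment_bytes, Name=filename)  # application/octet-stream
    part["Content-Disposition"] = 'attachment; filename="%s"' % filename
    msg.attach(part)
    return msg

msg = build_mime_message("See the attached file.", b"\x00\x01\x02", "data.bin")
print(msg.get_content_type())                           # multipart/mixed
print(msg.get_payload(1)["Content-Transfer-Encoding"])  # base64
```

Flattening the message with `msg.as_string()` produces the boundary-delimited wire format that an MTA would actually transmit.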

POP

POP, the Post Office Protocol, gives users direct access to their e-mail messages stored on remote systems. POP3 is the most recent version of the POP protocol. Most of the popular e-mail clients (including Eudora, Microsoft Outlook, and Netscape Messenger) use POP3 to access user e-mail. (Even proprietary systems like Lotus Notes offer administrators the option to configure remote e-mail access through POP.) POP3 was first defined in Internet RFC 1725, but was revised in Internet RFC 1939.

Before the Internet, as mentioned in the previous section, people read and sent e-mail by logging in to a system and running command-line programs to access their mail. User messages were usually stored locally in a mailbox file on that system. Even with the advent of Internet technology, many people continued to access e-mail by Telnetting to the system containing their mailbox and running command-line programs (e.g. from a UNIX shell) to read and send mail. (Many people who prefer command-line programs still do!)

Let us look at the process by which POP clients communicate with POP servers to provide user access to e-mail. First, the POP client must connect to the POP server (which usually runs on port 110), so it can identify and authenticate the user to the server. This is usually done by sending the user ‘id’ and password one line at a time, using the ‘USER’ and ‘PASS’ commands. (Sophisticated POP servers may make use of the ‘APOP’ command, which allows the secure transmission of the user name and password as a single encrypted entity across the network.)

Once connected and authenticated, the POP protocol offers the client a variety of commands it can execute. Among them is the ‘UIDL’ command, which responds with an ordered list of message numbers, where each entry is followed by a unique



message identifier. POP clients can use this list (and the unique identifiers it contains) to determine which messages in the list qualify as ‘new’ (i.e. not yet seen by the user through this particular client). Having obtained this list, the client can execute the command to retrieve a message (‘RETR n’). It can also execute commands to delete a message from the server (‘DELE n’). It also has the option to execute commands to retrieve just the header of a message (‘TOP n 0’).

Message headers contain metadata about a message, such as the addresses of its originator and recipients, its subject, etc. Each message contains a message header block containing a series of lines, followed by a blank line indicating the end of the message header block:

From: Rich Rosen
To: Leon Shklar
Subject: Here is a message...
Date: Fri, 23 Feb 2001 12:58:21 -0500
Message-ID:

The information that e-mail clients include in message lists (e.g. the ‘From’, ‘To’, and ‘Subject’ of each message) comes from the message headers. As e-mail technology advanced, headers began representing more sophisticated information, including MIME-related data (e.g. content types) and attachment encoding schemes. Figure 2.2 provides an example of a simple command-line interaction between a client and a POP server.

As mentioned previously, GUI-based clients often hide the mundane command-line details from their users. The normal sequence of operation for most GUI-based POP clients today is as follows:

1. Get the user id and password (client may already have this information, or may need to prompt the user).
2. Connect the user and verify identity.
3. Obtain the UIDL list of messages.
4. Compare the identifiers in this list to a list that the client keeps locally, to determine which messages are ‘new’.
5. Retrieve all the new messages and present them to the user in a selection list.
6. Delete the newly retrieved messages from the POP server (optional).

Although this approach is simple, there is a lot of inefficiency embedded in it. All the new messages are always downloaded to the client. This is inefficient because some of these messages may be quite long, or may have extremely large attachments.
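The header fields that populate such a message list can be extracted with any standards-conformant parser. As an illustration, this Python sketch feeds a made-up message (modeled on the header block shown above) to the standard email.parser module:

```python
from email.parser import Parser

# A made-up message in the header-block format shown above: a series of
# header lines, a blank line, then the message body.
raw_message = """\
From: Rich Rosen <rich@example.com>
To: Leon Shklar <leon@example.com>
Subject: Here is a message
Date: Fri, 23 Feb 2001 12:58:21 -0500

The medium is the message.
"""

msg = Parser().parsestr(raw_message)

# An e-mail client builds its message-list entries from fields like these:
print(msg["Subject"])   # Here is a message
print(msg["Date"])      # Fri, 23 Feb 2001 12:58:21 -0500
```

Because the blank line cleanly separates headers from body, a client that has fetched only the headers (via ‘TOP n 0’) can parse them in exactly the same way.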



+OK mail Server POP3 v1.8.22 server ready
user shklar
+OK Name is a valid mailbox
pass xxxxxx
+OK Maildrop locked and ready
uidl
+OK unique-id listing follows
1 2412
2 2413
3 2414
4 2415
.
retr 1
+OK Message follows
From: Rich Rosen
To: Leon Shklar
Subject: Here is a message...
Date: Fri, 23 Feb 2001 12:58:21 -0500
Message-ID:

The medium is the message.
--Marshall McLuhan, while standing behind a placard in a theater
lobby in a Woody Allen movie.
.

Figure 2.2  Example of command line interaction with a POP3 server
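The UIDL bookkeeping behind a session like the one in Figure 2.2 reduces to a small amount of code. In this Python sketch, the listing is parsed and compared against the identifiers a client has already seen locally; the final, never-invoked function shows how Python's standard poplib would obtain such a listing from a real server (the host name and credentials there are placeholders).

```python
def parse_uidl(uidl_lines):
    """Parse a POP3 UIDL listing ('msgnum unique-id' per line, terminated
    by a lone period) into a {message_number: unique_id} mapping."""
    uidls = {}
    for line in uidl_lines:
        line = line.strip()
        if line == ".":                          # end of the listing
            break
        if not line or line.startswith("+OK"):   # skip the status line
            continue
        num, uid = line.split()
        uidls[int(num)] = uid
    return uidls

def new_message_numbers(uidls, seen_ids):
    """Compare server identifiers against those the client has already
    seen locally, yielding the numbers of the 'new' messages."""
    return sorted(n for n, uid in uidls.items() if uid not in seen_ids)

# The listing from Figure 2.2:
listing = ["+OK unique-id listing follows",
           "1 2412", "2 2413", "3 2414", "4 2415", "."]
print(new_message_numbers(parse_uidl(listing), {"2412", "2413"}))  # [3, 4]

def fetch_for_real():
    # Not invoked here: requires a reachable POP3 server; the host and
    # credentials are placeholders. poplib issues USER/PASS/UIDL for you.
    import poplib
    pop = poplib.POP3("mail.example.com")
    pop.user("shklar")
    pop.pass_("xxxxxx")
    print(pop.uidl())
    pop.quit()
```

Having identified the new messages, a client would then issue ‘RETR n’ (or ‘TOP n 0’ for headers only) for each of the numbers returned.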

Users must wait for all of the messages (including the large, possibly unwanted ones) to download before viewing any of the messages they want to read. It would be more efficient for the client to retrieve only the message headers and display the header information about each message in a message list. It could then allow users the option to selectively download desired messages for viewing, or to delete unwanted messages without downloading them. A web-based e-mail client could remove some of this inefficiency. (We discuss the construction of a web-based e-mail client in a later chapter.)

IMAP

Some of these inefficiencies can be alleviated by the Internet Message Access Protocol (IMAP). IMAP was intended as a successor to the POP protocol, offering sophisticated services for managing messages in remote mailboxes. IMAP servers provide support for multiple remote mailboxes or folders, so users can move messages from an incoming folder (the ‘inbox’) into other folders kept on the server. In addition, they also provide support for saving sent messages in one of these remote folders, and for multiple simultaneous operations on mailboxes.



IMAP4, the most recent version of the IMAP protocol, was originally defined in Internet RFC 1730, but the most recent specification can be found in Internet RFC 2060. The IMAP approach differs in many ways from the POP approach. In general, POP clients are supposed to download e-mail messages from the server and then delete them. (This is the default behavior for many POP clients.) In practice, many users elect to leave viewed messages on the server rather than deleting them after viewing. This is because many people who travel extensively want to check e-mail while on the road, but still want to see all of their messages (even the ones they have already seen) when they return to their 'home machine.' While the POP approach 'tolerates' but does not encourage this sort of user behavior, the IMAP approach eagerly embraces it. IMAP was conceived with 'nomadic' users in mind: users who might check e-mail from literally anywhere, and who want access to all of their saved and sent messages wherever they happen to be. IMAP not only allows the user to leave messages on the server, it provides mechanisms for storing messages in user-defined folders for easier accessibility and better organization. Moreover, users can save sent messages in a designated remote folder on the IMAP server. While POP clients support saving of sent messages, they usually save those messages locally, on the client machine.

The typical IMAP e-mail client program works very similarly to typical POP e-mail clients. (In fact, many e-mail client programs allow the user to operate in either POP or IMAP mode.) However, the automatic downloading of the content (including attachments) of all new messages does not occur by default in IMAP clients. Instead, an IMAP client downloads only the header information associated with new messages, requesting the body of an individual message only when the user expresses an interest in seeing it.

POP vs. IMAP

Although it is possible to write a POP client that operates this way, most do not. POP clients tend to operate in 'burst' mode, getting all the messages on the server in one 'shot.' While this may be in some respects inefficient, it is useful for those whose online access is not persistent. By getting all the messages in one burst, users can work 'offline' with the complete set of downloaded messages, connecting to the Internet again only when they want to send responses and check for new mail. IMAP clients assume the existence of a persistent Internet connection, allowing discrete actions to be performed on individual messages while maintaining a connection to the IMAP server. Thus, for applications where Internet connectivity may not be persistent (e.g. a handheld device where Internet connectivity is paid for by the minute), POP might be a better choice than IMAP.
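The header-first retrieval style that IMAP encourages can be sketched in a few lines of Python: a client can parse and present a message's headers without ever processing the (possibly large) body. This is only an illustration using Python's standard email package; the sample raw message below is invented.

```python
from email.parser import BytesHeaderParser

# An invented raw message, as it might sit in a remote IMAP mailbox.
RAW_MESSAGE = (b"From: [email protected]\r\n"
               b"To: [email protected]\r\n"
               b"Subject: Trip itinerary\r\n"
               b"\r\n"
               b"A long body, possibly with large attachments...\r\n")

# Parse only the header block; the body is left untouched, much as an
# IMAP client fetches headers first and the body only on demand.
headers = BytesHeaderParser().parsebytes(RAW_MESSAGE)
print(headers["From"], "-", headers["Subject"])
```

A mail client built this way can render a mailbox listing (sender, subject, date) cheaply, deferring the expensive body transfer until the user opens a message.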

Because the IMAP protocol offers many more options than the POP protocol, the possibilities for what can go on in a user session are much richer. After connection


Before the Web: TCP/IP

and authentication, users can look at new messages, recently seen messages, unanswered messages, flagged messages, and drafts of messages yet to be sent. They can view messages in their entirety or in part (e.g. header, body, attachment), delete or move messages to other folders, or respond to messages or forward them to others. IMAP need not be used strictly for e-mail messages. As security features allow mailbox folders to be designated as ‘read only’, IMAP can be used for ‘message board’ functionality as well. However, such functionality is usually reserved for message forum services.

2.3.3 Message forums

Message forums are online services that allow users to write messages to be posted on the equivalent of an electronic bulletin board, and to read similar messages that others have posted. These messages are usually organized into categories so that people can find the kinds of messages they are looking for.

Online message forums have existed in various forms for years. Perhaps the earliest form was the electronic mailing list. As we mentioned earlier, mailing lists are closed systems: only subscribers can view or post messages. In some situations, a closed private community may be exactly what the doctor ordered. Yet if the goal is open public participation, publicly accessible message forums are more appropriate.

Although message forums were originally localized, meaning that messages appeared only on the system where they were posted, the notion of distributed message forums took hold. Cooperative networks (e.g. FIDONET) allowed systems to share messages by forwarding them to 'neighboring' systems in the network. This enabled users to see all the messages posted by anyone on any system in the network.

The Internet version of message forums is Netnews. Netnews organizes messages into newsgroups, which form a large hierarchy of topics and categories. Among the main divisions are comp (for computing-related newsgroups), sci (for scientific newsgroups), soc (for socially oriented newsgroups), talk (for newsgroups devoted to talk), and alt (an unregulated hierarchy for 'alternative' newsgroups). The naming convention for newsgroups is reminiscent of domain names in reverse, e.g. comp.infosystems.www.

Usenet and UUCP

Netnews existed before the proliferation of the Internet. It grew out of Usenet, an interconnected network of UNIX systems. Before the Internet took hold, UNIX systems communicated with each other over UUCP, a protocol used to transmit mail and news over phone lines. It has been suggested, only half in jest, that the proliferation of UNIX by Bell Laboratories in the 1980s was an effort by AT&T to increase long distance phone traffic, since e-mail and Netnews were being transmitted by long distance calls between these UNIX systems.



Today, Netnews is transmitted using an Internet protocol called NNTP (for Network News Transfer Protocol). NNTP clients allow users to read messages in newsgroups (and post their own messages as well) by connecting to NNTP servers. These servers propagate the newsgroup messages throughout the world by regularly forwarding them to 'neighboring' servers. The NNTP specification is defined in Internet RFC 977.

Netnews functionality is directly incorporated into browsers like Netscape Communicator, which builds it into its Messenger component, the same component that is responsible for accessing electronic mail. It is also possible to create web applications that provide Netnews access through normal web browser interactions. One site, deja.com (now a part of Google), created an entire infrastructure for accessing current as well as archived newsgroup messages, including a powerful search engine for finding desired messages.

2.3.4 Live messaging

America Online's Instant Messaging service may be responsible for making the notion of IM-ing someone part of our collective vocabulary. But long before the existence of AOL, there was a talk protocol that enabled users who were logged in to network-connected UNIX systems to talk to each other. A talk server would run on a UNIX machine, waiting for requests from other talk servers. (Since talk was a bi-directional service, servers had to run on the machines at both ends of a conversation.) A user would invoke the talk client program to communicate with a person on another machine somewhere else on the network, e.g. [email protected]. The talk client program would communicate with the local talk server, which would ask the talk server on the remote machine whether the other person was on line. If so, and if that other person was accepting talk requests, the remote talk server would establish a connection, and the two people would use a screen-mode interface to have an online conversation.

Today, the vast majority of Internet users eschew command-line interfaces, and the notion of being logged in to a particular system (aside from AOL, perhaps) is alien to most people. Thus, a protocol like talk would not work in its original form in today's diverse Internet world. Efforts to create an open, interoperable Instant Messaging protocol have been unsuccessful thus far. Proprietary 'instant messaging' systems (such as AOL's) exist, but they are exclusionary, and the intense competition and lack of cooperation between instant messaging providers further limits the degree of interoperability we can expect from them.

2.3.5 File servers

E-mail and live messaging services represent fleeting, transitory communications over the Internet. Once an instant message or e-mail message has been read, it is



usually discarded. Forum-based messages, even when archived, lack a certain degree of permanence, and for the most part those who post such messages tend to treat them as nothing more than passing, transient dialogues (or, in some cases, monologues). However, providing remote access to more persistent documents and files is a fundamental necessity for enabling the sharing of resources.

For years before the existence of the Internet, files were shared using BBSs (electronic Bulletin Board Systems). People would dial in to a BBS via a modem, and once connected, they would have access to directories of files to download (and sometimes to 'drop' directories into which their own files could be uploaded). Various file transfer protocols (e.g. Kermit, Xmodem, Zmodem) were used to enable this functionality over telephone dialup lines. To facilitate this functionality over the Internet, the File Transfer Protocol (FTP) was created.

FTP

An FTP server operates in a manner similar to an e-mail server. Commands exist to authenticate the connecting user, provide the user with information about available files, and allow the user to retrieve selected files. However, whereas e-mail servers let you access only a preset collection of folders (like the inbox), solely for purposes of downloading message files, FTP servers also allow users to traverse to different directories within the server's local file system, and (if authorized) to upload files into those directories.

The FTP specification has gone through a number of iterations over the years, but the most recent version can be found in Internet RFC 959. It describes the process by which FTP servers make files available to FTP clients. First, a user connects to an FTP server using an FTP client program. FTP interactions usually require two connections between the client and server. One, the control connection, passes commands and status responses between the client and the server. The other, the data connection, is the connection over which actual data transfers occur. User authentication occurs, of course, over the control connection. Once connected and authenticated, the user sends commands to set transfer modes, change directories, list the contents of directories, and transfer files. Whether or not a user can enter specific directories, view directory contents, download files, and/or upload files depends on the security privileges associated with his or her user account on the server. (Note that the root directory of the FTP server need not be the same as the root directory of the server machine's local file system. System administrators can configure FTP servers so that only a discrete directory subtree is accessible through the FTP server.)

FTP servers can allow open access to files without requiring explicit user authentication, using a service called anonymous FTP.
When an FTP server is configured to support anonymous FTP, a user ID called ‘anonymous’ is defined that will accept



any password. ‘Netiquette’ (Internet etiquette) prescribes that users should provide their e-mail address as the password. The system administrator can further restrict the file system subtree that is accessible to ‘anonymous’ users, usually providing read-only access (although it is possible to configure a ‘drop’ folder into which anonymous users can place files). Most of the FTP archives found on the Internet make use of anonymous FTP to provide open access to files. Other file server protocols have come into being over the years, but none has achieved the popularity of FTP. With the advent of next generation distributed filesharing systems such as the one used by Napster, we can expect to see changes in the file server landscape over the next few years.
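The control/data connection split shows up directly in the protocol. In passive mode, for instance, the server answers the client's PASV command on the control connection with a reply of the form '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' (defined in RFC 959), from which the client derives the address for the upcoming data connection. A minimal decoding sketch in Python follows; the sample reply text is invented.

```python
import re

def parse_pasv_reply(reply: str):
    """Decode a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply
    into the (host, port) pair to use for the FTP data connection."""
    match = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply)
    if match is None:
        raise ValueError("not a valid PASV reply: " + reply)
    h1, h2, h3, h4, p1, p2 = (int(n) for n in match.group(1).split(","))
    # The port is encoded as two bytes: high byte * 256 + low byte.
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = parse_pasv_reply("227 Entering Passive Mode (192,168,1,10,78,52)")
print(host, port)  # 192.168.1.10 20020
```

Having decoded the reply, a client would open a second TCP connection to that host and port and carry out the actual transfer there, leaving the control connection free for further commands.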

2.4 AND THEN CAME THE WEB...

While FTP provided interactive functionality for users seeking to transfer files across the Internet, it was not a very user-friendly service. FTP clients, especially the command-line variety, were tedious to use, and provided limited genuine interactivity. Once you traversed to the directory you wanted and downloaded or uploaded your files, your 'user experience' was complete. Even GUI-based FTP clients did not appreciably enhance the interactivity of FTP.

Other services sought to make the online experience more truly interactive. Gopher was a service developed at the University of Minnesota (hence the name: Minnesota is the 'gopher state') that served up menus to users. In Gopher, the items in menus were not necessarily actual file system directories, as they were in FTP. They were logical lists of items grouped according to category, leading the user to other resources. These resources did not have to be on the same system as the Gopher menu. In fact, a Gopher menu could list local resources as well as resources on other systems, including other Gopher menus, FTP archives, and (finally) files. Again, once you reached the level of a file, your traversal was complete. There was 'nowhere to go', except to retrace your steps back along the path you just took.

Gopher caught on as a mainstream Internet service only in a limited capacity. Over time, for a variety of reasons, it faded into the woodwork, in part because a better and more flexible service came along right behind it. That system married the power of the Internet with the capabilities of hypertext to offer a medium for real user interactivity. Of course, as you have already figured out, that system is the one proposed by Tim Berners-Lee in the late 1980s and early 1990s, known as the World Wide Web.

2.5 QUESTIONS AND EXERCISES

1. Find and download the RFCs associated with the POP3, SMTP and FTP protocols.

2. What kind of traffic is sent over ICMP?



3. What is the main difference between TCP and UDP? What kinds of traffic would be suitable for each? What kinds of traffic would be suitable for both? Provide examples.

4. If you get your e-mail from a provider that offers POP3 service, use a 'telnet' client program to connect to your POP3 server. What POP3 commands would you use to connect, authenticate, and check for mail? What command would you use to read a message? To delete a message? To view message headers?

5. Assume you are implementing an e-mail application (MUA) for a handheld device that does not have a persistent connection to the Internet. Which protocol would you use for reading e-mail? For sending e-mail?

6. Which mode of FTP is used at public FTP sites? How does it differ from 'normal' FTP service?



Birth of the World Wide Web: HTTP

The main subject of this chapter is the HyperText Transfer Protocol (HTTP). We begin with a short foray into the history of the World Wide Web, followed by a discussion of its core components, with the focus on HTTP. No matter how Web technology evolves in the future, it will always be important to understand the basic protocols that enable communication between Web programs. This understanding is critical because it provides insights into the inner workings of a wide range of Web applications.

3.1 HISTORICAL PERSPECTIVE

For all practical purposes, it all started at CERN back in 1989. That is when Tim Berners-Lee wrote a proposal for a hypertext-based information management system, and distributed this proposal among the scientists at CERN. Although initially interest in the proposal was limited, it sparked the interest of someone else at CERN, Robert Cailliau, who helped Berners-Lee reformat and redistribute the proposal, referring to the system as a 'World Wide Web'.

By the end of 1990, Berners-Lee had implemented a server and a command-line browser using the initial version of the HyperText Transfer Protocol (HTTP) that he designed for this system. By the middle of 1991, this server and browser were made available throughout CERN. Soon thereafter, the software was made available for anonymous FTP download on the Internet. Interest in HTTP and the Web grew, and many people downloaded the software. A newsgroup, comp.infosystems.www, was created to support discussion of this new technology.

Just one year later, at the beginning of 1993, there were about 50 different sites running HTTP servers. This number grew to 200 by the autumn of that year. In addition, since the specification for the HTTP protocol was openly available, others



were writing their own server and browser software, including GUI-based browsers that supported typographic controls and display of images.

3.2 BUILDING BLOCKS OF THE WEB

Three basic components devised by Tim Berners-Lee comprise the essence of Web technology:

1. A markup language for formatting hypertext documents.

2. A uniform notation scheme for addressing accessible resources over the network.

3. A protocol for transporting messages over the network.

The markup language that allowed cross-referencing of documents via hyperlinks was the HyperText Markup Language (HTML). We shall discuss HTML in a later chapter. The uniform notation scheme is called the Uniform Resource Identifier (URI). For historic reasons, it is most often referred to as the Uniform Resource Locator (URL). We shall cover the fundamentals of the URL specification in Section 3.3.

HTTP is a core foundation of the World Wide Web. It was designed for transporting specialized messages over the network. Although the protocol itself is simple, HTTP interactions can grow complicated in the context of sophisticated Web applications. This will become apparent when we discuss the complex interactions between HTML, XML, and web server technologies (e.g. servlets and Java Server Pages). An understanding of HTTP is just as critical in maintaining complex applications. You will realize this the first time you try to analyze and troubleshoot an elusive problem: understanding the HTTP messages passed between servers, proxies and browsers leads to deeper insights into the nature of the underlying problems. The inner workings of HTTP are covered in Sections 3.4–3.6.

3.3 THE UNIFORM RESOURCE LOCATOR

Tim Berners-Lee knew that one piece of the Web puzzle would be a notation scheme for referencing accessible resources anywhere on the Internet. He devised this notational scheme to be flexible, to be extensible, and to support other protocols besides HTTP. This notational scheme is known as the URL, or Uniform Resource Locator.

I Am He As You Are He As URL as We Are All Together

Participants in the original World Wide Web Consortium (also known as the W3C) had reservations about Berners-Lee's nomenclature. There were concerns about his



use of the word ‘universal’ (URL originally stood for ‘Universal Resource Locator’), and about the way a URL specified a resource’s location (which could be subject to frequent change) rather than a fixed immutable name. The notion of a fixed name for a resource came to be known as the URN or Uniform Resource Name. URNs would be a much nicer mechanism for addressing and accessing web resources than URLs. URLs utilize ‘locator’ information that embeds both a server address and a file location. URNs utilize a simpler human-readable name that does not change even when the resource is moved to another location. The problem is that URNs have failed to materialize as a globally supported web standard, so for all practical purposes we are still stuck with URLs. As a matter of convenience, W3C introduced the notion of the URI (or Uniform Resource Identifier) which was defined as the union of URLs and URNs. URL is still the most commonly used term, though URI is what you should use if you want to be a stickler for formal correctness. Throughout this book, we will favor the more widely accepted term URL for the strictly pragmatic reason of minimizing confusion.

Here is the generalized notation associated with URLs:

scheme://host[:port#]/path/.../[;url-params][?query-string][#anchor]

Let us break a URL down into its component parts:

• scheme—this portion of the URL designates the underlying protocol to be used (e.g. 'http' or 'ftp'). It is the portion of the URL preceding the colon and two forward slashes.

• host—this is either the name or the IP address of the web server being accessed. It is usually the part of the URL immediately following the colon and two forward slashes.

• port#—this is an optional portion of the URL designating the port number that the target web server listens to. (The default port number for HTTP servers is 80, but some configurations are set up to use an alternate port number. When they do, that number must be specified in the URL.) The port number, if it appears, is found right after a colon that immediately follows the server name or address.

• path—logically speaking, this is the file system path from the 'root' directory of the server to the desired document. (In practice, web servers may make use of aliasing to point to documents, gateways, and services that are not explicitly accessible from the server's root directory.) The path immediately follows the server and port number portions of the URL, and by definition includes that first forward slash.


Birth of the World Wide Web: HTTP

• url-params—this once rarely used portion of the URL includes optional 'URL parameters'. It is now used somewhat more frequently, e.g. for session identifiers in web servers supporting the Java Servlet API. If present, it follows a semi-colon immediately after the path information.

• query-string—this optional portion of the URL contains other dynamic parameters associated with the request. Usually, these parameters are produced as the result of user-entered variables in HTML forms. If present, the query string follows a question mark in the URL. Equal signs (=) separate the parameters from their values, and ampersands (&) mark the boundaries between parameter-value pairs.

• anchor—this optional portion of the URL is a reference to a positional marker within the requested document, like a bookmark. If present, it follows a hash mark or pound sign ('#').

The breakout of a sample URL into components is illustrated below:

http://www.mywebsite.com/sj/test;id=8079?name=sviergn&x=true#stuff

SCHEME       = http
HOST         = www.mywebsite.com
PATH         = /sj/test
URL PARAMS   = id=8079
QUERY STRING = name=sviergn&x=true
ANCHOR       = stuff

Note that the URL notation we are describing here applies to most protocols (e.g. http, https, and ftp). However, some other protocols use their own notations (e.g. "mailto:[email protected]").
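The decomposition illustrated above can be reproduced with Python's standard urllib.parse module, which follows the same generic URL syntax:

```python
from urllib.parse import urlparse, parse_qs

url = "http://www.mywebsite.com/sj/test;id=8079?name=sviergn&x=true#stuff"
parts = urlparse(url)

print(parts.scheme)    # http
print(parts.netloc)    # www.mywebsite.com (would include :port# if present)
print(parts.path)      # /sj/test
print(parts.params)    # id=8079
print(parts.query)     # name=sviergn&x=true
print(parts.fragment)  # stuff

# The query string splits on '&' and '=' into parameter-value pairs:
print(parse_qs(parts.query))  # {'name': ['sviergn'], 'x': ['true']}
```

Note that urlparse calls the host-and-port portion 'netloc' and the anchor 'fragment', but the pieces correspond one-for-one to the breakout above.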

3.4 FUNDAMENTALS OF HTTP

HTTP is the foundation protocol of the World Wide Web. It is simple, which is both a limitation and a source of strength. Many people in the industry have criticized HTTP for its lack of state support and limited functionality, but HTTP took the world by storm while more advanced and sophisticated protocols never realized their potential. HTTP is an application-level protocol in the TCP/IP protocol suite, using TCP as the underlying Transport Layer protocol for transmitting messages. The fundamental things worth knowing about the HTTP protocol and the structure of HTTP messages are:



1. The HTTP protocol uses the request/response paradigm, meaning that an HTTP client program sends an HTTP request message to an HTTP server, which returns an HTTP response message.

2. The structure of request and response messages is similar to that of e-mail messages; they consist of a group of lines containing message headers, followed by a blank line, followed by a message body.

3. HTTP is a stateless protocol, meaning that it has no explicit support for the notion of state. An HTTP transaction consists of a single request from a client to a server, followed by a single response from the server back to the client.

In the next few sections, we will elaborate on these fundamental aspects of the HTTP protocol.
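The shared message shape described in point 2 (headers, then a blank line, then a body) can be made concrete with a small Python sketch that assembles a message from those pieces. The function name and sample values here are ours, not part of the protocol:

```python
def build_message(start_line: str, headers: dict, body: str = "") -> str:
    """Assemble an HTTP-style message: start line, header lines,
    a blank separator line, then the (possibly empty) body."""
    lines = [start_line]
    lines.extend(f"{name}: {value}" for name, value in headers.items())
    lines.append("")   # the blank line separating headers from body
    lines.append(body)
    return "\r\n".join(lines)

request = build_message("GET /sj/index.html HTTP/1.1",
                        {"Host": "www.mywebsite.com"})
print(repr(request))
```

The same framing function works for responses as well; only the start line differs (a status line instead of a request line), which is exactly the asymmetry the following sections describe.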

3.4.1 HTTP servers, browsers, and proxies

Web servers and browsers exchange information using HTTP, which is why Web servers are often called HTTP servers. Similarly, Web browsers are sometimes referred to as HTTP clients, but their functionality is not limited to HTTP support. It was Tim Berners-Lee's intent that web browsers should enable access to a wide variety of content, not just content accessible via HTTP. Thus, even the earliest web browsers were designed to support other protocols, including FTP and Gopher. Today, web browsers support not only HTTP, FTP, and local file access, but e-mail and netnews as well.

HTTP proxies are programs that act as both servers and clients, making requests to web servers on behalf of other clients. Proxies enable HTTP transfers across firewalls, provide support for caching of HTTP messages and filtering of HTTP requests, and fill a variety of other interesting roles in complex environments. When we refer to HTTP clients, the statements we make are applicable to browsers, proxies, and other custom HTTP client programs.

3.4.2 Request/response paradigm

First and foremost, HTTP is based on the request/response paradigm: browsers (and possibly proxy servers as well) send messages to HTTP servers. These servers generate messages that are sent back to the browsers. The messages sent to HTTP servers are called requests, and the messages generated by the servers are called responses. In practice, servers and browsers rarely communicate directly—there are one or more proxies in between. A connection is defined as a virtual circuit that is










composed of HTTP agents, including the browser, the server, and intermediate proxies participating in the exchange (Figure 3.1).

Figure 3.1 The request/response virtual circuit: browser, proxy (gateway.myisp.net), proxy (firewall.neurozen.com), web server (www.neurozen.com)

3.4.3 Stateless protocol

As mentioned in the previous chapter, HTTP is a stateless protocol. When a protocol supports 'state', this means that it provides for the interaction between client and server to contain a sequence of commands. The server is required to maintain the 'state' of the connection throughout the transmission of successive commands, until the connection is terminated. The sequence of transmitted and executed commands is often called a session. Many Internet protocols, including FTP, SMTP and POP, are 'stateful' protocols. In contrast, HTTP is said to be 'stateless'.

Defining HTTP as a stateless protocol made things simpler, but it also imposed limitations on the capabilities of Web applications. By definition, the lifetime of a connection was a single request/response exchange. This meant that there was no way to maintain persistent information about a 'session' of successive interactions between a client and server. It also meant that there was no way to 'batch' requests together—something that would be useful, for example, to ask a web server for an HTML page and all the images it references during the course of one connection.

Later in this chapter, we discuss the evolution of cookies as a mechanism for maintaining state in Web applications. We will also discuss advanced strategies used in HTTP/1.1 to support connections that outlive a single request/response exchange. HTTP/1.1 assumes that the connection remains in place until it is broken, or until an HTTP client requests that it be broken. Note, however, that HTTP/1.1 supports persistent connections for the sake of efficiency; it still does not support state. We will come back to the technicalities of establishing and breaking HTTP connections when we discuss HTTP/1.1 in detail.



3.4.4 The structure of HTTP messages

HTTP messages (both requests and responses) have a structure similar to e-mail messages; they consist of a block of lines comprising the message headers, followed by a blank line, followed by a message body. The structure of HTTP messages, however, is more sophisticated than the structure of e-mail messages.

E-Mail Messages vs. HTTP Messages

E-mail messages are intended to pass information directly between people. Thus, both the message headers and the body tend to be 'human-readable'. E-mail messages (at least originally) had message bodies that consisted simply of readable plain text, while their message headers included readable information like the sender address and the message subject. Over time, e-mail message structure became more sophisticated, in part to provide support for MIME functionality. Headers were added to allow decompression, decoding, and reformatting of message content based on its MIME type. In addition, multi-part messages were supported, allowing messages to have multiple sections (often corresponding to a body and a set of attachments).

When HTTP servers and browsers communicate with each other, they perform sophisticated interactions based on header and body content. Unlike e-mail messages, HTTP messages are not intended to be directly 'human-readable'. Another fundamental difference is that HTTP request and response messages begin with special lines that do not follow the standard header format. For requests, this line is called the request line, and for responses, it is called the status line.

Let us start with a very simple example: loading a static web page residing on a web server. A user may manually type a URL into her browser, she may click on a hyperlink found within the page she is viewing with the browser, or she may select a bookmarked page to visit. In each of these cases, the desire to visit a particular URL is translated by the browser into an HTTP request. An HTTP request message has the following structure:

METHOD /path-to-resource HTTP/version-number
Header-Name-1: value
Header-Name-2: value

[ optional request body ]

Every request starts with the special request line, which contains a number of fields. The ‘method’ represents one of several supported request methods, chief among them ‘GET’ and ‘POST’. The ‘/path-to-resource’ represents the path portion of the requested URL. The ‘version-number’ specifies the version of HTTP used by the client.
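Splitting the request line into these three space-separated fields is straightforward; a minimal sketch (the helper function name is ours):

```python
def parse_request_line(line: str):
    """Split an HTTP request line into (method, path-to-resource, version)."""
    method, path, version = line.rstrip("\r\n").split(" ")
    return method, path, version

print(parse_request_line("GET /sj/index.html HTTP/1.1"))
# ('GET', '/sj/index.html', 'HTTP/1.1')
```

A real server would go on to dispatch on the method and resolve the path against its document tree; this sketch only shows the lexical structure of the line.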



After the first line we see a list of HTTP headers, followed by a blank line, often denoted <CRLF> (for 'carriage return and line feed'). The blank line separates the request headers from the body of the request. The blank line is followed (optionally) by a body, which is in turn followed by another blank line indicating the end of the request message.

For our purposes, let http://www.mywebsite.com/sj/index.html be the requested URL. Here is a simplified version of the HTTP request message that would be transmitted to the web server at www.mywebsite.com:

GET /sj/index.html HTTP/1.1
Host: www.mywebsite.com

Note that the request message ends with a blank line. In the case of a GET request, there is no body, so the request simply ends with this blank line. Also, note the presence of a Host header. (We discuss headers in request and response messages in greater detail later in this chapter.) The server, upon receiving this request, attempts to generate a response message. An HTTP response message has the following structure:

HTTP/version-number status-code reason-phrase
Header-Name-1: value
Header-Name-2: value

[ response body ]

The first line of an HTTP response message is the status line. This line contains the version of HTTP being used, followed by a three-digit status code, followed by a brief human-readable explanation of the status code. This is a simplified version of the HTTP response message that the server would send back to the browser, assuming that the requested file exists and is accessible to the requestor:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 9934

<HTML>
<HEAD><TITLE>SJ's Web Page</TITLE></HEAD>
<BODY>
<H1>Welcome to Sviergn Jiernsen's Home Page</H1>
...
</BODY>
</HTML>




Note that the response message begins with a status line, containing the name and version of the protocol in use, a numeric response status code, and a human-readable message. In this case, the request produced a successful response, thus we see a success code (200) and a success message (OK). Note the presence of header lines within the response, followed by a blank line, followed by a block of text. (We shall see later how a browser figures out that this text is to be rendered as HTML.) The process of transmitting requests and responses between servers and browsers is rarely this simplistic. Complex negotiations occur between browsers and servers to determine what information should be sent. For instance, HTML pages may contain references to other accessible resources, such as graphical images and Java applets. Clients that support the rendering of images and applets, which is most web browsers, must parse the retrieved HTML page to determine what additional resources are needed, and then send HTTP requests to retrieve those additional resources (Figure 3.2). Server-browser interactions can become much more complex for advanced applications.
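The secondary-request step can be sketched with Python's standard html.parser module: the client scans the retrieved page for referenced resources (here, just img tags) and would then issue one additional GET request per resource found. The sample page below is invented.

```python
from html.parser import HTMLParser

class ResourceCollector(HTMLParser):
    """Collect the additional resources (here, just <img> sources)
    that a browser would have to request after parsing a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.resources.append(value)

page = '<html><body><h1>Hi</h1><img src="images/photo.gif"></body></html>'
collector = ResourceCollector()
collector.feed(page)
print(collector.resources)  # ['images/photo.gif']
```

A fuller implementation would also collect applets, stylesheets, and so on, and resolve each relative reference against the page's URL before requesting it.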

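To make the request structure above concrete, here is a minimal sketch (in Python, not from the book) of how a client might assemble a GET request message: a request line, a header block, and the blank line that terminates the headers. The host and path are the example values used earlier in this section.

```python
# Minimal sketch of assembling an HTTP/1.1 GET request message.
# A GET request has no body, so the message ends with the blank line
# that terminates the header block.
def build_get_request(host, path):
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",  # HTTP/1.1 requires Host in every request
    ]
    # Joining with CRLF and appending a double CRLF yields the header
    # block followed by the mandatory blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_get_request("www.mywebsite.com", "/sj/index.html")
print(request)
```

Sending this exact string over a TCP connection to port 80 of the server is all a browser needs to do to issue the request.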
3.4.5 Request methods

A variety of request methods are specified in the HTTP protocol. The most basic ones defined in HTTP/1.1 are GET, HEAD, and POST. In addition, there are the less commonly used PUT, DELETE, TRACE, OPTIONS and CONNECT.

Method to Their Madness

HTTP/1.1 servers are not obligated to implement all these methods. At a minimum, any general-purpose server must support the methods GET and HEAD. All other methods are optional, though you'd be hard-pressed to find a server in common usage today that does not support POST requests. Most newer servers also support the PUT and DELETE methods. Servers may also define their own methods and assign their own constraints and processing behavior to these methods, though this approach generally makes sense only for custom implementations.

Request methods impose constraints on message structure. Specifications that define how servers should process requests, such as the Common Gateway Interface (CGI) and the Java Servlet API, include discussion of how different request methods should be treated.

Birth of the World Wide Web: HTTP

Step 1: Initial user request for "http://www.cs.rutgers.edu/~shklar/"

    GET /~shklar/ HTTP/1.1
    Host: www.cs.rutgers.edu

Response:

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...

Step 2: Secondary browser request for "http://www.cs.rutgers.edu/~shklar/images/photo.gif"

    GET /~shklar/images/photo.gif HTTP/1.1
    Host: www.cs.rutgers.edu

Response:

    HTTP/1.1 200 OK
    Content-Type: image/gif

Figure 3.2  Sequence of browser requests for loading a sample page

GET

The simplest of the request methods is GET. When you enter a URL in your browser, or click on a hyperlink to visit another page, the browser uses the GET method to make the request to the web server. GET requests date back to the very first versions of HTTP. A GET request does not have a body and, until version 1.1, was not required to have headers. (HTTP/1.1 requires that the Host header be present in every request in order to support virtual hosting, which we discuss later in this chapter.)



In the previous section, we offered an example of a very simple GET request. In that example, we visited a URL, http://www.mywebsite.com/sj/index.html, using the GET method. Let’s take a look at the request that gets submitted by an HTTP/1.1 browser when you fill out a simple HTML form to request a stock quote:

    <HTML>
    <HEAD><TITLE>Simple Form</TITLE></HEAD>
    <BODY>
    <H3>Simple Form</H3>
    <FORM ACTION="http://finance.yahoo.com/q" METHOD="GET">
    Ticker: <INPUT TYPE="text" NAME="s">
    <INPUT TYPE="submit" VALUE="Submit">
    </FORM>
    </BODY>
    </HTML>

If we enter 'YHOO' in the form above, the browser constructs a URL composed of the 'ACTION' field from the form, followed by a query string containing all of the form's input parameters and the values provided for them. The boundary separating the URL from the query string is a question mark. Thus, the URL constructed by the browser is http://finance.yahoo.com/q?s=YHOO and the submitted request looks as follows:

    GET /q?s=YHOO HTTP/1.1
    Host: finance.yahoo.com
    User-Agent: Mozilla/4.75 [en] (WinNT; U)

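The URL construction described above can be sketched in a few lines of Python (a hypothetical illustration, not the book's code): the form's ACTION URL, a question mark, then the URL-encoded parameters.

```python
# Sketch of how a browser turns form input into a GET URL:
# ACTION URL + "?" + URL-encoded name/value pairs.
from urllib.parse import urlencode

def build_form_url(action, params):
    return action + "?" + urlencode(params)

url = build_form_url("http://finance.yahoo.com/q", {"s": "YHOO"})
print(url)  # http://finance.yahoo.com/q?s=YHOO
```

With multiple form fields, urlencode joins the encoded pairs with '&', which is exactly how browsers serialize larger forms.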
The response that comes back from the server looks something like this:

    HTTP/1.0 200 OK
    Date: Sat, 03 Feb 2001 22:48:35 GMT
    Connection: close
    Content-Type: text/html
    Set-Cookie: B=9ql5kgct7p2m3&b=2; expires=Thu, 15 Apr 2010 20:00:00 GMT; path=/; domain=.yahoo.com

    ...
    <TITLE>Yahoo! Finance - YHOO</TITLE>
    ...




POST

A fundamental difference between GET and POST requests is that POST requests have a body: content that follows the block of headers, with a blank line separating the headers from the body. Going back to the sample form in Section 3.2, let's change the request method to POST and notice that the browser now puts the form parameters into the body of the message, rather than appending them to the URL as part of a query string:

    POST /q HTTP/1.1
    Host: finance.yahoo.com
    User-Agent: Mozilla/4.75 [en] (WinNT; U)
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 6

    s=YHOO

Note that the URL constructed by the browser does not contain the form parameters in the query string. Instead, these parameters are included after the headers as part of the message body. The response that comes back from the server looks something like this:

    HTTP/1.0 200 OK
    Date: Sat, 03 Feb 2001 22:48:35 GMT
    Connection: close
    Content-Type: text/html
    Set-Cookie: B=9ql5kgct7p2m3&b=2; expires=Thu, 15 Apr 2010 20:00:00 GMT; path=/; domain=.yahoo.com

    ...
    <TITLE>Yahoo! Finance - YHOO</TITLE>
    ...



Note that the response that arrives from finance.yahoo.com happens to be exactly the same as in the previous example using the GET method, but only because designers of the server application decided to support both request methods in the same way.

GET vs. POST Many Web applications are intended to be ‘sensitive’ to the request method employed when accessing a URL. Some applications may accept one request method but not another. Others may perform different functions depending on which request method is used. For example, some servlet designers write Java servlets that use the GET method to display an input form. The ACTION field of the form is the same servlet (using the same URL), but using the POST method. Thus, the application is constructed so that it knows to display a form when it receives a request using the GET method, and to process the form (and to present the results of processing the form) when it receives a request using the POST method.

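The contrast between the two methods can be sketched as follows (a hypothetical Python illustration, not the book's code): the same form data either rides in the URL's query string (GET) or forms the message body, described by Content-Type and Content-Length entity headers (POST). The host, path and parameter names follow the chapter's stock-quote example.

```python
# Sketch: the same form data serialized as a GET request (query string)
# versus a POST request (urlencoded body plus entity headers).
from urllib.parse import urlencode

def encode_form(method, host, path, params):
    query = urlencode(params)  # e.g. "s=YHOO"
    if method == "GET":
        # Parameters ride in the URL; the request has no body.
        return f"GET {path}?{query} HTTP/1.1\r\nHost: {host}\r\n\r\n"
    # POST: parameters become the body, preceded by a blank line.
    return (f"POST {path} HTTP/1.1\r\nHost: {host}\r\n"
            "Content-Type: application/x-www-form-urlencoded\r\n"
            f"Content-Length: {len(query)}\r\n\r\n{query}")

print(encode_form("POST", "finance.yahoo.com", "/q", {"s": "YHOO"}))
```

Note how Content-Length is simply the byte count of the encoded parameter string, which is what lets the server know where the body ends.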
HEAD

Requests that use the HEAD method operate similarly to requests that use the GET method, except that the server sends back only headers in the response. This means the body of the response is not transmitted, and only the response metadata found in the headers is available to the client. This metadata, however, may be sufficient to enable the client to make decisions about further processing, and may reduce the overhead associated with requests that return the actual content in the message body.
If we were to go back to the sample form and change the request method to HEAD, we would notice that the request does not change (except for replacing the word 'GET' with the word 'HEAD', of course), and the response contains the same headers as before but no body.
Historically, HEAD requests were often used to implement caching support. A browser can use a cached copy of a resource (rather than going back to the original source to re-request the content) if the cache entry was created after the date that the content was last modified. If the creation date for the cache entry is earlier than the content's last modification date, then a 'fresh' copy of the content should be retrieved from the source. Suppose we want to look at a page that we visit regularly in our browser (e.g. Leon's home page). If we have visited this page recently, the browser will have a copy of the page stored in its cache. The browser can determine whether it needs to re-retrieve the page by first submitting a HEAD request:



    HEAD http://www.cs.rutgers.edu/~shklar/ HTTP/1.1
    Host: www.cs.rutgers.edu
    User-Agent: Mozilla/4.75 [en] (WinNT; U)

The response comes back with a set of headers, including content modification information:

    HTTP/1.1 200 OK
    Date: Mon, 05 Feb 2001 03:26:18 GMT
    Server: Apache/1.2.5
    Last-Modified: Mon, 05 Feb 2001 03:25:36 GMT
    Content-Length: 2255
    Content-Type: text/html

The browser (or some other HTTP client) can compare the content modification date with the creation date of the cache entry, and resubmit the same request with the GET method if the cache entry is obsolete. Making a HEAD request saves bandwidth when the content has not changed: since responses to HEAD requests do not include the content as part of the message body, the overhead is smaller than that of an explicit GET request for the content. Today, there are more efficient ways to support caching, and we will discuss them later in this chapter. The HEAD method is still very useful for implementing change-tracking systems, for testing and debugging new applications, and for learning server capabilities.

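The freshness test described above can be sketched in Python (a hypothetical illustration, not the book's code): parse the RFC 1123 date from the Last-Modified header and compare it with the time the cache entry was created.

```python
# Sketch of the HEAD-based cache-freshness check: the cached copy is
# usable only if the cache entry was created after the server's
# Last-Modified date.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def cache_is_fresh(cache_created, last_modified_header):
    last_modified = parsedate_to_datetime(last_modified_header)
    return cache_created > last_modified

# Cache entry created Feb 6; content last modified Feb 5 -> still fresh.
created = datetime(2001, 2, 6, tzinfo=timezone.utc)
print(cache_is_fresh(created, "Mon, 05 Feb 2001 03:25:36 GMT"))  # True
```

If the function returns False, the client would resubmit the request with the GET method to retrieve a fresh copy.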
3.4.6 Status codes

The first line of a response is the status line, consisting of the protocol and its version number, followed by a three-digit status code and a brief explanation of that status code. The status code tells an HTTP client (browser or proxy) either that the response was generated as expected, or that the client needs to perform a specific action (that may be further parameterized via information in the headers). The explanation portion of the line is for human consumption; changing or omitting it will not cause a properly designed HTTP client to change its actions.
Status codes are grouped into categories. HTTP version 1.1 defines five categories of response messages:

• 1xx—Status codes that start with '1' are classified as informational.
• 2xx—Status codes that start with '2' indicate successful responses.



• 3xx—Status codes that start with '3' are for purposes of redirection.
• 4xx—Status codes that start with '4' represent client request errors.
• 5xx—Status codes that start with '5' represent server errors.

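Since the category is determined entirely by the leading digit, classifying a status code is a one-line operation, sketched here in Python (a hypothetical illustration, not from the book):

```python
# Sketch: classify an HTTP status code by its leading digit,
# per the five HTTP/1.1 categories listed above.
CATEGORIES = {
    1: "informational",
    2: "success",
    3: "redirection",
    4: "client error",
    5: "server error",
}

def classify(status_code):
    # Integer division by 100 isolates the leading digit.
    return CATEGORIES.get(status_code // 100, "unknown")

print(classify(200), classify(404), classify(301))
# success client error redirection
```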
Informational status codes (1xx)

These status codes serve solely informational purposes. They do not denote success or failure of a request, but rather impart information about how the request can be processed further. For example, a status code of '100' tells the client that it may continue with a partially submitted request. Clients announce a partially submitted request by including an 'Expect' header in the request message. A server can examine requests containing an 'Expect' header, determine whether or not it is capable of satisfying the request, and send an appropriate response. If the server is capable of satisfying the request, the response will contain a status code of '100':

    HTTP/1.1 100 Continue
    ...

If it cannot satisfy the request, it will send a response with a status code indicating a client request error, i.e. ‘417’:

    HTTP/1.1 417 Expectation Failed
    ...

Successful response status codes (2xx)

The most common successful response status code is '200', which indicates that the request was successfully completed and that the requested resource is being sent back to the client:

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 9934

    ...
    <TITLE>SJ's Web Page</TITLE>
    ...
    Welcome to Sviergn Jiernsen's Home Page
    ...

Another example is ‘201’, which indicates that the request was satisfied and that a new resource was created on the server.

Redirection status codes (3xx)

Status codes of the form '3xx' indicate that additional actions are required to satisfy the original request. Normally this involves a redirection: the client is instructed to 'redirect' the request to another URL. For example, '301' and '302' both instruct the client to look for the originally requested resource at the new location specified in the 'Location' header of the response. The difference between the two is that '301' tells the client that the resource has 'Moved Permanently', and that it should always look for that resource at the new location. '302' tells the client that the resource has 'Moved Temporarily', and to consider this relocation a one-time deal, just for purposes of this request. In either case, the client should, immediately upon receiving a 301 or 302 response, construct and transmit a new request 'redirected' at the new location.
Redirections happen all the time, often unbeknownst to the user. Browsers are designed to respond silently to redirection status codes, so that users never see redirection 'happen'. A perfect example of such silent redirection occurs when a user enters a URL specifying a directory but leaves off the terminating slash. To visit Leon's web site at Rutgers University, you could enter http://www.cs.rutgers.edu/~shklar in your browser. This would result in the following HTTP request:

    GET /~shklar HTTP/1.1
    Host: www.cs.rutgers.edu

But "~shklar" is actually a directory on the Rutgers web server, not a deliverable file. Web servers are designed to treat a URL ending in a slash as a request for a directory. Such requests may, depending on server configuration, return either a file with a default name (if present), e.g. index.html, or a listing of the directory's contents. In either case, the web server must first redirect the request, from http://www.cs.rutgers.edu/~shklar to http://www.cs.rutgers.edu/~shklar/, to properly present it:



    HTTP/1.1 301 Moved Permanently
    Location: http://www.cs.rutgers.edu/~shklar/
    Content-Type: text/html

    ...
    <TITLE>301 Moved Permanently</TITLE>
    ...
    <H1>301 Moved Permanently</H1>
    The document has moved <A HREF="http://www.cs.rutgers.edu/~shklar/">here</A>.
    ...

Today’s sophisticated browsers are designed to react to ‘301’ by updating an internal relocation table, so that in the future they can substitute the new address prior to submitting the request, and thus avoid the relocation response. To support older browsers that do not support automatic relocation, web servers still include a message body that explicitly includes a link to the new location. This affords the user an opportunity to manually jump to the new location.

Remember the Slash!

This example offers a valuable lesson: if you are trying to retrieve a directory listing (or the default page associated with a directory), don't forget the trailing '/'. When manually visiting a URL representing a directory in your browser, you may not even notice the redirection and extra connection resulting from omitting the trailing slash. However, when your applications generate HTML pages containing links to directories, forgetting to add that trailing slash within these links will effectively double the number of requests sent to your server.

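The client's side of a redirection can be sketched in Python (a hypothetical illustration, not from the book): on a 301 or 302 response, extract the Location header, remembering that HTTP header names are matched case-insensitively, and issue a fresh request at that URL.

```python
# Sketch of how a client reacts to a redirection response: a 301 or 302
# status means "retry at the URL named in the Location header".
def follow_redirect(status_code, headers):
    if status_code in (301, 302):
        # Header names are case-insensitive in HTTP.
        lowered = {name.lower(): value for name, value in headers.items()}
        return lowered.get("location")
    return None  # no redirection required

new_url = follow_redirect(
    301, {"Location": "http://www.cs.rutgers.edu/~shklar/"})
print(new_url)  # http://www.cs.rutgers.edu/~shklar/
```

A real browser would additionally record the mapping for 301 responses in its internal relocation table, as described above.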
Client request error status codes (4xx)

Status codes that start with '4' indicate a problem with the client request (e.g. '400 Bad Request'), an authorization challenge (e.g. '401 Unauthorized'), or the server's inability to find the requested resource (e.g. '404 Not Found'). Although '400', '401', and '404' are the most common in this category, some less common status codes are quite interesting. We have already seen (in the section on informational status codes) an example of the use of '417 Expectation Failed'. In another example, the client might use the 'If-Unmodified-Since' header to request a resource only if it has not changed since a specific date:



    GET /~shklar/ HTTP/1.1
    Host: www.cs.rutgers.edu
    If-Unmodified-Since: Fri, 11 Feb 2000 22:28:00 GMT

Since this resource did change, the server sends back the '412 Precondition Failed' response:

    HTTP/1.1 412 Precondition Failed
    Date: Sun, 11 Feb 2001 22:28:31 GMT
    Server: Apache/1.2.5

Server error status codes (5xx)

Finally, status codes that start with '5' indicate a server problem that prevents it from satisfying an otherwise valid request (e.g. '500 Internal Server Error' or '501 Not Implemented').
Status codes represent a powerful means of controlling browser behavior. There are a large number of status codes representing different response conditions, and they are well documented in Internet RFC 2616. Familiarity with status codes is obviously critical when implementing an HTTP server, but it is just as critical when building advanced Web applications. Later in this chapter, we will offer additional examples that illustrate how creative use of status codes (and HTTP headers) can greatly simplify application development.

3.5 BETTER INFORMATION THROUGH HEADERS

As we already know, HTTP headers are a form of message metadata. Enlightened use of headers makes it possible to construct sophisticated applications that establish and maintain sessions, set caching policies, control authentication, and implement business logic. The HTTP protocol specification makes a clear distinction between general headers, request headers, response headers, and entity headers.
General headers apply to both request and response messages, but do not describe the body of the message. Examples of general headers include:

• Date: Sun, 11 Feb 2001 22:28:31 GMT
  This header specifies the time and date that this message was created.

• Connection: Close
  This header indicates whether or not the client or server that generated the message intends to keep the connection open.



• Warning: Danger, Will Robinson!
  This header stores text for human consumption, something that would be useful when tracing a problem.

Request headers allow clients to pass additional information about themselves and about the request. For example:

• User-Agent: Mozilla/4.75 [en] (WinNT; U)
  Identifies the software (e.g. a web browser) responsible for making the request.

• Host: www.neurozen.com
  This header was introduced to support virtual hosting, a feature that allows a web server to service more than one domain.

• Referer: http://www.cs.rutgers.edu/~shklar/index.html
  This header provides the server with context information about the request. If the request came about because a user clicked on a link found on a web page, this header contains the URL of that referring page.

• Authorization: Basic [encoded-credentials]
  This header is transmitted with requests for resources that are restricted to authorized users. Browsers include this header after being notified of an authorization challenge via a response with a '401' status code. They then prompt users for their credentials (i.e. userid and password), and continue to supply those credentials via this header in all further requests during the current browser session that access resources within the same authorization realm. (See the description of the WWW-Authenticate header below, and the section on 'Authorization' that follows.)

Response headers help the server to pass additional information about the response that cannot be inferred from the status code alone. Here are some examples:

• Location: http://www.mywebsite.com/relocatedPage.html
  This header specifies a URL towards which the client should redirect its original request. It always accompanies the '301' and '302' status codes that direct clients to try a new location.

• WWW-Authenticate: Basic realm="KremlinFiles"
  This header accompanies the '401' status code that indicates an authorization challenge. The value in this header specifies the protected realm for which proper authorization credentials must be provided before the request can be processed. In the case of web browsers, the combination of the '401' status code and the WWW-Authenticate header causes users to be prompted for ids and passwords.



• Server: Apache/1.2.5
  This header is not tied to a particular status code. It is an optional header that identifies the server software.

Entity headers describe either message bodies or (in the case of request messages that have no body) target resources. Common entity headers include:

• Content-Type: mime-type/mime-subtype
  This header specifies the MIME type of the message body's content.

• Content-Length: xxx
  This optional header provides the length of the message body. Although it is optional, it is useful for clients such as web browsers that wish to impart information about the progress of a request. Where this header is omitted, the browser can only display the amount of data downloaded; when it is included, the browser can display the amount of data as a percentage of the total size of the message body.

• Last-Modified: Sun, 11 Feb 2001 22:28:31 GMT
  This header provides the last modification date of the content that is transmitted in the body of the message. It is critical for the proper functioning of caching mechanisms.

3.5.1 Type support through content-type

So far, we have been concentrating on message metadata, and for good reason: understanding metadata is critical to the process of building applications. Still, somewhere along the line, there had better be some content. After all, without content, Web applications would have nothing to present for end users to see and interact with.
You've probably noticed that, when it comes to content you view on the Web, your browser might do one of several things. It might:

• render the content as an HTML page,
• launch a helper application capable of presenting non-HTML content,
• present such content inline (within the browser window) through a plug-in, or
• get confused into showing the content of an HTML file as plain text without attempting to render it.

What's going on here? Obviously, browsers do something to determine the content type and to perform actions appropriate for that type. HTTP borrows its content typing system from Multipurpose Internet Mail Extensions (MIME). MIME is the standard that was designed to help e-mail clients to display non-textual content.



Extending MIME

HTTP has extended MIME and made use of it in ways that were never intended by its original designers. Still, the use of MIME means there is much commonality between web browsers and e-mail clients (which is why it was so natural for browsers to become tightly integrated with e-mail clients).

As in MIME, the data type associated with the body of an HTTP message is defined via a two-layer ordered encoding model, using the Content-Type and Content-Encoding headers. In other words, for the body to be interpreted according to the type specified in the Content-Type header, it has to first be decoded according to the encoding method specified in the Content-Encoding header.
In HTTP/1.1, the defined content encoding methods for the Content-Encoding header are "gzip", "compress" and "deflate". The first two methods correspond to the formats produced by the GNU zip and UNIX compress programs. The third method, "deflate", corresponds to the zlib format associated with the deflate compression mechanism documented in RFCs 1950 and 1951. Note that "x-gzip" and "x-compress" are equivalent to "gzip" and "compress" and should be supported for backward compatibility.
Obviously, if web servers encode content using these encoding methods, web browsers (and other clients) must be able to perform the reverse operations on encoded message bodies prior to rendering or processing the content. A browser is intelligent enough to open a compressed document file (e.g. test.doc.gz) and automatically invoke Microsoft Word to let you view the original test.doc file. It can do this if the web server includes the "Content-Encoding: gzip" header with the response. This header causes the browser to decode the encoded content prior to presentation, revealing the test.doc document inside.
The Content-Type header is set to a media-type that is defined as a combination of a type, a subtype and any number of optional attribute/value pairs:

    media-type = type "/" subtype *( ";" parameter-string )
    type       = token
    subtype    = token

The most common example is "Content-Type: text/html", where the type is set to "text" and the subtype is set to "html". This tells a browser to render the message body as an HTML page. Another example is:

    Content-Type: text/plain; charset='us-ascii'

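Following the media-type grammar above, a client's parsing of a Content-Type value can be sketched in Python (a hypothetical illustration, not the book's code): split off the parameter string at the ';', then split the type from the subtype at the '/'.

```python
# Sketch of parsing a Content-Type value into its type, subtype and
# optional parameters, per the media-type grammar.
def parse_media_type(value):
    parts = [p.strip() for p in value.split(";")]
    type_, _, subtype = parts[0].partition("/")
    params = {}
    for param in parts[1:]:
        name, _, val = param.partition("=")
        # Strip surrounding whitespace and quote characters.
        params[name.strip()] = val.strip().strip("'\"")
    return type_, subtype, params

print(parse_media_type("text/plain; charset = 'us-ascii'"))
# ('text', 'plain', {'charset': 'us-ascii'})
```

A browser would use the (type, subtype) pair to pick a rendering module and pass any parameters along to it.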


Here the subtype is "plain", plus there is a parameter string that is passed to whatever client program ends up processing the body whose content type is "text/plain". The parameter may have some impact on how the client program processes the content; if the parameter is not known to the program, it is simply ignored. Some other examples of MIME types are "text/xml" and "application/xml" for XML content, "application/pdf" for the Adobe Portable Document Format, and "video/x-mpeg" for MPEG-2 videos.
Since MIME was introduced to support multimedia transfers over e-mail, it is not surprising that it provides for the inclusion of multiple independent entities within a single message body. In e-mail messages, these multipart messages usually take the form of a textual message body plus attachments. This multipart structure is very useful for HTTP transfers going in both directions (client-to-server and server-to-client).
In the client-to-server direction, form data submitted via a browser can be accompanied by file content that is transmitted to the server. We will discuss multipart messages used for form submission when we talk about HTML in a later chapter. In the server-to-client direction, a web server can implement primitive image animation by feeding browsers a multipart sequence of images. Netscape's web site used to include a demo of this primitive image animation technique that generated a stream of pictures of Mozilla (the Godzilla-like dragon that was the mascot of the original Netscape project):

    GET /cgi-bin/doit.cgi HTTP/1.1
    Host: cgi-bin.netscape.com
    Date: Sun, 18 Feb 2001 06:22:19 GMT

The response is a "multipart/x-mixed-replace" message, as indicated by the Content-Type header. This content type instructs the browser to render enclosed image bodies one at a time, but within the same screen real estate. The individual images are encoded and separated by the boundary string specified in the header:

    HTTP/1.1 200 OK
    Server: Netscape-Enterprise-3.6 SP1
    Date: Sun, 18 Feb 2001 06:22:31 GMT
    Content-Type: multipart/x-mixed-replace; boundary=ThisRandomString
    Connection: close

    --ThisRandomString
    Content-Type: image/gif



    ...
    --ThisRandomString
    Content-Type: image/gif

    ...
    --ThisRandomString
    Content-Type: image/gif

    ...
    --ThisRandomString

Message typing is necessary to help both servers and browsers determine proper actions in processing requests and responses. Browsers use types and subtypes either to select a proper rendering module or to invoke a third-party tool (e.g. Microsoft Word). Multipart rendering modules control recursive invocation of the proper rendering modules for the body parts. In the example above, the browser's page rendering module for the multipart message of type 'multipart/x-mixed-replace' invokes the browser's image rendering module once per image, always passing it the same screen location. Server-side applications use type information to process requests. For example, a server-side application responsible for receiving files from browsers and storing them locally needs type information to separate file content from accompanying form data that defines the file name and target location.

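The boundary mechanism illustrated above can be sketched in Python (a hypothetical illustration, not the book's code): a multipart body is carved into parts by splitting on the boundary string prefixed with "--", discarding anything before the first delimiter and the "--" marker that terminates the final delimiter.

```python
# Sketch of splitting a multipart body into its parts using the
# boundary string declared in the Content-Type header.
def split_multipart(body, boundary):
    delimiter = "--" + boundary
    pieces = body.split(delimiter)
    # Drop the preamble (before the first delimiter) and the trailing
    # "--" that closes the final delimiter; strip surrounding whitespace.
    return [p.strip() for p in pieces[1:] if p.strip() and p.strip() != "--"]

body = ("--ThisRandomString\nContent-Type: image/gif\n\nGIF1\n"
        "--ThisRandomString\nContent-Type: image/gif\n\nGIF2\n"
        "--ThisRandomString--")
parts = split_multipart(body, "ThisRandomString")
print(len(parts))  # 2 image parts, each with its own Content-Type line
```

A browser's multipart rendering module would then hand each part to the rendering module matching that part's own Content-Type.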
3.5.2 Caching control through Pragma and Cache-Control headers

Caching is a set of mechanisms allowing responses to HTTP requests to be held in some form of temporary storage medium, as a means of improving server performance. Instead of satisfying future requests by going back to the original data source, the held copy of the data can be used. This eliminates the overhead of re-executing the original request and greatly improves server throughput.
There are three main types of caching that are employed in a Web application environment: server-side caching, browser-side caching, and proxy-side caching. In this section, we shall deal with browser-side and proxy-side caching, leaving server-side caching for a later chapter.

Take a Walk on the Proxy Side

In the real world, HTTP messages are rarely passed directly between servers and browsers. Most commonly, they pass through intermediate proxies. These proxies



perform a variety of functions in the Web application environment, including the relaying of HTTP messages through firewalls and supporting the use of server farms (conglomerations of server machines that look to the outside world like they have the same IP address or host name). Admittedly, proxies sit in the middle, between servers and browsers, so it may seem silly to talk about ‘proxy-side’ caching. Even though the wording may seem strange, do not dismiss the notion of proxy-side caching as some sort of anomaly.

When is the use of a cached response appropriate? This is a decision usually made by the server, or by Web applications running on the server. Many requests may arrive at a given URL, but the server may deliver different content for each request, as the underlying source of the content is constantly changing. If the server 'knows' that the content of a response is relatively static and not likely to change, it can instruct browsers, proxies, and other clients to cache that particular response. If the content is so static that it is never expected to change, the server can tell its clients that the response can be cached for an arbitrarily long amount of time. If the content has a limited lifetime, the server can still make use of caching by telling its clients to cache the response but only for that limited period. Even if the content is constantly changing, the server can make the decision that its clients can 'tolerate' a cached response (containing somewhat out-of-date content) for a specified time period.
Web servers and server-side applications are in the best position to judge whether clients should be allowed to cache their responses. There are two mechanisms for establishing caching rules. The first is associated with an older version of the HTTP protocol, version 1.0. The second is associated with HTTP version 1.1. Because there are web servers and clients that still support only HTTP 1.0, any attempt to enable caching must support both mechanisms in what is hopefully a backward-compatible fashion.
HTTP/1.1 provides its own mechanism for enforcing caching rules: the Cache-Control header. Valid settings include public, private, and no-cache. The public setting removes all restrictions and authorizes both shared and non-shared caching mechanisms to cache the response. The private setting indicates that the response is directed at a single user and should not be stored in a shared cache.
For instance, if two authorized users both make a secure request to a particular URL to obtain information about their private accounts, it would obviously be a problem if an intermediate proxy decided it could improve performance for the second user by sending her a cached copy of the first user's response. The no-cache setting indicates that neither browsers nor proxies are allowed to cache the response. However, there are a number of options associated with this setting that make it somewhat more complicated than that. The header may also list the names of specific HTTP headers that are 'non-cached' (i.e. that must be



re-acquired from the server that originated the cached response). If such headers are listed, then the response may be cached, excluding those listed headers.
HTTP/1.0 browsers and proxies are not guaranteed to obey instructions in the Cache-Control header, which was first introduced in HTTP/1.1. For practical purposes, this means that this mechanism is only reliable in very controlled environments where you know for sure that all your clients are HTTP/1.1 compliant. In the real world, there are still many HTTP/1.0 browsers and proxies out there, so this is not practical. A partial solution is to use the deprecated Pragma header, which has only one defined setting: no-cache. When used with the Cache-Control header, it will prevent HTTP/1.0 browsers and proxies from caching the response. However, this alone may not have the desired effect on clients that are HTTP/1.1 compliant, since the Pragma header is deprecated and may not be properly supported in those clients. Thus, a more complete backwards-compatible solution is to include both Pragma and Cache-Control headers, as in the following example:

    HTTP/1.1 200 OK
    Date: Mon, 05 Feb 2001 03:26:18 GMT
    Server: Apache/1.2.5
    Last-Modified: Mon, 05 Feb 2001 03:25:36 GMT
    Cache-Control: private
    Pragma: no-cache
    Content-Length: 2255
    Content-Type: text/html

    ...

This response is guaranteed to prevent HTTP/1.0 agents from caching the response and to prevent HTTP/1.1 agents from storing it in a shared cache. HTTP/1.1 agents may or may not ignore the Pragma: no-cache header, but we played it safe in this example to ensure that we do not implement a potentially more restrictive caching policy than intended.
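The backwards-compatible policy above can be sketched as a small helper in Python (a hypothetical illustration, not the book's code): always emit the HTTP/1.1 Cache-Control header, and add the deprecated Pragma header whenever caching should be restricted, so that HTTP/1.0 agents are covered too.

```python
# Sketch: build a backwards-compatible set of caching headers.
# Cache-Control is the HTTP/1.1 mechanism; Pragma: no-cache is the
# deprecated HTTP/1.0 fallback for restrictive policies.
def caching_headers(policy):
    headers = {"Cache-Control": policy}
    if policy in ("private", "no-cache"):
        headers["Pragma"] = "no-cache"  # for HTTP/1.0 browsers and proxies
    return headers

print(caching_headers("private"))
# {'Cache-Control': 'private', 'Pragma': 'no-cache'}
```

For a public, freely cacheable response, the helper emits only Cache-Control, since there is nothing to forbid to HTTP/1.0 agents.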

Birth of the World Wide Web: HTTP

3.5.3 Security through WWW-Authenticate and Authorization headers

HTTP provides built-in support for basic authentication, in which authorization credentials (userid and password) are transmitted via the Authorization header as a single encoded string. Since this string is simply encoded (not encrypted), this mechanism is only safe if performed over a secure connection.

Many Web applications implement their own authentication schemes that go above and beyond basic HTTP authentication. It is very easy to tell whether an application is using built-in HTTP authentication or its own scheme. When a Web application uses built-in HTTP authentication, the browser brings up its own authentication dialog to prompt the user for authorization credentials, rather than prompting for this information within one of the browser's page rendering windows. Application-specific schemes prompt users in the main browser window, as part of a rendered HTML page. When built-in HTTP authentication is employed, browsers are responding to a pre-defined status code in server responses, namely the '401' status code indicating that the request is not authorized. Let's take a look at the server response that tells the browser to prompt for a password:

HTTP/1.1 401 Unauthorized
Date: Mon, 05 Feb 2001 03:41:23 GMT
Server: Apache/1.2.5
WWW-Authenticate: Basic realm="Chapter3"

When a request for a restricted resource is sent to the server, the server sends back a response containing the '401' status code. In response to this, the browser prompts the user for a userid and password associated with the realm specified in the WWW-Authenticate header (in this case, "Chapter3"). The realm name serves both as an aid in helping users retrieve their names and passwords, and as a logical organizing principle for designating which resources require which types of authorization. Web server administrative software gives webmasters the ability to define realms, to decide which resources 'belong' to these realms, and to establish userids and passwords that allow only selected people to access resources in these realms.

In response to the browser prompt, the user specifies his name and password. Once the browser has collected this input from the user, it resubmits the original request with the additional Authorization header. The value of this header is a string containing the type of authentication (usually "Basic") and a Base64-encoded representation of the concatenation of the user name and password (separated by a colon):

GET /book/chapter3/index.html HTTP/1.1
Date: Mon, 05 Feb 2001 03:41:24 GMT
Host: www.neurozen.com
Authorization: Basic eNCoDEd-uSErId:pASswORd
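The encoding step is mechanical and easy to reproduce. The sketch below (in Python; the function name is ours) builds the header value from a userid and password, using the well-known example credentials from the HTTP specification rather than real ones, and then shows how trivially the encoding is reversed:

```python
import base64

def basic_auth_header(userid, password):
    # Concatenate userid and password with a colon, then Base64-encode
    # the result. The string is encoded, not encrypted.
    token = base64.b64encode(f"{userid}:{password}".encode("utf-8"))
    return "Authorization: Basic " + token.decode("ascii")

header = basic_auth_header("Aladdin", "open sesame")
# Anyone who intercepts the header can reverse the encoding at will:
recovered = base64.b64decode(header.split()[-1]).decode("utf-8")
```

The ease of the round trip is exactly why basic authentication is only safe over a secure connection, as the sidebar below emphasizes.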



Insecurity

Note that the user name and password are encoded but not encrypted. Encryption is a secure form of encoding, in which the content can only be decoded if a unique key value is known. Simple encoding mechanisms, like the Base64 encoding used in basic HTTP authentication, can be decoded by anyone who knows the encoding scheme. Obviously, this is very dangerous when encoded (not encrypted) information is transmitted over an insecure connection. Secure connections (e.g. HTTP over SSL, i.e. https) by definition encrypt all transmitted information, so sensitive information such as passwords is protected. It is hard to believe that there are still a large number of web sites, even e-commerce sites, that transmit passwords over open connections and establish secure connections only after the user has logged in!

As a user of the web, whenever you are prompted for your name and password, you should always check whether the connection is secure. With HTTP-based authentication, you should check whether the URL of the page you are attempting to access uses https for its protocol. With proprietary authentication schemes, you should check the URL that is supposed to process your user name and password; for example, with a forms-based login you should check the URL defined in the form's 'action' attribute. As a designer of applications for the Web, make sure that you incorporate these safeguards into your applications to ensure the security of users' sensitive information.

The server, having received the request with the Authorization header, attempts to verify the authorization credentials. If the userid and password match the credentials defined within that realm, the server then serves the content. The browser associates these authorization credentials with the authorized URL, and uses them as the value of the Authorization header in future requests to dependent URLs. Since the browser does this automatically, users do not get prompted again until they happen to encounter a resource that belongs to a different security realm.

Dependent URLs

We say that one URL 'depends' on another URL if the portion of the second URL up to and including the last slash is a prefix of the first URL. For example, the URL http://www.cs.rutgers.edu/~shklar/classes/ depends on the URL http://www.cs.rutgers.edu/~shklar/. This means that, having submitted authorization credentials for http://www.cs.rutgers.edu/~shklar/, the browser would know to resubmit those same credentials within the Authorization header when requesting http://www.cs.rutgers.edu/~shklar/classes/.
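The dependency rule above translates directly into code. A minimal sketch in Python (the function name is ours):

```python
def depends_on(url, authorized_url):
    # The prefix is the authorized URL up to and including its last
    # slash; a URL 'depends' on the authorized URL exactly when it
    # begins with that prefix.
    prefix = authorized_url[: authorized_url.rfind("/") + 1]
    return url.startswith(prefix)
```

With the book's example, depends_on("http://www.cs.rutgers.edu/~shklar/classes/", "http://www.cs.rutgers.edu/~shklar/") holds, so the browser resubmits the same Authorization header for the classes page.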

If the server fails to verify the userid and password sent by the browser, it either resends the security challenge using the 401 status code, or refuses to serve the requested resource outright, sending a response with the 403 Forbidden status code. The latter happens when the server exceeds a defined limit of security challenges. This limit is normally configurable and is designed to prevent simple break-ins by trial and error.

We have described so-called basic authentication, which is supported by both HTTP/1.0 and HTTP/1.1. It is a bit simplistic, but it does provide reasonable protection, as long as you are transmitting over a secure connection. Most commercial applications that deal with sensitive financial data use their own authentication mechanisms that are not part of HTTP. Commonly, user names and passwords are transmitted in the bodies of POST requests over secure connections. These bodies are interpreted by server applications that decide whether to send back content, repeat the password prompt, or display an error message. These server applications do not use the 401 status code that tells the browser to invoke its built-in authentication mechanism, though they may choose to make use of the 403 status code indicating that access to the requested resource is forbidden.

3.5.4 Session support through Cookie and Set-Cookie headers

We've mentioned several times now that HTTP is a stateless protocol. So what do we do if we need to implement stateful applications?

10 Items or Less

The most obvious example of maintaining state in a Web application is the shopping cart. When you visit an e-commerce site, you view catalog pages describing items, then add them to your 'cart' as you decide to purchase them. When the time comes to process your order, the site seems to remember what items you have placed in your cart. But how does it know this, if HTTP requests are atomic and disconnected from each other?

To enable the maintenance of state between HTTP requests, it suffices to provide some mechanism for the communicating parties to establish agreements for transferring state information in HTTP messages. HTTP/1.1 establishes these agreements through the Set-Cookie and Cookie headers. Set-Cookie is a response header sent by the server to the browser, setting attributes that establish state within the browser. Cookie is a request header transmitted by the browser in subsequent requests to the same (or a related) server. It helps to associate requests with sessions. Server applications that want to provide 'hints' for processing future requests can do so by setting the Set-Cookie header:

Set-Cookie: <name>=<value>[; expires=<date>][; path=<path>][; domain=<domain>][; secure]


Better Information Through Headers


Here, <name>=<value> is an attribute/value pair that is to be sent back by the browser in qualifying subsequent requests. The path and domain portions of this header delimit which requests qualify, by specifying the server domains and URL paths to which this cookie applies. Domains may be set to suffixes of the originating server's host name containing at least two periods (three for domains other than com, org, edu, gov, mil, and int). The value of the domain attribute must represent the same domain to which the server belongs. For example, an application running on cs.rutgers.edu can set the domain to .rutgers.edu, but not to .mit.edu. A domain value of .rutgers.edu means that this cookie applies to requests destined for hosts with names of the form *.rutgers.edu. The value of the path attribute defaults to the path of the URL of the request, but may be set to any path prefix beginning at '/', which stands for the server root. For subsequent requests directed at URLs where the domain and path match, the browser must include a Cookie header with the appropriate attribute/value pair. The expires portion of the header sets the cutoff date after which the browser will discard any attribute/value pairs set in this header. (If no cutoff date is specified, the cookie lasts only for the duration of the current browser session.) Finally, the secure keyword tells the browser to pass this cookie only through secure connections.

Cookie Jars

Browsers and other HTTP clients must maintain a 'registry' of cookies sent to them by servers. For cookies that are intended to last only for the duration of the current browser session, an in-memory table of cookies is sufficient. For cookies that are intended to last beyond the current session, persistent storage mechanisms for cookie information are required. Netscape Navigator keeps stored cookies in a cookies.txt file, while Internet Explorer maintains a folder in which each file represents a particular stored cookie.

In the following example, a server application running on the cs.rutgers.edu server generates a Set-Cookie header of this form:

HTTP/1.1 200 OK
Set-Cookie: name=Leon; path=/test/; domain=.rutgers.edu

The domain is set to .rutgers.edu and the path is set to /test/. This instructs the browser to include a Cookie header with the value name=Leon every time thereafter that a request is made for a resource at a URL on any Rutgers server where the URL path starts with /test/. The absence of an expiration date means that this cookie will be maintained only for the duration of the current browser session.
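The browser's decision about whether a stored cookie applies to a request comes down to a domain suffix test and a path prefix test. A minimal sketch in Python (the function name is ours; real browsers apply additional rules from the cookie specification):

```python
def cookie_applies(request_host, request_path, domain, path):
    # Domain match: the request host must end with the cookie's domain,
    # so ".rutgers.edu" matches any host of the form *.rutgers.edu.
    # Path match: the request path must begin with the cookie's path.
    return request_host.endswith(domain) and request_path.startswith(path)
```

For the example above, a request from www.cs.rutgers.edu for /test/page.html qualifies, while requests to other domains or other paths do not.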


Birth of the World Wide Web: HTTP

Now let's consider a more complicated example in which we rent a movie. We start by submitting a registration, visiting a URL that lets us sign in to a secure movie rental web site. Let's assume we have been prompted for authorization credentials by the browser and have provided them, so that the browser can construct the Authorization header:

GET /movies/register HTTP/1.1
Host: www.sample-movie-rental.com
Authorization: . . .

Once the server has recognized and authenticated the user, it sends back a response containing a Set-Cookie header with a client identifier:

HTTP/1.1 200 OK
Set-Cookie: CLIENT=Rich; path=/movies
...

From this point on, every time the browser submits a request directed at "http://www.sample-movie-rental.com/movies/*", it will include a Cookie header containing the client identifier:

GET /movies/rent-recommended HTTP/1.1
Host: www.sample-movie-rental.com
Cookie: CLIENT=Rich

In this case, we are visiting a recommended movie page. The server response now contains a movie recommendation:

HTTP/1.1 200 OK
Set-Cookie: MOVIE=Matrix; path=/movies/
...

Now we request access to the movie. Note that, given the URL, we are sending back both the client identifier and the recommended movie identifier within the Cookie header:

GET /movies/access HTTP/1.1
Host: www.sample-movie-rental.com
Cookie: CLIENT=Rich; MOVIE=Matrix



We get back an acknowledgement containing access information for the recommended movie, to be used in future status checks:

HTTP/1.1 200 OK
Set-Cookie: CHANNEL=42; PASSWD=Matrix007; path=/movies/status/
...

Note that there are two new cookie values, ‘CHANNEL’ and ‘PASSWD’, but they are associated with URL path /movies/status/. Now, the browser will include movie access information with a status check request. Note that the Cookie header contains cookie values applicable to both the /movies/ path and the /movies/status/ path:

GET /movies/status/check HTTP/1.1
Host: www.sample-movie-rental.com
Cookie: CLIENT=Rich; MOVIE=Matrix; CHANNEL=42; PASSWD=Matrix007

Requests directed at URLs within the /movies/ path but not within the /movies/status/ path will not include attribute/value pairs associated with the /movies/status/ path:

GET /movies/access HTTP/1.1
Host: www.sample-movie-rental.com
Cookie: CLIENT=Rich; MOVIE=Matrix
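The whole exchange can be replayed with a toy cookie jar that stores cookies by path and includes every applicable attribute/value pair in the Cookie header. This is an illustrative sketch in Python (class and method names are ours); it ignores domains and expiration for brevity:

```python
class CookieJar:
    """Toy cookie jar: path-based matching only."""

    def __init__(self):
        self.cookies = []  # (path, name, value), in the order received

    def set_cookie(self, path, name, value):
        self.cookies.append((path, name, value))

    def cookie_header(self, request_path):
        # Include every cookie whose path is a prefix of the request path.
        pairs = [f"{name}={value}"
                 for path, name, value in self.cookies
                 if request_path.startswith(path)]
        return "; ".join(pairs)

# Replay of the movie rental exchange above:
jar = CookieJar()
jar.set_cookie("/movies/", "CLIENT", "Rich")
jar.set_cookie("/movies/", "MOVIE", "Matrix")
jar.set_cookie("/movies/status/", "CHANNEL", "42")
jar.set_cookie("/movies/status/", "PASSWD", "Matrix007")
```

A request for /movies/status/check picks up all four pairs, while a request for /movies/access picks up only the two /movies/ cookies, matching the two requests shown above.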

3.6 EVOLUTION

HTTP has evolved a good deal since its inception in the early nineties, but the more it evolves, the more care is needed to support backward compatibility. Even though it has been a number of years since the introduction of HTTP/1.1, there are still many servers, browsers, and proxies in the real world that are HTTP/1.0 compliant and do not support HTTP/1.1. What's more, not all HTTP/1.1 programs revert to the HTTP/1.0 specification when they receive an HTTP/1.0 message. In this section, we discuss the reasoning behind some of the most important changes between the versions, the compatibility issues that affected protocol designers' decisions, and the challenges these issues pose for Web application developers.



3.6.1 Virtual hosting

One of the challenges facing HTTP/1.1 designers was to provide support for virtual hosting: the ability to map multiple host names to a single IP address. For example, a single server machine may host web sites associated with a number of different domains, so there must be a way for the server to determine the host for which a request is intended. In addition, the introduction of proxies into the request stream creates further problems in ensuring that a request reaches its intended host. In HTTP/1.0, a request passing through a proxy has a slightly different format from the request ultimately received by the destination server. As we have seen, the request that reaches the host includes only the path portion of the URL in the initial request line:

GET /q?s=YHOO HTTP/1.0

Requests that must pass through proxies need to include some reference to the destination server; otherwise that information would be lost, and the proxy would have no idea which server should receive the request. For this reason, the full URL of the request is included in the initial request line, as shown below:

GET http://finance.yahoo.com/q?s=YHOO HTTP/1.0

Proxies that connect to the destination servers are responsible for editing requests that pass through them, removing server information from request lines. With the advent of virtual hosting support in HTTP/1.1, we now need to retain server information in all requests, since servers need to know which of the virtual hosts associated with a given web server is responsible for processing the request. The obvious solution would have been to make HTTP/1.1 browsers and proxies always include server information:

GET http://finance.yahoo.com/q?s=YHOO HTTP/1.1

This would have been fine, except that there are still HTTP/1.0 proxies out there that are ready to cut server information from request URLs every time they see a full URL on a request line. HTTP/1.0 proxies know nothing about HTTP/1.1 and have no way of making a distinction between the two versions. Nonetheless, it is worthwhile to keep this as a legal request format for both HTTP/1.1 servers and proxies. (There may come a day when we don't have to worry about HTTP/1.0 proxies any more.)



For now, we need a redundant source of information that will not be affected by any actions of HTTP/1.0 proxies. This is the reason for the Host header, which must be included with every HTTP/1.1 request:

GET http://finance.yahoo.com/q?s=YHOO HTTP/1.1
Host: finance.yahoo.com

Whether this request passes through an HTTP/1.0 proxy or an HTTP/1.1 proxy, information about the ultimate destination of the request is preserved. Obviously, the abbreviated URL format (path portion only) must be supported as well:

GET /q?s=YHOO HTTP/1.1
Host: finance.yahoo.com
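A server (or proxy) can recover the destination host from either request form. A sketch in Python (the function name is ours), using the standard library's URL parser:

```python
from urllib.parse import urlsplit

def target_host(request_uri, headers):
    # The absolute-URI form (full URL on the request line) carries the
    # host itself; for the abbreviated form (path portion only), fall
    # back on the mandatory HTTP/1.1 Host header.
    netloc = urlsplit(request_uri).netloc
    return netloc if netloc else headers.get("Host")
```

Both request forms above resolve to the same destination, which is exactly the redundancy the Host header was introduced to provide.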

3.6.2 Caching support

In an earlier section, we described the mechanisms through which servers provide browsers, proxies, and other clients with information about caching policies for server responses. If the supplied headers tell the client that caching is feasible for a particular response, the client must then decide whether it should use a cached version of the response that it already has available, rather than going back to the source location to retrieve the data.

In HTTP/1.0, the most popular mechanism for supporting browser-side caching was the HEAD request. A request employing the HEAD method returns exactly the same response as its GET counterpart, but without the body. In other words, only the headers are present, providing the requestor with all of the response's metadata without the overhead of transmitting the entire content of the response. Thus, assuming you have a cached copy of a previously requested resource, it is sensible to submit a HEAD request for that resource, check the date provided in the Last-Modified header, and resubmit a GET request only if that date is later than the date of the saved cache entry. This improves throughput by eliminating unnecessary transfers of full responses: the actual data need be retrieved only when the cache entry is deemed out of date.

HTTP/1.1 takes a more streamlined approach to this problem, using two new headers: If-Modified-Since and If-Unmodified-Since. Going back to one of our earlier examples:

GET /~shklar/ HTTP/1.1
Host: www.cs.rutgers.edu
If-Modified-Since: Fri, 11 Feb 2001 22:28:00 GMT



Assuming there is a cache entry for this resource that was last modified at 22:28 GMT on February 11, 2001, the browser can send a request for this resource with the If-Modified-Since header value set to that date and time. If the resource has not changed since that point in time, we get back a response with the 304 Not Modified status code and no body. Otherwise, we get back the new body (which may itself be placed in the cache, replacing any existing cache entry for the same resource). Alternatively, let's examine the following request:

GET /~shklar/ HTTP/1.1
Host: www.cs.rutgers.edu
If-Unmodified-Since: Fri, 11 Feb 2000 22:28:00 GMT

For this request, we either get back the unchanged resource or an empty response (no body) with the 412 Precondition Failed status code. Both headers can be used in HTTP/1.1 requests to eliminate unnecessary data transmissions without the cost of extra requests.
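On the server side, handling If-Modified-Since reduces to a date comparison against the resource's modification time. A sketch in Python (the function name is ours), using the standard library's parser for the HTTP date format:

```python
from email.utils import parsedate_to_datetime

def conditional_status(last_modified, if_modified_since):
    # Return 304 (Not Modified, no body) when the resource has not
    # changed since the date the client supplied in If-Modified-Since;
    # otherwise return 200 and send the full body.
    changed = (parsedate_to_datetime(last_modified)
               > parsedate_to_datetime(if_modified_since))
    return 200 if changed else 304
```

The If-Unmodified-Since case is symmetric, returning 412 Precondition Failed instead of 304 when the resource has changed.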

3.6.3 Persistent connections

Since HTTP is by definition a stateless protocol, it was not designed to support persistent connections: a connection was supposed to last just long enough for a browser to submit a request and receive a response. Extending the lifetime of a connection beyond this was not supported. Since the cost of connecting to a server across the network is considerable, many existing network protocols include mechanisms for reducing or eliminating that overhead by creating persistent connections. In HTTP, cookies provide a mechanism for persisting an application's state across connections, but it is frequently useful, for performance reasons, to allow the connections themselves to persist. For HTTP applications, developers came up with workarounds involving multipart MIME messages to make connections persist across multiple independent bodies of content. (We saw an example of this when we discussed image animation using server push via multipart messages.) Late in the lifecycle of HTTP/1.0, makers of HTTP/1.0 servers and browsers introduced the proprietary Connection: Keep-Alive header, as part of a somewhat desperate effort to support persistent connections in a protocol that wasn't designed for them. Not surprisingly, it does not work that well: considering all the intermediate proxies that might be involved in transmitting a request, there are considerable difficulties in keeping connections persistent using this mechanism. Just one intermediate proxy that lacks support for the Keep-Alive extension is enough to cause the connection to be broken.



HTTP/1.1 connections are all persistent by default, except when explicitly requested by a participating program via the Connection: Close header. It is entirely legal for a server or a browser to be HTTP/1.1 compliant without supporting persistent connections as long as they include Connection: Close with every message. Theoretically, including the Connection: Keep-Alive header in HTTP/1.1 messages makes no sense, since the absence of Connection: Close already means that the connection needs to be persistent. However, there is no way to ensure that all proxies are HTTP/1.1 compliant and know to maintain a persistent connection. In practice, including Connection: Keep-Alive does provide a partial solution: it will work for those HTTP/1.0 proxies that support it as a proprietary extension. HTTP/1.1 support for persistent connections includes pipelining requests: browsers can queue request messages without waiting for responses. Servers are responsible for submitting responses to browser requests in the order of their arrival. Browsers that support this functionality must maintain request queues, keep track of server responses, and resubmit requests that remain on queues if connections get dropped and reestablished. We will discuss HTTP/1.1 support for persistent connections in further detail when we discuss server and browser architecture.
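The resulting decision logic, as seen by a server, can be summarized in a few lines. A sketch in Python (the function name is ours); real servers must also account for HTTP/1.0 proxies in the chain, as discussed above:

```python
def connection_persists(http_version, headers):
    token = headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        # HTTP/1.1 connections are persistent by default, unless a
        # participant explicitly sends Connection: Close.
        return token != "close"
    # HTTP/1.0 connections persist only via the proprietary
    # Connection: Keep-Alive extension.
    return token == "keep-alive"
```

Note the asymmetry: in HTTP/1.1 the absence of a Connection header means the connection persists, while in HTTP/1.0 it means the connection closes.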

3.7 SUMMARY

In this chapter, we have discussed the fundamental facets of the HTTP protocol. This discussion was not intended as exhaustive coverage of all the protocol's features, but rather as an overview for understanding and working with current and future HTTP specifications from the World Wide Web Consortium. The W3C specifications are the ultimate references to consult when architecting complex applications. Understanding HTTP is critical to the design of advanced Web applications. It is a prerequisite for utilizing the full power of the Internet technologies that are discussed in this book. Knowledge of the inner workings of HTTP promotes reasoning from first principles, and simplifies the daunting task of learning the rich variety of protocols and APIs that depend on its features. We recommend that you return to this chapter as we discuss other technologies.

3.8 QUESTIONS AND EXERCISES

1. Consider the following hyperlink:

   What HTTP/1.0 request will get submitted by the browser? What HTTP/1.1 request will get submitted by the browser?

2. Consider the example above. Will these requests change if the browser is configured to contact an HTTP proxy? If yes, how?



3. What is the structure of a POST request? What headers have to be present in HTTP/1.0 and HTTP/1.1 requests?

4. Name two headers that, if present in an HTTP response, always have to be processed in a particular order. State the order and explain.

5. How can multipart MIME be used to implement 'server push'? When is it appropriate? Construct a sample HTTP response implementing server push using multipart MIME.

6. Suppose that a content provider puts up a 'ring' of related sites:

   www.site1.provider.hahaha.com
   www.site2.provider.hahaha.com
   www.site3.provider.hahaha.com
   www.site4.provider.hahaha.com
   www.site5.provider.hahaha.com

   Suppose now that this provider wants unsophisticated users to remain 'sticky' to a particular site, by preventing them from switching to a different site in the ring more frequently than once an hour. For example, after a user first accesses www.site4.provider.hahaha.com, she has to wait for at least an hour before being able to access another site in the ring, but can keep accessing the same site as much as she wants. Hints: use cookies, and look elsewhere if you need more than two or three lines to describe your solution.

7. Remember the example in which the server returns a redirect when a URL pointing to a directory does not contain a trailing slash? What would happen if the server did not return a redirect, but returned an index.html file stored in that directory right away? Would that be a problem? If you are not sure about the answer, come back to this question after we discuss browser architecture.

BIBLIOGRAPHY

Gourley, D. and Totty, B. (2002) HTTP: The Definitive Guide. O'Reilly & Associates.
Krishnamurthy, B. and Rexford, J. (2001) Web Protocols and Practice. Addison-Wesley.
Loshin, P. (2000) Big Book of World Wide Web RFCs. Morgan Kaufmann.
Thomas, S. (2001) HTTP Essentials. John Wiley & Sons, Ltd.
Yeager, N. and McGrath, R. (1996) Web Server Technology. Morgan Kaufmann.


Web Servers

Web servers enable HTTP access to a ‘Web site,’ which is simply a collection of documents and other information organized into a tree structure, much like a computer’s file system. In addition to providing access to static documents, modern Web servers implement a variety of protocols for passing requests to custom software applications that provide access to dynamic content. This chapter begins by describing the process of serving static documents, going on to explore the mechanisms used to serve dynamic data. Dynamic content can come from a variety of sources. Search engines and databases can be queried to retrieve and present data that satisfies the selection criteria specified by a user. Measuring instruments can be probed to present their current readings (e.g. temperature, humidity). News feeds and wire services can provide access to up-to-the-minute headlines, stock quotes, and sports scores. There are many methodologies for accessing dynamic data. The most prominent approach based on open standards is the Common Gateway Interface (CGI). While CGI is in widespread use throughout the Web, it has its limitations, which we discuss later in this chapter. As a result, many alternatives to CGI have arisen. These include a number of proprietary template languages (some of which gained enough following to become de facto standards) such as PHP, Cold Fusion, Microsoft’s Active Server Pages (ASP), and Sun’s Java Server Pages (JSP), as well as Sun’s Java Servlet API. An ideal approach would allow the processes by which Web sites serve dynamic data to be established in a declarative fashion, so that those responsible for maintaining the site are not required to write custom code. This is an important thread in the evolution of Web servers, browsers and the HTTP protocol, but we have not yet reached this goal. Later in this chapter, we discuss how Web servers process HTTP requests, and how that processing is affected by server configuration. 
We also discuss methods for providing robust server security.



4.1 BASIC OPERATION

Web servers, browsers, and proxies communicate by exchanging HTTP messages. The server receives and interprets HTTP requests, locates and accesses the requested resources, and generates responses, which it sends back to the originators of the requests. The process of interpreting incoming requests and generating outgoing responses is the main subject of this section.

Figure 4.1 shows how a Web server processes incoming requests, generates outgoing responses, and transmits those responses back to the appropriate requestors.

[Figure 4.1 Server operation: a diagram of the server's modules, comprising networking support, address resolution (address mapping, aliasing, virtual hosting), request processing (static content, as-is pages, template approaches, Servlet API), and response generation. (1) A request arrives at the server via the networking support; (2) the request is passed to the address resolution module; (3) after resolution and authentication, the request is passed to the request processing module.]

The Networking module is responsible for both receiving requests and transmitting responses over the network. When it receives a request, it must first pass it to the Address Resolution module, which is responsible for analyzing and 'pre-processing' the request. This pre-processing includes:

1. Virtual Hosting: if this Web server is providing service for multiple domains, determine the domain for which this request is targeted, and use the detected domain to select configuration parameters.

2. Address Mapping: determine whether this is a request for static or dynamic content, based on the URL path and selected server configuration parameters, and resolve the address into an actual location within the server's file system.
3. Authentication: if the requested resource is protected, examine authorization credentials to see if the request is coming from an authorized user.

Once the pre-processing is complete, the request is passed to the Request Processing module, which invokes sub-modules to serve static or dynamic content as appropriate. When the selected sub-module completes its processing, it passes results to the Response Generation module, which builds the response and directs it to the Networking module for transmission.

It is important to remember that, since the HTTP protocol is stateless, the only information available to the server about a request is that which is contained within that request. As we shall learn later in this chapter, state may be maintained in the form of session information by server-side applications and application environments (e.g. servlet runners), but the server itself does not maintain this information.

4.1.1 HTTP request processing

Let us take a step back and recall what has to happen for an HTTP request to arrive at the server. For the purposes of this example, we shall examine a series of transactions in which an end user is visiting her friend's personal web site at http://mysite.org/. The process begins when the end user tells the browser to access the page at the URL http://mysite.org/pages/simple-page.html. When the browser successfully receives and renders the page (Figures 4.2 and 4.3), the user sees that it has links to two other pages, which contain her friend's 'school links' (school.html) and 'home links' (home.html). Suppose now that the end user follows the link to her friend's 'school links' page.

You Say You Want a Resolution...

If the links found on the page are relative URLs (incomplete URLs meant to be interpreted relative to the current page), then they must be resolved so that the browser knows the complete URL referenced by the link. In the next chapter, we discuss the steps that browsers take to resolve a relative link into an absolute URL in order to construct and submit the request.

GET http://mysite.org/pages/simple-page.html HTTP/1.1
Host: mysite.org
User-Agent: Mozilla/4.75 [en] (WinNT; U)

Figure 4.2  Browser request to load the simple-page.html page



[Figure 4.3 shows the source of simple-page.html: a page titled 'Simple Page' with a 'My Links' section containing two hyperlinks, 'My school links' and 'My home links'.]

Figure 4.3  Simple HTML page

Notice that the request in Figure 4.2 does not contain the Connection: close header. This means that, if possible, the connection to the server should be left open, so that it may be used to transmit further requests and receive responses. However, there is no guarantee that the connection will still be open at the time the user requests school.html: by that time, the server, a proxy, or even the browser itself might have broken it. Persistent connections are designed to improve performance, but should never be relied upon in application logic. If the connection is still open, the browser uses it to submit the request for the school links page (Figure 4.4). Otherwise, the browser must first re-establish the connection. Depending on the browser configuration, it may either attempt to establish a direct connection to the server or connect via a proxy. Consequently, the server receives the request either directly from the browser or from the proxy.

For persistent connections, the server is responsible for maintaining queues of requests and responses. HTTP/1.1 specifies that, within the context of a single continuously open connection, a series of requests may be transmitted; it also specifies that responses to these requests must be sent back in the order of request arrival (FIFO). One common solution is for the server to maintain two queues of requests: input and output. When a request is submitted for processing, it is removed from the input queue and inserted into the output queue. Once the processing is complete, the request is marked for release, but it remains in the output queue while at least one of its predecessors is still there. When all of its predecessors are gone from the output queue, the request is released, and its associated response is sent back to the browser, either directly or through a proxy.
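The FIFO release rule just described can be sketched as a small data structure. This is an illustrative Python model (class and method names are ours), not an actual server implementation:

```python
from collections import deque

class ResponseQueue:
    """Release responses strictly in request-arrival (FIFO) order."""

    def __init__(self):
        self.pending = deque()   # request ids, in arrival order
        self.completed = {}      # request id -> finished response

    def arrived(self, request_id):
        self.pending.append(request_id)

    def finished(self, request_id, response):
        # Processing is done, but the response is held back until all
        # of its predecessors have been released.
        self.completed[request_id] = response

    def release(self):
        out = []
        while self.pending and self.pending[0] in self.completed:
            out.append(self.completed.pop(self.pending.popleft()))
        return out

# A second request that finishes first must still wait for the first:
q = ResponseQueue()
q.arrived(1)
q.arrived(2)
q.finished(2, "response-2")
held = q.release()            # nothing released yet
q.finished(1, "response-1")
released = q.release()        # now both go out, in arrival order
```

The scenario at the bottom shows why the output queue is needed at all: without it, a fast second response would overtake a slow first one and arrive at the browser out of order.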
Once the request is picked from the queue, the server resolves the request URL to the physical file location and checks whether the requested resource requires

GET http://mysite.org/pages/school.html HTTP/1.1
Host: mysite.org
User-Agent: Mozilla/4.75 [en] (WinNT; U)

Figure 4.4

Browser request to load the school.html page

Basic Operation


authentication (Figure 4.1). If the authentication fails, the server aborts further processing and generates the response indicating an error condition (Section 3.4.3). If the authentication is not necessary or is successful, the server decides on the kind of processing required.

4.1.2 Delivery of static content

Web servers present both static content and dynamic content. Static content falls into two categories:

1. static content pages: static files containing HTML pages, XML pages, plain text, images, etc., for which HTTP responses must be constructed (including headers); and

2. as-is pages: pages for which complete HTTP responses (including headers) already exist and can be presented 'as is'.

For dynamic content, the server must take an explicit programmatic action to generate a response, such as the execution of an application program, the inclusion of information from a secondary file, or the interpretation of a template. This mode of processing includes Common Gateway Interface (CGI) programs, Server-Side Include (SSI) pages, Java Server Pages (JSP), Active Server Pages (ASP), and Java Servlets, among others.

We shall not attempt to describe all the details of these and other server mechanisms. Instead, we concentrate on the most common mechanisms and the principles underlying them. Understanding these operating principles makes it easier to learn other similar mechanisms, and to develop new and better mechanisms in the future.

Web servers use a combination of filename suffixes/extensions and URL prefixes to determine which processing mechanism should be used to generate a response. By default, a URL is processed as a request for a static content page, but this is only one of a number of possibilities. A URL path beginning with /servlet/ might indicate that the target is a Java servlet. A URL path beginning with /cgi-bin/ might indicate that the target is a CGI script, as might a URL where the target filename ends in .cgi. URLs where the target filename ends in .php or .cfm might indicate that a template processing mechanism (e.g. PHP or Cold Fusion) should be invoked. We shall discuss address resolution in more detail in the section describing server configuration.
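The suffix- and prefix-based dispatch just described can be sketched as a simple lookup. The mappings below are hypothetical examples in the spirit of the text; a real server would read them from its configuration files.

```python
import posixpath

# Hypothetical mappings; real servers read these from configuration.
PREFIX_HANDLERS = {
    "/cgi-bin/": "cgi",
    "/servlet/": "servlet",
}
SUFFIX_HANDLERS = {
    ".cgi": "cgi",
    ".php": "template",
    ".cfm": "template",
    ".shtml": "ssi",
    ".asis": "as-is",
}

def choose_handler(url_path):
    """Pick a processing mechanism from the URL prefix or the filename
    suffix, falling back to static-content delivery by default."""
    for prefix, handler in PREFIX_HANDLERS.items():
        if url_path.startswith(prefix):
            return handler
    suffix = posixpath.splitext(url_path)[1]
    return SUFFIX_HANDLERS.get(suffix, "static")

print(choose_handler("/cgi-bin/zip.cgi"))     # cgi (matched by prefix)
print(choose_handler("/pages/school.html"))   # static (the default)
print(choose_handler("/pages/report.php"))    # template (matched by suffix)
```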

Static content pages For a static content page, the server maps the URL to a file location relative to the server document root. In the example presented earlier in the chapter, we visited a page



HTTP/1.1 200 OK
Date: Tue, 29 May 2001 23:15:29 GMT
Last-Modified: Mon, 28 May 2001 15:11:01 GMT
Content-type: text/html
Content-length: 193
Server: Apache/1.2.5

<HTML>
<HEAD>
<TITLE>School Page</TITLE>
</HEAD>
<BODY>
<P>My Links</P>
<A HREF="classes.html">My classes</A>
<A HREF="friends.html">My friends</A>
</BODY>
</HTML>

Figure 4.5

Sample response to the request in Figure 4.4

on someone's personal web site, found at http://mysite.org/pages/school.html. The path portion of this URL, /pages/school.html, is mapped to an explicit filename within the server's local file system. If the Web server is configured so that the document root is /www/doc, then this URL is mapped to the server file /www/doc/pages/school.html.

For static pages, the server must retrieve the file, construct the response, and transmit it back to the browser. For persistent connections, the response is first placed in the output queue before transmission. Figure 4.5 shows the response generated for the HTTP request in Figure 4.4. As we discussed in the previous chapter, the first line of the response contains the status code that summarizes the result of the operation.

The server controls browser rendering of the response through the Content-Type header, which is set to a MIME type. Setting MIME types for static files is controlled through server configuration. In the simplest case, it is based on a mapping between file extensions and MIME types. Even though desktop browsers have their own mappings between MIME types and file extensions, it is the server-side mapping that determines the Content-Type header of the response. It is this header that determines how the browser renders the response content, not the filename suffix or a browser heuristic based on content analysis.

An Experiment

If you have access to your own Web server, you can try experimenting with your browser and server to see how browser rendering is determined. Within the browser,



map the file extension .html to text/plain instead of text/html, then visit an HTML page with your browser. You will notice that HTML pages are still rendered as hypertext. Then try changing the same mapping on the server: map the file extension .html to text/plain. Reload the page in your browser, and you will see the HTML markup tags rendered as plain text.

With all this in mind, it is important that the server set the Content-Type header to the appropriate MIME type so that the browser can render the content properly. The server may also set the Content-Length header to inform the browser of the length of the content. This header is optional, and may not be present for dynamic content, because it is difficult to determine the size of the response before its generation is complete. Still, if it is possible to predict the size of the response, it is a good idea to include the Content-Length header.

We have already discussed HTTP support for caching. Later, we come back to this example to discuss the Last-Modified header and its use by browser logic in forming requests and reusing locally cached content. Note that, even though Last-Modified is not a required header, the server is expected to make its best effort to determine the most recent modification date of the requested content and use it to set the header.
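Putting the pieces together, the construction of a static-content response can be sketched as below. This is an illustrative sketch, not real server code: it uses Python's mimetypes module as a stand-in for the server-configured extension-to-MIME-type mapping, and the document root and page are created on the fly to mirror the example in the text.

```python
import mimetypes
import os
import tempfile
from email.utils import formatdate

def build_static_response(doc_root, url_path):
    """Map the URL path under the document root, then build an HTTP/1.1
    response whose Content-Type comes from the server-side extension
    mapping (not from anything the browser knows or guesses)."""
    file_path = os.path.join(doc_root, url_path.lstrip("/"))
    with open(file_path, "rb") as f:
        body = f.read()
    ctype = mimetypes.guess_type(file_path)[0] or "application/octet-stream"
    headers = [
        "HTTP/1.1 200 OK",
        "Date: " + formatdate(usegmt=True),
        "Last-Modified: " + formatdate(os.path.getmtime(file_path), usegmt=True),
        "Content-Type: " + ctype,
        # For a static file the size is known, so Content-Length is easy.
        "Content-Length: " + str(len(body)),
    ]
    return "\r\n".join(headers).encode("ascii") + b"\r\n\r\n" + body

# Hypothetical document root and page, mirroring the example in the text.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pages"))
with open(os.path.join(root, "pages", "school.html"), "w") as f:
    f.write("<html><body>My classes</body></html>")

response = build_static_response(root, "/pages/school.html")
print(response.split(b"\r\n")[0])   # b'HTTP/1.1 200 OK'
```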

As-is pages

Suppose now that, for whatever reason, you do not want server logic to be involved in forming response headers and the status code. Maybe you are testing your browser or proxy, or maybe you want a quick and dirty fix that sets a special Content-Type for some pages. Then again, maybe you want an easy and convenient way to regularly change redirection targets for certain pages. It turns out there is a way to address all these situations: the so-called 'as-is' pages.

The idea is that such pages contain complete responses, and the server is supposed to send them back 'as is' without adding status codes or headers. That means you can put together a desired response, store it on the server in a file that the server will recognize as an 'as-is' file, and be guaranteed that your response is returned unchanged when the file is requested.

Using the 'as-is' mechanism, we can control server output by manually creating and modifying response messages. The word 'manually' is of course the key here: whenever you want to change the response, you have to go in and edit it. This is convenient for very simple scenarios but does not provide an opportunity to implement even very basic processing logic.

4.1.3 Delivery of dynamic content

The original mechanisms for serving up dynamic content are CGI (Common Gateway Interface) and SSI (Server Side Includes). Today's Web servers use more



sophisticated and more efficient mechanisms for serving up dynamic content, but CGI and SSI date back to the very beginnings of the World Wide Web, and it behooves us to understand these mechanisms before delving into the workings of the newer approaches.

CGI

CGI was the first consistent server-independent mechanism, dating back to the very early days of the World Wide Web. The original CGI specification can be found at http://hoohoo.ncsa.uiuc.edu/cgi/interface.html.

The CGI mechanism assumes that, when a request to execute a CGI script arrives at the server, a new 'process' will be 'spawned' to execute a particular application program, supplying that program with a specified set of parameters. (The terminology of 'processes' and 'spawning' is UNIX-specific, but the analog of this functionality is available for non-UNIX operating systems.)

The heart of the CGI specification is the designation of a fixed set of environment variables that all CGI applications know about and have access to. The server is supposed to populate the variables in Table 4.1 (a non-exhaustive list) from request information other than HTTP headers. The server is responsible for always setting the SERVER_SOFTWARE, SERVER_NAME, and GATEWAY_INTERFACE environment variables, independent of information contained in the request. Other pre-defined variable names include CONTENT_TYPE and CONTENT_LENGTH, which are populated from the Content-Type and Content-Length headers. Additionally, every HTTP header is mapped to an environment variable by converting all letters in the name of the header to upper case, replacing dashes with underscores, and prepending the HTTP_ prefix. For example, the value of the User-Agent header gets stored in the HTTP_USER_AGENT environment variable, and the value of the Content-Type header gets stored in both the CONTENT_TYPE and HTTP_CONTENT_TYPE environment variables.

Table 4.1  Environment variables set from sources of information other than HTTP headers

SERVER_PROTOCOL   HTTP version as defined on the request line following the HTTP method and URL.
SERVER_PORT       Server port used for submitting the request, set by the server based on the connection parameters.
REQUEST_METHOD    HTTP method as defined on the request line.
PATH_INFO         Extra path information in the URL. For example, if the URL is http://mysite.org/cgi-bin/zip.cgi/test.html, and http://mysite.org/cgi-bin/zip.cgi is the location of a CGI script, then /test.html is the extra path information.
PATH_TRANSLATED   Physical location of the CGI script on the server. In our example, it would be /www/cgi-bin/zip.cgi, assuming that the server is configured to map the /cgi-bin path to the /www/cgi-bin directory.
SCRIPT_NAME       Set to the path portion of the URL, excluding the extra path information. In the same example, it is /cgi-bin/zip.cgi.
QUERY_STRING      Information that follows the '?' in the URL.

It is important to remember that while names of HTTP headers do not depend upon the mechanism (CGI, servlets, etc.), names of the environment variables are specific to the CGI mechanism. Some early servlet runners expected names of the CGI environment variables when retrieving header values, because their implementers were CGI programmers who did not make the distinction. These servlet runners performed internal name transformations according to the rules defined for the CGI mechanism (e.g. Content-Type to CONTENT_TYPE), which was the wrong thing to do. These problems are long gone, and you would be hard pressed to find a servlet runner that does this now, but they illustrate the importance of understanding that names of CGI environment variables have no meaning outside of the CGI context.

The CGI mechanism was defined as a set of rules, so that programs that abide by these rules would run the same way on different types of HTTP servers and operating systems. That works as long as these servers support the CGI specification, which evolved into a suite of specifications for different operating systems. CGI was originally introduced for servers running on UNIX, where a CGI program always executes as a process, with the body of the request available as standard input, and with HTTP headers, URL parameters, and the HTTP method available as environment variables. For Windows NT/2000 servers, the CGI program runs as an application process. With these systems, there is no such thing as 'standard input' (as there is in a UNIX environment), so standard input is simulated using temporary files. Windows environment variables are similar to UNIX environment variables.
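The header-to-environment-variable naming rules can be made concrete with a short sketch. The transformation rules (upper-case, dashes to underscores, HTTP_ prefix, plus the dedicated CONTENT_TYPE and CONTENT_LENGTH variables) follow the CGI convention described above; the function name and its interface are our own illustration, not part of any specification.

```python
def cgi_environment(headers, method, query_string):
    """Build CGI environment variables from request data, following the
    naming rules described in the text: upper-case the header name,
    replace dashes with underscores, and prepend HTTP_."""
    env = {
        "GATEWAY_INTERFACE": "CGI/1.1",
        "REQUEST_METHOD": method,
        "QUERY_STRING": query_string,
    }
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    # Content-Type and Content-Length also get dedicated variables.
    if "HTTP_CONTENT_TYPE" in env:
        env["CONTENT_TYPE"] = env["HTTP_CONTENT_TYPE"]
    if "HTTP_CONTENT_LENGTH" in env:
        env["CONTENT_LENGTH"] = env["HTTP_CONTENT_LENGTH"]
    return env

env = cgi_environment(
    {"User-Agent": "Mozilla/4.75 [en] (WinNT; U)",
     "Content-Type": "application/x-www-form-urlencoded"},
    method="POST", query_string="")
print(env["HTTP_USER_AGENT"])   # Mozilla/4.75 [en] (WinNT; U)
print(env["CONTENT_TYPE"])      # application/x-www-form-urlencoded
```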
Information passing details are different for other operating systems. For example, Macintosh computers pass system information through Apple Events. For simplicity, we use the CGI specification as it applies to UNIX servers for the rest of this section. You would do well to consult server documentation for information passing details for your Web server.

Since the CGI mechanism assumes spawning a new process per request and terminating this process when the request processing is complete, the lifetime of a CGI process is always a single request. This means that even processing that is common to all requests has to be repeated every time. CGI applications may sometimes make use of persistent storage that survives individual requests, but persistence is a non-standard additional service that is outside the scope of what is guaranteed by HTTP servers.

Perl Before Swine

You probably noticed that CGI programs are frequently called 'CGI scripts.' This is because they are often implemented using a scripting language, the most popular


Web Servers

of these being Perl. It is very common to see texts describing the mechanisms of ‘Perl/CGI programming.’ While Perl is extremely popular, it is not the only language available for implementing CGI programs. Since Perl is popular, we shall use it in our examples here. There is no reason why you cannot use C or any other programming language, as long as that language provides access to the message body, headers, and other request information in a manner that conforms to the CGI specification. The advantage of using Perl is that it is portable: scripts written in Perl can execute on any system with a Perl interpreter. The price you pay for using Perl is performance: because Perl scripts are interpreted, they run more slowly than programs written in a compiled language like C.

Figure 4.6 shows a simple HTML page containing a form that lets users specify their names and zip codes. The ACTION parameter of a FORM tag references a server application that can process form information. (It does not make sense to use forms if the ACTION references a static page!) Figure 4.7 shows the HTTP request that is submitted to the server when the user fills out the form and clicks on the 'submit' button.

Notice that the entered information is URL-encoded: spaces are converted to plus signs, while other punctuation characters (e.g. equal signs and ampersands) are transformed into a percent sign ('%') followed by the two-digit hexadecimal equivalent of the replaced character in the ASCII character set. Also, notice that the Content-Type header is set to application/x-www-form-urlencoded, telling the server and/or server applications to expect form data. Do not confuse the Content-Type of the response that caused the browser to render this page (in this case text/html) with the Content-Type associated with the request generated by the browser when it submits the form to the server. Form data submitted from an HTML page, WML page, or an applet would have the same Content-Type: application/x-www-form-urlencoded.

<HTML>
<HEAD>
<TITLE>Simple Form</TITLE>
</HEAD>
<BODY>
<H3>Simple Form</H3>
<FORM METHOD="POST" ACTION="http://mysite.org/cgi-bin/zip.cgi">
Zip Code: <INPUT TYPE="text" NAME="zip">
Name: <INPUT TYPE="text" NAME="name">
<INPUT TYPE="submit" VALUE="Submit">
</FORM>
</BODY>
</HTML>

Figure 4.6

Sample form for submitting user name and zip code
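The URL encoding applied to the submitted form data can be reproduced with Python's standard library, as a quick illustration (this is not from the book; the field values come from the form example above).

```python
from urllib.parse import urlencode, quote_plus

# The form fields from the zip code example. urlencode applies the same
# rules the browser uses: spaces become '+', other reserved characters
# become %XX hexadecimal escapes, and pairs are joined with '&'.
fields = [("zip", "08540"), ("name", "Leon Shklar")]
body = urlencode(fields)
print(body)                # zip=08540&name=Leon+Shklar
print(len(body))           # 26, matching the Content-Length header
print(quote_plus("a=b&c")) # a%3Db%26c : '=' and '&' inside a value are escaped
```

Note that the encoded body is exactly 26 bytes, which is why the request in Figure 4.7 carries a Content-Length of 26.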



POST http://mysite.org/cgi-bin/zip.cgi HTTP/1.1
Host: mysite.org
User-Agent: Mozilla/4.75 [en] (WinNT; U)
Content-Length: 26
Content-Type: application/x-www-form-urlencoded
Remote-Address:
Remote-Host: demo-portable

zip=08540&name=Leon+Shklar

Figure 4.7

HTTP request submitted by the browser for the zip code example in Figure 4.6

The server, having received the request, performs the following steps:

1. Determines that /cgi-bin/zip.cgi has to be treated as a CGI program. This decision may be based on either a configuration parameter declaring /cgi-bin to be a CGI directory, or the .cgi file extension that is mapped to the CGI processing module.

2. Translates /cgi-bin/zip.cgi to a server file system location based on the server configuration (e.g. /www/cgi-bin/zip.cgi).

3. Verifies that the computed file system location (/www/cgi-bin/) is legal for CGI executables.

4. Verifies that zip.cgi has execute permissions for the user id that is used to run the server (e.g. nobody). (This issue is relevant only on UNIX systems, where processes run under the auspices of a designated user id. It may not apply to non-UNIX systems.)

5. Sets environment variables based on the request information.

6. Creates a child process responsible for executing the CGI program, passes it the body of the request in the standard input stream, and directs the standard output stream to the server module responsible for processing the response and sending it back to the browser.

7. On the termination of the CGI program, the response processor parses the response and, if missing, adds the default status code, the default Content-Type, and headers that identify the server and server software.

To avoid errors in processing request parameters, which may be present either in the query string of the request URL (for GET requests) or in the body of the request (for POST requests), CGI applications must decompose the ampersand-separated parameter string into URL-encoded name/value pairs prior to decoding them. The example in Figure 4.8 is a function found in a Perl script, which takes a reference to an associative array and populates it with name/value pairs either from



sub ReadFormFields {
    # set reference to the array passed into ReadFormFields
    my $fieldsRef = shift;
    my($key, $val, $buf_tmp, @buf_parm);

    # Read in form contents
    $buf_tmp = " ";
    read(STDIN, $buf_tmp, $ENV{'CONTENT_LENGTH'});
    $buf_tmp = $ENV{QUERY_STRING} if (!$buf_tmp);
    @buf_parm = split(/&/, $buf_tmp);

    # Split form contents into tag/value associative array
    # prior to decoding
    foreach $parm (@buf_parm) {
        # Split into key and value
        ($key, $val) = split(/=/, $parm);
        # Change +'s to spaces and restore all hex values
        $val =~ s/\+/ /g;
        $val =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/ge;
        # Use \0 to separate multiple entries per field name
        $$fieldsRef{$key} .= '\0' if (defined($$fieldsRef{$key}));
        $$fieldsRef{$key} .= $val;
    }
    return($fieldsRef);
}

Figure 4.8

Form and Query String processing in CGI code

the request URL's query string (for a GET request) or from the body of the HTTP request (for a POST request). For forms found in HTML pages, the choice of request method (GET or POST) is determined by the METHOD parameter in the FORM tag. In either case, the request parameters consist of ampersand-separated URL-encoded name/value pairs. In our example, the browser composed the body of the request from information populated in the form found in Figure 4.6: zip=08540&name=Leon+Shklar.

The CGI script that invokes the ReadFormFields function passes it a reference to an associative array. The read command is used to read the number of bytes defined by the Content-Length header from the body of the request. Note that read would block when attempting to read more bytes than are available from the body of the request, which will not happen if the Content-Length header is set properly. Real applications should take precautions to ensure proper timeouts, etc.
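A Python analog of ReadFormFields makes the split-before-decode rule easy to see. This is an illustrative sketch (function name ours), mirroring the Perl logic above: separators are located first, and URL-decoding happens only afterwards.

```python
from urllib.parse import unquote_plus

def read_form_fields(body):
    """Parse an application/x-www-form-urlencoded request body the way
    ReadFormFields does: split on '&' and '=' FIRST, and only then
    URL-decode each piece, so that an encoded '&' or '=' inside a
    value cannot be mistaken for a separator."""
    fields = {}
    for pair in body.split("&"):
        key, _, value = pair.partition("=")
        key, value = unquote_plus(key), unquote_plus(value)
        if key in fields:
            # Multiple entries per field name, separated by \0,
            # as in the Perl version.
            fields[key] += "\0" + value
        else:
            fields[key] = value
    return fields

print(read_form_fields("zip=08540&name=Leon+Shklar"))
# {'zip': '08540', 'name': 'Leon Shklar'}
print(read_form_fields("q=a%3Db%26c"))
# {'q': 'a=b&c'} : decoding after splitting keeps the value intact
```

If the second example were decoded before splitting, the value "a=b&c" would be misread as two separate parameters.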



Having read the body of the request into the $buf_tmp variable, the next step is to split it into parts using the ampersand as the separator, and to use the foreach loop to split those parts into keys and values along the '=' signs. Every value has to be URL-decoded, which means that '+' signs need to be turned back into spaces and three-character control sequences (e.g. %27) need to be translated back into the original characters. It is very important that keys and values are separated prior to URL-decoding, to avoid confusing the splitting operations with '&' and '=' signs that may be contained in the values. (For example, a parameter value may itself contain an encoded equal sign or ampersand, and URL-decoding before separation could cause a misinterpretation of the parameters.)

Figure 4.9 contains a sample CGI program that uses the ReadFormFields function to retrieve key/value pairs from request bodies. The PrintFormFields function simply prints the Content-Type header and the HTML document with key/value pairs. The empty associative array fields is populated by ReadFormFields and then passed to the PrintFormFields function. CGI programs may output status codes and HTTP headers, but the HTTP server is responsible for augmenting the output to make it a legitimate HTTP response. In the example, the server has to add the status line and may include additional

#!/usr/local/bin/perl

sub ReadFormFields {
    ...
}

sub PrintFormFields {
    my $fieldsRef = shift;
    my $key, $value;

    print "Content-Type: text/html\n\n";
    print "<HTML>\n<HEAD><TITLE>hello</TITLE></HEAD>\n";
    print "<BODY>\n";
    foreach $key (keys(%$fieldsRef)) {
        $value = $$fieldsRef{$key};
        print "$key: $value\n";
    }
    print "</BODY>\n</HTML>\n";
}

&ReadFormFields(\%fields);
&PrintFormFields(\%fields);
exit 0;

Figure 4.9

Printing parameters in CGI code



HTTP headers, including Content-Length, Date, and Server. Most of the time this saves CGI programmers unnecessary hassle, but there are situations where you would rather know exactly what is being sent back to the client without letting the server touch the CGI output. Such situations are rare, but they do exist and may become a great source of frustration. To avoid this problem, CGI designers introduced the 'no-parse-header' condition, which requires that HTTP servers leave alone the output of CGI scripts whose names start with 'nph-'.

There are many commercial and open source Perl packages that insulate CGI programmers from the HTTP protocol. There is nothing wrong with using convenience functions, but it is important to understand the underlying data structures and protocols. Without that, finding and fixing problems may turn out to be very difficult. Moreover, understanding HTTP and the mappings between the HTTP and CGI specifications simplifies learning other protocols and APIs for building Internet applications.

Finally, you can use any language, not only Perl. The CGI specification makes no assumptions about the implementation language, and as long as you access environment variables and standard input (or their equivalents for non-UNIX operating systems), you can use any language you want. Nevertheless, the majority of CGI applications are implemented in Perl. This is no surprise, since you would use the CGI mechanism for its simplicity, not its performance. The additional overhead of interpreting a Perl program should not matter that much when balanced against the convenience of using an interpreted scripting language.

As a server implementer, you are responsible for detecting CGI requests, starting a new child process for each CGI request, passing request information to the newly initiated process using environment variables and the input stream, and post-processing the response.
Response processing, which may include adding missing headers and the status code, does not apply to no-parse-header requests.
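The server-side sequence of spawning a CGI child process, passing request data via environment variables and standard input, and post-processing the output can be sketched as follows. This is an illustrative sketch only: the helper names are ours, and for a self-contained demo the stand-in CGI script is written in Python rather than Perl.

```python
import os
import subprocess
import sys
import tempfile

def run_cgi(script_path, env_vars, request_body):
    """Spawn a child process for a CGI script, pass request data via
    environment variables and standard input, and capture its output
    for post-processing."""
    env = dict(os.environ)
    env.update(env_vars)
    result = subprocess.run(
        [sys.executable, script_path],   # a Python CGI script, for portability
        input=request_body, env=env,
        capture_output=True, check=True)
    output = result.stdout
    # Post-processing: unless the script name starts with 'nph-',
    # the server prepends a status line to the CGI output.
    if not os.path.basename(script_path).startswith("nph-"):
        output = b"HTTP/1.1 200 OK\r\n" + output
    return output

# A tiny stand-in CGI script, written to a temporary file for the demo.
script = os.path.join(tempfile.mkdtemp(), "zip.cgi")
with open(script, "w") as f:
    f.write("import os, sys\n"
            "body = sys.stdin.read()\n"
            "print('Content-Type: text/plain\\n')\n"
            "print('method:', os.environ['REQUEST_METHOD'])\n"
            "print('body:', body)\n")

out = run_cgi(script, {"REQUEST_METHOD": "POST"},
              b"zip=08540&name=Leon+Shklar")
print(out.decode().splitlines()[0])   # HTTP/1.1 200 OK
```

A script named nph-zip.cgi would bypass the status-line step, matching the no-parse-header convention described above.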

SSI mechanism

The Server Side Includes (SSI) specification dates back almost as far as the CGI specification. It provides mechanisms for including auxiliary files (or the results of the execution of CGI scripts) into an HTML page. The original specification for SSI can be found at http://hoohoo.ncsa.uiuc.edu/docs/tutorials/includes.html.

Let us look at the sample CGI script in Figure 4.9. The PrintFormFields function outputs HTML tags, and if you want to change the HTML, you have to go back and change the code. This is not at all desirable, since it greatly complicates maintenance. The most obvious solution is to create partially populated HTML pages, or templates, and fill in the blanks with the output of CGI scripts and, perhaps, other server-side operations. SSI is not a replacement for CGI, but it is an easy way to add dynamic content to pages without a lot of work.



SSI macros must have the following format:

<!--#command attribute1="value1" attribute2="value2" -->

The syntax is designed to place SSI commands within HTML comments, ensuring that unprocessed commands are ignored when the page is sent to the browser. Valid commands include 'config' for controlling the parsing, 'echo' for outputting values of environment variables, 'include' for inserting additional files, 'fsize' for outputting file size information, and the most dangerous, 'exec', for executing server-side programs. The popular use of the 'exec' command is to invoke CGI scripts, as in "exec cgi http://mysite.org/cgi-bin/zip.cgi", but it may also be used to run other server-side programs.

Using SSI to simplify the CGI script in Figure 4.9, we eliminate the need to print out the Content-Type header and the static part of the page. In the SSI example in Figure 4.10, the shorter CGI script is invoked to fill in the blanks in the page, and the server uses the file extension to set the Content-Type. You can refer to the URL of an SSI page in the action attribute of the form tag (e.g. instead of the CGI URL in Figure 4.6), but only if you change the request method to GET. The server will produce an error if you try POST, which is not at all surprising, since the CGI specification requires that bodies of POST requests be passed to CGI scripts as standard input. It is not clear what it means to pass the standard input stream to an SSI page. Making it a requirement for the server to pass bodies of POST requests to CGI scripts referenced in SSI pages would have complicated the implementation of the

#!/usr/local/bin/perl

sub ReadFormFields {
    ...
}

sub PrintFormFields {
    my $fieldsRef = shift;
    my $key, $value;

    foreach $key (keys(%$fieldsRef)) {
        $value = $$fieldsRef{$key};
        print "$key: $value\n";
    }
}

&ReadFormFields(\%fields);
&PrintFormFields(\%fields);
exit 0;

Figure 4.10

Using SSI instead of the CGI program from Figure 4.9



SSI mechanism. Moreover, SSI pages may include many 'exec cgi' instructions, and it is unclear what it means to pass the same input stream to multiple CGI scripts. A sample SSI page is shown below:

<HTML>
<HEAD>
<TITLE>hello</TITLE>
</HEAD>
<BODY>
<!--#exec cgi="/cgi-bin/zip.cgi" -->
</BODY>
</HTML>

As you no doubt remember, servers do not parse static pages; it is browsers that are responsible for parsing pages and submitting additional requests for images and other embedded objects. This is not possible for SSI: the server cannot discover and execute SSI macros without parsing pages. In practice, pages containing SSI macros are assigned different file extensions (e.g. .shtml) to indicate the different kind of processing. Later, when we discuss server configuration, we shall look at how different file extensions may be associated with different server-side processing modules.

CGI scripts that are invoked within SSI pages have access to additional context information that is not available in the standalone mode. The context information is passed through environment variables; DOCUMENT_NAME, DOCUMENT_URI, and LAST_MODIFIED describe the SSI page, while other environment variables (QUERY_STRING_UNESCAPED, DATE_LOCAL, DATE_GMT) are primarily a matter of convenience.

The output of a standalone CGI script is sent to the browser after the server has its final say in filling in the gaps: default values for required HTTP headers and the status code. When the CGI script is invoked from an SSI page, the server does not perform any error checking on the output. Responses that include the Location header are transformed into HTML anchors, but other than that, response bodies get included in the page no matter what the Content-Type of the response. You have to be careful, or you can end up with horrible GIF binaries mixed up with your HTML tags.

The SSI mechanism provides a simple and convenient means of adding dynamic content to existing pages without having to generate the entire page. Nothing comes free, and the price of convenience in using SSI is both the additional load on the server and security worries, since fully enabling SSI means allowing page owners to execute server-side programs.
The security concerns led server administrators to impose very serious limitations on the SSI mechanism, which in turn limits the portability of SSI pages. If portability and performance are not major concerns, SSI may be a convenient way to implement and maintain applications that collect information from different sources.
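A minimal sketch (not from the book) shows how a server-side SSI processor might discover and expand directives while parsing a page. Only the 'echo' and 'include' commands are handled here, and the regular expression and function names are illustrative; a real implementation also handles 'config', 'fsize', and the dangerous 'exec'.

```python
import os
import re

SSI_DIRECTIVE = re.compile(r'<!--#(\w+)\s+(\w+)="([^"]*)"\s*-->')

def process_ssi(page, base_dir):
    """Expand a minimal subset of SSI directives ('echo' and 'include')
    found in a page; unknown directives are left untouched, just as an
    unprocessed SSI comment would be ignored by the browser."""
    def expand(match):
        command, attr, value = match.groups()
        if command == "echo" and attr == "var":
            return os.environ.get(value, "(none)")
        if command == "include" and attr == "file":
            with open(os.path.join(base_dir, value)) as f:
                return f.read()
        return match.group(0)   # unknown directive: leave as-is
    return SSI_DIRECTIVE.sub(expand, page)

# Demo: DOCUMENT_NAME stands in for the context variable the server
# would set for the page being processed.
os.environ["DOCUMENT_NAME"] = "demo.shtml"
page = '<p>Page: <!--#echo var="DOCUMENT_NAME" --></p>'
print(process_ssi(page, "."))   # <p>Page: demo.shtml</p>
```

Note that the server must read and scan the whole page to do this, which is exactly the extra parsing cost (and the reason for the distinct .shtml extension) discussed above.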

Advanced Mechanisms for Dynamic Content Delivery


4.2 ADVANCED MECHANISMS FOR DYNAMIC CONTENT DELIVERY

Even after you have built a Web server that performs the basic tasks discussed in Section 4.1, there is still much to do. In this section, we discuss alternative mechanisms for building server-side applications. In subsequent sections, we discuss advanced features including server configuration and security.

4.2.1 Beyond CGI and SSI

CGI is a simple mechanism for implementing portable server-side applications, and it is employed ubiquitously throughout the Web. However, there are a number of problems associated with CGI processing. Its main deficiency is performance. Processing a request that invokes a CGI script requires the spawning of a child process to execute that script (plus another process if the script is written in an interpreted language such as Perl). Moreover, any initialization and other processing that might be common to all requests must be repeated for every single request.

SSI has similar deficiencies when its command processing employs CGI under the hood, and it adds a further performance penalty by requiring servers to parse SSI pages. Most importantly, SSI may represent a serious security risk, especially when not configured carefully by the server administrator. The SSI mechanism is not scalable, and provides only limited opportunities for reuse. With this in mind, a number of other approaches to dynamic content processing arose in the Web server environment.

4.2.2 Native APIs (ISAPI and NSAPI)

Efficiency concerns may be addressed by using native server APIs. A native API is simply a mechanism providing direct 'hooks' into the Web server's application programming interface. Use of a native API implies the use of compiled code that is optimized for use within the context of a specific Web server environment. NSAPI and ISAPI are two such approaches, employed by Netscape's Web server software and Microsoft's IIS, respectively. The problem is that there is no commonality or consistency amongst these native APIs. They are different from each other, and code written for one environment cannot be reused in another. This makes it impossible to implement portable applications.

4.2.3 FastCGI

FastCGI is an attempt to combine the portability of CGI applications with the efficiency of non-portable applications based on server APIs. The idea is simple:



instead of requiring the spawning of a new process every time a CGI script is to be executed, FastCGI allows processes associated with CGI scripts to 'stay alive' after a request has been satisfied. This means that new processes do not have to be spawned again and again, since the same process can be reused by multiple requests. These processes may be initialized once, without endlessly re-executing initialization code.

Server modules that enable FastCGI functionality talk to HTTP servers via their own APIs. These APIs attempt to hide the implementation and configuration details from FastCGI applications, but developers still need to be aware of the FastCGI implementation, as the various modules are not compatible with each other. Therein lies the problem: to ensure true portability, FastCGI functionality has to be supported across the board, in a consistent and compatible fashion, for all the different HTTP servers. The failure of FastCGI modules to proliferate across the mainstream HTTP servers was the main cause of FastCGI's eventual disappearance from the server-side application scene. Perhaps it would have worked, except that a much better technology (servlets) came along before FastCGI gained universal support, and FastCGI went the way of the dinosaurs.

4.2.4 Template processing

Another approach used to serve dynamic content involves the use of template processors. In this approach, templates are essentially HTML files with additional 'tags' that prescribe methods for inserting dynamically generated content from external sources. The template file contains HTML that provides general page layout parameters, with the additional tags discretely placed so that content is placed appropriately on the rendered page. Among the most popular template approaches are PHP (an open source product), Cold Fusion (from Allaire/Macromedia), and Active Server Pages or ASP (from Microsoft).

To some degree, advanced template processing approaches could be considered 'SSI on steroids'. While SSI directives can perform simple tasks such as external file inclusion and the embedding of CGI program output within a page, advanced template processors provide sophisticated functionality. This functionality, which is found in many programming and scripting languages, includes:

• submitting database queries,
• iterative processing (analogous to repetitive 'for-each' looping), and
• conditional processing (analogous to 'if' statements).

The example in Figure 4.11 employs one of the popular template approaches, Allaire/Macromedia's Cold Fusion, and demonstrates each of these functions. (Note that Cold Fusion's special tags look like HTML tags, but begin with CF.) The CFQUERY block tag describes a database query to be executed. The CFIF block

Advanced Mechanisms for Dynamic Content Delivery


<CFQUERY NAME="query1" DATASOURCE="ds">
  SELECT id, columnX, columnY, columnZ
  FROM TABLE1
  WHERE id = #substitution-parameter#
</CFQUERY>
<CFIF query1.RecordCount GT 0>
  <TABLE>
    <CFOUTPUT QUERY="query1">
      <TR><TD>#columnX#</TD><TD>#columnY#</TD><TD>#columnZ#</TD></TR>
    </CFOUTPUT>
  </TABLE>
</CFIF>

Figure 4.11

Sample template (Cold Fusion)

tag delimits a section that should only be included in the resulting page if the result set returned by the query was not empty (record count greater than zero). Within that block, the <CFOUTPUT> block tag specifies the contents of an HTML table row that should be repeated (with proper value substitution) for each row in the query's result set. (Note that text found within Cold Fusion block tags that is delimited by pound signs indicates a substitution parameter, e.g. #text#.) See Figure 4.11.

The advantage of this approach is that, ostensibly, templates can be created and maintained by page designers, who have a background in HTML and web graphics but are not programmers. Special tags that are 'extensions' to HTML are considered similar enough to SSI tags to put their usage within the grasp of an average page designer. Employing these tags requires less expertise than writing code. The problem is that the more sophisticated these template approaches get, the more they begin to resemble programming languages, and the more likely it becomes that this perceived advantage of simplicity will not be realized. Some template approaches provide advanced functionality by allowing scripting within the template, but this only blurs the line between scripts and templates. It also means that, in all likelihood, two sets of people will need to be responsible for building (and maintaining) the template: people with web design skills and people with programming skills.

Separation of Content and Presentation

We hear a lot about the notion of 'separating content from presentation'. It is often taken as an axiom handed down from the mountaintop. However, we should know why it is so important.


Web Servers

The example described above provides one reason. When you mix the logic required to retrieve content with the design parameters associated with its presentation, both logic and design elements are contained within the same module. Who is responsible for maintaining this module? A programmer, who may not be skilled in the fine art of page design, or a web designer, who may not be an expert when it comes to programming? What would be ideal is an approach that explicitly and naturally separates data access logic from presentation. Programmers would maintain the logic component, which encapsulates access to desired data. Designers would maintain the presentation component, though the target format need not be HTML. Designers could create different presentations for different audiences, without affecting the data access logic. Likewise, logic of the data access component could be changed, but as long as it provides the same interface to the data, presentation components do not need to be modified. Moreover, designers and programmers would not “step on each other” while maintaining the same module. The model-view-controller (MVC) design pattern provides precisely such an approach. We shall be discussing an approach that can be used to implement this pattern later in the book.

4.2.5 Servlets

A better approach to serving dynamic content is the Servlet API—a Java technology for implementing applications that are portable not only across different servers but also across different hardware platforms and operating systems. Like FastCGI, the servlet API uses server application modules that remain resident and reusable, rather than requiring the spawning of a new process for every request. Unlike FastCGI, the servlet API is portable across servers, operating systems, and hardware platforms: servlets execute the same way in any environment that provides a compliant servlet runner. The servlet API has generated a very strong following; it is widely used in a variety of Web server environments.

Implementers of Java servlets do not need any knowledge of the underlying servers and their APIs. Interfacing with the server API is the responsibility of servlet runners, which include a Java Virtual Machine and are designed to communicate with host HTTP servers. A servlet runner does this either by talking directly to the server API through the Java Native Interface (JNI), or by running in stand-alone mode and listening on an internal port for servlet requests that are redirected from general-purpose HTTP servers.

Servlets are Java programs that have access to information in HTTP requests and that generate HTTP responses that are sent back to browsers and proxies. Remember the CGI program that was shown in Figures 4.9 and 4.10? Let us see how the same functionality can be implemented as a servlet (Figure 4.12).



As you can see, methods defined on the HttpServletRequest and HttpServletResponse interfaces take care of extracting and decoding parameters and setting response headers. Notice that the HttpServlet class has different methods (doGet() and doPost()) for different HTTP methods. In this example, we want to retrieve parameters passed in both GET and POST requests. We have the luxury of using exactly the same code in both cases—the getParameterNames and getParameter methods adjust their behavior depending on the type of request, and protect programmers from having to know whether to retrieve parameters from the query string or from the body of the request. We shall come back to servlets later in the book. It is worth noting that, unlike a CGI script, a servlet is capable of handling multiple requests concurrently. Servlets may also forward requests to other servers and servlets. Of course, forwarding a request to another server is accomplished using HTTP, even though programmers stick to method calls defined in the servlet API.

4.2.6 Java Server Pages

The Java Server Pages (JSP) mechanism came about as Sun's response to Microsoft's own template processing approach, Active Server Pages. JSP was originally intended to relieve servlet programmers of the tedium of generating static HTML or XML markup through Java code. Today's JSP processors take static markup pages with embedded JSP instructions and translate them into servlets, which then get compiled into Java byte code. More precisely, JSP 1.1-compliant processors generate Java classes that extend the HttpJspBase class, which implements the Servlet interface. What this means is that JSP serves as a pre-processor for servlet programmers. The resulting classes are compiled modules that execute faster than a processor that interprets templates at request time.

In contrast with earlier template approaches, most of which used proprietary tags, JSP instructions are formatted as XML tags, combined with Java code fragments. Together, these tags and code fragments express the logic to transform JSP pages into the markup used to present the desired content. It is not necessary for all application logic to be included in the page—the embedded code may reference other server-based resources. JSP's ability to reference server-based components is similar to SSI support for referencing CGI scripts. It helps to separate the page logic from its look and feel, and supports a reusable component-based design.

To illustrate (Figure 4.13), the task of displaying the parameters of HTTP requests just got much simpler compared to the servlet example in Figure 4.12. Notice that we do not have to separately override the doGet and doPost methods, since the HttpJspBase class is designed to override the service method (doGet and doPost are invoked from HttpServlet's service method). Unlike servlets, the JSP technology is designed for a much wider audience, since it does not require the same level of programming expertise. Moreover, we do not



import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class FormServlet extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>\n<head><title>hello</title></head>");
        out.println("<body>");
        Enumeration e = request.getParameterNames();
        while (e.hasMoreElements()) {
            String name = (String)e.nextElement();
            String value = request.getParameter(name);
            out.println("<p>" + name + ": " + value + "</p>");
        }
        out.println("</body>\n</html>");
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        doGet(request, response);
    }
}

Figure 4.12

Parameter processing in Servlets

<%@ page import="java.util.*" %>
<html>
<head><title>hello!</title></head>
<body>
<% Enumeration e = request.getParameterNames();
   while (e.hasMoreElements()) {
       String name = (String)e.nextElement(); %>
<p><%= name %>: <%= request.getParameter(name) %></p>
<% } %>
</body>
</html>

Figure 4.13

Parameter processing in JSP



have to pay as high a price as we did when using SSI technology instead of CGI scripts. As you remember, using SSI meant additional parsing overhead and issues with security. This is not a problem with JSP pages that are translated into servlets, which, in turn, are compiled into Java byte code.

4.2.7 Future directions

Sun sees the use of servlets and JSP as a next-generation approach to web application development. But in and of itself, the combination of servlets with JSPs does not enforce or even encourage a truly modular approach to writing server-side applications. The example described in the previous section does not decouple data access logic from presentation logic; in fact, it intermixes them excessively. Still, this combination can be used effectively to implement the Model-View-Controller design pattern, which specifically enforces separation of content from presentation in a methodical, modular way.

Sun refers to this approach as JSP Model 2. It involves the use of a controlling 'action' servlet (the Controller component) that interfaces with JavaBeans that encapsulate access to data (the Model component), presenting the results of this processing through one or more Java Server Pages (the View component). Strict employment of this design pattern ensures that there is true separation of content and presentation. The controlling action servlet routes the request to ensure the execution of appropriate tasks. The JavaBeans (referenced in JSP "useBean" tags) encapsulate access to underlying data. JSPs refer to discrete data elements exposed in the beans, allowing those data elements to be presented as desired.

Part of the elegance of this approach is the flexibility of the presentation component. The same application can serve data to be presented on a variety of platforms, including HTML browsers running on desktop computers and WML applications running on handheld devices. Different JSPs can tailor the presentation for the appropriate target platforms. Multiple view components could be developed for the same target platform to enable personalized/customized presentations.
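The division of labor that Model 2 prescribes can be illustrated in plain Java, stripped of the servlet and JSP machinery. All class and method names below are invented for illustration; a real Model 2 application would use an action servlet, JavaBeans, and JSPs as described above.

```java
import java.util.Arrays;
import java.util.List;

public class Model2Sketch {

    // Model: encapsulates access to the underlying data
    static class NewsModel {
        List<String> headlines() { return Arrays.asList("story one", "story two"); }
    }

    // View: presentation only; one implementation per target platform
    interface View { String render(NewsModel model); }

    static class HtmlView implements View {
        public String render(NewsModel m) {
            StringBuilder out = new StringBuilder("<ul>");
            for (String h : m.headlines()) out.append("<li>").append(h).append("</li>");
            return out.append("</ul>").toString();
        }
    }

    static class WmlView implements View {
        public String render(NewsModel m) {
            return "<card>" + String.join(" / ", m.headlines()) + "</card>";
        }
    }

    // Controller: examines the request and routes it to the appropriate view
    static String handle(String userAgent, NewsModel model) {
        View view = userAgent.contains("WAP") ? new WmlView() : new HtmlView();
        return view.render(model);
    }

    public static void main(String[] args) {
        System.out.println(handle("Mozilla/4.0", new NewsModel()));
        System.out.println(handle("WAP-microbrowser", new NewsModel()));
    }
}
```

Swapping in a new view, or changing how the model obtains its data, never touches the other two components—which is exactly the separation the pattern promises.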
The Struts Application Framework (developed by the Apache Group as part of its Jakarta Project) provides a set of mechanisms to enable the development of Web applications using this paradigm. Through a combination of Java classes, JSP tag libraries, and action mappings specified in XML configuration files, Struts provides a means to achieve the goal of truly modular and maintainable Web applications. The ultimate goal would be a truly declarative framework, in which all components could be specified and no coding would be required. Although we are not there yet, existing mechanisms like JSP Model 2 and Struts are moving us further along the path to that goal.



4.3 ADVANCED FEATURES

Historically, server evolution has gone hand-in-hand with the evolution of the HTTP protocol. Looking at the changes introduced in the HTTP/1.1 specification, some are attempts to unify and legitimize proprietary extensions that were implemented in HTTP/1.0 servers, while others are genuinely new features that fill the need for extended functionality. For example, some HTTP/1.0 servers supported the Connection: Keep-Alive header even though it was never part of the HTTP/1.0 specification. Unfortunately, for it to work properly it was necessary for every proxy between the server and the browser, and of course the browser itself, to support it as well. As we already discussed in Chapter 3, HTTP/1.1-compliant servers, browsers, and proxies have to assume that connections are persistent unless told otherwise via the Connection: Close header. Examples of new features include virtual hosting, chunked transfers, and informational (1xx) status codes.

4.3.1 Virtual hosting

As we already discussed in the section dedicated to HTTP evolution, virtual hosting is the ability to map multiple server and domain names to a single IP address. The lack of support for such a feature in HTTP/1.0 was a glaring problem for Internet Service Providers (ISPs). After all, it is needed whenever you register a new domain name and want your ISP to support it. HTTP/1.1 servers have a number of responsibilities with regard to virtual hosting:

1. Use information in the required Host header to identify the virtual host.

2. Generate error responses with the proper 400 Bad Request status code in the absence of the Host header.

3. Support absolute URLs in requests, even though there is no requirement that the server identified in the absolute URL match the Host header.

4. Support isolation and independent configuration of document trees and server-side applications between different virtual hosts that are supported by the same server installation.

Most widely used HTTP/1.1 servers support virtual hosting. They make the common distinction between physical and logical configuration parameters. Physical configuration parameters are common to all virtual hosts; they control listening ports, server processes, limits on the number of simultaneously processed requests and the number of persistent connections, and other physical resources. Logical parameters may differ between virtual hosts; they include the location and configuration of the document tree and server-side applications, directory access options, and MIME type mappings.
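In Apache terms, the physical/logical split looks like the fragment below (host names and paths are invented for illustration): directives outside the VirtualHost blocks apply to every host, while each block carries its own logical parameters.

```apache
# Physical parameters: shared by all virtual hosts
Port 80
MaxRequestsPerChild 200

# Logical parameters: one set per virtual host
<VirtualHost www.site-one.example>
    ServerName   www.site-one.example
    DocumentRoot /www/docs/site-one
</VirtualHost>

<VirtualHost www.site-two.example>
    ServerName   www.site-two.example
    DocumentRoot /www/docs/site-two
</VirtualHost>
```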



4.3.2 Chunked transfers

Chances are there have been occasions when you spent long minutes sitting in front of your browser waiting for a particularly slow page. It could be because of a slow connection, or it could be that the server application is slow. Either way, you have to wait, even though all you may need is a quick look at the page before you move on. The HTTP/1.1 specification introduced the notion of transfer encodings, as well as the first such encoding—chunked—which is designed to enable the processing of partially transmitted messages. According to the specification, the server is obligated to decode HTTP requests containing the Transfer-Encoding: chunked header prior to passing them to server applications. A similar obligation is imposed on the browser, as will be discussed in the next chapter. Server applications, of course, may produce chunked responses, which are particularly recommended for slow applications.

Figure 4.14 demonstrates a sample HTTP response—note the Transfer-Encoding: chunked header indicating the encoding of the body. The first line of the body starts with a hexadecimal number indicating the length of the first chunk ('1b', or decimal 27), followed by an optional comment preceded by a semicolon. The next line contains exactly 27 bytes and is followed by another line containing the length of the second chunk ('10', or decimal 16). The second chunk is followed by a line containing '0' as the length of the next chunk, which indicates the end of the body. The body may be followed by additional headers—they are actually called footers, since they follow the body; their role is to provide information about the body that may not be available until body generation is complete. It may seem counter-intuitive that a browser request would be so huge as to merit separating it into chunks, but think about file transfer using PUT or POST requests.
It gets a bit more interesting with POST requests—try defining an HTML form with several input tags, at least one of which refers to a file; upon submitting the form, the browser creates a request with the Content-Type header set to multipart/form-data.

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

1b; Ignore this
abcdefghijklmnopqrstuvwxyza
10
1234567890abcdef
0
a-footer: a-value
another-footer: another value

Figure 4.14

Chunked transfer



Chunked encoding may not be applied across different body parts of a multipart message, but the browser may apply it to any body part separately, e.g. the one containing the file. Chunked encoding is a powerful feature, but it is easy to misuse it without achieving any benefit. For example, suppose you are implementing a server application that generates a very large image file and zips it up. Even zipped up, it is still huge, so you think that sending it in chunks may be helpful. Well, let us think about it—the browser receives the first chunk and retrieves its content, only to realize that it needs all the remaining chunks to unzip the file before attempting to render it. We have wasted the effort of encoding and decoding the body without obtaining any substantial benefit.
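The decoding obligation described in this section can be sketched in a few lines of stand-alone Java. This is a simplification, not production-quality HTTP code: it trusts its input, and it stops at the zero-length chunk without parsing any footers.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ChunkedDecoder {

    // Reassemble the payload of a chunked-encoded message body.
    static String decode(BufferedReader in) {
        StringBuilder body = new StringBuilder();
        try {
            String line;
            while ((line = in.readLine()) != null) {
                // Chunk-size line: hex length, optionally followed by ";" and a comment
                int semi = line.indexOf(';');
                String hex = (semi >= 0 ? line.substring(0, semi) : line).trim();
                if (hex.isEmpty()) continue;
                int size = Integer.parseInt(hex, 16);
                if (size == 0) break;              // zero-length chunk ends the body
                char[] chunk = new char[size];
                int off = 0;
                while (off < size) off += in.read(chunk, off, size - off);
                body.append(chunk);
                in.readLine();                     // consume the line break after the chunk
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return body.toString();                    // footers, if any, are left unread
    }

    public static void main(String[] args) {
        // The body from Figure 4.14: a 27-byte chunk, a 16-byte chunk, the terminator
        String msg = "1b; Ignore this\r\nabcdefghijklmnopqrstuvwxyza\r\n"
                   + "10\r\n1234567890abcdef\r\n0\r\n";
        System.out.println(decode(new BufferedReader(new StringReader(msg))));
    }
}
```

Run against the Figure 4.14 body, the two chunks come back reassembled as a single 43-byte payload.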

4.3.3 Caching support

Caching is one of the most important mechanisms in building scalable applications. Server applications may cache intermediate results to increase efficiency when serving dynamic content, but such functionality is beyond the responsibility of HTTP servers. In this section, we concentrate our discussion on server obligations in support of browser caching, as well as server controls over browser caching behavior.

Prior to HTTP/1.1, the majority of browsers implemented very simplistic caching policies—they cached only pages they recognized as static (e.g. only pages received in response to requests initiated through anchor tags as opposed to forms, and only those with certain file extensions). Once stored, the cache entries were not verified for a fixed period, short of an explicit reload request. There were, of course, problems with implementing more advanced caching strategies:

1. On-request verification of cache entries meant doubling the number of requests for modified pages by using HEAD requests. As you remember from our earlier HTTP discussion, HEAD requests result in response messages with empty bodies. At best, such responses contained enough information to decide whether to submit GET requests.

2. HTTP/1.0 servers, as a rule, did not include the Last-Modified header in response messages, making it much harder to check whether cache entries remained current. Verification had to be based on unreliable heuristics (e.g. changes in content length).

3. There was no strict requirement for HTTP/1.0 servers to include the Date header in their responses (even though most did), making it harder to properly record cache entries.

HTTP/1.1 requires servers to comply with the following requirements in support of caching policies:



1. HTTP/1.1 servers must perform cache entry verification when receiving requests that include If-Modified-Since and If-Unmodified-Since headers set to a date in the GMT format (e.g. Sun, 23 Mar 1997 22:15:51 GMT). Servers must ignore invalid and future dates, and must attempt to generate the same response they would in the absence of these headers if the condition is satisfied (content was modified in the case of the If-Modified-Since header, or not modified in the case of the If-Unmodified-Since header). Servers are also responsible for generating proper status codes for failed conditions (304 Not Modified and 412 Precondition Failed, respectively).

2. It is recommended that server implementers make an effort to include the Last-Modified header in response messages whenever possible. Browsers use this value to compare against dates stored with cache entries.

3. Unlike HTTP/1.0, HTTP/1.1 servers are required to include the Date header with every response, which makes it possible to avoid errors that may occur when browsers rely on their own clocks.

It is not reasonable to expect servers to implement caching policies for dynamic content—this remains the responsibility of server applications. HTTP/1.1 provides applications with much finer controls than HTTP/1.0 (Section 3.4.2). Depending on the processing mechanism (CGI, Servlet API, etc.), cache control headers may be generated either directly by applications or by the enabling mechanism based on API calls, but the headers are still the same. Understanding what headers are generated when you call a particular method, and how these headers affect browser behavior, is the best way to develop good intuition for any API. As you know, intuition is invaluable in designing good applications and finding problems.
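The verification rules in item 1 reduce to a pair of date comparisons. The sketch below is illustrative only (the class and method names are invented, and it assumes invalid and future dates have already been filtered out, as the specification requires); it returns the status code a server should use.

```java
import java.util.Date;

public class CacheValidation {

    // If-Modified-Since: send the full response only if content changed after the given date.
    static int ifModifiedSince(Date lastModified, Date headerDate) {
        if (headerDate == null) return 200;                   // unconditional request
        return lastModified.after(headerDate) ? 200 : 304;    // 304 Not Modified
    }

    // If-Unmodified-Since: fail the request if content HAS changed after the given date.
    static int ifUnmodifiedSince(Date lastModified, Date headerDate) {
        if (headerDate == null) return 200;
        return lastModified.after(headerDate) ? 412 : 200;    // 412 Precondition Failed
    }

    public static void main(String[] args) {
        Date modified = new Date(2000L), header = new Date(3000L);
        // Content unchanged since the header date: the entry is still valid
        System.out.println(ifModifiedSince(modified, header));
    }
}
```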

4.3.4 Extensibility

Real HTTP servers vary in the availability of optional built-in components that support the execution of server-side applications. They also differ in their implementation of optional HTTP methods—that is, all methods except GET and HEAD. Fortunately, they provide server administrators with ways to extend the default functionality (Section 4.4.5). As will be discussed later in this chapter, server functionality may be extended in a variety of ways—from implementing optional HTTP methods to adding custom support mechanisms for building server applications.

4.4 SERVER CONFIGURATION

Web server behavior is controlled by its configuration. While the details of configuring a Web server differ greatly between implementations, there are important



common concepts that transcend server implementations. For example, any HTTP server has to be configured to map file extensions to MIME types, and any server has to be configured to resolve URLs to addresses in the local file system. For the purposes of this section, we make use of Apache configuration examples. Note that we refer to Apache configuration as a case study and have no intention of providing an Apache configuration manual, which is freely available from the Apache site anyway. Instead, we concentrate on the concepts, and it remains your responsibility to map these concepts to the configuration of your own servers.

4.4.1 Directory structure

An HTTP server installation directory is commonly referred to as the server root. Most often, other directories (document root, configuration directory, log directory, CGI and servlet root directories, etc.) are defined as subdirectories of the server root. There is normally an initial configuration file that gets loaded when the server comes up; it contains execution parameters, information about the location of other configuration files, and the locations of the most important directories. Configuration file formats vary between servers—from traditional attribute-value pairs to XML.

There are situations in which it is desirable to depart from the convention that calls for the most important directories to be defined as subdirectories of the server root. For example, you may have reason to run different servers interchangeably on the same machine, which is particularly common in a development environment. In this case, you may want to use the same independently located document root for different servers. Similarly, you may need to be able to execute the same CGI scripts and servlets independent of which server is currently running. It is important to be particularly careful when sharing directories between different processes—it is enough for one of the processes to be insecure for the integrity of your directory structure to be in jeopardy.

4.4.2 Execution

An HTTP server is a set of processes or threads (for uniformity, we always refer to them as threads), some of which listen on designated ports while others are dedicated to processing incoming requests. Depending on the load, it may be reasonable to keep a number of threads running at all times so that they do not have to be started and initialized for every request. Figure 4.15 contains a fragment of a sample configuration file for an Apache installation on a Windows machine. The 'standalone' value of the server type indicates that the server process is always running and, as follows from the value of the 'Port' parameter, is listening on port 80. The server is configured to support persistent connections, and every connection is configured to support up to a hundred requests.



ServerName demo
ServerRoot "C:/Program Files/Apache Group/Apache"
ServerType standalone
Port 80
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
MaxRequestsPerChild 200
Timeout 300

Figure 4.15

Fragment of a sample configuration file

At the same time, the server is supposed to break the connection if 15 seconds go by without new requests. Many servers make it possible to impose a limit on the number of requests processed before restarting a child process. This limit was introduced for very pragmatic reasons—to prevent prolonged use from leaking memory or other resources. The nature of the HTTP protocol, with its independent requests and responses, makes it possible to avoid the problem simply by restarting the process. Finally, the timeout limits processing time for individual requests.

HTTP/1.1 is designed to support virtual hosting—the ability of a single server to accept requests targeted at different domains (this, of course, requires setting up DNS aliases). It is quite useful when getting your favorite Internet Service Provider to host your site. As we have already discussed, this functionality is the reason for requiring the Host header in every request. Every virtual host may be configured separately. This does not apply to the operational parameters discussed in this section; after all, different virtual hosts still share the same physical resources.

4.4.3 Address resolution

An HTTP request is an instruction to the server to perform specified actions. In fact, you may think of HTTP as a language, an HTTP request as a program, and the server as an interpreter for the language. Requests are interpreted largely by specialized server modules and by server applications. For example, the servlet runner is responsible for interpreting session ids in Cookie headers and mapping them to server-side session information. Application logic is normally responsible for interpreting URL parameters, request bodies, and additional header information (e.g. Referer). The core server logic is responsible for the initial processing and routing of requests. The first and most important steps are to select the proper virtual host, resolve aliases, analyze the URL, and choose the proper processing module. In both sample URLs in Figure 4.16, www.neurozen.com is a virtual host. The server has to locate configuration statements for this virtual host and use them to perform address translation.



1. http://www.neurozen.com/test?a=1&b=2
2. http://www.neurozen.com/images/news.gif

<VirtualHost www.neurozen.com>
    ServerName www.neurozen.com
    ServerAdmin [email protected]
    DocumentRoot /www/docs/neurozen
    Alias /test /servlet/test
    Alias /images /static/images
    ErrorLog logs/neurozen-error-log
    CustomLog logs/neurozen-access-log common
</VirtualHost>

Figure 4.16

Sample URLs and a configuration fragment

In the first URL, /test is defined to be an alias for /servlet/test. The server would first resolve the alias and only then use module mappings to pass the URL to the servlet runner, which in turn invokes the test servlet. In the second URL, /images is defined to be an alias for /static/images, which is not explicitly mapped to a module and is assumed to refer to a static file. Consequently, the server translates /images/news.gif to /static/images/news.gif, prepends the document root, and looks up the image at the path /www/docs/neurozen/static/images/news.gif.

The syntax of the configuration fragments in the above examples is that of the Apache distribution. Do not be misled by the presence of angle brackets—this syntax only resembles XML; perhaps it will evolve into proper XML in future versions. Note that almost all configuration instructions may occur within the VirtualHost tags. The exception is configuration instructions that control execution parameters (Section 4.4.2). Instructions defined within a VirtualHost tag take precedence over global instructions for the respective host names.
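The two-step translation just described—alias resolution first, then mapping into the document tree—can be mimicked in a few lines of stand-alone Java. The alias table and document root are taken from Figure 4.16; the lookup itself is an illustration of the ordering, not Apache's actual algorithm, and an aliased path claimed by a module (such as /servlet/test) would really be handed to that module rather than mapped onto the file system.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AddressResolver {

    // Alias table and document root from the Figure 4.16 configuration
    static final Map<String, String> ALIASES = new LinkedHashMap<>();
    static final String DOCUMENT_ROOT = "/www/docs/neurozen";
    static {
        ALIASES.put("/test", "/servlet/test");
        ALIASES.put("/images", "/static/images");
    }

    // Step 1: resolve aliases. Step 2: if no module claims the resulting
    // path, map it into the document tree (the static-file case).
    static String resolve(String urlPath) {
        for (Map.Entry<String, String> alias : ALIASES.entrySet()) {
            if (urlPath.startsWith(alias.getKey())) {
                urlPath = alias.getValue() + urlPath.substring(alias.getKey().length());
                break;
            }
        }
        return DOCUMENT_ROOT + urlPath;
    }

    public static void main(String[] args) {
        System.out.println(resolve("/images/news.gif"));
        // -> /www/docs/neurozen/static/images/news.gif
    }
}
```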

4.4.4 MIME support

Successful (200 OK) HTTP responses are supposed to contain the Content-Type header, instructing browsers how to render the enclosed bodies. For dynamic processing, responsibility for setting the Content-Type header to a proper MIME type is deferred to the server applications that produce the response. For static processing, it remains the responsibility of the server. Servers set MIME types for static files based on file extensions. A server distribution normally contains a MIME configuration file that stores mappings between MIME types and file extensions. In the example (Figure 4.17), text/html is mapped to two alternate file extensions (.html and .htm), text/xml is mapped to a single file extension (.xml), and video/mpeg is mapped to three alternate extensions (.mpeg, .mpg, and .mpe).



text/css      css
text/html     html htm
text/plain    asc txt
text/xml      xml
video/mpeg    mpeg mpg mpe

Figure 4.17

Sample fragment of the Apache configuration file

There may be reasons for a particular installation to change or extend the default mappings. Most servers provide a way to do this without modifying the main MIME configuration file. For example, Apache supports special add and update directives that may be included with other global and virtual host-specific configuration instructions. The rationale is to make it easy to replace default MIME mappings with newer versions without having to edit every new distribution. Such distributions are quite frequent, and are based on the work of the standardization committees responsible for defining MIME types.

It is important to understand that MIME type mappings are not used exclusively for setting response headers. Another purpose is to aid the server in selecting processing modules. This is an alternative to path-based selection (Section 4.4.3). For example, a mapping may be defined to associate the .cgi extension with CGI scripts. Such a mapping means that the server would use the .cgi extension of the file name, as defined in the URL, to select CGI as the processing module. This does not change server behavior in performing path-based selection when MIME-based preferences do not apply. In the example, choosing CGI as the processing module does not have any effect on setting the Content-Type header, which remains the responsibility of the CGI script.
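Both uses of the mapping table—choosing a Content-Type for a static file and selecting a processing module by extension—start from the same lookup. The sketch below hard-codes a few of the mappings from Figure 4.17; a real server loads them from its MIME configuration file, and the default type is configurable rather than fixed.

```java
import java.util.HashMap;
import java.util.Map;

public class MimeTypes {

    static final Map<String, String> BY_EXTENSION = new HashMap<>();
    static {
        BY_EXTENSION.put("html", "text/html");
        BY_EXTENSION.put("htm",  "text/html");
        BY_EXTENSION.put("xml",  "text/xml");
        BY_EXTENSION.put("mpeg", "video/mpeg");
        BY_EXTENSION.put("mpg",  "video/mpeg");
        BY_EXTENSION.put("mpe",  "video/mpeg");
    }

    // Derive the Content-Type for a static file from its extension,
    // falling back to a generic default when the extension is unknown.
    static String contentType(String filename) {
        int dot = filename.lastIndexOf('.');
        String ext = (dot >= 0) ? filename.substring(dot + 1).toLowerCase() : "";
        return BY_EXTENSION.getOrDefault(ext, "application/octet-stream");
    }

    public static void main(String[] args) {
        System.out.println(contentType("index.html"));   // text/html
        System.out.println(contentType("movie.mpg"));    // video/mpeg
    }
}
```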

4.4.5 Server extensions

HTTP servers are packaged to support the most common processing modules—As-Is, CGI, SSI, and servlet runners. Apache refers to these modules as handlers, and makes it possible not only to map built-in handlers to file extensions, but to define new handlers as well. In the example in Figure 4.18, the AddHandler directive is used to associate file extensions with handlers. The AddType directive is used to assign MIME types to the output of these handlers by associating both types and handlers with the same file extension. Further, the Action directive is designed to support the introduction of new handlers. In the example, the add-footer handler is defined as a Perl script that is supposed to be invoked for all .html files.

According to the HTTP/1.0 and 1.1 specifications, the only required server methods are GET and HEAD. You would be hard pressed to find a widely used server that does not implement POST, but many do not implement the other optional methods—PUT, DELETE, OPTIONS, TRACE, and CONNECT. The set of optional methods



AddHandler send-as-is .asis

AddType text/html .shtml
AddHandler server-parsed .shtml

Action add-footer /cgi-bin/footer.pl
AddHandler add-footer .html

Script PUT /cgi-bin/nph-put

Figure 4.18  Defining server extensions in Apache

for a server may be extended, but custom methods are bound to have proprietary semantics. The Script directive in the Apache example extends the server to support the PUT method by invoking the nph-put CGI program. As we discussed earlier, the "nph-" prefix tells the server not to process the output of the CGI program.

4.5 SERVER SECURITY

Throughout the history of the human race, there has been a struggle between fear and greed. In the context of Internet programming, this tug of war takes the form of the struggle between server security and the amount of inconvenience imposed on server administrators and application developers. Server security is about 80/20 compromises: attempts to achieve eighty percent of the desired security for your servers at the cost of giving up twenty percent of the convenience of building and maintaining applications. Of course, there exist degenerate cases where no amount of security is enough, but that is a separate discussion.

This section is not intended as a security manual, but rather as an overview of the most common security problems in setting up and configuring HTTP servers. We do not intend to provide all the answers, only to help you start looking for them. Where security is concerned, being aware of a problem takes you more than half way to finding a solution.

4.5.1 Securing the installation

HTTP servers are designed to respond to external requests. Some of those requests may be malicious, and may jeopardize not only the integrity of the server but that of the entire network. Before we consider the steps necessary to minimize the effect of such malicious requests, we need to make sure that it is not possible to compromise the server machine and corrupt the HTTP server installation.

The obvious precaution is to minimize remote login access to the server machine—up to disabling it completely (on UNIX, that would mean disabling the in.telnetd and



in.rlogind daemons). If this is too drastic a precaution for your needs, at least make sure that all attempts to access the system are monitored and logged, and that all passwords are crack-resistant.

Every additional process that runs on the same machine and serves outside requests adds to the risk—for example, ftp or tftp. In other words, it is better not to run any additional processes on the same machine. If you have to, at least make sure that they are secure. And do not neglect to check for obvious and trivial problems, like file permissions on configuration and password files. There are free and commercial packages that can aid you in auditing the file system and in checking for file corruption—a clear indication of danger. After all, if the machine itself can be compromised, it does not matter how secure the HTTP server running on that machine is.

The HTTP server itself is definitely a source of danger. Back in the early days, when the URL string was limited to a hundred characters, everyone's favorite way of getting through Web server defenses was to specify a long URL and either achieve some sort of corruption, or force the server to execute instructions hidden in the trailing portions of these monster URLs. This is not likely to happen with newer HTTP servers, but there are still gaping security holes that occasionally get exposed if server administrators are not careful.

As we already discussed, SSI is fraught with dangers, primarily because it supports the execution of server-side programs. Subtler security holes may be exposed by buggy parsing mechanisms that get confused when encountering illegal syntax—a variation on the ancient monster URLs. In other words, you are really asking for trouble if your server is configured to support SSI pages in user directories. Similar precautions apply to CGI scripts—enabling them in user directories is dangerous, though not as dangerous as SSI pages.
At the risk of repeating ourselves—it is simple security oversights that cause most problems.
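One such trivial oversight, world-writable files under the server root, is easy to check for mechanically. The following Python sketch (our own illustration, not taken from any particular auditing package) walks a directory tree and reports anything that any local user could overwrite:

```python
import os
import stat

# Walk a directory tree and report files or directories that are
# world-writable -- a classic configuration oversight on server hosts.
def world_writable(root):
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            mode = os.stat(full).st_mode
            if mode & stat.S_IWOTH:          # 'other' write bit is set
                findings.append(full)
    return findings
```

A real audit would also look at ownership, setuid bits, and symbolic links, but even this minimal check catches the kind of mistake described above.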

4.5.2 Dangerous practices

Speaking of oversights, there are a few that seem obvious but get repeated over and over again. Quite a number of them have to do with the file system. We have already mentioned file permissions; another problem has to do with symbolic links between files and directories. Following a symbolic link may take the server outside the intended directory structure, often with unexpected results. Fortunately, HTTP servers make it possible to disable the following of links when processing HTTP requests.

Of all the problems caused by lack of care in configuring the server, one that stands out has to do with sharing the same file system between different processes. How often do you see people providing both FTP and HTTP access to the same files? The problem is that you can spend as much effort as you want securing your HTTP server, but it will not help if it is possible to establish an anonymous FTP connection to the host machine and post an executable in a CGI directory.

Now think of all the different dangers of file and program corruption that may let outsiders execute their own programs on the server. It is bad enough that outside



programs can be executed, but it is even worse if they can access critical system files. It stands to reason that an HTTP server should execute with permissions that do not give it access to files outside of the server directory structure. This is why, if you look at server configuration files, you may notice that the user id defaults to 'nobody'—the name traditionally reserved for user ids assigned to HTTP servers. Unfortunately, not every operating system supports setting user ids when starting the server. Worse, it is often system administrators—who log in with permissions that give them full access to system resources—who start the servers. As a result, the server process (and programs started through the server process) has full access to system resources. You know the consequences.

4.5.3 Secure HTTP

Let us assume for the time being that the server is safe. This is still not enough to guard sensitive applications (e.g. credit card purchases). Even if the server is safe, HTTP messages containing sensitive information are still vulnerable. The most obvious solution for guarding this information is, of course, encryption.

HTTPS is the secure version of the HTTP protocol. HTTPS messages are identical to ordinary HTTP messages, except that they are transmitted over a Secure Socket Layer (SSL) connection: messages are encrypted before transmission and decrypted after being received. The SSL protocol supports the use of a variety of cryptographic algorithms to authenticate the server and the browser to each other, transmit certificates, and establish session encryption keys. The SSL handshake protocol determines how the server and the browser negotiate which encryption algorithm to use. Normally, the server and the browser select the strongest algorithm supported by both parties. Very secure servers may disable weaker algorithms (e.g. those based on 40-bit encryption). This can be a problem when you try to access your bank account and the server refuses the connection, asking you to install a browser that supports 128-bit encryption.

As always, you may spend a lot of effort and get bitten by a simple oversight. Even now, after so many years of Internet commerce, you can find applications that all share the same problem: they secure the connection after authenticating a user, but the authentication itself is not performed over a secure connection, which exposes user names and passwords. Next time, before you fill out a form to log in to a site that makes use of your sensitive information, check whether the action attribute on that form references an HTTPS URL. If it does not, you should run away and never come back.
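That last check is easy to automate. The following Python sketch (our own illustration, using only the standard library; the class and function names are invented for this example) scans a page for forms whose action attribute would submit credentials over plain HTTP:

```python
from html.parser import HTMLParser

# Collect the action attribute of every <form> on a page, so that
# login forms submitting over plain HTTP can be flagged.
class FormActionScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            action = dict(attrs).get("action") or ""
            if action.startswith("http:"):
                self.insecure.append(action)

def insecure_form_actions(html_text):
    scanner = FormActionScanner()
    scanner.feed(html_text)
    return scanner.insecure
```

A relative action URL inherits the scheme of the enclosing page, so a complete checker would also resolve relative actions against the page's own URL; the sketch flags only explicit http: actions.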

4.5.4 Firewalls and proxies

Today, more than a third of all Internet sites are protected by firewalls. The idea is to isolate machines on a Local Area Network (LAN) and expose them to the outside



world via a specialized gateway that screens network traffic. This gateway is what is customarily referred to as a firewall.

Firewall configurations

There exist many different firewall configurations, which fall into two major categories: dual-homed gateways and screened-host gateways.

A dual-homed firewall is a computer with two different interface cards, one connected to the LAN and one to the outside world. With this architecture, there is no direct contact between the LAN and the world, so it is necessary to run a firewall proxy on the gateway machine and make this proxy responsible for filtering network packets and passing them between the interface cards. Passing every packet requires an explicit effort, and no information is passed if the firewall proxy is down. Such a configuration is very restrictive and is used only in very secure installations.

Screened-host gateways are network routers that have the responsibility of filtering traffic between the LAN and the outside world. They may be configured to screen network packets based on source and destination addresses, ports, and other criteria. Normally, the router is configured to pass through only those network packets that are bound for the firewall host, and to stop packets bound for other machines on the LAN. The firewall host is responsible for running a configurable filtering proxy that selectively passes through the network traffic. The screened-host configuration is very flexible—it is easy to open temporary paths to selected ports and hosts. This comes in handy when you need to show a demo running on an internal machine.

HTTP proxies

It is all well and good to create a firewall, but what do you do if you need to make your HTTP server visible to the outside world? The seemingly easy answer—running it on the firewall machine—is not a good one. First, any serious load on an HTTP server running on the firewall machine may bring the LAN's connection to the outside world to its knees; after all, the HTTP server and the network traffic filters would share the same resources. Second, any security breach that exposes the HTTP server could also expose the firewall machine and, consequently, the entire LAN. At the risk of repeating ourselves: it is a really bad idea to run an HTTP server on the firewall machine, and the reason we keep repeating it over and over again is that people do it anyway.

An alternative is to take advantage of the flexibility of screened-host gateways and allow network traffic to an internal machine when directed at a certain port (e.g. 80). This is much less dangerous than running the server on the firewall machine, but it is still fraught with problems, since you are exposing an unprotected machine, albeit in a very limited way. Additionally, this approach has functional limitations—how would you redirect the request to another server running on a different port or on a different machine?



It turns out there exists another solution. Let us go back and think about the reasons why it is not a good idea to run an HTTP server on the firewall machine: the first reason is processing load, and the second is security. What if we limited the functionality of the HTTP server running on the firewall machine, making it defer processing to machines inside the firewall? This would solve the problem of processing load. How about security? Well, if the HTTP server is not performing any processing on the firewall machine, and simply passes requests along to an internal machine on the LAN, it is hard to break into this server. The simpler the functionality, the harder it is for malicious outsiders to break in.

This sounds good, but what we are doing is simply passing requests along to another machine that still has to process them. Can malicious outsiders break into that machine? Not so easily—even if they manage to wreak havoc on the HTTP server machine, they cannot access that machine directly and use it as a staging ground for further penetration.

To summarize, the solution is not to run a full-fledged HTTP server on the firewall machine, but to replace it with an HTTP proxy that may be configured to screen HTTP requests and forward them to the proper internal hosts. Different proxy configurations may be selected depending on a wide range of circumstances, but what is important is that no processing is performed on the firewall host, and the internal machines are not exposed directly to the outside world.
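The screening step of such a proxy can be sketched as a routing table that maps external path prefixes to internal hosts, with anything unmatched refused outright. The host names, ports, and prefixes below are hypothetical:

```python
# Decide where (if anywhere) an incoming request should be forwarded.
# The proxy on the firewall host does no processing of its own: it
# screens the request path and relays matching requests to an
# internal server.  (Routing table entries are illustrative only.)
ROUTES = {
    "/app/":    ("app-server.internal", 8080),
    "/static/": ("web-server.internal", 80),
}

def route_request(path, routes=ROUTES):
    for prefix, backend in routes.items():
        if path.startswith(prefix):
            return backend
    return None   # no match: refuse rather than expose internal hosts
```

A working proxy would then open a connection to the returned host and port, relay the request bytes, and stream the response back to the client; the point of the sketch is that the firewall host's only job is this table lookup.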

4.6 SUMMARY

By now, you should have enough information to either build your own HTTP server or extend an existing open source system. We have attempted to make a clear distinction between the responsibilities of servers and those of server applications. Even if you do not implement your own server or server components, understanding server operation is invaluable when architecting, building, and debugging complex Internet applications. It is also very important in making decisions about configuring a server and securing the server installation.

Implementing server applications was not the focus of this chapter. Instead, we concentrated on the comparative analysis of different application mechanisms, and on passing request and response information to and from server applications. We come back to server-side applications later in this book.

4.7 QUESTIONS AND EXERCISES

1. Describe server processing of a POST request. In the case of CGI processing, how does the server pass information to a CGI program (request headers, body, URL parameters, etc.)?
2. What are the advantages and disadvantages of using the SSI mechanism?
3. What are the advantages of the Servlet API vs. the CGI mechanism?



4. How does the relationship between the CGI and SSI mechanisms differ from the relationship between Servlets and JSP?
5. What was the reason for introducing 'Transfer-Encoding: chunked' in HTTP/1.1?
6. Is it possible to use chunked transfer encoding with multipart HTTP messages? Explain.
7. Why was it necessary to introduce the 'Host' header in HTTP/1.1? How is it used to support virtual hosting? Why was it not enough to require that request lines always contain a full URL (as in GET http://www.cs.rutgers.edu/∼shklar/ HTTP/1.1)?
8. When (if ever) does it make sense to include HTTP/1.0 headers in HTTP/1.1 responses directed at HTTP/1.1 browsers?
9. HTTP/1.1 servers default to the Keep-Alive setting of the Connection header. Why then do most browsers include Connection: Keep-Alive in their requests even when they know that the target server supports HTTP/1.1?
10. Is it possible for an HTTP/1.1 server not to support persistent connections and still be HTTP-compliant?
11. Name three headers that, if present in an HTTP response, always have to be processed in a particular order. State the order and explain. Why did we ask you to name two headers in Chapter 3 but three headers in this exercise?
12. What is the difference between dual-homed gateways and screened-host gateways? Which is safer? Which is more flexible?
13. What functionality would be lost if servers did not know how to associate file extensions with MIME types?
14. Is it a good idea to run an HTTP server on a firewall machine? Explain.
15. Does your answer to the previous question depend on whether the HTTP server is running as a proxy?
16. Implement a mini-server that generates legal HTTP/1.0 responses to GET, HEAD, and POST requests. Your program should be able to take the port number as its command-line parameter and listen on this port for incoming HTTP/1.0 requests (remember that the backward compatibility requirement is part of HTTP/1.0—this means support for HTTP/0.9).
Upon receiving a request, the program should fork off a thread for processing the request and keep listening on the same port. The forked-off thread should generate the proper HTTP response, send it back to the browser, and terminate. The server should be capable of processing multiple requests in parallel.

Pay attention to escape sequences and separators between the key-value pairs in the bodies of POST requests and the query strings of GET requests. Make sure the necessary request headers are included in incoming requests (e.g. Content-Type and Content-Length in POST requests). Your program has to generate legal HTTP headers according to HTTP/1.0 (including Content-Type). It should use a configuration file (mime-config) that stores mappings between file extensions and MIME types, and use these mappings to determine the desired Content-Type for the content referenced by the URL (in your case, file path) specified in the GET or POST request.

You will have to support basic path translation—all static URLs will have to be defined relative to the document root. This also means that your server will need at least a basic general configuration file (see Apache)—at a minimum, it should be possible to specify the server root, and both the document root and the cgi-bin directory relative to the server root.

17. Implement HTTP/1.1 support for the mini-server from Exercise 16. Your program should be able to take the port number as its command-line parameter and listen on this port



for incoming HTTP/1.1 requests (remember that the backward compatibility requirement is part of HTTP/1.1—this means support for HTTP/1.0 and HTTP/0.9). Your server should support parallel processing of multiple requests: upon receiving a request, it should start a new thread for processing that request and keep listening on the same port. The new thread should generate a proper HTTP response, send it back to the browser, and terminate. The server should be able to initiate processing of new requests while old requests are still being processed.

You have to send HTTP/1.1 responses for HTTP/1.1 requests, HTTP/1.0 responses for HTTP/1.0 requests, and HTTP/0.9 responses for HTTP/0.9 requests. A minimal level of compliance is acceptable, which implies the following:

• HTTP/1.0 and HTTP/0.9 requests must be processed as before.
• The server must check for the presence of the Host header in HTTP/1.1 requests, and return 400 Bad Request if the header is not present; the server must accept both absolute and relative URI syntax.
• The server must either maintain persistent connections, or include Connection: close in every response.
• The server must include the Date header (with the date always in GMT) in every response.
• The server has to support the If-Modified-Since and If-Unmodified-Since headers.
• The following methods are defined in HTTP/1.1: GET, HEAD, POST, PUT, DELETE, OPTIONS, and TRACE. You have to support GET, HEAD, and POST, return 501 Not Implemented for the other defined methods, and 400 Bad Request for undefined methods.

The result of this exercise should be a program that receives legal HTTP/1.1 requests and sends legal HTTP/1.1 responses back. It should function as an HTTP/1.0 server in response to HTTP/1.0 requests.
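As a starting point for Exercise 16, the accept-and-thread structure might look like the following Python sketch. This is only one possible shape for a solution; it ignores request parsing, the configuration files, and most error handling, and the fixed body is a placeholder:

```python
import socket
import threading

# Build a minimal HTTP/1.0 response as raw bytes.
def build_response(status_line, headers, body=b""):
    head = [status_line] + [f"{k}: {v}" for k, v in headers.items()]
    return ("\r\n".join(head) + "\r\n\r\n").encode("iso-8859-1") + body

def handle(conn):
    try:
        _request = conn.recv(65536)   # toy: assume one read captures the request
        body = b"<html><body>Hello</body></html>"
        conn.sendall(build_response(
            "HTTP/1.0 200 OK",
            {"Content-Type": "text/html", "Content-Length": str(len(body))},
            body))
    finally:
        conn.close()

def serve(port):
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen(5)
        while True:                    # fork off a thread per connection
            conn, _addr = s.accept()
            threading.Thread(target=handle, args=(conn,)).start()
```

The exercise's real work, parsing the request line and headers, translating paths against the document root, and consulting the mime-config file, would all go inside handle().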



Web Browsers

In this chapter, we go over the fundamental considerations in designing and building a Web browser, as well as other sophisticated Web clients. When discussing Web browsers, our focus will not be on the graphical aspects of browser functionality (i.e. the layout of pages, the rendering of images). Instead, we shall concentrate on the issues associated with the processing of HTTP requests and responses. The value of this knowledge will become apparent as we proceed to our discussion of more sophisticated Web applications.

It may seem to some that the task of designing a browser is a fait accompli, a foregone conclusion, a done deal, a known problem that has already been 'solved'. Given the history and progress of browser development—from the original www browser, through Lynx and Mosaic, to Netscape, Internet Explorer, and Opera today—it might seem a futile endeavor to 'reinvent the wheel' by building a new browser application. This is hardly the case at all.

The desktop browser is the most obvious example of a Web client, and it's certainly the most common, but it's far from the only one. Other types of Web clients include agents, which submit requests on behalf of a user to perform some automated function, and proxies, which act as gateways through which requests and responses pass between servers and clients to enhance security and performance. These clients need to replicate much of the functionality found in browsers. Thus, it is worthwhile to understand the design principles associated with browser architecture.

Furthermore, there are devices like handheld personal digital assistants, cellular phones, and Internet appliances that need to send and receive data via the Web. Although many of them have browsers available already, these are mostly primitive, with limited functionality. As the capabilities of these devices grow, more advanced and robust Web clients will be needed.
Finally, who said that today’s desktop browsers are perfect examples of elegant design? The Mozilla project is an effort to build a better browser from the ground



up (from the ashes of an existing one, if you will). Today's desktop browsers may be (relatively) stable, and it would be difficult if not impossible to develop and market a new desktop browser at this stage of the game. Still, there will be ample opportunities to enhance and augment the functionality of existing desktop browsers, and this effort is best undertaken with a thorough understanding of the issues of browser design.

The main responsibilities of a browser are as follows:

1. Generate and send requests to Web servers on the user's behalf, as a result of following hyperlinks, explicit typing of URLs, submitting forms, and parsing HTML pages that require auxiliary resources (e.g. images, applets).

2. Accept responses delivered by Web servers and interpret them to produce the visual representation to be viewed by the user. This will, at a bare minimum, involve examination of certain response headers, such as Content-Type, to determine what action needs to be taken and what sort of rendering is required.

3. Render the results in the browser window or through a third-party tool, depending on the content type of the response.

This, of course, is an oversimplification of what real browsers actually do. Depending on the status code and headers in the response, browsers are called upon to perform other tasks, including:

1. Caching: the browser must make determinations as to whether or not it needs to request data from the server at all. It may have a cached copy of the data item that it retrieved during a previous request. If so, and if this cached copy has not 'expired', the browser can eliminate a superfluous request for the resource. In other cases, the server can be queried to determine whether the resource has been modified since it was originally retrieved and placed in the cache. Significant performance benefits can be achieved through caching.

2. Authentication: since Web servers may require authorization credentials to access resources they have designated as secure, the browser must react to server requests for credentials, either by prompting the user for authorization credentials or by utilizing credentials it has already asked for in prior requests.

3. State maintenance: to record and maintain the state of a browser session across requests and responses, Web servers may ask the browser to accept cookies, which are sets of name/value pairs included in response headers. The browser must store the transmitted cookie information and make it available to be sent back in appropriate requests. In addition, the browser should provide configuration options to allow users the choice of accepting or rejecting cookies.

4. Requesting supporting data items: the typical Web page contains images, Java applets, sounds, and a variety of other ancillary objects. The proper rendering



of the page is dependent upon the browser retrieving those supporting data items for inclusion in the rendering process. This normally occurs transparently, without user intervention.

5. Taking actions in response to other headers and status codes: the HTTP headers and the status code do more than simply provide the data to be rendered by the browser. In some cases, they provide additional processing instructions, which may extend or supersede rendering information found elsewhere in the response. The presence of these instructions may indicate a problem in accessing the resource, and may instruct the browser to redirect the request to another location. They may also indicate that the connection should be kept open, so that further requests can be sent over the same connection. Many of these functions are associated with advanced HTTP functionality found in HTTP/1.1.

6. Rendering complex objects: most Web browsers inherently support content types such as text/html, text/plain, image/gif, and image/jpeg. This means that the browser provides native functionality to render objects with these content types inline: within the browser window, and without having to install additional software components. To render or play back other, more complex objects (e.g. audio, video, and multimedia), a browser must provide support for those content types as well. Mechanisms must exist for invoking the external helper applications or internal plug-ins required to display and play back these objects.

7. Dealing with error conditions: connection failures and invalid responses from servers are among the situations the browser must be equipped to deal with.
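To make the caching decision in item 1 concrete, here is a small Python sketch of the two halves of that decision: reuse a fresh copy, or revalidate a stale one with a conditional request. The function names and the Expires-only freshness policy are our own simplifications of what real browsers do:

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

# Decide whether a cached copy can be reused without contacting the server.
def is_fresh(expires_header, now=None):
    now = now or datetime.now(timezone.utc)
    try:
        return parsedate_to_datetime(expires_header) > now
    except (TypeError, ValueError):
        return False    # header absent or malformed: treat the copy as stale

# When the copy is stale, the browser can revalidate it with a
# conditional request instead of re-fetching the whole resource.
def conditional_headers(cached_last_modified):
    return {"If-Modified-Since": cached_last_modified}
```

On revalidation, a 304 Not Modified response tells the browser its cached copy is still good; a 200 response carries a replacement.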

5.1 ARCHITECTURAL CONSIDERATIONS

So, let's engage in an intellectual exercise: putting together requirements for the architecture of a Web browser. What are those requirements? What functions must a Web browser perform? And how do the different functional components interact with each other?

The following list delineates the core functions associated with a Web browser. Each function can be thought of as a distinct module within the browser. Obviously, these modules must communicate with each other in order to allow the browser to function, but they should each be designed atomically.

• User Interface: this module is responsible for providing the interface through which users interact with the application. This includes presenting, displaying, and rendering the end result of the browser's processing of the response transmitted by the server.

• Request Generation: this module bears responsibility for the task of building HTTP requests to be submitted to HTTP servers. When asked by the User



Interface module or the Content Interpretation module to construct requests based on relative links, it must first resolve those links into absolute URLs.

• Response Processing: this module must parse the response, interpret it, and pass the result to the User Interface module.

• Networking: this module is responsible for network communications. It takes requests passed to it by the Request Generation module and transmits them over the network to the appropriate Web server or proxy. It also accepts responses that arrive over the network and passes them to the Response Processing module. In the course of performing these tasks, it takes responsibility for establishing network connections and dealing with proxy servers specified in a user's network configuration options.

• Content Interpretation: having received the response, the Response Processing module needs help in parsing and deciphering the content. The content may be encoded, and this module is responsible for decoding it. Initial responses often have their content types set to text/html, but HTML responses embed or contain references to images, multimedia objects, JavaScript code, applets, and style sheet information. This module performs the additional processing necessary for browser applications to understand these entities within a response. In addition, this module must tell the Request Generation module to construct additional requests for the retrieval of auxiliary content such as images, applets, and other objects.

• Caching: caching provides Web browsers with a way to economize by avoiding the unnecessary retrieval of resources of which the browser already has a usable copy, 'cached' away in local storage. Browsers can ask Web servers whether a desired resource has been modified since the time the browser initially retrieved it and stored it in the cache.
This module must provide facilities for storing copies of retrieved resources in the cache for later use, for accessing those copies when viable, and for managing the space (both memory and disk) allotted for this purpose by the browser's configuration parameters.

• State Maintenance: since HTTP is a stateless protocol, some mechanism must be in place to maintain the browser state between related requests and responses. Cookies are the mechanism of choice for performing this task, and support for cookies is the responsibility of this module.

• Authentication: this module takes care of composing authorization credentials when they are requested by the server. It must interpret response headers demanding credentials by prompting the user to enter them (usually via a dialog). It must also store those credentials, but only for the duration of the current browser session, in case a request is made for another secured resource in what the server considers to be the same security 'realm'. (This absolves the user of the need to re-enter the credentials each time a request for such resources is made.)



• Configuration: finally, there are a number of configuration options that a browser application needs to support. Some of these are fixed, while others are user-definable. This module maintains the fixed and variable configuration options for the browser, and provides an interface for users to modify those options under their control.
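As an illustration of the kind of decision the State Maintenance module makes, the following Python sketch applies simplified domain- and path-matching rules to stored cookies. The dictionary representation of a cookie, and the matching rules themselves, are our own simplifications of the full cookie specification:

```python
# Decide whether a stored cookie should accompany a request, based on
# the domain and path recorded when the cookie was set.
def cookie_matches(cookie, request_host, request_path):
    domain = cookie["domain"].lstrip(".")
    host_ok = (request_host == domain or
               request_host.endswith("." + domain))
    path_ok = request_path.startswith(cookie["path"])
    return host_ok and path_ok

# Assemble the Cookie header value for a request from all matching cookies.
def cookie_header(cookies, request_host, request_path):
    pairs = [f'{c["name"]}={c["value"]}'
             for c in cookies
             if cookie_matches(c, request_host, request_path)]
    return "; ".join(pairs)
```

A real implementation would also honor cookie expiration times and the Secure attribute, which restricts a cookie to HTTPS connections.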

5.2 PROCESSING FLOW

Figure 5.1 shows the processing flow for the creation and transmission of a request in a typical browser. We begin with a link followed by a user. Users can click on hyperlinks presented in the browser display window, they might choose links from lists of previously visited links (history or bookmarks), or they might enter a URL manually. In each of these cases, processing begins with the User Interface module, which is responsible for presenting the display window and giving users access to browser functions (e.g. through menus and shortcut keys).

In general, an application using a GUI (graphical user interface) operates using an event model. User actions—clicking on highlighted hyperlinks, for example—are considered events that must be interpreted properly by the User Interface module. Although this book does not concentrate on the user interface-related functionality of HTTP browsers, it is crucial that we note the events that are important for the User Interface module:

Figure 5.1  Browser request generation: (1) the user follows a link; (2) 'Do I already have a copy of this resource?' (Caching support); (3) 'Do I need to send authorization credentials?' (Configuration/preferences); (4) 'Do I need to include cookie headers?' (State maintenance); (5) the request is prepared (Request generation); (6) the request is transmitted (Networking support).


• Entering URLs manually: usually, this is accomplished by providing a text entry box in which the user can enter a URL, as well as through a menu option (File→ Open) that opens a dialog box for similar manual entry. The second option often interfaces with the operating system to support interactive selection of local files. • Selecting previously visited links: the existence of this mechanism, naturally, implies that the User Interface module must also provide a mechanism for maintaining a history of visited links. The maximum amount of time that such links will be maintained in this list, as well as the maximum size to which this list can grow, can be established as a user-definable parameter in the Configuration module. The ‘Location’ or ‘Address’ text area in the browser window can be a dropdown field that allows the user to select from recently visited links. The ‘Back’ button allows users to go back to the page they were visiting previously. In addition, users should be able to save particular links as “bookmarks”, and then access these links through the user interface at a later date. • Selecting displayed hyperlinks: there are a number of ways for users to select links displayed on the presented page. In desktop browsers, the mouse click is probably the most common mechanism for users to select a displayed link, but there are other mechanisms on the desktop and on other platforms as well. Since the User Interface module is already responsible for rendering text according to the specifications found in the page’s HTML markup, it is also responsible for doing some sort of formatting to highlight a link so that it stands out from other text on the page. Most desktop browsers also change the cursor shape when the mouse is ‘over’ a hyperlink, indicating that this is a valid place for users to click. Highlighting mechanisms vary for non-desktop platforms, but they should always be present in some form. 
Once the selected or entered link is passed on to the Request Generation module, it must be resolved. Links found on a displayed page can be either absolute or relative. Absolute URLs are complete URLs, containing all the required URL components, e.g. protocol://host/path. These do not need to be resolved and can be processed without further intervention. A relative URL specifies a location relative to:

1. the current location being displayed (i.e. the entire URL including the path, up to the directory in which the current URL resides), when the HREF contains a relative path that does not begin with a slash, or

2. the current location’s web server root (i.e. only the host portion of the URL), when the HREF contains a relative path that does begin with a slash.
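These two resolution rules can be illustrated with Python's standard urllib.parse.urljoin, used here as a hypothetical stand-in for the browser's own resolution logic (the URLs are invented examples):

```python
from urllib.parse import urljoin

base = "http://www.myserver.com/mydirectory/anotherdirectory/page1.html"

# Rule 1: a relative path without a leading slash is resolved
# against the directory containing the current document.
print(urljoin(base, "page2.html"))
# http://www.myserver.com/mydirectory/anotherdirectory/page2.html

# Rule 2: a relative path with a leading slash is resolved
# against the web server root of the current location.
print(urljoin(base, "/rootleveldirectory/homepage.html"))
# http://www.myserver.com/rootleveldirectory/homepage.html
```

A BASE tag, when present, would simply substitute its URL for `base` in calls like these.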

Processing Flow


[Figure content omitted: three examples showing relative links on pages at different current URLs resolving to absolute URLs such as http://www.myserver.com/mydirectory/anotherdirectory/page2.html and http://www.myserver.com/rootleveldirectory/homepage.html.]

Figure 5.2  Resolution of relative URLs

The process of resolution changes if an optional BASE tag is found in the HEAD section of the page. The URL specified in this tag replaces the current location as the “base” from which resolution occurs in the previous examples. Figure 5.2 demonstrates how relative URLs must be resolved by the browser.

Once the URL has been resolved, the Request Generation module builds the request, which is ultimately passed to the Networking module for transmission. To accomplish this task, the Request Generation module has to communicate with other browser components:

• It asks the Caching module “Do I already have a copy of this resource?” If so, it needs to determine whether it can simply use this copy, or whether it needs to ask the server if the resource has been modified since the browser cached a copy of this resource.

• It asks the Authorization module “Do I need to include authentication credentials in this request?” If the browser has not already stored credentials for the appropriate domain, it may need to contact the User Interface module, which prompts the user for credentials.

• It asks the State Maintenance module “Do I need to include Cookie headers in this request?” It must determine whether the requested URL matches domain and path patterns associated with previously stored cookies.

The constructed request is passed to the Networking module so it can be transmitted over the network.



[Figure content omitted: a numbered flow through the browser modules. (1) The response arrives at the Networking support module. (2) Response processing: “Do I need to decode encoded content?” (3) “Do I need to send back authorization credentials?” (4) State maintenance: “Do I need to store Cookie information?” (5) Caching support: “Should I store this response in the cache?” (6) Content interpretation and Request generation, consulting Configuration/preferences: “Do I need to request other resources?” (7) The result of response processing is presented.]

Figure 5.3  Browser response processing

Once a request has been transmitted, the browser waits to receive a response. It may submit additional requests while waiting. Requests may have to be resubmitted if the connection is closed before the corresponding responses are received. It is the server’s responsibility to transmit responses in the same order as the corresponding requests were received. However, the browser is responsible for dealing with servers that do not properly maintain this order, by delaying the processing of responses that arrive out of sequence. Figure 5.3 describes the flow for this process.

A response is received by the Networking module, which passes it to the Response Processing module. This module must also cooperate and communicate with other modules to do its job. It examines response headers to determine required actions.

• If the status code of the response is 401 Unauthorized, the request lacked the necessary authorization credentials. The Response Processing module asks the Authorization module whether any existing credentials might be used to satisfy the request. The Authorization module may, in turn, contact the User Interface module, which would prompt the user to enter authorization credentials. In either case, this results in the original request being retransmitted with an Authorization header containing the required credentials.

• If the response contains Set-Cookie headers, the State Maintenance module must store the cookie information using the browser’s persistence mechanism.



Next, the response is passed to the Content Interpretation module, which has a number of responsibilities:

• If the response contains Transfer-Encoding and/or Content-Encoding headers, the module needs to decode the body of the response.

• The module examines the Cache-Control, Expires, and/or Pragma headers (depending on the HTTP version of the response) to determine whether the browser needs to cache the decoded content of the response. If so, the Caching module is contacted to create a new cache entry or update an existing one.

• The Content-Type header determines the MIME type of the response content. Different MIME types, naturally, require different kinds of content processing. Modern browsers support a variety of content types natively, including HTML (text/html), graphical images (image/gif and image/jpeg), and sounds (audio/wav). Native support means that processing of these content types is performed by built-in browser components. Thus, the Content Interpretation module must provide robust support for such processing. Leading edge browsers already provide support for additional content types, including vector graphics and XSL stylesheets.

• For MIME types that are not processed natively, browsers usually provide mechanisms for the association of MIME types with helper applications and plug-ins. Helper applications render content by invoking an external program that executes independently of the browser, while plug-ins render content within the browser window. The Content Interpretation module must communicate with the Configuration module to determine what plug-ins are installed and what helper application associations have been established, so that it can take appropriate action when receiving content that is not natively supported by the browser. This involves a degree of interaction with the operating system, to determine system-level associations configured for filename extensions, MIME types, and application programs. However, many browsers override (or even completely ignore) these settings, managing their own sets of associations through the Configuration module.

• Some content types (e.g. markup languages, applets, Flash movies) may embed references to other resources needed to satisfy the request. For instance, HTML pages may include references to images or JavaScript components. The Content Interpretation module must parse the content prior to passing it on to the User Interface module, determining if additional requests will be needed. If so, URLs associated with these requests get resolved when they are passed to the Request Generation module. As each of the requested resources arrives, it is passed to the User Interface module so that it may be incorporated in the final presentation. The Networking module maintains its queue of requests and responses, ensuring that all requests have been satisfied, and resubmitting any outstanding requests.
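The last responsibility, scanning content for embedded references, can be sketched with Python's standard html.parser; the tags handled here are a small illustrative subset of what a real browser supports:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ResourceScanner(HTMLParser):
    """Collect URLs of auxiliary resources referenced by a page."""

    # Attribute that carries the resource URL, per tag of interest.
    URL_ATTRS = {"img": "src", "script": "src", "link": "href"}

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.resources = []

    def handle_starttag(self, tag, attrs):
        wanted = self.URL_ATTRS.get(tag)
        if wanted:
            for name, value in attrs:
                if name == wanted and value:
                    # Resolve relative references against the page's URL.
                    self.resources.append(urljoin(self.base_url, value))

scanner = ResourceScanner("http://www.myserver.com/mydirectory/page.html")
scanner.feed('<html><body><img src="logo.gif">'
             '<script src="/js/main.js"></script></body></html>')
print(scanner.resources)
# ['http://www.myserver.com/mydirectory/logo.gif',
#  'http://www.myserver.com/js/main.js']
```

Each collected URL would then be handed to the Request Generation module, with the Caching and Configuration modules consulted before any request actually goes out on the network.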



All along the way, various subordinate modules are asked questions to determine the course of processing (including whether or not particular tasks need to be performed at all). For example, the Content Interpretation module may say ‘This page has IMG tags, so we must send HTTP requests to retrieve the associated images,’ but the Caching module may respond by saying ‘We already have a usable copy of that resource, so don’t bother sending a request to the network for it.’ (Alternatively, it may say ‘We have a copy of that resource, but let’s ask the server if its copy of the resource is more recent; if it’s not, it doesn’t need to send it back to us.’) Or the Configuration module may say ‘No, don’t send a request for the images on this page, this user has a slow connection and has elected not to see images.’ Or the State Maintenance mechanism may jump in and say ‘Wait, we’ve been to this site before, so send along this identifying cookie information with our requests.’

The rest of this chapter is devoted to a more detailed explanation of the role each of these modules plays in the processing of requests and responses. As mentioned previously, we shall not focus on the User Interface module’s responsibility for rendering graphics, as this is an extensive subject worthy of its own book. However, we will concentrate on the interplay between these modules and how to design them to do their job. We begin by going over the basics of request and response processing, following that with details on the more sophisticated aspects of such processing, including support for caching, authentication, and advanced features of the HTTP protocol.

5.3 PROCESSING HTTP REQUESTS AND RESPONSES

Let us examine how browsers build and transmit HTTP requests, and how they receive, interpret, and present HTTP responses. After we have covered the basics of constructing requests and interpreting responses, we can look at the more complex interactions involved when HTTP transactions involve caching, authorization, cookies, requests for supporting data items, and multimedia support.

Not Just HTTP

Browsers should do more than just communicate via HTTP. They should provide support for secure HTTP (HTTP over the Secure Sockets Layer). They should be able to send requests to FTP servers. And they should be able to access local files. These three types of requests correspond to URLs using the https, ftp, and file protocols, respectively. Although we shall not cover these protocols here, it is important to note that HTTP requests and responses are not the only kinds of transactions performed by browsers.



5.3.1 HTTP requests

The act of sending an HTTP request to a web server, in its most trivial form, consists of two basic steps: constructing the HTTP request, and establishing a connection to transmit it across the Internet to the target server or an intermediate proxy. The construction of requests is the responsibility of the Request Generation module. Once a request has been properly constructed, this module passes it to the Networking module, which opens a socket to transmit it either directly to the server or to a proxy.

Before the Request Generation module has even begun the process of building the request, it needs to ask a whole series of questions of the other modules:

1. Do I already have a cached copy of this resource? If an entry exists in the cache that satisfies this same request, then the transmitted request should include an If-Modified-Since header, containing the last modification time associated with the stored cache entry. If the resource found on the server has not been modified since that time, the response will come back with a 304 Not Modified status code, and that cache entry can be passed directly to the User Interface module. (Caching Support)

2. Is there any additional information I need to send as part of this request? If this request is part of a series of requests made to a particular web server, or if the target web server has been visited previously, it may have sent “state” information (in the form of Set-Cookie headers) to the browser. The browser must set and maintain cookies according to the server’s instructions: either for a specified period of time or for the duration of the current session. In addition, the set of saved cookies must be examined prior to sending a request to determine whether cookie information needs to be included in that request. (State Maintenance)

3. Is there any other additional information I need to send as part of this request? If this resource is part of an authorization realm for which the user has already supplied authentication credentials, those credentials should be stored by the browser for the duration of a session, and should be supplied with requests for other resources in the same realm. (Authorization)

User preferences may modify the nature of the request, possibly even eliminating the need for one entirely. For example, users may set a preference via the Configuration module telling the browser not to request images found within an HTML page. They can turn off Java applet support, meaning that requests for applets need not be processed. They can also instruct the browser to reject cookies, meaning that the browser does not need to worry about including Cookie headers in generated requests.
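The three questions above map naturally onto a header-assembly step. In this sketch the argument shapes (cache_entry, cookies, credentials) are hypothetical module interfaces, not an API from the book:

```python
from email.utils import formatdate

def pre_request_headers(cache_entry=None, cookies=None, credentials=None):
    """Assemble conditional, state, and authorization headers based on
    the answers obtained from the other browser modules."""
    headers = {}
    if cache_entry is not None:
        # Caching Support: ask the server for the body only if the
        # resource changed since we cached it (expect 304 otherwise).
        headers["If-Modified-Since"] = formatdate(
            cache_entry["last_modified"], usegmt=True)
    if cookies:
        # State Maintenance: replay cookies this server previously set.
        headers["Cookie"] = "; ".join(
            f"{name}={value}" for name, value in cookies.items())
    if credentials is not None:
        # Authorization: resend stored credentials for this realm.
        headers["Authorization"] = credentials
    return headers

print(pre_request_headers(cookies={"sessionid": "abc123"}))
# {'Cookie': 'sessionid=abc123'}
```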



In the chapter devoted to the HTTP protocol, we described the general structure of HTTP requests, and provided some examples. To refresh our memories, here is the format of an HTTP request:

METHOD /path-to-resource HTTP/version-number
Header-Name-1: value
Header-Name-2: value


[ optional request body ]

An HTTP request contains a request line, followed by a series of headers (one per line), followed by a blank line. The blank line serves as a separator, delimiting the headers from the optional body portion of the request. A typical example of an HTTP request might look something like this:

POST /update.cgi HTTP/1.0
Host: www.somewhere.com
Referer: http://www.somewhere.com/formentry.html

name=joe&type=info&amount=5
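Mechanically, such a request is just a list of lines joined by the CRLF line endings HTTP requires. A sketch (the helper name is ours, not the book's):

```python
def build_request(method, path, version="HTTP/1.0", headers=(), body=""):
    """Serialize an HTTP request: request line, headers,
    a blank-line separator, then the optional body."""
    lines = [f"{method} {path} {version}"]
    lines += [f"{name}: {value}" for name, value in headers]
    lines.append("")   # blank line separating headers from the body
    lines.append(body)
    return "\r\n".join(lines)

request = build_request(
    "POST", "/update.cgi",
    headers=[("Host", "www.somewhere.com"),
             ("Referer", "http://www.somewhere.com/formentry.html")],
    body="name=joe&type=info&amount=5")
print(request)
```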

The process of constructing an HTTP request typically begins when a web site visitor sees a link on a page and clicks on it, telling the browser to present the content associated with that link. There are other possibilities, such as entering a URL manually, or a browser connecting to a default home page when starting up, but this example allows us to describe typical browser activity more comprehensively.

Constructing the request line

When a link is selected, the browser’s User Interface module reacts to an event. A GUI-based application operates using an event model, in which user actions (e.g. typing, mouse clicking) are translated into events that the application responds to. In response to a mouse click on a hyperlink, for example, the User Interface module determines and resolves the URL associated with that link, and passes it to the Request Generation module.

Filling Out the Form

In the case of a user entry form, where the user has entered data into form fields and clicked a ‘Submit’ button, it may be more than just the URL that is passed to the Request Generation module. The entered data must be included as well.



As we mentioned in the chapter covering HTTP processing, the data is converted into name/value pairs that are URL-encoded. The GET method includes the encoded data in the URL as a query string, while the POST method places the encoded data in the body of the request.

At this point, the Request Generation module begins to construct the request. The first portion of the request that needs to be created is the request line, which contains the ‘method’ (representing one of several supported request methods), the ‘/path-to-resource’ (representing the path portion of the requested URL), and the ‘version-number’ (specifying the version of HTTP associated with the request). Let’s examine these in reverse order.

The ‘version-number’ should be either HTTP/1.1 or HTTP/1.0. A modern, up-to-date client program should always seek to use the latest version of its chosen transmission protocol, unless the recipient of the request is not sophisticated enough to make use of that latest version. Thus, at the present time, a browser should seek to communicate with a server using HTTP/1.1, and should only ‘fall back’ to HTTP/1.0 if the server with which it is communicating does not support HTTP/1.1.

The ‘path-to-resource’ portion is a little more complicated, and is in fact dependent on which version of HTTP is employed in the request. You may remember that this portion of the request line is supposed to contain the “path” portion of the URL: the part following the host portion (i.e. "http://hostname"), starting with the "/". The situation is complicated when the browser connects to a proxy server to send a request, rather than connecting directly to the target server. Proxies need to know where to forward the request; if only the path-to-resource portion is included in the request line, a proxy would have no way of knowing the intended destination of the request.

HTTP/1.0 requires the inclusion of the entire URL for requests directed at proxy servers, but forbids the inclusion of the entire URL for requests that get sent directly to their target servers. This is because HTTP/1.0 servers do not understand requests in which the full URL is specified in the request line. In contrast, HTTP/1.0 proxies expect incoming requests to contain full URLs. When HTTP/1.0 proxies reconstruct requests to be sent directly to their target servers, they remove the server portion of the request URL. When requests must pass through additional proxies, this reconstruction is not performed, and the requests remain unchanged.

HTTP/1.1 is more flexible: it makes the inclusion of the entire URL on the request line acceptable in all situations, irrespective of whether a proxy is involved. However, to facilitate this flexibility, HTTP/1.1 requires that all submitted requests include a "Host" header, specifying the IP address or name of the target server. This header was originally introduced to support virtual hosting, a feature that allows a web server to service more than one domain. This means that a single web server program could be running on a server machine, accepting requests associated with many different domains. The Host header also provides sufficient information to proxies so that they can properly forward requests to other servers/proxies. Unlike HTTP/1.0 proxies, HTTP/1.1 proxies do not need to perform any transformation of these requests.

The ‘method’ portion of the request line depends on which request method is specified, implicitly or explicitly. When a hyperlink (textual or image) is selected and clicked, the GET method is implicitly selected. In the case of HTML forms, a particular request method may be specified in the METHOD attribute of the FORM tag:

<FORM METHOD="POST" ACTION="..."> ... </FORM>

As mentioned in the chapter on the HTTP protocol, the GET method represents the simplest format for HTTP requests: a request line, followed by headers, and no body. Other request methods such as POST and PUT make use of a request body that follows the request line, headers, and blank line. (The blank line serves as a separator delimiting the headers from the body.) This request body may contain parameters associated with an HTML form, a file to be uploaded, or a combination of both. In any case, we are still working on the construction of the request line. The ‘method’ portion will be set to "GET" by default: for textual or image-based hyperlinks that are followed, and for forms that do not explicitly specify a METHOD. If a form does explicitly specify a METHOD, that method will be used instead.
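The version-dependent rules above for the request line and the Host header might be sketched as follows (the function and its return convention are illustrative, not from the book):

```python
from urllib.parse import urlsplit

def request_line_and_host(url, via_proxy, version="HTTP/1.1"):
    """Return the request line for a GET request, plus the Host
    header (or None), per the version rules described above."""
    parts = urlsplit(url)
    path = parts.path or "/"
    if version == "HTTP/1.0":
        # HTTP/1.0: full URL only when talking to a proxy; direct
        # requests carry the bare path, and no Host header is required.
        target = url if via_proxy else path
        return f"GET {target} {version}", None
    # HTTP/1.1: either form is acceptable, but the bare path plus a
    # mandatory Host header is the common choice for direct requests.
    target = url if via_proxy else path
    return f"GET {target} {version}", f"Host: {parts.netloc}"

print(request_line_and_host("http://www.somewhere.com/update.cgi",
                            via_proxy=False))
# ('GET /update.cgi HTTP/1.1', 'Host: www.somewhere.com')
```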

Constructing the headers

Next, we come to the headers. There are a number of headers that a browser should include in the request:

Host: www.neurozen.com

This header was introduced to support virtual hosting, a feature that allows a web server to service more than one domain. This means that a single web server program could be running on a server machine, accepting requests associated with many different domains. Without this header, the web server program could not tell which of its many domains was the target of the request. In addition, this header provides information to proxies to facilitate proper routing of requests.

User-Agent: Mozilla/4.75 [en] (WinNT; U)

Identifies the software (e.g. a web browser) responsible for making the request. Your browser (or for that matter any web client) should provide this information to identify itself to servers. The convention is to produce a header containing the name of the product, the version number, the language this particular copy of the software uses, and the platform it runs on:

Product/version.number [lang] (Platform)

Referer: http://www.cs.rutgers.edu/~shklar/index.html

If this request was instantiated because a user selected a link found on a web page, this header should contain the URL of that referring page. Your web client should keep track of the current URL it is displaying, and it should be sure to include that URL in a Referer header whenever a link on the current page is selected.

Date: Sun, 11 Feb 2001 22:28:31 GMT

This header specifies the time and date that this message was created. All request and response messages should include this header.

Accept: text/html, text/plain, type/subtype, ...
Accept-Charset: ISO-8859-1, character-set-identifier, ...
Accept-Language: en, language-identifier, ...
Accept-Encoding: compress, gzip, ...

These headers list the MIME types, character sets, languages, and encoding schemes that your client will ‘accept’ in a response from the server. If your client needs to limit responses to a finite set, then these should be included in these headers. Your client’s preferences with respect to these items can be ranked by adding relative quality values in the form of q=qvalue parameters, where qvalue is a number between 0 and 1.

Content-Type: mime-type/mime-subtype
Content-Length: xxx

These entity headers provide information about the message body. For POST and PUT requests, the server needs to know the MIME type of the content found in the body of the request, as well as the length of the body.

Cookie: name=value


This request header contains cookie information that the browser has found in responses previously received from Web servers. This information needs to be sent back to those same servers in subsequent requests, maintaining the ‘state’ of a browser session by providing a name-value combination that uniquely identifies a particular user. Interaction with the State Maintenance module will determine whether these headers need to be included in requests, and if so what their values should be. Note that a request will contain multiple Cookie headers if there is more than one cookie that should be included in the request.
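The domain and path matching behind that determination can be sketched as follows. This is a simplification: the real cookie rules add expiration, the secure flag, and stricter domain matching:

```python
from urllib.parse import urlsplit

def cookie_applies(cookie, url):
    """Return True if a stored cookie's domain and path patterns match
    the request URL (tail-match on domain, prefix-match on path)."""
    parts = urlsplit(url)
    # NOTE: naive endswith would also match e.g. 'evilsomewhere.com';
    # a production implementation must match on domain-name boundaries.
    return (parts.hostname.endswith(cookie["domain"])
            and parts.path.startswith(cookie["path"]))

cookie = {"name": "sessionid", "value": "abc123",
          "domain": "somewhere.com", "path": "/"}
print(cookie_applies(cookie, "http://www.somewhere.com/update.cgi"))  # True
print(cookie_applies(cookie, "http://www.elsewhere.com/update.cgi"))  # False
```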




Authorization: SCHEME encoded-userid:password

This request header provides authorization credentials to the server in response to an authentication challenge received in an earlier response. The scheme (usually ‘basic’) is followed by a string composed of the user ID and password (separated by a colon), encoded in the base64 format. Interaction with the Authorization module will determine what the content of this header should be.
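For the ‘basic’ scheme, the credential string is simply the base64 encoding of userid:password, as this sketch using Python's standard base64 module shows (the user ID and password are invented):

```python
import base64

def basic_authorization(userid, password):
    """Build the value of an Authorization header for Basic auth."""
    credentials = f"{userid}:{password}".encode("ascii")
    encoded = base64.b64encode(credentials).decode("ascii")
    return f"Basic {encoded}"

print(basic_authorization("joe", "secret"))
# Basic am9lOnNlY3JldA==
```

Note that base64 is an encoding, not encryption: anyone who intercepts the header can trivially recover the password, which is one reason basic credentials should only travel over secure connections.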

Constructing the request body

This step of the request construction process applies only to methods like POST and PUT that attach a message body to a request. The simplest example is that of including form parameters in the message body when using the POST method. They must be URL-encoded to enable proper parsing by the server, and thus the Content-Type header in the request must be set to application/x-www-form-urlencoded.

There are more complex uses for the request body. File uploads can be performed through forms employing the POST method (using multipart MIME types), or (with the proper server security configuration) web resources can be modified or created directly using the PUT method. With the PUT method, the Content-Type of the request should be set to the MIME type of the content that is being uploaded. With the POST method, the Content-Type of the request should be set to multipart/form-data, while the Content-Type of the individual parts should be set to the MIME type of those parts. This Content-Type header requires the "boundary" parameter, which specifies a string of text that separates discrete pieces of content found in the body:

...
Content-Type: multipart/subtype; boundary="random-string"

--random-string
Content-Type: type/subtype of part 1
Content-Transfer-Encoding: encoding scheme for part 1

content of part 1
--random-string
Content-Type: type/subtype of part 2
Content-Transfer-Encoding: encoding scheme for part 2

content of part 2
--random-string--



Note that each part specifies its own Content-Type, and its own Content-Transfer-Encoding. This means that one part can be textual, with no encoding specified, while another part can be binary (e.g. an image), encoded in Base64 format, as in the following example:

...
Content-Type: multipart/form-data; boundary="gc0p4Jq0M2Yt08jU534c0p"

--gc0p4Jq0M2Yt08jU534c0p
Content-Type: application/x-www-form-urlencoded

&filename=...&param=value
--gc0p4Jq0M2Yt08jU534c0p
Content-Type: image/gif
Content-Transfer-Encoding: base64

FsZCBoYWQgYSBmYXJtCkUgSST2xkIE1hY0Rvbm
GlzIGZhcm0gaGUgaGFkBFIEkgTwpBbmQgb24ga
IHKRSBJIEUgSSBPCldpdGggYSNvbWUgZHVja3M
BxdWjayBoZXJlLApFjayBxdWFhIHF1YWNrIHF1
XJlLApldmVyeSB3aGYWNrIHRoZVyZSBhIHF1YW
NrIHF1YWNrCEkgTwokUgSSBFI=
--gc0p4Jq0M2Yt08jU534c0p--
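Assembling such a body is mostly a matter of interleaving boundary lines, part headers, and part content. A sketch (the boundary, field names, and base64 data are illustrative):

```python
def multipart_body(parts, boundary):
    """Assemble a multipart message body from (headers, content) pairs.
    Parts are separated by the boundary; the final boundary line is
    terminated with a trailing '--'."""
    lines = []
    for part_headers, content in parts:
        lines.append(f"--{boundary}")
        lines.extend(part_headers)
        lines.append("")   # blank line between part headers and content
        lines.append(content)
    lines.append(f"--{boundary}--")
    return "\r\n".join(lines)

body = multipart_body(
    [(["Content-Type: application/x-www-form-urlencoded"],
      "param=value"),
     (["Content-Type: image/gif",
       "Content-Transfer-Encoding: base64"],
      "R0lGODlhAQABAAAAACw=")],   # truncated, illustrative image data
    boundary="gc0p4Jq0M2Yt08jU534c0p")
print(body)
```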

SOAP Opera

One increasingly popular usage of the request body is as a container for Remote Procedure Calls (RPC). This is especially true now that there are XML-based implementations of RPC, including the aptly named XML-RPC and its successor, SOAP (Simple Object Access Protocol). When using SOAP over HTTP, the body of a request consists of a SOAP payload: an XML document containing RPC directives, including method calls and parameters. We will discuss this in greater detail in a later chapter.

Transmission of the request

Once the request has been fully constructed, it is passed to the Networking module, which transmits the request. This module must first determine the target of the request. Normally, this can be obtained by parsing the URL associated with the request. However, if the browser is configured to employ a proxy server, the target of the request would be that proxy server. Thus, the Configuration module must be queried to determine the actual target for the request. Once this is done, a socket is opened to the appropriate machine.
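The target-selection step might be sketched like this (the proxy configuration interface is hypothetical):

```python
from urllib.parse import urlsplit

def connection_target(url, proxy=None):
    """Return the (host, port) pair the socket should connect to:
    the configured proxy if there is one, else the request's server."""
    if proxy is not None:
        return proxy   # e.g. ("proxy.mycompany.com", 8080)
    parts = urlsplit(url)
    return parts.hostname, parts.port or 80   # default HTTP port

print(connection_target("http://www.somewhere.com/update.cgi"))
# ('www.somewhere.com', 80)
print(connection_target("http://www.somewhere.com/update.cgi",
                        proxy=("proxy.mycompany.com", 8080)))
# ('proxy.mycompany.com', 8080)
```

A real Networking module would then open the connection, e.g. with socket.create_connection(target), and write the serialized request to it.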



5.3.2 HTTP responses

In the request/response paradigm, the transmission of a request anticipates the receipt of some sort of a response. Hence, browsers and other web clients must be prepared to process HTTP responses. This task is the responsibility of the Response Processing module. As we know, HTTP responses have the following format:

HTTP/version-number status-code explanation
Header-Name-1: value
Header-Name-2: value

[ response body ]

An HTTP response message consists of a status line (containing the HTTP version, a three-digit status code, and a brief human-readable explanation of the status code), a series of headers (again, one per line), a blank line, and finally the body of the response. The following is an example of the HTTP response message that a server would send back to the browser when it is able to satisfy the incoming request:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1234

[ HTML content of the requested page ]

In this case, we have a successful response: the server was able to satisfy the client’s request and sent back the requested data. Now, of course, the requesting client must know what to do with this data.

When the Networking module receives a response, it passes it to the Response Processing module. First, this module must interpret the status code and header information found in the response to determine what action it should take. It begins by examining the status code found in the first line of the response (the status line). In the chapter covering the HTTP protocol, we delineated the different classes of status codes that might be sent by a web server:

• informational status codes (1xx),
• successful response status codes (2xx),
• redirection status codes (3xx),
• client request error status codes (4xx), and
• server error status codes (5xx).

Obviously, different actions need to be taken depending on which status code is contained in the response. Since the successful response represents the simplest and most common case, we will begin with the status code "200".
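Dispatching on these classes amounts to inspecting the leading digit of the status code; a sketch (the class labels are ours):

```python
def status_class(code):
    """Map a three-digit HTTP status code to its response class."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes.get(code // 100, "unknown")

print(status_class(200))  # success
print(status_class(404))  # client error
```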

Processing successful responses

The status code "200" represents a successful response, as indicated by its associated message "OK". This status code indicates that the browser or client should take the associated content and render it in accordance with the specifications included in the headers:

Transfer-Encoding: chunked
Content-Encoding: compress | gzip

The presence of these headers indicates that the response content has been encoded and that, prior to doing anything with this content, it must be decoded.

Content-Type: mime-type/mime-subtype


This header specifies the MIME type of the message body’s content. Browsers are likely to have individualized rendering modules for different MIME types. For example, text/html would cause the HTML rendering module to be invoked, text/plain would make use of the plain text rendering module, and image/gif would employ the image rendering module. Browsers provide built-in support for a limited number of MIME types, while deferring processing of other MIME types to plug-ins and helper applications.

Content-Length: xxx


This optional header provides the length of the message body in bytes. Although it is optional, when it is provided a client may use it to impart information about the progress of a request. When the header is included, the browser can display not only the amount of data downloaded, but also that amount as a percentage of the total size of the message body.

Set-Cookie: name=value; domain=domain.name; path=path-within-server; [ secure ]

If the server wishes to establish a persistent mechanism for maintaining session state with the user’s browser, it includes this header along with identifying information. The browser is responsible for sending back this information in any requests it makes for resources within the same domain and path, using Cookie headers. The State Maintenance module stores cookie information found in the response’s Set-Cookie headers, so that the browser can later retrieve that information for Cookie headers it needs to include in generated requests. Note that a response can contain multiple Set-Cookie headers.

Cache-Control: private | no-cache | ...
Pragma: no-cache
Expires: Sun, 11 Feb 2001 22:28:31 GMT

These headers influence caching behavior. Depending on their presence or absence (and on the values they contain), the Caching Support module will decide whether the content should be cached, and if so, for how long (e.g. for a specified period of time or only for the duration of this browser session).

Once the content of a successful response has been decoded and cached, the cookie information contained in the response has been stored, and the content type has been determined, the response content is passed on to the Content Interpretation module. This module delegates processing to an appropriate submodule, based on the content type. For instance, images (Content-Type: image/*) are processed by code devoted to rendering images. HTML content (Content-Type: text/html) is passed to HTML rendering functions, which would in turn pass off processing to other functions. For instance, JavaScript (contained within SCRIPT blocks or requested via URL references in SCRIPT tags) must be interpreted and processed appropriately. In addition, stylesheet information embedded in the page must also be processed. Only after all of this processing is complete is the resulting page passed to the User Interface module to be displayed in the browser window. (Auxiliary requests for additional resources are explained in the section on Requesting Supporting Data Items later in this chapter.)

There are other status codes that fit into the ‘successful response’ category (2xx), including:

"201 Created": a new resource was created in response to the request, and the Location header contains the URL of the new resource.

"202 Accepted": the request was accepted, but may or may not be processed by the server.

Processing HTTP Requests and Responses


"204 No Content": no body was included with the response, so there is no content to present. This tells the browser not to refresh or update its current presentation as a result of processing this request. "205 Reset Content": this is usually a response to a form processed for data entry. It indicates that the server has processed the request, and that the browser should retain the current presentation, but that it should clear all form fields. Although these status codes are used less often than the popular 200 OK, browsers should be capable of interpreting and processing them appropriately.

Processing of responses with other status codes

Aside from the successful status code of 200, the most common status codes are the ones associated with redirection (3xx) and client request errors (4xx). Client request errors are usually relatively simple to process: either the browser has somehow provided an invalid request (400 Bad Request), or the URL the browser requested could not be found on the server (404 Not Found). In either of these cases, the browser simply presents a message describing this state of affairs to the user. Authentication challenges that are caused by the browser attempting to access protected resources (e.g. 401 Not Authorized) are also classified as 'client error' conditions. Some Web servers may be configured to provide custom HTML presentations when one of these conditions occurs. In those situations, the browser should simply render the HTML page included in the response body:

HTTP/1.1 404 Not Found
Content-Type: text/html

<html>
<head><title>Whoops!</title></head>
<body>
<h1>Look What You've Done!</h1>
You've broken the Internet! (Just kidding, you simply
requested an invalid address on this site.)
</body>
</html>

Security Clearance There is another type of client error that is not quite so simple to process: 401 Not Authorized and 403 Forbidden responses. Servers send responses with the 401


Web Browsers

status code when authorization credentials are required to access the resource being requested, and send responses with the 403 status code when the server does not want to provide access at all. The latter may happen when the browser exceeds the server limit for unsuccessful authentication challenges. The methods for dealing with authorization challenges will be discussed later in this chapter.

Redirection status codes are also relatively simple to process. They come in two main varieties: 301 Moved Permanently and 302 Moved Temporarily. The processing for each is similar. Responses with either of these status codes will include a Location header, and the browser needs to submit a further request to the URL specified in this header to perform the desired redirection. Some Web servers may be configured to include custom HTML bodies when one of these conditions arises. This is for the benefit of older browsers that do not support automatic redirection and default to rendering the body when they do not recognize the status code. Browsers that support redirection can ignore this content and simply perform the redirection as specified in the header:

HTTP/1.1 301 Moved Permanently
Location: http://www.somewhere-else.com/davepage.html
Content-Type: text/html

<html>
<head><title>Dave's Not Here, Man!</title></head>
<body>
<h1>Dave's Not Here, Man!</h1>
Dave is no longer at this URL. If you want to visit him,
click <a href="http://www.somewhere-else.com/davepage.html">here</a>.
</body>
</html>

This response should cause the browser to generate the following request:

GET /davepage.html HTTP/1.1
Host: www.somewhere-else.com
...

Complex HTTP Interactions


The difference between the 301 and 302 status codes is the notion of ‘moved permanently’ versus ‘moved temporarily’. The 301 status code informs the browser that the data at the requested URL is now permanently located at the new URL, and thus the browser should always automatically go to the new location. In order to make this happen, browsers need to provide a persistence mechanism for storing relocation URLs. In fact, the mechanism used for storing cookies, authorization credentials, and cached content can be employed for this purpose as well. In the future, whenever a browser encounters a request for a relocated URL, it would automatically build a request asking for the new URL.
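The persistent relocation table described above might be sketched as follows; the data structure and function names are our own invention, not part of any particular browser:

```python
# Sketch: remembering 301 (permanent) redirects so that future requests
# go straight to the new location. 302s are followed but not recorded.

relocations = {}  # old URL -> new URL

def handle_redirect(request_url, status, location):
    if status == 301:
        relocations[request_url] = location
    # For both 301 and 302, the browser re-requests this URL now.
    return location

def resolve(url):
    # Follow any chain of recorded permanent moves before issuing a request.
    seen = set()
    while url in relocations and url not in seen:
        seen.add(url)
        url = relocations[url]
    return url
```

In a full implementation the relocations table would live in the same persistence mechanism used for cookies and cached content, as the text suggests.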

5.4 COMPLEX HTTP INTERACTIONS

Now that we have covered the basics of request and response processing, let us move on to situations where the interplay of requests and responses yields more sophisticated functionality. The areas we mentioned earlier were caching, authorization, cookies, requests for supporting data items, and the processing of other complex response headers (including advanced HTTP functionality). Let us examine how the modules in our browser architecture interact to provide this functionality.

5.4.1 Caching

When we speak of caching, we are referring to the persistence, in some storage mechanism, of generated and retrieved server resources to improve the performance of the response generation process. There is server-side caching, which relieves the server of the responsibility of regenerating a response from scratch in appropriate situations. When resources (such as HTML responses or dynamically generated images) are stored in a server-side cache, the server does not need to go through the process of building these responses from the ground up. This can yield an enormous performance benefit, provided the stored response is still deemed usable (i.e. has not expired). This variety of caching is application-specific and goes beyond the scope of providing support for HTTP standards. (This is discussed further in the chapter on Web applications.) More relevant to our concerns in designing a browser is client-side caching. Client-side caching can relieve the client of the responsibility for re-requesting a response from the server, and/or relieve the server of the responsibility for re-sending a response containing an already requested resource. This can yield enormous



savings in data transmission time. To support client-side caching, Web clients must store retrieved resources in a client-side cache. Subsequent requests for the same resource (i.e. the response generated by a request to the same URL with the same parameters) should examine the cache to see if that resource has already been stored there and is still valid (i.e. has not expired). If it has expired, the client should ask the server to send back a copy of the resource only if it has changed since the last time it was requested.

Support for browser caching requires three components:

1. a mechanism for including appropriate headers in requests to support caching (part of the Request Generation module),
2. examination of response headers for directives regarding the caching of the response (part of the Response Processing module), and
3. a mechanism for saving retrieved resources in some persistent storage (memory or disk) until the specified expiration date.

The third item is a module in and of itself: the Caching module, which does the bulk of the decision making regarding how to construct requests to support caching, and how to deal with potentially cacheable content found in the response.

Before an HTTP request is generated for a resource, the Request Generation module should query the Caching module to determine whether a saved copy of this resource exists and has not yet expired. If there is such a copy, it can be used to satisfy the request, rather than requiring the browser to transmit an explicit request to a server and wait for a response to be transmitted back. Even if the copy has 'expired', the request for the resource can be sent with an If-Modified-Since header. If the resource has not changed since the time it was originally retrieved, the server can respond with a 304 Not Modified status code, rather than sending a new copy of the resource in the response.
If there is no local copy stored in the cache, the Request Generation module does not include the conditional header in the request. The content of the response should then be considered as a candidate for storage in the cache. If no directives in the headers indicate that this item should not be cached, the item can be stored in the cache, along with any associated expiration information. The Response Processing module must perform the necessary examination of response headers and, if appropriate, pass the content of the response to the Caching module, which determines whether or not the content should be stored in the cache.

Here is an example of an HTTP response specifying that the content should not be cached. Subsequent requests for this same resource would result in its repeated transmission from the server:



HTTP/1.1 200 OK
Date: Sun, 13 May 2001 12:36:04 GMT
Content-Type: image/jpeg
Content-Length: 34567
...
Cache-Control: no-cache
Pragma: no-cache
...

An example of an HTTP response with a defined expiration date is shown below. This entry would expire one day after the original request was made:

HTTP/1.1 200 OK
Date: Sun, 13 May 2001 12:36:04 GMT
Content-Type: image/jpeg
Content-Length: 34567
Cache-Control: private
Expires: Mon, 14 May 2001 12:36:04 GMT
Last-Modified: Sun, 13 May 2001 12:36:04 GMT
...

The next time this particular resource is desired, the cached copy may be used, at least until the specified expiration date. When a copy of a resource is stored in the cache, the Caching module maintains other information (metadata) about the cache entry, namely its expiration date and its last modification date. This information is useful in optimizing the use of cache entries to satisfy requests. For example, let’s say a full day passes from the point in time at which this resource was cached. According to the expiration date specified in the Expires header, the cache entry will have expired by then. At this point, the Request Generation module could simply submit a request for a fresh copy of the resource, telling the Caching Support module simply to dispose of the expired cache entry. But this may not be necessary. If the content of the resource has not changed since it was originally retrieved, the stored copy is still usable. So, why not use it? There are several ways to determine whether a resource has been modified since it was last accessed. Prior to HTTP/1.1, the most economical way to do this was to make use of the HEAD method associated with HTTP requests. The HEAD method



returns the same results as a GET request, but without the response's body. In other words, only the headers are sent in the response. The browser could simply look at the Last-Modified header in the response and compare it to the last modification date associated with the cache entry. If the date specified in the header was less than or equal to the date found in the cache entry, the cache entry could still be used. Otherwise, the cache entry would be deleted and a request made for a new copy of the requested resource.

HTTP/1.1 provides a simpler way to accomplish the same goal. Requests can include a new header, If-Modified-Since, which specifies the last modification date found in the cache entry. If the date specified in the header is greater than or equal to the last modification date associated with the requested resource, the server sends back a response with a status code of 304 Not Modified. This tells the browser that the cache entry can still be used. Otherwise, the server sends the new copy of the requested resource, and the browser deletes the cache entry. With this feature of HTTP/1.1, what used to take (potentially) two sets of HTTP requests and responses can now be accomplished in one. Thus, when the Caching module informs the Request Generation module that a cache entry exists but may have expired, the Request Generation module can add the If-Modified-Since header to its request. The Response Processing module, upon receiving a 304 Not Modified response, will then make use of the cache entry.

History Repeats

An awkward situation that occurs in even the most sophisticated browsers is the history anomaly. Users can employ the 'Back' button to see for a second time the presentation of pages they have already visited. The history mechanism within the browser often stores not only a reference to links, but also the presentations associated with those links, regardless of any caching directives (e.g. Cache-Control: no-cache) that the server may have included in the original response headers! What browsers need to do is treat the 'Back' button event as a request for new content, making the decision to re-use the presentation only after examining caching directives associated with the original response.

When it comes to pages with form fields, even within the scope of this anomaly, browsers do not act consistently. Internet Explorer 5 seems to present the page again, but with all form fields empty. Older browsers frequently left form fields filled in with whatever values had been entered on the previous visit to the page. You may have noticed that some sites take precautions against this, explicitly taking action to clear form fields on the page as the form is being submitted. To do this, they must engage in 'stupid JavaScript tricks': copying entered fields into another form with hidden fields, resetting the form with the visible fields, and submitting the hidden form.
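The interplay between a cache entry's metadata and the If-Modified-Since / 304 Not Modified exchange can be sketched as follows. The cache-entry layout and the function names are invented for this illustration:

```python
import email.utils

# Sketch: deciding between serving from cache, revalidating, and refetching.
# A cache entry records the body plus Expires and Last-Modified metadata,
# both held as POSIX timestamps.

def build_request_headers(entry, now):
    """Return extra request headers, or None if the cached copy can be
    used without contacting the server at all."""
    if entry is None:
        return {}                      # no copy: plain unconditional GET
    if entry["expires"] > now:
        return None                    # fresh: serve straight from the cache
    # Stale: ask the server to send the body only if it has changed.
    stamp = email.utils.formatdate(entry["last_modified"], usegmt=True)
    return {"If-Modified-Since": stamp}

def handle_response(entry, status, body):
    if status == 304:
        return entry["body"]           # server says our copy is still good
    return body                        # new copy replaces the cache entry
```

Note how a 304 response lets the browser reuse the stored body even though the entry's expiration date has passed.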

5.4.2 Cookie coordination HTTP is a stateless protocol, but cookies are a mechanism for maintaining state during a browser session even though a stateless protocol is being used. The principle



is simple: in responses sent to browsers, servers can include key/value pairs (cookies), which the clients are responsible for remembering. Every time the client sends a request back to a server for which it has received a cookie, it must include the cookie in the request. This helps servers to identify specific browser instances, allowing them to associate sets of otherwise disjoint requests with particular users. These requests, taken together, do not comprise an actual session in the traditional network connectivity sense, but rather a logical session.

Servers transmit cookies to browsers via the Set-Cookie response header. This header provides a name/value combination that represents the cookie itself. In addition, this header contains information about the server's domain and the URL path with which the cookie is to be associated. It can also contain the secure keyword, which instructs the browser to transmit the cookie only over secure connections (e.g. using HTTPS, which is no more than HTTP over SSL).

The domain parameter of the Set-Cookie header can be a fully qualified host name, such as ecommerce.mysite.com, or a pattern such as .mysite.com, which corresponds to any fully qualified host name that tail-matches this string (e.g. for domain=.mysite.com, ecommerce.mysite.com and toys.ecommerce.mysite.com match, but mysite.com does not). The value of this parameter must be a domain to which the server sending the cookie belongs. In other words, ecommerce.mysite.com could set a cookie with a domain parameter of ecommerce.mysite.com or .mysite.com, but not catalog.mysite.com.

The path parameter designates the highest level of a URL path for which the cookie applies. For instance, if a path parameter with a value of / is included in a Set-Cookie header sent by a server in a response, the browser should include the value of that cookie in requests for all URLs on the server.
If the path parameter is set to /cgi-bin/, then the cookie need only be sent by the browser in requests for URLs within the /cgi-bin/ directory subtree on the server.

Browsers send identifying cookies back to appropriate servers by including the Cookie header in requests to those servers. The content of this Cookie header is simply the set of key/value pairs originally sent by the server, which the browser has stored for future reference:

Cookie: key1=value1; key2=value2; . . .
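The domain tail-match and path-prefix rules just described can be sketched as follows. This is a simplification of the full cookie specification, and the function names are our own:

```python
# Sketch: deciding which stored cookies should accompany a request,
# per the domain tail-match and path-prefix rules described above.

def domain_matches(host, cookie_domain):
    if cookie_domain.startswith("."):
        # A pattern such as ".mysite.com" matches any host name that
        # tail-matches it (so "mysite.com" itself does not match).
        return host.endswith(cookie_domain)
    return host == cookie_domain       # fully qualified name: exact match

def path_matches(url_path, cookie_path):
    return url_path.startswith(cookie_path)

def cookies_for_request(host, url_path, jar):
    """jar is a list of (name, value, domain, path) tuples; the result is
    the value of the Cookie header, or an empty string if none apply."""
    pairs = ["%s=%s" % (name, value)
             for name, value, domain, path in jar
             if domain_matches(host, domain) and path_matches(url_path, path)]
    return "; ".join(pairs)
```

For example, a cookie stored for domain .mysite.com and path /cgi-bin/ is sent with requests to toys.mysite.com for URLs under /cgi-bin/, but not for /index.html.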

5.4.3 Authorization: challenge and response As with any sophisticated mechanism employed within HTTP requests and responses, the authorization mechanism associated with basic HTTP authentication is an ongoing interchange. If we start at the very beginning, it would be when a simple HTTP request is made for a resource that just happens to be ‘protected.’



Mechanisms exist on virtually all Web servers to 'protect' selected resources. Usually, this is accomplished through a combination of IP address security (ranges of addresses are explicitly allowed or denied access to the resources) and some form of Access Control List (ACL) delineating the identifiers and passwords for users that are allowed to access the resources. ACLs are generally associated with realms: abstract classifications that a Webmaster can use to organize secure resources into discrete categories. The Webmaster associates groups of resources (usually directory subtrees) with specific realms. In the chapter on Web servers, we discussed the design of services that implement both IP address security and ACL-based security. Here we will cover the mechanisms that browsers and other Web clients need to employ to interact with these services.

Let's start with a Web client request for a protected resource. The request looks like (and is) a perfectly normal HTTP request, because the client may not even realize that this resource is protected:

GET /protected/index.html HTTP/1.1
Host: secret.resource.com

The Web server, however, knows that the resource is protected (and is associated with a particular realm), and sends an appropriate response with the 401 Not Authorized status code:

HTTP/1.1 401 Not Authorized
Date: Sun, 11 Feb 2001 22:28:31 GMT
WWW-Authenticate: Basic realm="Top Seekrit"
Content-Type: text/html
...

This response is the authentication challenge. At this point, the client must answer the challenge by providing authorization credentials. Before it does anything else, the client should look in its own data storage to see if it has, during the current session, already provided credentials for this realm to this particular server. If it has, it does not need to obtain or derive these credentials anew; it can simply retransmit the stored credentials in its response. But we're getting ahead of ourselves: our scenario involves a first request to a server for protected resources. The Web browser would obtain authorization credentials by prompting the user. Normally, this is accomplished by displaying a dialog box asking the user to enter a userid and password for the realm associated with the requested resource. Once the browser has obtained these credentials, it must include them in a resubmitted HTTP request for the same resource:



GET /protected/index.html HTTP/1.1
Host: secret.resource.com
Authorization: Basic encoded-userid:password

If the credentials do not match a userid and password in the ACL, the server sends another response with a 401 Not Authorized status code, causing the browser to re-prompt for valid credentials. If the user elects to stop trying (e.g. by choosing the 'Cancel' option), the browser will present a message to the user or, alternatively, present the HTML content (if any) provided with the 401 response. Most servers will give up after a certain number of exchanges with the browser, changing the 401 status code to 403 (Forbidden). If the credentials provided do match the userid and password in the ACL, the Web server will finally transmit the contents of the requested resource. At the same time, the browser should save the credentials so that the next time a request is made for a resource within the same realm on the same server (during the same session), the user need not be prompted for them again.
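The 'encoded' credentials in the Authorization header for Basic authentication are simply the base64 encoding of the string userid:password. A sketch follows; the in-memory credential table mirrors the session-scoped storage described above, and the function names are invented:

```python
import base64

# Sketch: constructing and storing Basic authentication credentials.
# The table maps (server, realm) -> encoded credentials, and would be
# kept in memory only, for the duration of the browser session.

credentials = {}

def encode_basic(userid, password):
    raw = ("%s:%s" % (userid, password)).encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

def remember(server, realm, userid, password):
    credentials[(server, realm)] = encode_basic(userid, password)

def header_for(server, realm):
    # Returns the saved Authorization header value, or None if the user
    # has not yet been prompted for this server/realm combination.
    return credentials.get((server, realm))
```

Note that base64 is an encoding, not encryption; Basic credentials are trivially decodable, which is why they should only be sent over secure connections.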

5.4.4 Re-factoring: common mechanisms for storing persistent data

Note that there are many similarities between the mechanisms provided for storing cached content, cookie information, and authorization credentials. They can all be thought of as a form of persistence, since they all represent efforts to store similarly structured information that will be reused later on. It is probably a good idea to build a generalized persistence mechanism into your browser, and to use that mechanism for all of these purposes. This mechanism would need to support both in-memory persistence (where persisted information is saved only for the duration of the browser session and never stored permanently) and long-term persistence (where persisted information may be placed in some form of permanent data storage so that it lasts beyond the end of the browser session). Obviously, there are differences in the ways this mechanism would be used for each of these functions, as summarized in Table 5.1.

• For caching, the decision to store the data in the cache is based on response headers that the Caching module must interpret. The key used for addressing a cache entry is the requested URL (along with, potentially, any query string, POST content, and URL parameters associated with the request). Cache entries should be stored with an expiration date. If one is provided in an Expires header, then that should be used. If the date provided is in the past, then this indicates that the entry should only be cached for the duration of the browser session, meaning it should be flushed from the cache when the session ends. In addition, there should be a mechanism in the Configuration module for establishing the maximum cache size. If that size is exceeded (or approached), the Caching module should flush the oldest entries found in the cache.

• For cookies, the decision to store the cookie information found in a response's Set-Cookie header is based on only one factor: whether or not the user has elected (via the interface to the Configuration module) to accept cookies. Accepting cookies should be the default behavior in a browser (i.e. users should have to take explicit action to reject cookies). The key for addressing a cookie is the domain and path information specified in the Set-Cookie header. Cookies also have an expiration date. If none is specified, or if the date specified is in the past, the cookie information should only be stored for the duration of the current browser session, and flushed when the session ends. The browser can place limits on the amount of space available for storage of cookies; it is not required to store all cookies indefinitely.

• For authorization credentials, there is no decision to be made: this information should always be retained. However, authorization credentials are always flushed when the session ends and are never kept beyond the end of the browser session. Thus they should be kept in memory and never written to stable storage. The key for addressing authorization credentials is the IP address (or name) of the server, together with the name of the realm with which the server associates the requested resource.

Table 5.1  Browser mechanisms for storing persistent data

                Decision to store    Access key            When to delete         Storage mechanism
Cached          Depends on           URL associated        At expiration date,    Memory and/or disk
content         response headers     with request          or when cache is
                                                           full
Cookies         Depends on user      Domain and path       At expiration date,    Memory for cookies that
                settings             parameters in         or at end of session   expire at end of session,
                                     cookie                if no date provided    disk for persisted cookies
Authorization   Always               Server address and    At end of session!!    Memory only (never
credentials                          authentication realm                         store on disk)

Information that is required only for the duration of the browser session should be kept in memory, while information that must be persisted beyond the end of the session must be recorded using permanent data storage. This can be as simple as a text file (as Netscape does with cookies) or a directory subtree (as most browsers do for cached content), but more sophisticated mechanisms can be used as well.
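A generalized persistence mechanism along the lines of Table 5.1 might be sketched as follows. The class and method names are invented, and a real browser would add disk-backed storage for the entries that permit it:

```python
import time

# Sketch: one store serving cache entries, cookies, and credentials.
# Entries with expires=None live only for the current browser session;
# allow_disk=False marks data (e.g. credentials) that must stay in memory.

class PersistentStore:
    def __init__(self):
        self.entries = {}   # key -> (value, expires, allow_disk)

    def put(self, key, value, expires=None, allow_disk=False):
        self.entries[key] = (value, expires, allow_disk)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires, _ = item
        if expires is not None and expires <= now:
            del self.entries[key]      # expired: flush lazily on access
            return None
        return value

    def end_session(self):
        # Flush everything that may not outlive the session: entries with
        # no expiration date, and entries barred from permanent storage.
        self.entries = {k: v for k, v in self.entries.items()
                        if v[1] is not None and v[2]}
```

The access key would be the requested URL for cache entries, the domain/path pair for cookies, and the server/realm pair for credentials, as the table above specifies.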



5.4.5 Requesting supporting data items Even the simplest web page is not ‘self-contained’. Most pages at the very least contain references to images found at other URLs. In order to support graphical page rendering properly, a browser must make supplementary requests to retrieve supporting resources. To accomplish this, the browser needs to parse HTML markup and find additional resources that are specified on the page. The Content Interpretation module is responsible for performing this analysis. Once it has determined which additional resources are desired, it must tell the Request Generation module to construct and transmit HTTP requests to obtain these resources (Figure 5.4).

Step 1: Initial user request for "http://www.cs.rutgers.edu/~shklar/"

    GET /~shklar/ HTTP/1.1
    Host: www.cs.rutgers.edu

    Response:

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...
    <img src="images/photo.gif">
    ...

Step 2: Secondary browser request for "http://www.cs.rutgers.edu/~shklar/images/photo.gif"

    GET /~shklar/images/photo.gif HTTP/1.1
    Host: www.cs.rutgers.edu

    Response:

    HTTP/1.1 200 OK
    Content-Type: image/gif
    ...

Figure 5.4  Browser steps for requesting supporting data items



Keep the Connection Open! These resources (e.g. images) are often found on the same server as the HTML page itself. Prior to the advent of HTTP/1.1, this would mean that a browser would repeatedly open and close connections to the same server to get all the resources it needed. Fortunately, HTTP/1.1 by default supports persistent connections. This means that, unless otherwise specified, the browser will keep the connection to a server open so that supplemental requests can be made without additional overhead.

Remember that caching comes into play when making supplemental requests. A well-organized site is likely to use the same images on many different pages. If the server indicates that these images are cacheable and if a reasonable expiration date is specified, the browser may not need to make additional requests for some of these images: it may already have them in the cache!
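The Content Interpretation module's search for supporting resources can be sketched with a simple HTML parse. This illustration collects only img references and resolves them against the page URL; the class and function names are our own:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Sketch: collecting the URLs of supporting resources (here, just
# <img src=...>) that the browser must request after the page itself.

class ImageCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.resources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    # Resolve relative references against the page URL.
                    self.resources.append(urljoin(self.base_url, value))

def supporting_resources(base_url, html):
    collector = ImageCollector(base_url)
    collector.feed(html)
    return collector.resources
```

Each collected URL would first be checked against the cache; only the cache misses become the secondary requests shown in Figure 5.4.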

5.4.6 Multimedia support: helpers and plug-ins

From the beginning, browsers provided integrated support for many different types of data. Obviously, they rendered HTML (as well as plain text). When it came to graphics, support for GIF and JPEG images was practically ubiquitous in early browsers like Mosaic and Netscape, and support for animated GIFs followed soon thereafter. The <img> tag was included in the very earliest versions of HTML. HTML, plain text, GIF, and JPEG are supported "natively" by pretty much all of the modern desktop browsers. There are exceptions: for example, text-only browsers like Lynx, and browsers for handheld devices with limited bandwidth and screen size, do not include support for images.

Apart from images, there are other popular types of data presented via the Web. The most prominent examples are multimedia objects (audio and video), but there are also proprietary formats such as Adobe Acrobat PDF and Macromedia Flash. To enable the presentation of these data objects, browsers can do one of three things: they can provide native support for the format, they can allow the invocation of helper programs to present the object, or they can provide support for plug-ins. Plug-ins are program modules closely integrated with the browser, which enable the rendering/presentation of particular kinds of objects within the browser window.

The first option can be overwhelming. There are many multimedia data formats out there, and supporting even the most popular ones through embedded code within the browser is a daunting task. Furthermore, proprietary formats like Flash and PDF are subject to frequent change, as vendors keep implementing more advanced versions of these formats. It surely seems a far better idea to offload



support for these formats onto the people best capable of providing that support: the vendors themselves. This leaves the choice between helper applications and plug-ins.

Helper application support is relatively simple to implement. As we know, all of these different formats are associated with particular MIME types. Browsers can be configured to defer presentation of objects whose formats they do not support natively to programs that are specifically intended for presentation of such objects. A browser may create a mapping of MIME types to helper applications (and give users control over which applications should be invoked as helpers), or it can take the simpler route of using the mapping between MIME types and file extensions to defer the choice of application to the operating system. The downside of this approach is that these objects are rendered or presented outside of the browser window. A separate application is started to access and present the object, sometimes obscuring the browser window entirely, but at the very least abruptly shifting the user's focus from the browser to another application. This can be very confusing and can negatively impact the user's impression of the Web presentation. What's more, this approach is limited to use with links (e.g. <a href="..."> tags) pointing directly at the object. In many cases, page designers want to embed an object directly into the current page, rather than forcing users to click on a link so that the object can be presented to them.

This leaves us with the plug-in approach. This approach makes use of the <embed> and <object> HTML tags to tell the browser to render an embedded object. The <embed> tag has been deprecated in favor of the <object> tag. In fact, the W3C sees the <object> tag as a generalized approach to embedding multimedia objects. The browser must maintain a table defining what actions should be taken to present each defined MIME type.
This table should specify whether support for the MIME type is built into the browser, whether there is an installed plug-in, or whether the browser should launch a helper application. Although the browser should provide explicit plug-in specifications in this table (so that it knows which plug-in to use), it need not do this for helper applications. If it does not designate specific helper applications for MIME types, it defers the responsibility of choosing a helper application to the operating system. In any case, the Configuration module should provide a mechanism for users to customize these associations. This table must also associate MIME types with filename extensions (suffixes). These associations are needed when the browser attempts to present local files, and when it attempts to present responses that do not contain a properly formatted Content-Type header. Such responses violate HTTP protocol requirements, but browsers can use this table of associations to infer a MIME type from the filename suffix found in the URL. Browsers should be designed not to be too clever in this regard: some versions of Internet Explorer tried to infer a response’s MIME type by examining the



content. If it ‘looked like’ HTML, it would try to present it with a Content-Type of text/html. But webmasters sometimes want HTML content to be explicitly presented as text/plain (e.g. to show what the unprocessed HTML associated with a page fragment actually looks like). With this in mind, browsers should only engage in heuristic practices to infer the MIME type if all else fails (i.e. if there is no Content-Type and no known URL filename suffix).
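The fallback chain just described (Content-Type header first, then filename suffix, content sniffing only as a last resort) can be sketched as follows. The suffix table here is the Python standard library's; a real browser keeps its own user-configurable table, and the sniffing heuristic is deliberately crude:

```python
import mimetypes

# Sketch: choosing a MIME type for a response, in the priority order
# described above. Function names are invented for this illustration.

def guess_from_content(body):
    # Deliberately crude last-resort heuristic (see the caveat above
    # about browsers that sniff too eagerly).
    if body.lstrip().lower().startswith(b"<html"):
        return "text/html"
    return "application/octet-stream"

def mime_type_for(content_type_header, url, body):
    # 1. A well-formed Content-Type header always wins.
    if content_type_header:
        return content_type_header.split(";")[0].strip().lower()
    # 2. Otherwise, infer from the filename suffix in the URL.
    guessed, _ = mimetypes.guess_type(url)
    if guessed:
        return guessed
    # 3. Only if all else fails, examine the content itself.
    return guess_from_content(body)
```

Because the header takes priority, HTML served as text/plain is presented as plain text, exactly as the webmaster intended.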

Great Moments in MIME History

The notion of associating MIME types with designated applications dates back to the earliest uses of MIME. Remember that MIME was originally intended for use with e-mail attachments. (You may recall that MIME stands for Multipurpose Internet Mail Extensions.) UNIX systems made use of a .mailcap file, which was basically a table associating MIME types with application programs. From the earliest days of the Web, browsers made use of this capability. Early browsers on UNIX systems often used the .mailcap file directly, but as technology advanced and plug-ins got added into the mix, other browsers (e.g. Netscape) started to use their own MIME configuration files.

5.5 REVIEW OF BROWSER ARCHITECTURE

Table 5.2 reviews the browser modules discussed in this chapter and summarizes their responsibilities.

Table 5.2 Browser modules and their responsibilities

User Interface
Function: Providing the user interface; rendering and presenting the end result of browser processing.
1. Displays the browser window for rendering content received from the Content Interpretation module.
2. Provides user access to browser functions through menus, shortcut keys, etc.
3. Responds to user-initiated events:
— selecting/entering URLs
— filling in forms
— using navigation buttons (e.g. 'Back')
— viewing page source, resource info, etc.
— setting Configuration options
4. Passes request information to the Request Generation module.

Request Generation
Function: Constructing HTTP requests.
1. Receives request information from the User Interface or Content Interpretation module, resolving relative URLs.
2. Constructs the request line and basic headers:
— Content-Type:/Content-Length: (if a body is included in the request)
— Referer: (passed from the User Interface module)
— Host:
— Date:
— User-Agent:
— Accept-*:
3. Asks the Caching module whether a usable cache entry exists:
— passes the entry to the Content Interpretation module if it is unexpired,
— or adds an If-Modified-Since header to force the server to send back only newer content.
4. Asks the Authorization module whether this is a domain/path for which we have credentials:
— if so, adds an Authorization header to provide the credentials to the server,
— if not, tells the User Interface module to prompt the user.
5. Asks the State Maintenance module whether this is a domain/path for which we have cookies:
— if so, adds a Cookie header to transmit the cookies to the server.
6. Passes the fully constructed request to the Networking module.

Response Processing
Function: Analyzing, parsing, and processing HTTP responses.
1. Receives responses from the Networking module.
2. Checks for the 401 (Unauthorized) status code:
— Asks the Authorization module for credentials for the realm named in the WWW-Authenticate header: if found, resubmits the request with the saved credentials; if not, prompts the user for credentials and resubmits the request.
3. Checks for request redirection status codes (301/302/307):
— Resubmits the request to the URL specified in the Location header.
— If 301, stores the new location in a persistent lookup table (so the browser redirects automatically when the URL is visited again).
4. Checks for Set-Cookie headers:
— Stores cookies using the browser's persistence mechanism.
5. Passes the result to the Content Interpretation module.

Networking
Function: Interfacing with the operating system's network services, creating sockets to send requests and receive responses over the network, and maintaining queues of requests and responses.
1. Receives requests from the Request Generation module and adds them to the transmission queue.
2. Opens sockets to transmit queued requests to the server:
— The connection is kept open as additional requests are received.
— The connection can be closed explicitly with the last resource.
3. Waits for responses to queued requests, which are passed to the Response Processing module.
4. Queries the Configuration module to determine the proxy configuration and other network options.

Content Interpretation
Function: Content-type-specific processing (images, HTML, JavaScript, CSS, XML, applets, plug-ins, etc.).
1. Receives content from the Response Processing module (in some cases, from the Caching module).
2. Examines encoding headers and, if they are present, decodes the content:
— Content-Encoding:
— Content-Transfer-Encoding:
3. Passes the decoded content to a MIME-type-specific submodule based on the Content-Type header.
4. If the content references other resources, passes their URLs to the Request Generation module to retrieve the auxiliary content.
5. Passes each resource, as it is processed, to the User Interface module.

Caching
Function: Creating, keeping track of, and providing access to cached copies of web resources.
1. The Request Generation module asks whether an appropriate cache entry exists:
— If it does, an If-Modified-Since header containing the last modification time of the cached entry is added to the request.
2. The Response Processing module requests caching of a retrieved resource (when appropriate).

State Maintenance
Function: Recording cookie information from response headers, and including cookie information in request headers when appropriate.
1. The Response Processing module checks for Set-Cookie headers and requests recording of cookie information using the browser's persistence mechanism.
2. The Request Generation module examines stored cookie information and includes Cookie headers when appropriate.

Authorization
Function: Providing mechanisms for submitting authorization credentials, and keeping track of supplied credentials so that users do not have to keep resubmitting them.
1. The Response Processing module checks for responses with the 401 (Unauthorized) status code:
— If the browser has stored credentials for the realm defined in the WWW-Authenticate header, it resubmits the request with an added Authorization header containing the credentials.
— If not, the User Interface module prompts the user for credentials. (Credentials are stored for the duration of the browser session only, so that resources in the same realm do not ask for credentials again.)
2. The Request Generation module checks whether any stored credentials match the domain/path of the request URL. If so, it adds an Authorization header containing the credentials.

Configuration
Function: Providing a persistence mechanism for browser settings, and providing an interface for users to modify customizable settings.
Queried by all modules to determine what action is to be taken, based on user-specified preferences.
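As a rough sketch of the Request Generation steps in Table 5.2—the header names follow the table, while the function and the cache layout are our own invention:

```python
import time
from email.utils import formatdate

def build_request(method, host, path, cache=None):
    # Request line plus the basic headers listed in Table 5.2.
    headers = {
        "Host": host,
        "Date": formatdate(time.time(), usegmt=True),
        "User-Agent": "sketch-browser/0.1",
    }
    # Consult the Caching module: if we hold an entry for this resource,
    # ask the server to send the body only if it changed since then.
    entry = (cache or {}).get((host, path))
    if entry is not None:
        headers["If-Modified-Since"] = entry["last_modified"]
    request_line = "%s %s HTTP/1.1" % (method, path)
    return request_line, headers

cache = {("www.example.com", "/"):
         {"last_modified": "Tue, 12 Feb 2002 08:05:22 GMT"}}
line, headers = build_request("GET", "www.example.com", "/", cache)
print(line)                          # GET / HTTP/1.1
print(headers["If-Modified-Since"])  # Tue, 12 Feb 2002 08:05:22 GMT
```

A real Request Generation module would go on to consult the Authorization and State Maintenance modules in the same style before handing the result to the Networking module.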



5.6 SUMMARY

In the previous two chapters, we have examined design considerations for both Web servers and Web clients (specifically browsers). We discussed their module structure and operation, as well as the reasoning behind key design decisions. Given the proliferation of different devices, you just might end up having to implement your own server or browser. Even if you don't, knowledge and understanding of Web agents and their operation will give you an edge in building and troubleshooting sophisticated Internet applications.

5.7 QUESTIONS AND EXERCISES

1. What main steps does a browser go through to submit an HTTP/1.1 request?
2. What main steps does the browser go through in processing an HTTP/1.1 response?
3. Is it possible for a browser not to support persistent connections and still be compliant with the HTTP/1.1 specification? Why or why not?
4. What is the structure of a POST request? What headers have to be present in HTTP/1.0 and HTTP/1.1 requests?
5. What functionality would be lost if browsers did not know how to associate file extensions with MIME types?
6. What functionality would be lost if browsers did not know how to associate MIME types with helper applications?
7. Consider an HTML document (say, http://www.cs.rutgers.edu/~shklar/). How many connections would an HTTP/1.0 browser need to establish in order to load this document? What determines the number of connections? How about an HTTP/1.1 browser? What determines the number of connections in this case? What would the answer be for your own home page?
8. Describe a simple solution for using the HTTP protocol to submit local files to the server through your browser. How about the other way around? Use the POST method for transmitting files to the server and the GET method for transmitting files from the server. Make sure to describe file transfer in both directions separately and to take care of details (e.g. setting the correct MIME type and its implications). Do you need any server-side applications? Why or why not?
9. There was an example in Chapter 3 where the server returned a redirect when a URL pointing to a directory did not contain a trailing slash. What would happen if the server did not return a redirect but returned an index.html file stored in that directory right away? Would it create a problem for browser operation? Why?
10. Suppose we installed a server application at this URL: http://www.vrls.biz/servlet/xml. The servlet supports two ways of passing arguments—as a query string and as path info (e.g. http://www.vrls.biz/servlet/xml?name=/test/my.xml and http://www.vrls.biz/servlet/xml/test/my.xml). It is designed to apply a default transformation to the referenced XML file located at the server, generate an HTML file, and send it back to the browser. Something is wrong: even though the servlet generates exactly the same HTML in both scenarios, the browser renders it as HTML in the first example and as XML in the second. Moreover, when we compare the HTTP responses (including status codes, headers, and bodies) in these two cases, it turns out that they are identical. How is this possible? What is the problem with the servlet? Why does the browser behave differently when the HTTP response is exactly the same? How do we fix this problem?

BIBLIOGRAPHY Gourley, D. and Totty, B. (2002) HTTP: The Definitive Guide. O’Reilly & Associates. Stanek, W. (1999) Netscape Mozilla Source Code Guide. Hungry Minds.


HTML and Its Roots

One of the original cornerstones of the Web is HTML—a simple markup language whose original purpose was to enable cross-referencing of documents through hyperlinks. In this chapter, we will discuss HTML and its origins as an application of the Standard Generalized Markup Language (SGML). We shall cover SGML fundamentals, and show how HTML is defined within the framework of SGML. We then cover selected details of HTML as a language, and discuss related technologies.

It is important to know HTML's origins to understand its place in the overall evolution of markup languages. Admittedly, SGML is a niche language, though it is certainly one with an extensive history. Thus, it may not be of interest to every reader. We believe that knowledge of SGML is useful both from a historical perspective, and to better understand the advantages of XML. However, readers who want to get through the material quickly can skip directly to Section 6.2, bypassing the details of our SGML discussion.

The eXtensible Markup Language known as XML is the cornerstone of a new generation of markup languages, which are covered in the next chapter. For now, it is important to understand the relationship of SGML and HTML with XML, XHTML, and related technologies. Both SGML and XML are meta-languages for defining specialized markup languages. Figure 6.1 illustrates that XML is a subset of SGML. As you can see, both HyTime and HTML are SGML applications, while XHTML, SOAP, SMIL, and WML are XML applications. Since XML is a subset of SGML, it is theoretically possible to construct SGML specifications for these languages, but since they are much easier to define using XML, you are not likely to find SGML specifications for them. We will come back to this discussion in the next chapter.

Figure 6.1 SGML, XML, and their applications

6.1 STANDARD GENERALIZED MARKUP LANGUAGE

HTML did not just appear out of the void. It was defined as an application of the Standard Generalized Markup Language (SGML)—a language for defining markup languages. SGML was created long before the advent of the World Wide Web. It was designed to define annotation and typing schemes that were jointly referred to as markup. Such markup schemes were originally intended to determine page layouts and fonts. Later, they were extended to cover all kinds of control sequences that get inserted into text to serve as instructions for formatting, printing, and other kinds of processing.

SGML was not the first attempt at digital typesetting—people had been using LaTeX, troff, and other programs that produced all kinds of incompatible proprietary formats. This caused tremendous pain and gave birth to a great number of conversion programs that never did the job right. SGML was the first attempt to create a language for creating different specialized but compatible markup schemes. In a sense, this makes SGML a meta-markup language. An interesting effect is that while there is a huge legacy of LaTeX and other documents, it became quite easy to convert them to HTML and other SGML applications. In fact, as soon as it became clear that only a few converters were really important, and that converters to different SGML markups could share quite a bit of code, it became much easier to achieve decent results.

An SGML application (e.g. HTML) consists of four main parts that all have separate roles in defining the syntax and semantics of the control sequences:

1. The SGML declaration, which specifies the characters and delimiters that may legally appear in the application.



2. The Document Type Definition (DTD), which defines valid markup constructs. The DTD may include additional definitions such as numeric and named character entities (e.g. &#34; or &quot;).

3. A specification that describes the semantics to be ascribed to the markup. This specification is also used to impose additional syntax restrictions that cannot be expressed within the DTD.

4. Document instances containing content and markup. Each instance contains a reference to the DTD that should be used to interpret it.

In this section, we discuss elements of sample SGML applications. Our examples will center mainly on the HTML specification, providing a link to the rest of the chapter.

DTDeja Vu

Many among you may already have encountered DTDs and other related constructs in the context of XML. Do not get confused—this chapter does not talk about XML. DTDs were first introduced in SGML and found their way into XML much later. We shall discuss the commonalities and differences between SGML and XML DTDs in the next chapter. For now, keep an open mind, and do not overreact when you see DTDs that look just a little different from what you may be used to.

6.1.1 The SGML declaration

The role of the SGML declaration is to set the stage for understanding the DTDs that define valid markup constructs for SGML applications. The declaration has to specify the character set, the delimiters, and constraints on what may and may not be specified in a DTD.

The document character set

The problem of defining a proper character set normally becomes apparent when you face a screen full of very odd characters that were perfectly readable when you composed your document on another system. That is normally when it occurs to you that an 'A' is not always an 'A'—it may be something totally incomprehensible if interpreted by a program that has a different convention for representing characters.

The two most commonly used conventions for representing text are ASCII (the American Standard Code for Information Interchange) and EBCDIC (the Extended Binary-Coded Decimal Interchange Code). The first is most probably the one you have encountered—it is used on most personal computers. Within both systems there are many permutations that depend on the country or the application domain.



What is needed is a way to associate the bit combination used in the document for a particular character with that character's meaning. It is too verbose to define character meanings directly, which is why SGML allows them to be defined as modifications to standard character sets. For example, the EBCDIC system represents the capital letters C and D using bit combinations B'11000011 (decimal 195) and B'11000100 (decimal 196). Suppose that we need to represent these characters in the seven-bit character-encoding standard known as ISO 646. The characters 'C' and 'D' are encoded in this standard using bit combinations B'01000011 (decimal 67) and B'01000100 (decimal 68). The SGML declaration for this association looks like the example shown in Figure 6.2.

CHARSET
BASESET "ISO 646:1983//CHARSET
         International Reference Version (IRV)//ESC 2/5 4/0"
DESCSET 195 2 67     -- Map 2 document characters starting at 195
                     -- (EBCDIC C and D) to base set characters
                     -- 67 and 68 (ISO 646 C and D) --
        197 1 UNUSED

Figure 6.2 Sample SGML character set definition for EBCDIC characters

In this example, ISO 646 is used as the base character set. On top of this base set we define two additional characters that get mapped to the characters 'C' and 'D' of the base set. In addition, we state that the EBCDIC capital letter E (decimal 197) does not occur in the document—a bit strange, but this may make sense in some bizarre cases.

Figure 6.3 contains the real character set definition for HTML 4.

CHARSET
BASESET "ISO Registration Number 177//CHARSET
         ISO/IEC 10646-1:1993 UCS-4 with
         implementation level 3//ESC 2/5 2/15 4/6"
DESCSET 0     9       UNUSED
        9     2       9
        11    2       UNUSED
        13    1       13
        14    18      UNUSED
        32    95      32
        127   1       UNUSED
        128   32      UNUSED
        160   55136   160
        55296 2048    UNUSED
        57344 1056768 57344

Figure 6.3 SGML character set definition for HTML 4

As we can see, except for the unused characters, the HTML 4 character set maps directly into an ISO-defined base set. Characters that are mapped to 'UNUSED' (e.g. the two characters starting at decimal 11, or the eighteen characters starting at decimal 14) should not occur in HTML 4 documents.
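Python's standard codecs can demonstrate the same byte-level point, using code page 037 as a representative EBCDIC variant:

```python
# The EBCDIC bytes for 'C' and 'D' -- decimal 195 and 196, exactly the
# values remapped in Figure 6.2 -- mean something only once we pick an
# interpretation convention.
ebcdic_bytes = bytes([195, 196])
print(ebcdic_bytes.decode("cp037"))   # CD  (interpreted as EBCDIC)

# Under ISO 646/ASCII, 'C' and 'D' live at decimal 67 and 68 instead:
print(list("CD".encode("ascii")))     # [67, 68]
```

The same two bytes decoded under a Latin-1 convention would yield entirely different characters, which is precisely the "an 'A' is not always an 'A'" problem the declaration solves.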

The concrete syntax

The SGML language is defined using so-called 'delimiter roles' rather than concrete characters. SGML always refers to delimiters by their role names. When it comes to defining the concrete syntax for an application, roles get associated with character sequences. For example, even though most SGML applications use the '<' character to open tags, '<' is merely the character most commonly assigned to the start-tag open delimiter role (STAGO); a concrete syntax is free to associate a different character sequence with that role.

6.1.2 The document type definition

In this section, we discuss how to define elements, attributes, and entities, and we illustrate these discussions using HTML examples.

<!ENTITY % head.misc "SCRIPT|STYLE|META|LINK">
<!ENTITY % heading "H1|H2|H3|H4|H5|H6">
<!ENTITY % attrs "%coreattrs; %i18n; %events;">
<!ENTITY attrs "substitution text">

Figure 6.5 Sample entity definitions

Entity definitions

The DTD for HTML 4 begins with entity definitions. Entity definitions can be thought of as text macros. It is important to remember that by now we have already defined the concrete syntax, so we are not trying to define SGML entities, but rather macros that may be expanded elsewhere in the DTD. When the macro is referenced in the DTD, it is expanded into the string that appeared in the entity definition.

In the first two examples in Figure 6.5, %head.misc is defined to expand to the partial enumeration of elements that may occur within the HEAD element—SCRIPT | STYLE | META | LINK—while %heading is defined to expand to the enumeration of elements denoting section and block headings. In the third example, %attrs is defined to expand to a sequence of other entities that get expanded recursively.

Notice the difference between the so-called parameter entities, which are a matter of convenience for the DTD specification itself, and general entities (the last example in Figure 6.5). General entities become a part of the language that is being defined by a DTD and are referenced using the ampersand (e.g. &attrs). General entities are not important for HTML, but remember them when we get to XML.
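The "text macro" behavior, including the recursive expansion of %attrs, can be mimicked with a few lines of Python. This is a toy substitution loop over a hypothetical entity table of our own, not an SGML parser:

```python
# Toy model of DTD parameter entities as recursively expanded text macros.
ENTITIES = {
    "head.misc": "SCRIPT|STYLE|META|LINK",
    "coreattrs": "id|class|style|title",
    "attrs": "%coreattrs;|lang|dir",
}

def expand(text, entities):
    changed = True
    while changed:                  # keep substituting until nothing changes
        changed = False
        for name, value in entities.items():
            ref = "%" + name + ";"
            if ref in text:
                text = text.replace(ref, value)
                changed = True
    return text

print(expand("%attrs;", ENTITIES))
# id|class|style|title|lang|dir
```

The single pass over %attrs; first substitutes the %coreattrs; reference it contains, then the loop runs again until the text is stable—the same effect as recursive entity expansion in a DTD.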

Elements

An SGML DTD defines elements that represent structures or behaviors. It is important to make a clear distinction between elements and tags; it is not uncommon for people to erroneously refer to elements as tags (e.g. 'the P tag'). An element typically consists of three parts: a start tag, content, and an end tag (e.g. <TITLE>test</TITLE>). It is possible for an element to have no content, and it is for such empty elements that the notions of the element and the tag coincide (e.g. the HTML line break element—<BR>).

An element definition has to specify the element name, the structure of its content (if any), and whether the end tag is optional or required. For example, the ordered list element is defined in Figure 6.6 to have the name OL, to require both the start tag and the end tag (the first and second dashes), and to contain at least one LI element as its content. The line break element is defined to have the name BR and to not require an end tag; this element is defined not to have any content.

<!ELEMENT BR - O EMPTY>
<!ELEMENT OL - - (LI)+>
<!ELEMENT TITLE - - (#PCDATA)>
<!ELEMENT TABLE - - (CAPTION?, (COL*|COLGROUP*),
                     THEAD?, TFOOT?, TBODY+)>

Figure 6.6 Sample element definitions

The DTD mechanism was designed to specify verifiable constraints on documents, so it has to express relatively sophisticated relationships between different elements.

Such relationships are expressed by defining content models. Very simple examples of such models are the empty body, a body composed of one or more LI elements, and a body that can contain only document text (#PCDATA) but not other elements (see the first three examples in Figure 6.6). More generally, content models make it possible either to specify forbidden elements, or to enumerate allowed elements or text, their sequence, and their number of occurrences.

The last example in Figure 6.6 defines the TABLE element. As you can see, no text is allowed directly within TABLE, which is not to say that it cannot be specified within one of its contained elements. The CAPTION element has to appear first within the TABLE element. It may or may not be present, as indicated by '?', a so-called quantifier indicating that CAPTION may appear exactly 0 or 1 times. Another quantifier, '*', which is applied to the elements COL and COLGROUP, indicates that they may appear any number of times or not appear at all—this is almost the same as '+', except that '+' requires the element to appear at least once. According to the definition of TABLE, a TABLE element should contain at least one TBODY but may or may not contain other elements.

Another interesting observation relates to the grouping of the COL* and COLGROUP* constructs using parentheses. The '|' operator indicates that exactly one of the alternative constructs may appear. The '&' operator is very similar, except that both constructs have to appear (in either order). Of course, if the constructs themselves are defined using quantifiers that allow 0 occurrences (e.g. '?' or '*'), there is no practical difference between these operators (e.g. the COL*|COLGROUP* and COL* & COLGROUP* expressions). Yet another operator is ',', which imposes a fixed order.
In the example, the CAPTION element may or may not occur within TABLE (due to the '?' quantifier), but if it does, it has to occur first, because expressions separated by the ',' operator impose a fixed order.

Both table specifications shown in Figure 6.7 are valid, because the close tags for the elements THEAD, TFOOT, and TBODY are optional, and it is perfectly legal for the TFOOT element to be absent. However, the table in Figure 6.8 is not valid, because the order imposed by the ',' operator in the HTML DTD (Figure 6.6) is violated—the TFOOT element occurs after the TBODY element.

Now, the fact that the HTML fragment in Figure 6.8 is invalid does not necessarily mean that your browser will not display it correctly—desktop browsers are designed to be somewhat tolerant of bad HTML. However, writing invalid HTML is a bad habit that may cause problems when errors accumulate, or when your HTML is processed by non-desktop browsers.



<TABLE border="1">
<CAPTION>Table 1</CAPTION>
<THEAD><TR><TD colspan="2">December 2001
<TFOOT><TR><TD>123<TD>685
<TBODY><TR><TD>23456<TD>12345
</TABLE>

<TABLE border="1">
<CAPTION>Table 2</CAPTION>
<TBODY><TR><TD>123<TD>685
<TR><TD>23456<TD>12345
</TABLE>

Figure 6.7 Valid HTML fragment—sample HTML tables

<TABLE border="1">
<CAPTION>Table 2</CAPTION>
<TBODY><TR><TD>123<TD>685
<TFOOT><TR><TD colspan="2">December 2001
</TABLE>

Figure 6.8 Invalid HTML fragment (TFOOT occurs after TBODY)
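The ',', '|', '?', '*', and '+' operators translate almost directly into regular-expression notation, so the validity of a child-element sequence against the TABLE content model can be checked mechanically. The translation below is our own sketch, not part of any SGML toolkit:

```python
import re

# (CAPTION?, (COL*|COLGROUP*), THEAD?, TFOOT?, TBODY+) rewritten as a
# regular expression over space-terminated child-element names: ',' is
# concatenation, '|' is alternation, and ?/*/+ carry over unchanged.
TABLE_MODEL = re.compile(
    r"^(CAPTION )?((COL )*|(COLGROUP )*)(THEAD )?(TFOOT )?(TBODY )+$")

def valid_table(children):
    return TABLE_MODEL.match("".join(c + " " for c in children)) is not None

print(valid_table(["CAPTION", "THEAD", "TFOOT", "TBODY"]))  # True
print(valid_table(["CAPTION", "TBODY", "TFOOT"]))           # False
```

The second call fails for exactly the reason Figure 6.8 is invalid: the fixed order imposed by ',' puts TFOOT before TBODY.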

Attributes

An element may allow one or more attributes that provide additional information to processing agents. For example, the SRC=url attribute of the HTML SCRIPT element instructs the browser to retrieve the script from the specified URL instead of taking it from the body of the SCRIPT element.

An attribute definition begins with the keyword ATTLIST, followed by the element name (e.g. TABLE in the example in Figure 6.9) and a list of attribute definitions. An attribute definition starts with the name of an attribute (e.g. width or cols) and specifies its type and default value. Sample types shown in Figure 6.9 include NUMBER, which represents integers, and CDATA, which represents document text. Other frequently used types include NAME and ID—both representing character sequences that start with a letter and may include letters, digits, hyphens, colons, and periods; ID represents document-wide unique identifiers.

<!ENTITY % Talign "(left|center|right)">
<!ATTLIST TABLE
  width    NUMBER    #IMPLIED
  cols     NUMBER    #REQUIRED
  align    %Talign;  #IMPLIED
  >
<!ATTLIST TH
  rowspan  NUMBER    "1"
  colspan  NUMBER    "1"
  >

Figure 6.9 Sample attribute definitions for the TABLE element

Notice the use of a DTD entity in defining the align attribute. This entity—%Talign;—is defined in the same example to expand to (left|center|right); its use is simply a matter of convenience. Either way, the domain of the align attribute is defined as the enumeration of left, center, and right. When an attribute is defined as #IMPLIED, its value is supplied by the processing agent (e.g. the browser). The keyword #REQUIRED indicates that the attribute should always be defined with its element. Alternatively, an attribute may be given a default value, as in the rowspan and colspan attributes for the table header cell element TH in Figure 6.9.
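A similarly mechanical check works for attribute definitions. This toy validator—the rule table and function name are our own—mirrors the enumerated domain of align and the NUMBER type:

```python
# Toy DTD-style attribute checker: enumerated domains and the NUMBER type.
TABLE_ATTR_RULES = {
    "width": {"type": "NUMBER"},
    "align": {"enum": {"left", "center", "right"}},
}

def check_attrs(attrs, rules):
    errors = []
    for name, value in attrs.items():
        rule = rules.get(name)
        if rule is None:
            errors.append("undeclared attribute: " + name)
        elif "enum" in rule and value not in rule["enum"]:
            errors.append("bad value for %s: %s" % (name, value))
        elif rule.get("type") == "NUMBER" and not value.isdigit():
            errors.append("bad value for %s: %s" % (name, value))
    return errors

print(check_attrs({"align": "center", "width": "200"}, TABLE_ATTR_RULES))
# []
print(check_attrs({"align": "justify"}, TABLE_ATTR_RULES))
# ['bad value for align: justify']
```

A full validator would also enforce #REQUIRED attributes and fill in default values, but the principle is the same: the DTD turns every attribute into a checkable rule.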

6.2 HTML

Our SGML discussion does not attempt to teach you all the details of designing SGML applications. However, it is important that you understand the roots of HTML and its upcoming replacement—XHTML. Later in the book, when it is time to talk about XML, we will look back at SGML declarations and DTDs and think about how to make use of them in the XML world. Understanding this connection between SGML and XML is a critical prerequisite to understanding the relationship between HTML, XML, and XML applications.

For now, let us be concerned with HTML—both the syntactic constraints imposed on HTML documents by HTML declarations and DTDs, and HTML semantics. This is why we used HTML examples throughout the SGML discussion. Speaking of which, you probably noticed that SGML declarations and DTDs do not help with assigning semantics to HTML tags—HTML semantics are defined in HTML specifications using plain English. It is the responsibility of the implementers of HTML agents (e.g. desktop or cable-box browsers) to read, understand, and follow the specification. By now, there are quite a few versions of such specifications around; we will have to spend at least a little time sorting them out. Once we do, we can discuss the more interesting HTML constructs, paying special attention to the relationship between these constructs and the HTTP protocol.

6.2.1 HTML evolution

As you should have noticed from our SGML discussion, HTML syntax is rather flexible. The syntax of HTML tags is set in stone, but the structure of HTML documents is relatively unconstrained. For example, many HTML elements have optional closing tags, which in practice are commonly omitted. To make things worse, real HTML documents often violate even the liberal constraints imposed by the HTML specification, because commercial browsers are tolerant of such violations. Nevertheless, bad HTML, even if it is rendered properly at the moment, causes all kinds of problems over the lifetime of the document. A simple modification may add just enough insult to injury to break rendering in a forgiving browser. It gets worse if it becomes important to re-purpose the same markup for non-desktop devices—non-desktop browsers that support HTML are much less tolerant of bad syntax.

Over the last ten years, the HTML specification has gone through a number of transformations. The common theme of all these transformations is the tightening of the syntax. The latest and final revision of HTML (HTML 4.01) was released in December 1999. It soon became apparent that future developments would be hard to achieve in the context of SGML; the major additional burden is the need to maintain backward compatibility. HTML 4.01 partially addresses this problem by providing both 'strict' and 'transitional' specifications. It is now clear that HTML 4.01 will remain the final specification of the language, with all new development centering on its successor—XHTML.

Rendering modules of early HTML browsers associated fixed behavior with every HTML element. The only way to modify such behavior was through browser-global settings. Even at that time, the HTML 2 specification made early attempts at abstraction. For example, it was not recommended to use the <B> element to indicate bold text. Instead, it was recommended to use the <STRONG> element for the same purpose, leaving it up to the rendering engine to load a meaning for <STRONG>—which by default mapped to bold rendering anyway. Unfortunately, very few web designers actually followed such recommendations. By the time HTML 4 came out, this simple abstraction had developed into a mechanism for style sheets that make it possible to control the rendering of HTML elements.

HTML 4 also includes mechanisms for scripting, frames, and embedding objects. The new standard mechanism supports embedding generic media objects and applications in HTML documents. The <OBJECT> element (together with its predecessors <IMG> and <APPLET>) supports the inclusion of images, video, sound, mathematical expressions, and other objects. It finally provides document authors with a consistent way to define hierarchies of alternate renderings. This problem has been around since Mosaic and Lynx (early graphical and command-line browsers), which needed alternate ways of presenting images.

Another important development represented in HTML 4 is internationalization. With the expansion of the Web, it became increasingly important to support different languages. HTML 4 bases its character set on the ISO/IEC 10646 standard (as you will remember, SGML gives us the power to define the character set). ISO/IEC 10646 is an inclusive standard that supports the representation of international characters, text direction, and punctuation, which are all crucial in supporting the rich variety of world languages.

6.2.2 Structure and syntax

As already mentioned, it is important not to be misled by the high tolerance for HTML syntax and structure violations that characterizes commercial desktop browsers. The HTML specification is the only common denominator for diverse commercial tools, and compliance with this specification is the best way to avoid problems over the lifetime of your documents.

According to the specification, an HTML 4 document must contain a reference to the HTML version, a header section containing document-wide declarations, and the body of the document. Figure 6.10 contains an example of a compliant HTML document. As can be seen, the version declaration names the DTD that should be used to validate the document. HTML 4.01 defines three DTDs: the strict DTD, which is designed for strict compliance; the transitional DTD, which also supports deprecated elements but excludes frames; and the frameset DTD, which supports frames as well.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
<HTML>
<HEAD>
<TITLE>Sample HTML Document</TITLE>
</HEAD>
<BODY>
<P>I don't have to close the tag.
</BODY>
</HTML>

Figure 6.10 Sample HTML 4.01-compliant document

HTML header

The header section starts with the optional <HEAD> element and includes document-wide declarations. The most commonly used header element is the <TITLE>, which dates back to early versions of HTML. Most browsers display the value of this element outside of the body of the document. In a way, it is an early attempt to specify document metadata. It is widely used by search agents, which normally assign it higher weight than the rest of the document.

The more recently added <META> element provides a lot more flexibility in defining document properties and providing input to browsers and other user agents. Figure 6.11 shows examples of defining the 'Author' and 'Publisher' properties using the <META> element.

<META name="Author" content="Leon Shklar">
<META name="Publisher" content="John Wiley & Sons">
<META http-equiv="Expires" content="Sun, 17 Feb 2002 15:21:09 GMT">
<META http-equiv="Date" content="Tue, 12 Feb 2002 08:05:22 GMT">

Figure 6.11 Sample META elements

The <META> element may also be used to specify HTTP headers. The last two examples in Figure 6.11 tell the browser to act as if the following two additional HTTP headers were present:

Expires: Sun, 17 Feb 2002 15:21:09 GMT
Date: Tue, 12 Feb 2002 08:05:22 GMT
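This "act as if these headers were present" behavior can be sketched as a post-processing step. The function and the regular expression are our own simplification (a real parser would handle attribute order and quoting more carefully):

```python
import re

# Headers that must be known before parsing can start are not overridable.
NOT_OVERRIDABLE = {"content-type", "content-encoding",
                   "content-transfer-encoding"}

META_RE = re.compile(
    r'<meta\s+http-equiv="([^"]+)"\s+content="([^"]+)"', re.IGNORECASE)

def merge_http_equiv(headers, html):
    merged = dict(headers)
    for name, value in META_RE.findall(html):
        if name.lower() not in NOT_OVERRIDABLE:
            merged[name] = value
    return merged

html = ('<META http-equiv="Expires" content="Sun, 17 Feb 2002 15:21:09 GMT">'
        '<META http-equiv="Content-Type" content="text/plain">')
print(merge_http_equiv({"Content-Type": "text/html"}, html))
# {'Content-Type': 'text/html', 'Expires': 'Sun, 17 Feb 2002 15:21:09 GMT'}
```

The Expires declaration is merged in, while the attempted Content-Type override is ignored, for the reasons the text goes on to explain.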

You can use this syntax to define any HTTP header you want, but you should not be surprised if some of them affect processing and some don’t. No surprise here: some headers have to be processed by the browser before the HTML document is parsed, and therefore cannot be overridden using this method. For example, to start parsing the HTML document, the browser must have already established (from the headers) that the Content-Type of the body is text/html. Thus, the Content-Type cannot be modified using an instance of the element in this way. The browser would not have been able to process the element in the first place unless it had already known the Content-Type. It follows that Content-Type headers defined in the HTML tag, as well as ContentEncoding and Content-Transfer-Encoding headers, would be ignored. There are advantages to embedding HTTP headers in HTML files. The main reason why we are discussing it in such detail is that it illustrates an important link between different Web technologies. Embedding HTTP-based logic in HTML files may be invaluable in building applications that are very easy to install and distribute across different hardware and software platforms. Whenever you desire to employ this mechanism, you should consider different processing steps that occur prior to parsing the markup, to decide whether your element would have


HTML and Its Roots

H1 {border-width: 1; border: solid; text-align: center}

Figure 6.12

Sample STYLE element

any effect. This decision may depend on your processing agent: a browser, or a specialized intelligent proxy. Going back to the previous example, the proxy may use the value of the Content-Type header defined in a META element to set the Content-Type of the response prior to forwarding it to the browser.

Other elements that are defined in the HTML header section include STYLE and SCRIPT. The STYLE element is designed to alter the default browser behavior when rendering the body of the markup. In the example shown in Figure 6.12, we override the default rendering of the H1 element, telling the browser to center its value in a box with a solid border. The syntax of the style instructions is defined to comply with the Cascading Style Sheets (CSS) specification. It is not necessary to include the style specification in the header section. In fact, it is far more common to reference a standalone style document from the header. This way, it is possible to completely change the look and feel of HTML documents simply by changing that reference. We shall return to the CSS specification later in the chapter.

The SCRIPT element, in combination with event handlers that may be referenced from the body of the document, is designed to provide access to browser objects that get created when processing HTTP responses. Figure 6.13 illustrates:

function setMethod(form) {
  if (navigator.appName == "Netscape" &&
      navigator.appVersion.match(/^\s*[1-4]/)) {
    form.method = "get";
  } else {
    form.method = "post";
  }
  form.submit();
}
...

Figure 6.20

An embedded style sheet

Figure 6.21

Inline styles




Figure 6.22

Associating styles with portions of HTML documents

If you feel that there needs to be a middle ground between document-wide and inline styles, you are not alone. Such middle ground can be achieved by using inline styles with the SPAN and DIV elements, which were introduced for the express purpose of associating styles with portions of HTML documents. While SPAN is an inline element, used much like other inline markup, DIV may contain blocks that include paragraphs, tables, and, recursively, other DIV elements (Figure 6.22).
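A quick sketch of the difference (the style values are illustrative; 'DRAFT' and the column labels echo Figure 6.22):

```html
<p>The <span style="color: red">DRAFT</span> label styles a few words inline,
while the DIV below applies one style to a whole block:</p>
<div style="text-align: center; border: solid">
  <p>A paragraph inside the block.</p>
  <table>
    <tr><td>Column 1</td><td>Column 2</td></tr>
  </table>
</div>
```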

6.4 JAVASCRIPT

Markup languages serve their purpose of providing formatting instructions for the static rendering of content. HTML, especially in combination with style sheets, is quite good at performing this function. The HTTP server sends a response to the requesting browser containing an HTML page as its body, which is rendered by the browser. HTML lets you control page layout, combine text with graphics, and support minimal interactive capabilities through links and forms. The problem is that the presentation is static: there is no way to modify it programmatically once the page is rendered. The rendered page just stays there until the next request. Even a simple operation (e.g. validation of a form entry) requires server-side processing, which means an extra connection to the server.

One of the most noticeable trends in the evolution of HTML has been the introduction of new elements and attributes that make it possible to go beyond static rendering and to control browser behavior in a programmatic fashion. For example, it is possible to automate the submission of new requests by the browser by using the META element in conjunction with the HTTP-EQUIV attribute. This approach can only take you so far, and is not a replacement for programmable behavior. Something more was needed to provide programmatic functionality within the browser. The solution was to introduce an object-oriented programming language, JavaScript, that includes built-in functionality for accessing browser objects (e.g. page elements, request generation modules, etc.), and to introduce HTML attributes that enable the mapping of JavaScript methods to user events.

But I thought we were talking about markup languages! Although this is a chapter devoted to markup languages, it behooves us to talk about JavaScript here. The HTML specification has evolved to define handlers for browser



events, assuming that those handlers are implemented using a browser-supported scripting language that interfaces with the page’s document object model. JavaScript was designed with this purpose in mind; it was intended as a browser-side programming/scripting language. It is not possible to completely separate HTML from JavaScript, which is why JavaScript deserves a place in this discussion.

JavaScript was initially developed for Netscape Navigator but is now supported by most desktop browsers, including Internet Explorer. Browsers process JavaScript statements embedded in HTML pages as they interpret the bodies of HTTP responses that have their Content-Type set to text/html. The thing to remember is that, for all the cross-browser support, there are subtle differences in JavaScript implementations for different browsers. It is possible to write JavaScript programs that work consistently across different browsers, but even with relatively simple JavaScript functionality, achieving a consistent cross-browser implementation often becomes an iterative trial-and-error process.

Another important consideration is that not all JavaScript processing is performed at the same time. Some statements are interpreted prior to rendering the HTML document, or while the document is being rendered; these statements are usually contained within script blocks, as we shall see shortly. Other statements are grouped into event handlers that are associated with browser events through HTML tag attributes.

Let us take a look at the example in Figure 6.23, which makes use of two JavaScript functions defined with the SCRIPT element in the page header. The setMethod function takes the form object as an argument and sets its HTTP request method to POST unless the browser is an early version of Netscape Navigator (versions 1.x through 4.x). The setMethod function is used by the adminAction function, which gets invoked through an event handler at the click of a button. Note that the association between buttons and event handlers is defined through the onClick attribute of the corresponding button element. The example makes use of the Navigator, Form, and Button objects, which are available through the JavaScript processor. Other objects associated with HTML elements include Frame, Image, Link, and Window, to name a few.
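Because setMethod only inspects properties of the navigator object, its decision logic can be pulled out and exercised with plain objects, which is a handy way to test such code outside the browser. This is a sketch: chooseMethod is our name, not the book's, and the mock objects stand in for the browser-supplied ones.

```javascript
// Decide the HTTP method the way setMethod does, but take the
// navigator object as a parameter so the logic is testable anywhere.
function chooseMethod(nav) {
  // Netscape Navigator 1.x-4.x gets the fallback method per the book's example
  if (nav.appName == "Netscape" && nav.appVersion.match(/^\s*[1-4]/)) {
    return "get";
  }
  return "post";
}

console.log(chooseMethod({ appName: "Netscape", appVersion: "4.7 [en]" })); // "get"
console.log(chooseMethod({ appName: "Microsoft Internet Explorer",
                           appVersion: "5.5" })); // "post"
```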
In addition, JavaScript processors include general-purpose objects that may be used in the language statements, including RegExp for regular expressions, as well as Math for simple computations, Number for manipulating numeric values, and Date for dates and times. Control flow constructs similar to those found in Java are supported as well.

Server-Side JavaScript

In the early years of the Web, there were attempts to use JavaScript on the server, but they did not generate an extensive following. There are a number of open source



JavaScript interpreters (e.g. Rhino), which could be used for server-side processing, but they generally do not contain objects or methods for manipulating HTML documents.

JavaScript is not the right language for implementing sophisticated logic. Apart from making your pages entirely unreadable, and violating the principle of separating content and presentation, complex JavaScript constructs often lead to inconsistencies in the way different desktop browsers behave. To compensate for this, designers

function setMethod(form) {
  if (navigator.appName == "Netscape" &&
      navigator.appVersion.match(/^\s*[1-4]/)) {
    form.method = "get";
  } else {
    form.method = "post";
  }
}

function adminAction(button) {
  setMethod(button.form);
  if (button.name == "Exit") {
    button.form.action = "/test/servlet/action/invalidate";
  } else if (button.name == "New") {
    button.form.action = "/test/servlet/action/process/next";
  }
  button.form.submit();
}
...

Figure 6.23

JavaScript example




often feel the need to add browser-dependent JavaScript code, tailored to work properly in specific browsers. (More accurately, tailored to work in specific versions of specific browsers!) This makes it very difficult to achieve consistency across different desktop browsers, let alone different devices. Complex JavaScript is one of the main reasons why some pages render properly in only one browser, for example, Internet Explorer. Still, JavaScript is invaluable for field validation and event handling, but it is best to defer more complex processing to the server. If you've ever seen a page where HTML tags are thoroughly mixed together with a combination of Java code, JavaScript event handlers, and document.write statements used to dynamically generate HTML as the page is rendered, then you know what not to do! JavaScript is a little like an organic poison: it would kill you in large doses but may be an invaluable cure if you use it just right.

6.5 DHTML

Dynamic HTML (DHTML) is often talked about as if it were some new version of the HTML specification, with advanced functionality above and beyond HTML itself. In reality, DHTML is just a catch-all name used to describe existing features in HTML, JavaScript and CSS that are used to provide engaging forms of page presentation. While HTML by itself provides a static presentation format, the coupling of HTML tags with JavaScript directives (event handlers) and CSS style specifications offers a degree of interactive control over page presentation. It is dynamic in the sense that the presentation of a given page may change over time through user interaction, in contrast to dynamic Web applications that generate entire page presentations. It is not our intention to provide a DHTML primer in this chapter. Many good books are devoted to describing the intricacies of DHTML, but we believe it is worthwhile to understand the principles associated with DHTML presentation techniques. With this in mind, we describe simple examples illustrating the most common uses of DHTML.

6.5.1 'Mouse-Over' behaviors

JavaScript provides mechanisms to perform actions when the mouse pointer is 'over' an area specified as a hyperlink (e.g. presenting different images associated with a given hyperlink depending on the position of the mouse pointer). The onMouseOver and onMouseOut directives can be added to an HTML anchor (A) tag to make this happen. The image swap shown in Figure 6.24 demonstrates the most common mouse-over behavior used by page designers. Note that the IMG tag is identified through



Figure 6.24

Sample implementation of onMouseOver behavior

its NAME attribute, making it possible to reference the image directly. The SRC attribute of the IMG tag is set to the relative URL of the default image (images/pic1.gif). JavaScript code to change the image’s location (its src attribute) to a different URL is invoked "onMouseOver" (when the mouse pointer is over the image), causing a different image (images/pic1a.gif) to be displayed. Similar code is invoked "onMouseOut" (when the mouse pointer leaves the area occupied by the image) to redisplay the original image. The two code fragments are JavaScript event handlers that are invoked when the respective events occur. (Note that real-world event handlers perform image swapping by invoking JavaScript functions defined within a script block or in an external JavaScript source file.) Similarly, CSS can be used to define styles associated with the mouse-over behavior. Figure 6.25 illustrates the highlighting of links and the addition or removal of underlining through the A:hover pseudo-class. Here, hyperlinks are normally displayed in a shade of green (#009900) with no underlining, but when the mouse pointer is over a link it becomes underlined, its text color switches to black, and its background color is transformed to a shade of yellow (#ffff99).
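An anchor wired up along these lines produces the swap described above (the image paths match those in the text; the surrounding markup is a sketch):

```html
<a href="section1.html"
   onMouseOver="document.pic1.src = 'images/pic1a.gif';"
   onMouseOut="document.pic1.src = 'images/pic1.gif';">
  <img name="pic1" src="images/pic1.gif" border="0" alt="Section 1">
</a>
```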

6.5.2 Form validation

Some degree of client-side form validation can be performed using JavaScript, obviating the need for a potentially time-consuming request/response 'roundtrip' to perform server-side validation. This is accomplished through the FORM tag's onSubmit event handler, which is executed prior to the submission of the form. If the code returned by the event handler evaluates to the boolean value true, the HTML form has passed validation and an HTTP request is generated to submit it to

A { color: #009900; text-decoration: none }
A:hover { background-color: #ffff99; color: black;
          text-decoration: underline }

Figure 6.25

Usage of CSS A:hover pseudo-class for mouse-over link highlighting



function validate(form) {
  var errors = "";
  if (form.firstname.value.length < 3) {
    errors += "\nFirst name must be at least 3 characters long.";
  }
  if (form.lastname.value.length < 5) {
    errors += "\nLast name must be at least 5 characters long.";
  }
  if (errors.length > 0) {
    alert("Please correct the following errors:" + errors);
    return false;
  } else {
    return true;
  }
}

First Name:
Last Name:

Figure 6.26

Example of client-side form validation using JavaScript

the server. Otherwise, in the case of validation errors, the code evaluates to false and no submission takes place. It is good practice for the event handler to display a JavaScript alert box describing any validation errors that have been detected. Figure 6.26 demonstrates how client-side form validation can be accomplished using JavaScript. The form contains two fields, and the FORM tag calls for the



execution of a JavaScript function, validate(), when the form is submitted. This function examines the values entered in the two form fields and determines whether they are valid. If either value is unacceptable, an appropriate error message is appended to the String variable errors. If the length of this variable is greater than zero after all the fields in the form have been checked by the validate() function, a JavaScript alert box is displayed containing the error messages, and the function returns false. Otherwise, the function returns true, causing the form to be submitted to the server. Note that this approach starts the validation process when the form is submitted. It is possible to validate on a per-field basis, by using the onChange or onBlur JavaScript event handlers in the INPUT tags associated with individual form fields. Since the displaying of alert boxes as data is being entered can become overwhelming and confusing to users, and since the acceptability of entered field values is often dependent on what has been entered in other form fields, the 'holistic' approach of validating on submission is preferred. Client-side form validation is useful in 'pre-validating' field values before the form is submitted to the server. Validations that are dependent on comparisons to data values available only on the server (e.g. user authorization credentials) require server-side validation.
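The checks in Figure 6.26 can also be factored into a DOM-free helper that returns the list of errors, making the rules testable outside the browser. This is a sketch: collectErrors is our name, not the book's, and the length limits follow Figure 6.26.

```javascript
// Collect validation errors for a set of field values; returns an
// array of messages, empty when everything passes.
function collectErrors(fields) {
  var errors = [];
  if (fields.firstname.length < 3) {
    errors.push("First name must be at least 3 characters long.");
  }
  if (fields.lastname.length < 5) {
    errors.push("Last name must be at least 5 characters long.");
  }
  return errors;
}

console.log(collectErrors({ firstname: "Al", lastname: "Poe" }).length);     // 2
console.log(collectErrors({ firstname: "Leon", lastname: "Shklar" }).length); // 0
```

An onSubmit handler would then alert and return false whenever the returned array is non-empty.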

6.5.3 Layering techniques

A page can be divided into 'layers' that occupy the same coordinates on the screen. Only one layer is visible at a time, but users are able to control which layer is visible through mouse clicks or other page interactions. This functionality is often used by page designers to provide tabbed panes and collapsible menus. The term 'layers' was originally Netscape-specific, referring to the proprietary LAYER tag understood only by the Netscape browser. The term is now used in DHTML to describe the use of CSS positionable elements to support this functionality in a browser-neutral way. This is usually accomplished by specifying co-located blocks of text on the page as elements with CSS style attributes that determine their position and visibility. In Figure 6.27, CSS styles are explicitly defined for 'layer1' and 'layer2'. JavaScript functions to change the visibility attribute of page elements are also defined. Hyperlinks give users the option to see either 'layer1' (which is visible when the page loads) or 'layer2' (which is initially hidden). The setVisibility JavaScript function shown here is somewhat oversimplified. Describing how to implement it in a truly cross-browser compatible manner is too complex for a short overview. Suffice it to say that cross-browser compatibility when using CSS positionable elements is difficult but by no means impossible, thanks to the availability of cross-browser JavaScript APIs such as Bob Clary's xbDom and xbStyle (available at http://www.bclary.com/xbProjects).



#layer1 { position: absolute; z-index: 1; visibility: visible; }
#layer2 { position: absolute; z-index: 2; visibility: hidden; }

function show(objectID) { setVisibility(objectID, 'visible'); }
function hide(objectID) { setVisibility(objectID, 'hidden'); }
function setVisibility(objectID, state) {
  var obj = document.getElementById(objectID).style;
  obj.visibility = state;
}

Show Layer 1    Show Layer 2
Text that will appear when layer1 is visible
Text that will appear when layer2 is visible

Figure 6.27

Example of CSS positionable elements used for layering

Our examples barely scratch the surface in describing the capabilities of DHTML. Readers are invited to pursue referenced sources and explore on their own the possibilities that arise from the interaction of JavaScript, CSS, and HTML.

6.6 SUMMARY

In this chapter, we have discussed HTML and related technologies; we have also established the necessary foundation for the upcoming discussion of XML. We discussed SGML, its DTD syntax, and its use in defining HTML as an SGML application. We further discussed the HTML markup language, concentrating on its features that influence HTTP interactions, as well as features that enable the separation of markup and rendering.



We again stress that it is important to distinguish between the separation of markup and rendering, and the separation of content and presentation. The former is accomplished through stylesheets, while the latter is the function of proper application design. The CSS language discussed in this chapter is only the first step in stylesheet evolution. In the upcoming XML chapter, we will refer to CSS as the starting point for our stylesheet discussion. No matter how many specialized HTML elements and attributes are introduced into the language, there is still an expressed need for a programming language that can be used to introduce simple processing for input validation and event handling. JavaScript does fulfill this role, and numerous interfaces to JavaScript functions are now part of the HTML specification. But beware—using JavaScript to implement complex logic not only makes your applications difficult to debug, it almost always creates browser dependencies as well.

6.7 QUESTIONS AND EXERCISES

1. What is the relationship between HTML, XML, SGML, and XHTML? Explain.
2. What HTTP headers will be ignored when specified using the HTTP-EQUIV mechanism? What headers will not be ignored? Provide examples. Explain.
3. Describe options for using HTML to generate HTTP requests. Can you control the 'Content-Type' header for browser requests? What settings are imposed by the browser, and under what circumstances?
4. Put together a simple HTML form for submitting desktop files to the server using POST requests. Remember to provide information about the target location of the file after transmission. What will be the format of the request?
5. What is the purpose of introducing CSS? What are the alternatives for associating styles with HTML documents?
6. What is the relationship between HTML and JavaScript? What is the role of event handlers?
7. What is DHTML? Describe the most common DHTML use patterns and the technologies used to implement these patterns.
8. How difficult would it be to implement an HTML parser? Why? How would you represent the semantics of HTML elements?

BIBLIOGRAPHY

Flanagan, D. (2001) JavaScript: The Definitive Guide. O'Reilly & Associates.
Livingston, D. (2000) Essential CSS and DHTML for Web Professionals, 2nd Edition. Prentice Hall.
Maler, E. and El Andaloussi, J. (1995) Developing SGML DTDs. Prentice Hall PTR.
Meyer, E. (2000) Cascading Style Sheets: The Definitive Guide. O'Reilly & Associates.
Musciano, C. and Kennedy, B. (2002) HTML and XHTML: The Definitive Guide, 5th Edition. O'Reilly & Associates.
Teague, J. C. (2001) DHTML and CSS for the World Wide Web: Visual Quickstart Guide, 2nd Edition. Berkeley, California: Peachpit Press.


XML Languages and Applications

For all its power, SGML has remained a niche language. It originated in the 1970s and enjoyed a very strong following in the text representation community. However, the price for the power and flexibility of SGML was its complexity. Just as the simplicity of HTTP gave birth to the brave new World Wide Web, something a lot simpler than SGML was needed in the area of markup languages. The initial approach was to create a targeted SGML application, HTML, which worked relatively well during the early years of the Web. However, it was neither sufficiently powerful and flexible, nor rigorous enough, for the information processing needs of sophisticated Web applications. The solution was to define a relatively simple subset of SGML that would retain the most critical features of the language. That subset, called the eXtensible Markup Language, or XML, was designed to serve as the foundation for the new generation of markup languages. By giving up some of the flexibility (e.g. SGML character set and concrete syntax declarations) and imposing additional structural constraints, it became possible to construct a language that is easy to learn and conducive to the creation of advanced authoring tools. XML was designed as a subset of SGML, but it did not stay that way. The very simplicity of the language lent itself to an evolution that was not practical for SGML. While XML DTDs are simply a subset of SGML DTDs, XML Schema is a new-generation language for defining application-specific constraints. Moreover, entirely new mechanisms have emerged, such as XPath for addressing fragments of XML documents and XSLT for defining document transformations. XML has proved to be much more than a replacement for SGML-derived HTML. XML applications include specialized markup languages (e.g. MathML), communication protocols (e.g. SOAP), and configuration instructions (e.g. configuration files for HTTP servers and J2EE containers).



In the rest of this chapter, we will discuss XML and related languages that either stand on their own (e.g. XML DTD and XPath), or are defined as XML applications (e.g. XSL and XML Schema). We will also discuss the relationship between SGML and HTML on one side, and XML and XHTML on the other. Finally, we will provide a brief overview of other XML applications.

7.1 CORE XML

As you will recall, the first steps in defining SGML applications were to define the character set and the concrete syntax. XML syntax is relatively rigid: the character set is fixed, and there is a limited number of tag delimiters ('<', '</', '>', and '/>').

Figure 7.1

Sample XML document

XML comment syntax is similar to HTML comments. Comments can only appear after the XML declaration. They may not be placed within a tag, but may be used to surround and hide individual tags and XML fragments that do not contain comments (e.g. the second element). In addition to comments, XML possesses an even stronger mechanism for hiding XML fragments—CDATA sections. CDATA sections exclude enclosed text from XML parsing—all text is interpreted as character data. It is useful for hiding document fragments that contain comments, quotes, or ‘&’ characters. For example, the CDATA section in Figure 7.1 hides the improperly formatted comment and two XML tags from the XML parser. The only character sequence that may not occur in the CDATA section is ‘]]>’, which terminates the section. As you can see from the example, XML elements can be represented with two kinds of tags. Non-empty elements use open and close tags that have the same syntax as HTML tags (e.g. . . . and . . .).







[status=“In Print”]





[firstName=“Leon”] [lastName=“Shklar”]



[firstName=“Rich”] [lastName=“Rosen”]

“An in-depth examination …”

“Web Application Architecture” “Principles, protocols and practices”





[year=“2003”] [usd=“45”] [bp=“27.50”] [source=“John Wiley and Sons, Ltd.”]

Figure 7.2

XML element tree for the example in Figure 7.1

Empty elements are represented with single tags that are terminated with the '/>' sequence. An element may or may not have attributes, which, for non-empty elements, have to be included in open tags (for empty elements, there is only one tag, so there is not much choice). The document in Figure 7.1 contains sample entity references in the body of its second element. Both &lt; and &gt; are references to built-in entities that are analogous to escape sequences in HTML. Since HTML is an SGML application, it already has a DTD, so there is no way to define new escape sequences. XML is a subset of SGML, so it is possible to define new entities using XML DTDs (or schemas). An example of a reference to a newly defined entity is &jw;. Since jw is not a built-in entity, it must be defined in the 'books.dtd' file for the document to be considered well formed. Note that the &lt; character sequence occurring within the CDATA section is not an entity reference; as we discussed, CDATA sections exclude enclosed text from XML parsing. Figure 7.2 is a graphic representation of the XML document in Figure 7.1. Here, internal nodes represent elements, solid edges represent the containment relationship, dashed edges represent the attribute relationship, and leaf nodes represent either attributes or element content. A non-leaf element is represented as the combination of its child elements, while leaf elements are represented by their character content. Attribute names and



values are shown in brackets, and attribute value edges are represented with equal signs (e.g. [firstName=“Leon”]). Processing an XML document often involves traversing the element tree. There is even a special specification for the traversal paths. It is discussed later in this chapter (Section 7.4.1).
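Reading the tree in Figure 7.2 back into markup, the sample document of Figure 7.1 would look roughly as follows. The element names, nesting, attributes, and values come from the tree; the exact markup, and the use of the &jw; entity for the source attribute, is a reconstruction:

```xml
<?xml version="1.0"?>
<!DOCTYPE books SYSTEM "books.dtd">
<books>
  <book status="In Print">
    <authors>
      <author firstName="Leon" lastName="Shklar"/>
      <author firstName="Rich" lastName="Rosen"/>
    </authors>
    <title>Web Application Architecture</title>
    <subtitle>Principles, protocols and practices</subtitle>
    <description>An in-depth examination ...</description>
    <pricing year="2003" usd="45" bp="27.50" source="&jw;"/>
  </book>
</books>
```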

7.1.2 XML DTD

The meaning of XML tags is in the eye of the beholder. Even though you may be looking at the sample document and telling yourself that it makes sense, that is only because of the semantics associated with the English-language names of elements and attributes. Language semantics are, of course, inaccessible to XML parsers. The immediate problem for XML parsers, thus, is validation. A human would most likely guess that the value of the usd attribute in the sample document is the book price in dollars. It would come as a surprise if, for one of the books, the value of this attribute were the number 1000 or the word 'table'. Again, this surprise would be based on language semantics and not on any formal specification that could be used to perform automated validation. The second problem is even more complex. How do we make XML documents useful for target applications? It is easy with HTML: every tag has a well-defined meaning that is hard-coded into browser processing and rendering modules. XML elements do not have assigned meanings, though a particular XML application (e.g. XHTML) may predefine the meaning of individual elements. Let us attempt to address the validation problem using an XML DTD (Figure 7.3). Notice that the XML DTD syntax for element definitions is simpler than the SGML syntax (see the SGML DTD fragment in Figure 7.4). Even if we added more element definitions to the example in Figure 7.4, we would still notice that all of them have pairs of dashes indicating that both an open and a close tag are required. There is no reason to keep this part of the SGML DTD syntax: all XML elements always require both open and close tags. Similar to the SGML DTD examples in Chapter 6, the XML DTD specification in Figure 7.3 defines the nesting and recurrence of individual elements. The root element is defined to contain at least one book element which, in turn, must contain certain child elements exactly once, others at least once, and may omit the optional ones.
XML DTDs provide simple mechanisms for defining new entities, which are very similar to those of SGML DTDs. For example (Figure 7.3), the entity “jw” is defined to expand to ‘John Wiley and Sons, Ltd.’; it is used in the sample document in Figure 7.1. An obvious reason for defining names of publishers as entities is their applicability to multiple book entries. Another example in Figure 7.3 is the entity ‘JWCopyright’, which is defined as a reference to an HTTP resource.
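Given those constraints, a books.dtd along the following lines would validate the sample document. This is a sketch: the element and attribute names are taken from the tree in Figure 7.2, the occurrence and attribute constraints are assumptions, and the JWCopyright URL is a placeholder.

```dtd
<!ELEMENT books (book+)>
<!ELEMENT book (authors, title, subtitle?, description?, pricing?)>
<!ATTLIST book status CDATA #IMPLIED>
<!ELEMENT authors (author+)>
<!ELEMENT author EMPTY>
<!ATTLIST author firstName CDATA #REQUIRED
                 lastName  CDATA #REQUIRED>
<!ELEMENT title (#PCDATA)>
<!ELEMENT subtitle (#PCDATA)>
<!ELEMENT description (#PCDATA)>
<!ELEMENT pricing EMPTY>
<!ATTLIST pricing year   CDATA #IMPLIED
                  usd    CDATA #REQUIRED
                  bp     CDATA #IMPLIED
                  source CDATA #IMPLIED>
<!ENTITY jw "John Wiley and Sons, Ltd.">
<!-- placeholder URL; the text says only that it references an HTTP resource -->
<!ENTITY JWCopyright SYSTEM "http://www.example.com/copyright.html">
```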



Figure 7.3

The books.dtd file—XML DTD for the sample document in Figure 7.1


Figure 7.4

Fragment of an SGML DTD for the sample document in Figure 7.1

DTDs do not provide easy ways of defining constraints that go beyond element nesting and recurrence. Similarly, while it is easy to associate elements with attributes and to define basic constraints on attribute types (e.g. predefined name token constraints, or enumeration), more complex cases are often an exercise in futility. And of course, DTD is no help in trying to define semantics.



7.1.3 XML Schema

XML Schema is a relatively new specification from the W3C; it has no analog in the SGML world. XML Schema was designed as an alternative to the DTD mechanism; it adds stronger typing and utilizes XML syntax. It contains additional constructs that make it much easier to define sophisticated constraints. XML Schema supports so-called 'complex' and 'simple' types. Simple types are defined by imposing additional constraints on built-in types, while complex types are composed recursively from other complex and simple types. The built-in types available in XML Schema are much richer and more flexible than those available in the DTD context. This is very important because it means that XML Schema makes it possible to express sophisticated constraints without resorting to application logic, which must be coded in a procedural language (e.g. Java) and is much more expensive to maintain. XML syntax for referencing schemas is shown in Figure 7.5. The XML Schema specification does not provide for defining entities, so entity definitions have to be included in the document using the DTD syntax. In the example in Figure 7.6, we demonstrate the brute-force approach to defining an XML schema. This pattern is sometimes called the 'Russian Doll Design', referring to the wooden 'matreshka' dolls that nest inside each other. Here, every type is defined in place, there is no reuse, and the resulting schema is relatively difficult to read. Nevertheless, it is a good starting point for our analysis. We begin by defining the root element, associating it with a complex type that is defined using the 'sequence' compositor. Sequence compositors impose sequential order; they are equivalent to commas in DTD element-nesting constraints.
Other compositors include 'all', which does not impose an order as long as every enumerated element (or group of elements) is present in accordance with the occurrence constraints, and 'choice', which requires the presence of exactly one element or group of elements.
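A sketch of how two of these compositors are written in XML Schema syntax (the element and type names follow the books example but are assumptions, and the identifier elements are purely illustrative):

```xml
<!-- sequence: children must appear in exactly this order -->
<xsd:complexType name="bookType">
  <xsd:sequence>
    <xsd:element name="authors" type="authorsType"/>
    <xsd:element name="title" type="xsd:string"/>
    <xsd:element name="subtitle" type="xsd:string" minOccurs="0"/>
  </xsd:sequence>
</xsd:complexType>

<!-- choice: exactly one of the enumerated children must appear -->
<xsd:complexType name="identifierType">
  <xsd:choice>
    <xsd:element name="isbn" type="xsd:string"/>
    <xsd:element name="issn" type="xsd:string"/>
  </xsd:choice>
</xsd:complexType>
```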


Figure 7.5

Changes to the sample XML document in Figure 7.1 (required for schema-based validation)



Figure 7.6

The books1.xsd file—XML Schema for the sample document in Figure 7.1



Figure 7.6

(continued )

Here, the complex type for the root element is defined as a sequence of its child elements. In this design, types are defined in a depth-first manner, and it is often difficult to see all elements of the sequence when reading the schema. Each child element is, in turn, associated with a complex type that is also defined as a sequence, and so on. The XML Schema syntax for defining element quantifiers (number of occurrences) differs from the DTD syntax, but semantically the two are equivalent. For example, an element whose number of occurrences is defined to be at least one (minOccurs="1", maxOccurs="unbounded") corresponds to '+' in the DTD syntax, while an optional element (zero or one occurrences, minOccurs="0") corresponds to '?'. Just as in DTDs, the number of occurrences, if unspecified, defaults to exactly one. Attributes can be defined only for complex types. It is quite a relief that, unlike DTDs, schemas support the same built-in types for attributes as they do for elements. By default, attributes are optional, but they may be made required, as in the case of the 'usd' attribute in our example. Our next step is to improve on the design in Figure 7.6. The new schema in Figure 7.7 uses named types to make the schema more readable and easier to maintain. Instead of defining complex types in place, as in the old design, we start by defining complex types that can be composed out of simple types and attribute definitions, and proceed in order of increasing complexity. For example, the complex types authorType and infoType in Figure 7.7 are first defined and then referenced by name in the definition of bookType. The result is an XML schema composed of simple reusable definitions. The primary purpose of the schema is to support document validation. It remains to be seen whether we can improve the quality of this validation by defining additional constraints. We have done our job in defining element quantifiers and nesting constraints.
However, it would be useful to impose constraints on simple types as well. For example, we can assume that a name should be no longer than thirty-two characters, and that the price of a book should be in the range of 0.01 to 999.99, whether we are using dollars, pounds, or euros. In Figure 7.8, we define two new


XML Languages and Applications

Figure 7.7

Improved schema design for the sample document in Figure 7.1



Figure 7.7

(continued )

Figure 7.8

Defining constraints on simple types

simple types—nameBaseType and priceBaseType—by imposing constraints on the base string type. In the first case, the new constraint is a maximum length; in the second, it is a pattern—one to three digits, possibly followed by the decimal point and another two digits. Of course, we have to go back to the schema in Figure 7.7 and change the type references to take advantage of these new constraints (Figure 7.9). XML schemas provide extensive capabilities for defining custom types and utilizing them for document validation. Moreover, XML schemas are themselves XML

Figure 7.9

Changes to the type definitions in Figure 7.7



documents, and may be validated as well, which serves as a good foundation for building advanced tools. However, advanced validation is not a solution for associating semantics with XML elements, which remains a very difficult problem. Some partial solutions to this problem will be discussed later in this chapter.
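A sketch of such restriction-based simple type definitions, following the type names and constraints used in the text (the exact figure content may differ):

```xml
<xs:simpleType name="nameBaseType">
  <xs:restriction base="xs:string">
    <!-- names limited to thirty-two characters -->
    <xs:maxLength value="32"/>
  </xs:restriction>
</xs:simpleType>

<xs:simpleType name="priceBaseType">
  <xs:restriction base="xs:string">
    <!-- one to three digits, optionally followed by '.' and two digits -->
    <xs:pattern value="\d{1,3}(\.\d{2})?"/>
  </xs:restriction>
</xs:simpleType>
```

Element declarations can then reference these types by name (e.g. type="nameBaseType"), and validators will reject values that exceed the length limit or fail the pattern.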

7.2 XHTML

As discussed in Chapter 6, the structure of HTML documents is relatively unconstrained. For example, closing tags for many HTML elements are optional and are often omitted. Real-world HTML documents often violate even the liberal constraints imposed by the HTML specification, because commercial browsers are implemented to be extremely tolerant of such violations. XHTML is a reformulation of HTML 4.0 (the last HTML specification) as an XML application. Migration to XHTML makes it possible not only to impose strict structural constraints, but also to dispense with the legacy support for bad syntax. Even commercial browsers do not have to exhibit tolerance when validating documents that claim to implement the XHTML specification.

Differences between the sample HTML document from the previous chapter and the sample XHTML document in Figure 7.10 include the use of lower-case element names and the presence of the XML declaration. Apart from the document declaration, the document in Figure 7.10 is both valid XHTML and valid HTML. Many XHTML constraints do not break HTML validation, including the required close tags and the requirement to enclose attribute values within quotes. Unfortunately, this is not true for all XHTML constructs that occur in real documents. For example, adding the <br> tag after the words "I do have to close" would not affect HTML validation, but would constitute a syntactic violation for XHTML. Replacing the <br> tag with the <br/> tag would produce the reverse effect. However, using the <br /> construct would not break either HTML or XHTML validation

Sample HTML Document
I do have to close the <p> tag.

Figure 7.10

Sample XHTML document
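A minimal document of the kind the figure describes might look like this (title and body text illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Sample XHTML Document</title></head>
  <body>
    <p>I do have to close the &lt;p&gt; tag.</p>
  </body>
</html>
```

Note the lower-case element names, the explicitly closed <p> element, and the XML declaration; without the declaration and DOCTYPE line, the body is also acceptable HTML.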



...
I have to close the <p> tag.
I have to close the <p> tag.
I have to close the <p> tag.
I have to close the <p> tag.
...

Figure 7.11

HTML and XHTML validation

Figure 7.12

XHTML script syntax

(Figure 7.11). It is fair to mention that the last construct, while valid in both HTML and XHTML contexts, is likely to cause problems for some XML tools. On the surface, there is no difference between HTML escape sequences and the XHTML use of references to pre-defined entities, but the real story is more complicated. The nature of XML processing is to recognize and process entity references in #PCDATA context. Elements that are defined to have character content (e.g. script and style elements) are vulnerable to the presence of such entity references. For example, &lt; would be resolved to the '<' character (Figure 7.12).

It is currently ...
This page was last modified on ...

Figure 9.2

SSI example

• results of executing various system commands, • results of executing a CGI script, • CGI environment variables associated with the request, • other environment variables associated with the file and/or the server, and • date and time. The Apache version of SSI also provided simple conditional constructs for including portions of HTML pages selectively based on the value of environment variables. The combination of CGI scripts with server-side includes offered additional power to the Web page designer, but ultimately not enough to support more robust dynamically generated pages, especially for database-driven applications. The problem with accessing the database from a CGI script and invoking that script from an SSI template is that the script is responsible for all formatting of query results and provides designers with no control over the look and feel of these results.
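For instance, using standard Apache SSI directives (file and script names hypothetical), a page of the kind shown in Figure 9.2 might include:

```html
<p>It is currently <!--#echo var="DATE_LOCAL" -->.</p>
<p>This page was last modified on <!--#echo var="LAST_MODIFIED" -->.</p>
<!-- include the output of a CGI script -->
<!--#include virtual="/cgi-bin/counter.cgi" -->
```

The server scans the page for these specially formatted comments and replaces each with the corresponding value or output before sending the response.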

9.2.2 Cold Fusion

Cold Fusion represents one of the first commercial template approaches to dynamic server-side page generation, providing a set of tags that support the inclusion of external resources, conditional processing, iterative result presentation, and data access. Cold Fusion owes much of its success to two features:

1. Queries are very simple to create and use, and
2. Every form of data access acts just like a query.

Database queries are constructed using the <CFQUERY> element, referencing an ODBC datasource with the SQL code embedded between the opening and the closing tags. The results can be iteratively traversed using the <CFOUTPUT> element, with each column available for variable substitution (Figure 9.3). In addition to <CFQUERY> for talking to databases, Cold Fusion provides elements for accessing other sources of data, including POP3 e-mail servers, FTP servers, and the local file system. Each of these elements utilizes the same method for accessing and presenting iterative results through variable substitution (Figure 9.4).
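A minimal sketch of this pattern (datasource, table, and column names hypothetical):

```html
<CFQUERY NAME="bookQuery" DATASOURCE="books_dsn">
  SELECT title, price FROM books
</CFQUERY>

<CFOUTPUT QUERY="bookQuery">
  <!-- #column# substitutes the value from the current result row -->
  #title#: $#price#<BR>
</CFOUTPUT>
```

The <CFOUTPUT> block is repeated once per result row, so the page designer controls the markup around each row without writing any procedural code.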

Template Approaches



Figure 9.3

Simple Cold Fusion example involving database queries

Like the Servlet API, Cold Fusion provides access to scoped environment variables (e.g. query string parameters, URL components, and session data). It also allows for the creation of custom tags (much like the JSP custom tags that we will discuss later). As the Cold Fusion platform evolved, it succumbed to the pressure to provide scripting capabilities within templates. Although this capability need not be used (and is not used by most deployed Cold Fusion applications), it makes it too easy to create that clumsy mixture of code and formatting within the same source object.


Approaches to Web Application Development


Figure 9.4

Another Cold Fusion example, using <CFPOP> to retrieve e-mail

Although Cold Fusion offers many of the features associated with a solid template approach to Web application development, it has serious deficiencies. Most importantly, it is a proprietary software product, and the Cold Fusion Markup Language (CFML) is the intellectual property of Macromedia. This is a matter of serious concern, since Cold Fusion tags represent a proprietary approach to Web application development. The irony is that many of the tag specifications in Sun’s Java Standard Tag Library, which we will discuss later in this chapter, are semantically similar to Cold Fusion tags. Although this does not eliminate the problem, it is a sign that Cold Fusion may have been on the right track regarding tag functionality. In addition, there are performance, scalability, and stability issues with Cold Fusion, especially on non-Windows platforms. The Cold Fusion engine was originally designed for Windows, and efforts to port the engine to UNIX and other environments have met with mixed success. Still, it offers a good deal of functionality, and has a significant following among Web developers, who use it to get an application up and running quickly. It may be acceptable if you are developing a simple application with a small number of users that does not require the power and performance of a robust framework.

9.2.3 WebMacro/Velocity

WebMacro is a true template-based approach to dynamic page generation. Using a small set of logic constructs to support iteration, conditional processing, and the inclusion of external resources, it provides the functionality needed by page designers to build dynamic Web pages without scattering code fragments and related clutter throughout the page. Although by itself it is not fully compliant with the Model-View-Controller (MVC) design pattern (described in detail in Section 9.4.1), it does fit into the MVC paradigm. Velocity is the Apache/Jakarta project's open source implementation of WebMacro.



#set ($message = "Blah blah blah!")

#if ($x == $y)
  Here is your message: $message
#end
...
#include filename.wm
...
#foreach ($row in $dbquery.results)
  $name $address $phone
#end

Figure 9.5

Simple example of Velocity template

Figure 9.5 shows a fragment from a sample Velocity Template Language (VTL) template. The VTL directives illustrated include a conditional construct (a message that is only displayed if $x is equal to $y), an inclusion of an external file (filename.wm), and an iterative construct that maps the result set produced by the database query stored in a previously defined variable ($dbquery) to an HTML table. As in other template (and hybrid) approaches, Velocity templates depend on the request context that is established by a controlling servlet. This makes it possible to provide template designers with access to content that has been transformed into an appropriate data model by an MVC-compliant controller.

There are a couple of outstanding issues with the WebMacro/Velocity template language. The first is its emphasis on UNIX-style parameter substitution, which shows a bias towards programmers rather than page designers. The $variable notation is obvious and intuitive to UNIX users and Perl programmers, but probably not so obvious and intuitive to page designers. The same variable-boundary issues that arise in UNIX shells come into play in WebMacro/Velocity templates as well. They are resolved by providing a 'formal' notation for variables (${variable}) to alleviate confusion (e.g. by replacing $variableness with ${variable}ness).

The second issue is XML compliance. Template directives, which begin with '#', are obviously not XML-compliant. WebMacro/Velocity functionality is not impaired by the lack of XML compliance (which is also absent in older approaches, e.g. Cold Fusion and ASP), but it would be very nice indeed to see XML compliance in a future version of Velocity.


Approaches to Web Application Development

9.3 HYBRID APPROACHES

Hybrid approaches combine scripting elements with template structures. They have more programmatic power than pure templates because they allow embedded blocks containing 'scripts'. This would seem to offer the benefit of a page-oriented structure combined with additional programmatic power. Examples of this approach include PHP, Microsoft's Active Server Pages (ASP) and Sun's Java Server Pages (JSP). Is this really the 'best of both worlds', or a Web developer's worst nightmare?

The intermixing of script blocks with presentation formatting represents a serious violation of the principle of separating content from presentation. The issue of who 'owns' the source object becomes very muddled. The frequent contention that page designers can easily learn to work around the embedded code constructs is not borne out by experience. Once again, designers and developers must work on the same source objects, leading to conflicts and collisions when code changes break the HTML formatting, or when changes made by designers inadvertently introduce bugs into the embedded code. Most of these systems have been designed to translate the hybrid source objects into code. The systems have evolved significantly since their inception, but their origins still expose serious issues with these approaches.

9.3.1 PHP

PHP is a recursive acronym that stands for PHP Hypertext Preprocessor. It allows developers to embed code within HTML templates, using a language similar to Perl and UNIX shells. The source object is structured as an HTML page, but dynamic content generation is programmatic. For example, the PHP fragment:


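A fragment of this kind might look like the following (the <h1> wrapper and variable names are an assumption inferred from the translated output shown next):

```html
<h1><?php
if ($xyz >= 3) {
    print $myHeading;
} else {
    print "DEFAULT HEADING";
}
?></h1>
```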
gets translated into:

print "<h1>";
if ($xyz >= 3) {
    print $myHeading;
} else {
    print "DEFAULT HEADING";
}
print "</h1>";

Hybrid Approaches


In other words, text embedded within <?php ... ?> blocks is processed using the native PHP language, while text outside of these blocks is treated as arguments passed to 'print' statements. While other template-based approaches provide several distinct elements designed to perform specific tasks, in PHP there is only one—the <?php ... ?> block—which serves as a 'container' for PHP code. Although PHP scripts are often referred to as templates, their dependence on code to perform most of the work associated with dynamic page generation makes PHP closer to a scripting approach than a template approach, putting it beyond the reach of the average page designer as a tool for building dynamic Web pages.

9.3.2 Active Server Pages (ASP)

By the late 1990s, many companies had produced their own proprietary server-side processing solutions. Netscape offered LiveWire (which evolved into Server-Side JavaScript). Other companies, including Allaire (Cold Fusion), NetDynamics (now rolled into Sun ONE) and Art Technology Group (Dynamo), also developed products to support their own approaches for building dynamic Web applications. Microsoft entered the fray with Active Server Pages (ASP). ASP combined server-side scripting capabilities with access to the wide variety of OLE and COM objects in the Microsoft arsenal, including ODBC data sources. Bundled with Microsoft's free Internet Information Server, ASP quickly gained popularity among Visual Basic programmers who appreciated the VB-like syntax and structure of ASP scripts.

Unfortunately, that syntax and structure are ill suited to modern Web applications. ASP pages contain references to obscurely named COM objects, intermixed with HTML formatting. Unlike object-oriented languages like Java or C++, the language used within ASP pages is flat, linear, and strictly procedural. In the ASP example in Figure 9.6, there are two 'script blocks' embedded within the page. The first block, which appears before the start of HTML markup, sets up the page context by creating a database connection, opening it with appropriate credentials, creating a result set, associating it with the connection, and populating it with the results of the database query. The second block is inserted in the middle of HTML table markup; it contains procedural code that writes an HTML table row whose cells contain values associated with columns in the result set.

Like PHP, ASP's structure is simple: blocks delimited with the character sequences <% and %> contain script code to be executed by the server at response generation time, while text found outside such blocks is treated as 'raw' HTML.
Thus, as with PHP, the page is simply divided between discrete blocks of code and HTML. (Note the presence of page directives in ASP, delimited with <%@ ... %>.) The fact that ASP is bundled with Microsoft's IIS Web server makes it an attractive option for those installations that employ Microsoft-only solutions. ASP is popular enough to have been ported to other platforms besides Microsoft's IIS.
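A condensed sketch of the two-block structure described (object, datasource, and column names hypothetical; ADO-style calls):

```html
<%@ Language=VBScript %>
<%
  ' First block: set up the page context and run the query
  Set conn = Server.CreateObject("ADODB.Connection")
  conn.Open "books_dsn"
  Set rs = conn.Execute("SELECT title, price FROM books")
%>
<html><body><table>
<% ' Second block: emit one table row per result row
   Do While Not rs.EOF %>
  <tr><td><%= rs("title") %></td><td><%= rs("price") %></td></tr>
<% rs.MoveNext
   Loop %>
</table></body></html>
```

Even in this small sketch, the procedural VBScript and the HTML markup are thoroughly interleaved, which is exactly the ownership problem discussed in Section 9.4.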



Active Server Page

Figure 9.6

ASP example

This is probably good for the future of ASP, given the security holes and other problems associated with IIS. As with Cold Fusion, its benefits are mostly in the area of speeding up the deployment of relatively simple Web applications. Microsoft’s .NET offering purports to be a ‘framework’ that alleviates many of the limitations of ASP. In reality, it is ASP on steroids: a set of extensions to the existing ASP infrastructure that offer many of the convenience features found in the Java language, coupled with the option to create pages using a variety of languages (e.g. VB.NET and the new language C#). There is a lot of additional power provided in .NET, but there are still limitations in scalability, flexibility, and reusability of components.

9.3.3 Java Server Pages

Java Server Pages (JSP) was Sun's answer to Microsoft's ASP. As with PHP, JSP support was implemented through a pre-processor that turned page objects



<%@ page import="java.io.*" %>
<%!
  private CustomObject myObject;
%>
<h1>My Heading</h1>
<% for (int i = 0; i < myObject.getCount(); i++) { %>
Item #<%= i %> is '<%= myObject.getItem(i) %>'.
<% } %>

Figure 9.7

Sample JSP page

with embedded code blocks into servlet source code. For example, the sample JSP page in Figure 9.7 would be translated into servlet code similar to that shown in Figure 9.8. The first line of the JSP fragment in Figure 9.7 is the page directive to import classes in the java.io package. The next three lines represent a variable declaration. Java code blocks are delimited with '<%' and '%>' character sequences. HTML outside of these delimiters is translated into 'print' statements as shown. The entire page is translated into a complete Java class that is compiled by the server.

JSP represents yet another approach to convert hybrid page-like structures into code that is then compiled and executed. (In the case of JSP, the code is translated into a Java servlet that is compiled and executed by the Web server's servlet engine.) The vestiges of such origins can be found in the structure of a typical JSP page (e.g. page directives, declarations, and—in pages that fail to satisfy strict design constraints—clumsy intermixing of 'scriptlets' and HTML formatting). However, JSP evolved over time, providing powerful new features that allow it to transcend its roots.

Among these features is the JSP taglib. A taglib is a library of custom JSP tags that can abstract functionality that would otherwise have required the inclusion of an embedded scriptlet containing complex Java code. These tags are a step towards XML compliance in the JSP world, since they are specified using XML namespaces and defined in XML configuration files. Two of the most commonly used tags are <jsp:useBean> and <jsp:getProperty>. The <jsp:useBean> tag allows page designers to embed a JavaBean (constructed and populated by the application and perhaps stored as a session variable) within a JSP page. They can also access and possibly modify properties within that JavaBean using the <jsp:getProperty> and <jsp:setProperty> constructs. These constructs are translated by the pre-processing that JSPs go through prior to compilation and execution. For example:



package jsp.myapp;

import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;
import javax.servlet.jsp.*;

public class mypage extends HttpJspBase {

    private CustomObject myObject;

    public void jspService(HttpServletRequest req, HttpServletResponse resp) {
        ServletConfig config = getServletConfig();
        ServletContext application = config.getServletContext();
        Object page = this;
        PageContext pageContext = JspFactory.getDefaultFactory().getPageContext(
            this, req, resp, null, true, 8192, true);
        JspWriter out = pageContext.getOut();
        HttpSession session = req.getSession(true);
        out.print("<h1>My Heading</h1>");
        for (int i = 0; i < myObject.getCount(); i++) {
            out.print("Item #" + i + " is '" + myObject.getItem(i) + "'.");
        }
    }
}

Figure 9.8

Translation output for the JSP page in Figure 9.7

...
The value of the 'thing' property is
'<jsp:getProperty name="myBean" property="thing"/>'.

is translated into:

MyBean myBean = (MyBean) session.getAttribute("myBean");
out.print("The value of the 'thing' property is '" +
    myBean.getThing().toString() + "'.");

Separation of Content from Presentation


Note the syntactic complexities associated with variable substitution in the JSP environment. To access a property from a JavaBean, the <jsp:getProperty> tag must be included. (The alternative—using the <%= ... %> expression syntax—is no less complex.) In addition, despite the claims that these tags make JSP XML-compliant, variable substitutions may actually force violations of XML formatting requirements. Take, for example, this attempt to use a JavaBean property to specify the SRC parameter for an IMG tag:





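The construct in question has this general shape (the bean property name is hypothetical):

```html
<img src="<jsp:getProperty name="myBean" property="imageSource"/>">
```

The <jsp:getProperty> tag sits inside the attribute value of the <img> tag, which is what breaks XML well-formedness.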
The text with the grey background above is a <jsp:getProperty> tag embedded within an HTML tag. Not only is this difficult to read, but it also violates XML tag formatting constraints (i.e. that tags cannot be embedded within one another). JSP provides workarounds to produce the same result in an XML-compliant way, but a friendlier mechanism for parameter substitution is desirable, especially if JSPs are intended for manipulation by page designers.

9.4 SEPARATION OF CONTENT FROM PRESENTATION

Ultimately, none of these approaches fulfils one of the primary requirements of a good Web application framework: the true separation of content from presentation. It is like the Holy Grail, sought out by all the various Web application development approaches. Essentially, it boils down to understanding that (1) there is content or data (often called the model), (2) there is the way in which that data is presented (often called the view), and (3) these are two separate things. Why is it so important to keep the two separate?

9.4.1 Application flexibility

When people talk about 'confusing the map and the territory', they are describing exactly the same problem that occurs when the distinct natures of content and presentation are confused. The map is not the territory; it is a representation of that territory in one of many possible ways. A map could be a street map, showing the highways and roads found in a region. It could be a topographical map, describing



the surfaces and elevations of that region. A map might not even be graphical: a set of explicit verbal directions to get from one place to another is also a representation of the territory and thus a kind of map. We have the flexibility of representing the territory in a number of different ways, using a variety of different maps. In Web applications, the ‘territory’ is the actual data or content. The ‘map’ is the view—the organization and layout of the content in the desired format. The content can be represented in many different ways. The choice of presentation mode should be separate from the choices made to access the data, so that any ‘territory’ can be represented as any kind of ‘map’ (HTML, WML, VoiceXML, etc.). The ‘map’ can be personalized, co-branded, embedded, or otherwise customized in a variety of ways. It does not matter whether your content was read from a file, extracted from a database via a query, requested from an online directory service, or downloaded as a list of messages from an e-mail server. What matters is that the data model should be open-ended so that it is usable by a variety of views, and that some controlling mechanism should be the glue that hooks up retrieved content with the appropriate presentation format—hence, the Model-View-Controller or MVC design pattern. In Figure 9.9, the Controller receives a user request, constructs the Model that fulfils this request, and selects a View to present the results. The View communicates



View transmits user request to Controller . . .
Controller constructs Model . . .
Controller selects View for presentation . . .
View interacts with Model to keep apprised of its contents . . .


Figure 9.9

Model-View-Controller design pattern



with the Model to determine its content, and presents that content to the user in the desired format. The View also serves as the interface for transmitting further requests from the user to the Controller. This pattern, designed to facilitate true separation of content from presentation, enables the development of applications that can dynamically tailor and customize presentations based on user preferences, device capabilities, business rules, and other constraints. The data model is not tied to a single presentation format that limits the flexibility of the application.

9.4.2 Division of responsibility for processing modules

There is one other reason why separation of content from presentation is critical: the people responsible for these two aspects of an application have very different skill sets and agendas. Presentation specialists are page designers whose skills center on formatting languages such as HTML, page design tools such as Macromedia Dreamweaver and Microsoft FrontPage, and possibly XML with XSLT. They are not programmers, thus their expertise is not in the area of coding and application logic. Content access is the responsibility of application developers and/or database specialists. It may require elaborate conditional logic and complicated queries to obtain the desired data. Just as you would not ask a page designer to code up your SQL stored procedures, you would not want an application developer to design and implement the layout of your pages.

Some approaches, including Cold Fusion, ASP, and JSP, offer a great deal of power by combining presentation formatting with application/data access logic. But who is responsible for ASP or JSP page development? Who 'owns' the Cold Fusion module that accesses the database and presents the tabular results to the user? The application developer? The database specialist? The page designer? What happens when the page designer's modifications break the JSP developer's embedded Java code? What happens when a database specialist alters the query in a Cold Fusion page, inadvertently altering the HTML layout? (And don't get us started about ASP!)

This is one of the most prominent (though least emphasized) reasons why separation of content from presentation is so important: to ensure the division of responsibility between those who access and process content and those who present it. Few if any people have all the skills necessary to perform all the tasks associated with dynamic page generation. The issue of page ownership cannot be overstated.
Designers and developers have different orientations, different skill sets, and different requirements. Collisions between the efforts of designers and developers modifying the same page modules occur all too frequently. When allowing the intermixing of programmatic code blocks within the page markup, the temptation to turn the entire page into one contiguous block of code (like a CGI script) is enormous. Anyone who has worked heavily with PHP, ASP,



or JSP can attest to this. What’s more, the intermixing gets ugly, so that the page modules become extremely difficult for both designer and developer to read. An MVC-based approach makes it possible to combine application flexibility with the appropriate division of responsibility. Developers are responsible for the controller component. This is often a lightweight module, which delegates processing to appropriate subordinate tasks, also created by developers. These tasks are responsible for accessing data and building the model. Presentation specialists are responsible for building the views. The tasks (and/or the controller) can determine dynamically which view will present the data associated with the model. Notice that we are no longer talking about encapsulating a page in a single module; we are now talking about a complex set of interactions between components. This degree of complexity requires that conventions and standards for these interactions are well defined and easily understood. When a Web application development approach reaches this level of sophistication, it can justifiably be called a framework.

9.5 FRAMEWORKS: MVC APPROACHES

9.5.1 JSP 'Model 2'

JSP Model 2 (as distinguished from Sun's Java Server Pages version 2.0) is Sun's attempt to wrap JSP within the Model-View-Controller (MVC) paradigm. It is not so much a product offering (or even an API) as a set of guidelines that go along with Sun's packaging of Java-based components and services under the umbrella of J2EE (Java 2 Enterprise Edition). The general structure of a Web application using the JSP Model 2 architecture is:

1. User requests are directed to the controller servlet.
2. The controller servlet accesses required data and builds the model, possibly delegating the processing to helper classes.
3. The controller servlet (or the appropriate subordinate task) selects and passes control to the appropriate JSP responsible for presenting the view.
4. The view page is presented to the requesting user.
5. The user interacts with the controller servlet (via the view) to enter and modify data, traverse through results, etc.

Data access and application logic should be contained entirely within the controller servlet and its helper classes. The controller servlet (or the helper class) should select the appropriate JSP page and transfer control to that page object based on the request parameters, state, and session information. The availability of this

Frameworks: MVC Approaches


information to the controller servlet offers a number of customization options. Based on user identification information, the controller servlet can retrieve user preferences, select JSP pages, and let selected pages personalize the response. For example, the referring URL may help to perform content co-branding. By examining the request, it is possible to learn about the User-Agent, infer the type of device making the request, and choose different formatting options (HTML, WML, VoiceXML, etc.) appropriately.

One of the major advances that came along with JSP Model 2 is Sun's specification of the Java Standard Tag Library (JSTL). It specifies the standard set of tags for iteration, conditional processing, database access, and many other formatting functions. The Jakarta project (part of the Apache Software Foundation that gave us the Apache Web Server) includes a subproject that is focusing on JSP taglibs. This subproject has developed a reference implementation for JSTL.

In addition to the guidelines associated with JSP Model 2, Sun also provided a set of blueprints for building applications using the MVC paradigm. These blueprints were eventually renamed the J2EE Core Patterns. They are too numerous and complex to examine in detail here, but some of the more important patterns are described below:

• Front Controller—a module (often a servlet) acting as the centralized entry point into a Web application, managing request processing, performing authentication and authorization services, and ultimately selecting the appropriate view.

• Service-to-Worker and Dispatcher View—strategies for MVC applications where the front controller module defers processing to a dispatcher that is selected based on the request context. The dispatcher can be a part of the front controller, but normally it is a separate task, selected by the controller module based on the request context. In the Dispatcher View pattern, the dispatcher performs static processing to select the ultimate presentation view. In the Service-to-Worker pattern, the dispatcher's processing is more dynamic, translating logical task names into concrete task module references, and allowing tasks to perform complex processing that determines the ultimate presentation view.

• Intercepting Filter—allows for pluggable filters to be inserted into the 'request pipeline' to perform pre- and post-processing of incoming requests and outgoing responses. These filters can perform common services required for all or most application tasks, including authentication and logging.

• Value List Handler—a mechanism for caching results from database queries, presenting discrete subsets of those results, and providing iterative traversal through the sequence of subsets.

• Data Access Object (DAO)—a centralized mechanism for abstracting and encapsulating access to complex data sources, including relational databases, LDAP



directories, and CORBA business services. The DAO acts as an adapter, allowing the external interface to remain constant even when the structure of the underlying data source changes. The structures and guidelines defined by JSP Model 2 form the foundation for a number of tightly integrated frameworks.
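The Model 2 flow described above can be sketched outside any servlet container, with plain classes standing in for the controller servlet, the model, and the views (all names here are hypothetical, not part of any framework):

```java
import java.util.HashMap;
import java.util.Map;

public class Model2Sketch {

    // The "model": data assembled by the controller's helper classes.
    static Map<String, String> buildModel(String user) {
        Map<String, String> model = new HashMap<>();
        model.put("user", user);
        model.put("greeting", "Welcome back");
        return model;
    }

    // Two "views": each renders the same model in a different format.
    static String htmlView(Map<String, String> model) {
        return "<p>" + model.get("greeting") + ", " + model.get("user") + "</p>";
    }

    static String wmlView(Map<String, String> model) {
        return "<card>" + model.get("greeting") + ", " + model.get("user") + "</card>";
    }

    // The "controller": builds the model, then selects a view based on
    // request information (here, a crude User-Agent check).
    static String handleRequest(String user, String userAgent) {
        Map<String, String> model = buildModel(user);
        return userAgent.contains("WAP") ? wmlView(model) : htmlView(model);
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("alice", "Mozilla"));
        System.out.println(handleRequest("alice", "WAP-phone"));
    }
}
```

The point of the sketch is the shape of the dependencies: the views read from the model but never build it, and only the controller decides which view runs.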

9.5.2 Struts

The Struts framework provides a robust infrastructure for Model 2 application development. Developed within the open source Apache Jakarta project, Struts makes use of the Model-View-Controller, Front Controller, and Service-to-Worker patterns to provide a true framework for Web application development. A Struts application generally consists of the following components:

• Controller—generally, the org.apache.struts.action.ActionServlet class that comes with Struts is flexible enough to work for most applications, though it is possible to extend this class if required. This servlet class represents the entry point for user requests.

• Dispatcher—again, the org.apache.struts.action.RequestProcessor class that comes with Struts is flexible enough to work for most applications, though it is possible to extend this class if required.

• Request handlers (custom)—these are application-specific classes, often called actions, that extend the org.apache.struts.action.Action class and override its execute() method to perform the processing required by the application.

• View helpers (custom)—for Struts, this functionality is provided by the org.apache.struts.action.ActionForm class. Custom subclasses that extend this abstract class are JavaBeans that mediate between the Model and the View, providing getter and setter methods for form fields and implementing custom validation if desired.

• Views (custom)—the Struts framework is platform-neutral with regard to views: your view components can be JSPs, Velocity templates, or any other mechanism that can access the servlet runtime context.

The main attraction of the Struts framework is that developers can make use of configurable application components (e.g. the controller servlet) that come with the Struts distribution, instead of having to implement these components themselves.
The whole application comes together through the XML configuration file named struts-config.xml that is located in the application’s WEB-INF directory (Figure 9.10):



Figure 9.10  Sample struts-config.xml file

1. The <action-mappings> section of the file tells the dispatcher (RequestProcessor) which request handler (Action) should process an incoming request, based on the path portion of the request URL.

2. The <action> element in the example maps the /myapp/login URL (the action's 'logical name') to the name of the Java class implementing the request handler to be invoked. It also references the form processing bean by its logical name (as defined in a <form-bean> element elsewhere in the file) and establishes the scope of the action to be that of the current request.

3. A separate <form-bean> element maps the logical name of the form processing bean referenced in the <action> element to the Java class implementing the form processing bean.

4. In addition, <forward> elements (nested within <action> elements) can further define processing components by mapping names (e.g. success and failure) to URL paths associated either with views, or with other processing components. The execute() method of the Action class returns an ActionForward object. The name associated with the returned ActionForward object determines what the application does next after this action has been performed.

Notice that there is no need to implement a new Java class for every processing component. It is possible to define just a few generic components and control their behavior through the configuration. Decisions about the generality of application-specific action classes, form beans, and other components are part of the application design.
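A hypothetical struts-config.xml fragment matching the description above may help. The class names and view paths are invented for this sketch; the action path is defined relative to the /myapp servlet context, so it corresponds to the /myapp/login URL:

```xml
<struts-config>
  <form-beans>
    <!-- maps the logical name "loginForm" to its implementing class -->
    <form-bean name="loginForm" type="com.example.LoginForm"/>
  </form-beans>
  <action-mappings>
    <!-- maps the /myapp/login URL to the request handler (Action) class -->
    <action path="/login"
            type="com.example.LoginAction"
            name="loginForm"
            scope="request">
      <!-- symbolic forward names mapped to view components -->
      <forward name="success" path="/main.jsp"/>
      <forward name="failure" path="/login.jsp"/>
    </action>
  </action-mappings>
</struts-config>
```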



Using a small set of extensible, reusable components, along with a well-organized structure hooking those components together, Struts provides a viable platform for serious Web application development. Add to this the Struts JSP taglibs that make it easier to format pages that make use of ActionForm beans, and you have a powerful framework. And to top it all off: it’s open source. Still, it is not the ‘be all and end all’ of Web application frameworks. In fact, there are a number of other competing Jakarta projects working on alternatives to Struts (e.g. Turbine), and Craig McClanahan (creator of Struts and primary developer of Tomcat 4) is now working with Sun on a framework called Java Server Faces (JSF). MVC frameworks are still relatively young, and it is too early to say which framework (if any) will win out.

9.6 FRAMEWORKS: XML-BASED APPROACHES

A number of approaches to Web application development make use of XML as the foundation for their data models. (See Chapter 7 for additional information about XML.) In these approaches, an XML skeleton selected or constructed by the controller module serves as the data model. It may contain request context elements that are exposed to page designers to help them 'flesh out' the skeleton. XSLT is the common approach for transforming this data model into an appropriate presentation format (XHTML, WML, SMIL, VoiceXML, etc.).

Tidying Up HTML Pages

XPath expressions (used to specify the set of elements to process in XSL stylesheets) can be employed independently of XSLT, simply as a mechanism for identifying and extracting portions of an XML document. This can even be used on existing HTML documents retrieved from a Web server, provided the proper precautions are taken. Most HTML documents are not XHTML-compliant, and thus cannot be used as-is to generate an XML DOM tree. But there is a solution. Tidy—a parser that converts an HTML page into a compliant XHTML document—can be used to produce a valid DOM tree from most HTML documents found on the Web. From this DOM tree, fragments identified via XPath expressions can be extracted from the page. HttpUnit (the open source Web site testing tool) makes use of this method to analyze and extract portions of HTML pages.
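A minimal, framework-free sketch of this extraction step, using the JAXP XPath API available in later JDKs. The document content and the XPath expression are invented for illustration, and Tidy's HTML-to-XHTML cleanup is assumed to have already produced well-formed markup:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathExtract {
    // Returns the text content of the first <a> element in a
    // well-formed XHTML fragment, located via an XPath expression.
    public static String firstLinkText(String xhtml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xhtml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate("//a[1]", doc); // string value of the first anchor
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body><p>See <a href='/listings'>our listings</a></p></body></html>";
        System.out.println(firstLinkText(page)); // prints "our listings"
    }
}
```

The same pattern scales to extracting whole element sets (via XPathConstants.NODESET) rather than a single string value.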

As this book is being written, there are a few competing XML-based approaches, including another Apache/Jakarta project, Cocoon. None seems robust enough to upset the applecart as a true next generation Web application framework. Nonetheless, this approach has a lot of merit, since XML provides so much flexibility, but there are a number of issues with both existing approaches and with the concept in general. Among the most prominent of these is the complexity of XSL.



While it is claimed that XSL transformations are within the grasp of the average page designer, once again, this is not borne out by experience. XSL is yet another example of a failure to keep simple things simple in order to provide the most flexibility. There is, however, no reason why the power of XSLT cannot be enclosed in more user-friendly ‘wrappers’ that make application-specific functions more accessible to page designers.
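The transformation step described in this section can be sketched with the JAXP transformation API that ships with the JDK. The listing document and the embedded stylesheet below are invented for illustration (they are not from Cocoon or any other framework), but the flow — controller builds an XML model, XSLT renders it into a presentation format — is the one described above:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltView {
    // A tiny stylesheet that renders a <listing> model element as XHTML.
    private static final String STYLESHEET =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        " <xsl:output method='xml' omit-xml-declaration='yes'/>" +
        " <xsl:template match='/listing'>" +
        "  <h1><xsl:value-of select='address'/></h1>" +
        " </xsl:template>" +
        "</xsl:stylesheet>";

    // Applies the stylesheet to an XML model and returns the markup.
    public static String render(String modelXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(modelXml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(render("<listing><address>12 Elm St</address></listing>"));
    }
}
```

In a production setting the compiled stylesheet (a Templates object) would be cached and reused across requests, since compiling it per request is one source of the performance cost noted below.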

9.7 SUMMARY

It would seem that the most viable approach to building a durable and flexible Web application is to make use of the MVC paradigm in conjunction with the power of XML. At the moment, the most viable approach that satisfies these requirements is the Struts framework. In the next chapter, we will design our simple real estate broker application using Struts. Before we move on, let us compare the existing Web application development approaches, side by side (Table 9.1). Even though the MVC-oriented architecture may be the ideal, no existing framework (including Struts) achieves all the goals of that architecture. In real life, we do not always get to choose the best platform for our application development. With this in mind, it behooves us to know the capabilities—and limitations—of a variety of Web application development approaches.

Table 9.1  Web application development approaches compared

CGI (open standard)
  Advantages:
    1. Portable across all Web servers.
    2. Simple programming paradigm.
    3. Modules available to augment base language functionality.
    4. Open standard.
  Disadvantages:
    1. All HTML formatting performed programmatically.
    2. Overhead of process creation and initialization for each request.
    3. Programmatic approach puts it beyond the grasp of the average page designer.

SSI (open standard)
  Advantages:
    1. Simple syntax.
    2. Open standard.
  Disadvantages:
    1. Not enough power by today's standards.
    2. Security holes.

PHP (open source)
  Advantages:
    1. Structural change from code focus to page focus.
    2. Modules available to augment base language functionality.
    3. Open source.
  Disadvantages:
    1. Intermixing of code and formatting.
    2. Who is the target audience? Page designers? Programmers?

Servlet API (Sun specification; open source implementations available)
  Advantages:
    1. Portable across all Web servers that support servlets.
    2. Access to full power and extensibility of the Java language (JDBC, JNDI, RMI, EJB).
    3. Though proprietary, uses open specification with community participation.
  Disadvantages:
    1. Programmatic approach puts it beyond the grasp of the average page designer.
    2. HTML formatting still performed programmatically.

Cold Fusion (template/hybrid; Macromedia proprietary)
  Advantages:
    1. Portable across all Web servers supporting CGI.
    2. Simple programming paradigm.
    3. Modules available to augment base language functionality.
    4. Quick way to get a Web application up and running.
  Disadvantages:
    1. Program logic and data access still embedded within the page structure.
    2. Simpler than most programmatic approaches, but out of reach for most page designers.
    3. Proprietary.

ASP (Microsoft proprietary; has been ported to non-Microsoft environments)
  Advantages:
    1. Direct access to COM and ActiveX objects, ODBC databases.
    2. "Free" (with Microsoft IIS).
    3. Quick way to get a Web application up and running.
  Disadvantages:
    1. Abrupt intermixing of code and formatting.
    2. Visual Basic code orientation not sophisticated and structured enough for advanced scalable Web applications.
    3. Too complex for page designers to create without programmer assistance.
    4. Proprietary.

JSP (Sun specification; open source implementations available)
  Advantages:
    1. Power of servlets within a page-oriented framework.
    2. The <jsp:useBean> tag allows direct access to named scoped JavaBeans and their accessible properties.
    3. Custom taglibs provide extensibility.
    4. Though proprietary (like servlets), uses open specification with community participation.
  Disadvantages:
    1. Does nothing to prevent or even discourage intermixing of formatting and code.
    2. Variable substitution is unnecessarily ornate, and is difficult to read.
    3. The claim that JSP is 'accessible' to page designers does not hold up under scrutiny, given the complexity of JSP tags (no improvement over ASP).

WebMacro/Velocity (open source)
  Advantages:
    1. True template approach.
    2. Limits code infestation within templates to iteration and conditional processing constructs.
    3. Works well within MVC architecture.
  Disadvantages:
    1. UNIX orientation for parameter substitution—is it friendly/intuitive?
    2. Not XML-compliant.

Struts (open source)
  Advantages:
    1. Full-fledged MVC framework.
    2. Infrastructure includes dynamic dispatching, form validation, custom taglibs.
    3. Flexibility in selecting presentation views (JSP, Velocity templates, etc.).
  Disadvantages:
    1. Careful design is required to reap full benefits.

XML-based framework, e.g. Cocoon (open source)
  Advantages:
    1. DOM allows encapsulation of all sorts of data.
    2. XPath expressions can be used to extract elements (or sets of elements) from the DOM structure.
    3. XSLT is a very powerful mechanism for data transformation.
    4. Different stylesheets can be established/dynamically pieced together to build pages.
  Disadvantages:
    1. Performance of XSLT transformation (even with caching of preprocessed stylesheets) is slow.
    2. Complexity of XSLT beyond the grasp of most page designers.

9.8 QUESTIONS AND EXERCISES

1. What is the difference between a programmatic approach and a template approach? Provide examples. Can we apply this classification to the MVC paradigm? Explain.

2. Give examples of a hybrid approach. Explain.

3. What are the advantages of the Model-View-Controller pattern for Web application development?

4. The Model-View-Controller paradigm provides separation of content from presentation, which means that the same model can be presented using many different views. Give as many reasons as you can why applications might require multiple views.

5. What are the main advantages of the Struts framework?


6. Describe the main components of a Struts application and their operation.

7. What would be the effect of XML and XSLT on different approaches?

8. What was the approach that you used last? Were you satisfied with it? Describe your main concerns with regard to this approach. Can you recommend improvements?

BIBLIOGRAPHY

Birznieks, G., Guelich, S. and Gundavaram, S. (2000) CGI Programming with Perl. Sebastopol, California: O'Reilly & Associates.

Converse, T. and Park, J. (2002) PHP Bible, 2nd Edition. Indianapolis, Indiana: John Wiley & Sons.

Hunter, J. and Crawford, W. (2001) Java Servlet Programming, 2nd Edition. Sebastopol, California: O'Reilly & Associates.

Payne, C. (2002) Teach Yourself ASP.NET in 21 Days, 2nd Edition. Indianapolis, Indiana: Sams Publishing.

Forta, B., Weiss, N., Chalnick, L. and Buraglia, A. C. (2002) Cold Fusion MX Web Application Construction Kit. San Francisco, California: Macromedia Press.

Goodwill, J. (2002) Mastering JSP Custom Tags and Tag Libraries. New York, NY: John Wiley & Sons.

Spielman, S. (2002) The Struts Framework: A Practical Guide for Programmers. San Francisco, California.


Application Primer: Virtual Realty Listing Services

In the last two chapters, we discussed guidelines for designing Internet applications. We reviewed application development frameworks that simplify the design and implementation processes. It is time to return to the sample application described in Chapter 8 and go through the process of architecting, building, and deploying it.

To reiterate the nature of that application, Virtual Realty Listing Services (VRLS) is a fictitious online real estate company that supports multiple listing services, a cooperative venture that is common in the real estate community. Many brick-and-mortar real estate brokers share listings for properties they want to sell or lease with other brokers, in an attempt to attract customers who want to buy or rent these properties. If a customer goes to one broker and buys or rents a property associated with another, the two brokers split the commission. In this way, there is a greater chance for all brokers to sell or rent their properties. An online version of this service would link the web sites of several real estate brokerages to a database of shared property listings.

Customers locate the VRLS site through links from their real estate broker, online search, print advertisements, or word-of-mouth. On this site, they have access to property listings from many different brokers. They can browse the available listings but need to register in order to see details about particular properties. When customers register, they are associated with the broker whose site referred them to the VRLS registration page. These referring brokers are called affiliates or partners.

In the sample scenario in Figure 10.1, Jane starts her house search by visiting the 'Why-Kurt' realty web site. While browsing through that site, she comes across a link to the VRLS application and follows it.
Upon her initial arrival at the VRLS site, her affiliation is identified based on the referring site (found in the HTTP request’s Referer header), and she is presented with the welcome page co-branded to the look and feel of ‘Why-Kurt.’ Jane uses the VRLS application to search for shared listings, but never follows any links to property details and remains an



Why-Kurt Realtors:
1. Jane visits the Why-Kurt Realty web site
2. Why-Kurt presents a page with a link to the VRLS application
3. Jane follows the link to the VRLS application
4. VRLS presents a page co-branded for Why-Kurt Realty
5. Jane uses VRLS to search the database of shared listings
6. VRLS sends Jane the results of her search request, co-branded for Why-Kurt

Decade23 Realty:
1. John visits the Decade23 Realty web site
2. Decade23 presents a page with a link to the VRLS application
3. John follows the link to the VRLS application
4. VRLS presents a page co-branded for Decade23 Realty
5. John uses VRLS to search the database of shared listings
6. VRLS sends John the results of his search request, co-branded for Decade23
7. John signs up with VRLS through Decade23
8. John's signup is confirmed (via e-mail)

Later:
1. John goes directly to VRLS and signs in
2. VRLS presents a page co-branded for Decade23 Realty

Figure 10.1  Sample search and access scenario

anonymous user. Consequently, her affiliation with 'Why-Kurt' is preserved only for the duration of her session.

Meanwhile, John visits the 'Decade23' site and follows a link to the VRLS application from there. His affiliation is recognized as well, and he is presented with the welcome page co-branded to the look and feel of 'Decade23'. Just like Jane, John searches for his dream house and finds a listing that on the surface looks interesting. He attempts to retrieve detailed information about the listing, which results in an invitation to either login or register in order to proceed. John registers, receives an email message with his assigned password, and logs in. By that time, his affiliation is already stored in his profile. From that point on, whenever he signs in he will be presented with pages co-branded to the look and feel of 'Decade23'.

10.1 APPLICATION REQUIREMENTS

Let us pretend we are building the VRLS application for a real company. Ideally, the client would provide detailed application requirements from the start. Anyone who has worked on real-world projects knows that this rarely happens. More often, application developers get a loosely defined set of objectives, which have yet to be detailed and clarified to serve as the foundation for building the application.

Getting clients to construct well-defined requirements is almost an art form, which goes way beyond the scope of this book. Still, developing an application on a foundation of poorly defined requirements is like constructing a building on a foundation of Jell-O. It may stand, for a while, but it will rarely be stable. Thus, it is important to clarify and refine requirements carefully and methodically. With that in mind, let us assume that through a process of client interviews, case scenarios, and business process analyses, we have come up with the following simplified set of application requirements:

1. There should be four classes of users for this Web application: customers, anonymous visitors, partners, and administrators.

2. There should be a mechanism for associating partner brokers with individual requests to the VRLS application. This identification could be either implicit (e.g. identifying the partner using the referring URL), or explicit (e.g. identifying the partner via a query string parameter found in its links to the VRLS site). When unregistered visitors to VRLS arrive from a partner site, they see a view that is customized with partner-specific branding (e.g. a toolbar and company logo) that is applied across all pages of the site.

3. Mechanisms should exist for login by registered customers and for new customer signup.

4. Once they log in, existing customers should see co-branded pages, according to their partner affiliation. Partner identification for existing customers prior to login is on a 'best-effort' basis. If it is not possible to determine the active partner, the application should present a default view.

5. Customers should have the ability to create their personal profiles at registration time, and to modify them in the future. Profile information should include a login name and password, name, address, phone number, e-mail address (used for confirmation), and the identity of the partner. Customers should not be able to modify their system-assigned unique user ids, their login names, or their partner affiliation, but should be able to modify all other profile parameters.



6. After a successful signup, customers should receive a confirmation notice via email, containing a link to the login page and a generated password that can be modified once the customer logs in.

7. Customers should be able to search the available listings for properties that satisfy their search criteria, which could include type of property, number of bedrooms, etc. Search results should include summary information about each listing that satisfies the search criteria.

8. Authenticated customers should be able to retrieve details about a particular property. Anonymous visitors attempting to view property details should be redirected to the login page. There they can identify themselves if they have already registered, or they can follow a link to the signup page.

9. The details page should contain images and additional information about the property, including links for further inquiries.

So far, we have discussed requirements for the customer interface. However, as we mentioned in Chapter 8, the administrative interface is just as important as the interface exposed to the 'outside world.' Let us provide a brief summary of administrative requirements for this application:

1. Customer administration: select customer, reset customer password, change customer status (active, suspended, etc.), remove customer.

2. Partner administration: add new partner, specify custom markup (logos, background colors, etc.), specify partner selection rules, remove partner.

3. Listing administration: add a new listing, modify listing (change summary and detail information, add/remove image), remove listing.

4. Administrator authentication: access to the administrative interface should be internal and restricted to IP addresses within the company firewall.

While these requirements are not as detailed as they could be, they can form the starting point for our design and development. In the real world, we would create page mockups to demonstrate workflow scenarios.
Next, we would iteratively ‘nail down’ the precise application requirements. For the purposes of this chapter, we assume that this process did occur and resulted in the workflows and page layouts used in the rest of this chapter.

10.2 APPLICATION DEVELOPMENT ENVIRONMENT

Our goal is to provide a practical demonstration of application design principles. To that end, we want to employ an application development framework that represents state-of-the-art programming practices and paradigms. For this reason, we
will implement the VRLS application using Struts (the MVC application framework from the Apache Jakarta Project), along with the reference implementation of Sun's Java Standard Tag Library (JSTL).

This chapter is not a tutorial on Struts. The focus is on applying the principles of Web application architecture that we have been describing throughout this book. Struts is not the be-all and end-all of application frameworks; new frameworks will come along, and these principles can and should be transferable to them. There are also times when application architects are not free to decide on the underlying framework, and it is necessary to make the most of what is available. The hope is that these principles can be applied no matter what platform is used to create an application.

Struts is a stable MVC-based framework that is being used successfully in many different production environments on a variety of server platforms. Its action mapping abstraction is, in and of itself, one of Struts' biggest selling points. Even in the simplest Struts application, the use of action mapping makes it possible to keep the exposed URLs unchanged even when the underlying pages (e.g. JSP templates) responsible for producing the view change or move. This remains true even when switching to an entirely new view component architecture (e.g. Velocity templates).

Struts includes its own tag libraries that offer dynamic application functions (e.g. conditional and iterative processing). In particular, the Struts HTML tag library (struts-html) provides a bridge between HTML forms and Struts FormBean classes.
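To give a flavor of what such tag libraries look like in a page, here is a hypothetical JSP fragment using JSTL-style core tags and the expression language instead of embedded Java scriptlets. The customer and listings beans, and their properties, are invented for this sketch and are not taken from the VRLS source:

```jsp
<%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>
<%-- "customer" and "listings" are illustrative scoped beans --%>
<c:if test="${not empty customer}">
  <p>Welcome back, <c:out value="${customer.firstName}"/>!</p>
</c:if>
<ul>
  <c:forEach var="listing" items="${listings}">
    <li><c:out value="${listing.summary}"/></li>
  </c:forEach>
</ul>
```

The conditional, the loop, and the property references are all expressed declaratively, which is what makes this style more approachable for page designers than scriptlet-laden JSP.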
However, there exists a project separate from Struts, known as JSTL (JSP Standard Tag Libraries), which is endorsed by Sun and is the most advanced effort to date to simplify JSPs and make them more accessible to page designers. It employs consistent tags and a class-agnostic 'expression language' that is much simpler than embedded Java scriptlets. While struts-html is not part of JSTL, the most recent version of Struts provides a version of the struts-html tag library that supports the very same expression language used in core JSTL tags. It is this version of the struts-html tag library that we will employ in implementing our application.

Since both Struts and JSTL rest on top of J2EE, we chose to use the most up-to-date version of Sun's Java Development Kit (JDK version 1.4) and the J2EE environment (version 1.3, including Servlet API 2.3 and JSP 1.2). We selected the Jakarta Tomcat server to deploy our application. Tomcat is a stable open source server that supports these choices. It is also the official reference implementation of the Java Servlet API and the JSP specification.

Likewise, we use the MySQL relational database management system to support persistence. While MySQL is a fully functional RDBMS, it lacks a number of features common to sophisticated commercial database products, including referential integrity and callable statements (stored procedures). For this reason, both our database schema and our persistence functionality employ the lowest common
denominator of RDBMS capabilities. Refactoring this application for use with a commercial RDBMS (e.g. Oracle) might involve reworking parts of the schema to employ referential integrity, and taking advantage of stored procedures (which can greatly improve usability and performance) to support persistence. We are using well-defined and widely accepted standards, which ensure that the application is easily portable to other J2EE servers (e.g. BEA WebLogic, IBM WebSphere, Sun ONE, Macromedia JRun, etc.) and other database management systems (e.g. Oracle, Sybase, PostgreSQL). It should run on any operating system that supports Java (e.g. Solaris, Windows 98/NT/XP/2000, Linux, Mac OS X).

10.3 ANATOMY OF A STRUTS APPLICATION

Although it is not our intention to provide a tutorial on Struts, we will review the main organizing principles of a Struts application. In an MVC application, the entry point is the Controller component. Incoming requests are directed to the Controller, which serves as a 'traffic cop' that determines, based on the request context, which task should be performed next. These tasks are mapped to application use cases. The components that perform these tasks may be part of the core Controller module or distinct processing components in their own right. They include data access functions and additional processing to access and manipulate the Model, based on the current state and input parameters associated with the request. When the selected task is complete, the Controller determines whether it is necessary to perform another task, or to generate a response offering a specific presentation (the View) to the requestor. The presentation sent to the user may provide links back to the Controller, for further requests to perform additional tasks.

In a Struts application, MVC components are organized as follows:

• The Model is comprised of a set of well-defined JavaBeans. In complex applications, a separate business layer (e.g. EJBs) may communicate with back-end data sources to provide access to the Model implementation.

• The View components are provided by JSPs, although the Struts framework supports other alternatives (e.g. Velocity templates). In addition, there are view helper classes (subclasses of org.apache.struts.action.ActionForm) used to support interaction with HTML forms.

• The Controller function is performed by the ActionServlet, which is provided with the Struts distribution. The struts-config.xml configuration file provides action mappings enabling the ActionServlet to direct requests to application-specific components (called Actions) that implement individual processing tasks.
For each task, an action mapping (defined in the struts-config.xml configuration file) specifies its path, which is a URL defined relative to the servlet context root, and



import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class CustomerAuthCheckAction extends VrlsBaseAction {

    public ActionForward performAction(ActionMapping p_mapping,
                                       ActionForm p_form,
                                       HttpServletRequest p_request,
                                       HttpServletResponse p_response) {
        HttpSession session = p_request.getSession();
        if (session.getAttribute("customer") == null) {
            return (p_mapping.findForward("login"));
        } else {
            session.setAttribute("customer", null);
            return (p_mapping.findForward("logout"));
        }
    }
}

Figure 10.2  Example of an Action class

its type, which is the name of the Java class (a subclass of org.apache.struts.action.Action) associated with it. The action mapping also specifies a set of forwards, which are symbolic names mapped to URL paths (also defined relative to the servlet context root), or to other actions. (Global forwards may be defined and used as possible outcomes for any configured Action.)

When a request reaches the application, the controller servlet examines its URL to determine which Action class is to be executed. Depending on the result of its processing, the Action class selects one of the defined forwards by name. It consequently constructs and returns an instance of the ActionForward class that specifies the context-relative URL associated with the selected forward name. The controller servlet then routes processing to this URL, either by forwarding or redirecting (depending on the value of the optional redirect attribute that may also be specified in the action mapping). Figure 10.2 provides an example of how a simple Action class accomplishes all this.

If an action mapping defines an input attribute (Figure 10.3), that attribute represents the context-relative URL of a view component (or the name of a forward, which is mapped to a view component) that is responsible for the display of an HTML form used for data entry. This form's fields correspond to those defined in the subclass of org.apache.struts.action.ActionForm specified by the action mapping's name attribute. The data entered by the user in the displayed form will be validated if the action's validate attribute is set to 'true,' and if the custom ActionForm class has a validate() method. This method returns an ActionErrors object, which is a collection of every ActionError encountered during validation. If the returned ActionErrors object is empty, then the Action considers the form fields to have been successfully validated.
If it is not empty, then the form has not passed validation, indicating to the controller servlet that it should redisplay this view component to allow correction of invalid data. Messages associated with
named errors can be defined in the properties file named in the application parameter associated with the Struts controller servlet in the application’s web.xml file, usually named ApplicationResources.properties. The presence of an input attribute tells the controller servlet that the first time an action is processed, it should route processing to the view component directly or indirectly specified by this attribute, typically to display a data entry form. The validate() method does not get invoked the first time the form is displayed, because it may consider empty fields to be invalid. ActionForm classes allow for an optional reset() method that can be used to provide initial values to be displayed in the data entry form. This method would be used, for example, if you are not entering data for the first time (as you would when entering profile information for a new customer), but instead you are modifying an existing set of data (such as the profile of an already registered customer). In this case, the method would populate the fields of the ActionForm from the existing customer profile.
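The validate()/reset() contract just described can be sketched without the Struts classes themselves. In this framework-free analog, a list of error keys stands in for ActionErrors, and an empty list means the form passed validation; the LoginForm class, its fields, and its error keys are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// A framework-free sketch of the ActionForm validation contract:
// validate() returns a collection of errors, and an empty collection
// means the form data passed validation.
public class LoginForm {
    private String username = "";
    private String password = "";

    public void setUsername(String u) { username = u; }
    public void setPassword(String p) { password = p; }

    // Analogous to ActionForm.validate() returning ActionErrors;
    // the strings play the role of keys into a message resources file.
    public List<String> validate() {
        List<String> errors = new ArrayList<>();
        if (username.isEmpty()) errors.add("error.username.required");
        if (password.length() < 6) errors.add("error.password.tooshort");
        return errors;   // empty list => validation succeeded
    }

    // Analogous to reset(): repopulate fields with existing values
    // when editing previously entered data.
    public void reset(String existingUsername) {
        username = existingUsername;
        password = "";
    }
}
```

In Struts itself, a non-empty result would cause the controller servlet to redisplay the input view with the corresponding messages resolved from ApplicationResources.properties.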

10.4 THE STRUCTURE OF THE VRLS APPLICATION

Our application does not stray far from the general structure of a Struts application:

• The Controller is a custom subclass of the ActionServlet class distributed with Struts, which performs additional application-specific tasks, including partner identification.

• The View makes use of JSPs that do not embed Java code. Instead, they use the core JSTL tags and the new version of the Struts HTML tag library that supports the JSTL Expression Language. The pages that utilize form submission (e.g. login, profile, and search pages) have corresponding form beans (subclasses of ActionForm) associated with them.

• The Model is a small set of JavaBeans persisted in a relational database. The bean classes implement the CustomerProfile, Listing, and Partner interfaces.

The application configuration is defined in the struts-config.xml file shown in Figure 10.3. This file contains separate sections for defining form beans, global forwards, action mappings, and properties that tell the controller how to interpret the directives found in this file. For example, the <set-property> element found within the <controller> element towards the end of this file tells the controller that input attributes associated with actions are the names of forwards rather than explicit URL paths. We chose to use the /action/name URL format for defining actions, so that the URLs are more or less abstract. There is no dependency (at least as far as

Figure 10.3  The struts-config.xml configuration file for the VRLS application

action URLs go) on suffixes like *.do, *.jsp, etc. When next generation controller and view components, or even entirely new frameworks, come along, URLs like http://server/context/action/home are more reusable than URLs like http://server/context/home.do.
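The general shape of an action mapping of the kind described above is sketched below. The paths, class, and forward name follow the text; the form-bean name and other attribute details are our assumptions, and the book's actual Figure 10.3 listing may differ:

```xml
<!-- Sketch of one VRLS action mapping; attribute details are assumptions. -->
<action-mappings>
  <action path="/action/login"
          type="biz.vrls.struts.action.CustomerLoginAction"
          name="customerLoginForm"
          scope="request"
          validate="true"
          input="failure">
    <forward name="failure" path="/pages/main.jsp?name=login"/>
  </action>
</action-mappings>
```

The input attribute names the "failure" forward, illustrating the forward-name (rather than explicit URL path) convention discussed above.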


Table 10.1  Action mappings for the VRLS application (Action classes are in the biz.vrls.struts.action package)

/action/home — present home page
    Action: SuccessAlwaysAction
    Forwards: on success --> home.jsp

/action/authcheck — log in or log out depending on state
    Action: CustomerAuthCheckAction
    Forwards: on logout --> logout.jsp; on login: /action/login

/action/login — identify & authenticate
    Action: CustomerLoginAction; form bean: CustomerLoginForm
    Forwards: on failure (input) --> login.jsp; otherwise invoke reroute() method

/action/profile — sign up if new user; modify profile if logged-in customer
    Action: CustomerProfileAction; form bean: CustomerProfileForm
    Forwards: on failure (input) --> profile.jsp; on success --> profileConfirm.jsp

/action/search — browse for listings satisfying search criteria
    Action: CustomerSearchAction; form bean: CustomerSearchForm
    Forwards: on none (input) --> search.jsp; on many: /action/results; on one: /action/details

/action/results — view search results
    Action: SuccessAlwaysAction; model: List of Listings
    Forwards: on success --> results.jsp

/action/details — view listing details
    Action: CustomerSearchDetailsAction
    Forwards: on success --> details.jsp; on unauthorized: /action/login

/action/contact — send e-mail to realtor
    Action: CustomerContactAction; form bean: CustomerContactForm
    Forwards: on failure (input) --> contact.jsp; on success --> emailConfirm.jsp

Table 10.1 provides a summary of the contents of the struts-config.xml file, including action mappings, forwards, form beans, and view components. Note that the names of JSP pages found in this table differ from the names specified in the struts-config.xml file. The forward elements specify URLs that point to a single JSP page, /pages/main.jsp, with a query string parameter that provides the name of the target page. In other words, a path of /pages/main.jsp?name=home ultimately routes processing to /pages/partnerName/home.jsp, where partnerName is the name of the active partner associated with this session. The page names in this table are the ultimate target pages (e.g. home.jsp).

Since portions of the application are restricted to registered customers, we need a mechanism for identifying and authenticating customers when they access the application. We use forms-based authentication rather than HTTP authentication, because it provides more control and flexibility. Applications can use custom HTML form pages for transmission of credentials, and the persistence mechanism for user



credentials is under application control. In our application, user credentials are stored in a relational database as part of the customer's profile. For additional security, the passwords are not stored in plain text; they are stored as one-way hashes.
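The book does not show its hashing code, so the following is only a sketch of the one-way comparison involved: the hex digest is persisted at signup, and at login the submitted password is hashed again and the two digests are compared.

```java
import java.security.MessageDigest;

// Sketch of one-way password hashing. The original never sees the database;
// only its digest does, so a stolen table does not directly reveal passwords.
public class PasswordHasher {
    /** Returns a lowercase hex SHA-256 digest of the given password. */
    public static String hash(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (Exception e) {   // SHA-256 and UTF-8 are always available
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String stored = hash("secret");                       // persisted at signup
        System.out.println(stored.equals(hash("secret")));    // true: login succeeds
        System.out.println(stored.equals(hash("guess")));     // false: login fails
    }
}
```

Note that a bare unsalted digest like this is weak by modern standards: a production system would add a per-user salt and an intentionally slow hash function. The sketch only illustrates the one-way comparison described in the text.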

Authentication Roles

There are alternative approaches that may make HTTP authentication more attractive for some applications. In version 2.2 and later of the Java Servlet API, the web.xml file can contain <security-constraint> elements. Each <security-constraint> element may contain <web-resource-collection> elements (each defining a resource as a basic unit of authentication) and an <auth-constraint> element (defining who has access to a resource). A <web-resource-collection> element defines a resource by specifying its name and the URL pattern(s) that are associated with it. An <auth-constraint> element may contain any number of <role-name> elements specifying which roles (e.g. end user, administrator) will be allowed access to the defined resource. The mechanism for specifying the roles and associating them with users is specific to the application container. For example, Tomcat comes with built-in support for specifying user names, passwords, and role membership via realms, which can be configured either in an XML file (tomcat-users.xml) or in a relational database. Since our application requirements necessitated the flexibility of forms-based authentication, we did not pursue the usage of this functionality. Additionally, we did not want to introduce dependencies on the usage of a particular application container (i.e. Tomcat). This functionality is worth investigating if you have complex authorization requirements and you are already committed to using a particular application container.
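For illustration, a minimal security constraint of this kind might look like the following web.xml fragment. The resource name, URL pattern, and role name are invented for the example:

```xml
<!-- Hypothetical declarative constraint; names are illustrative only. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Registered Customer Pages</web-resource-name>
    <url-pattern>/action/profile</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>customer</role-name>
  </auth-constraint>
</security-constraint>
```

With such a constraint in place, the container itself challenges unauthenticated requests to the protected URL pattern and checks the authenticated user's role membership.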

10.4.1 Controller: ActionServlet and custom actions

Our VrlsActionServlet class is a subclass of the org.apache.struts.action.ActionServlet class, which is responsible for the initial processing and routing of requests (Figure 10.4). The process() method of this class locates the proper subclass of org.apache.struts.action.Action based on the mappings defined in the struts-config.xml file, and passes control to that Action's execute() method.

We did not define our Action classes as direct subclasses of org.apache.struts.action.Action. Instead, they are subclasses of our own biz.vrls.struts.action.VrlsBaseAction class, which is itself a subclass of org.apache.struts.action.Action. VrlsBaseAction overrides the Action class's execute() method to perform common processing functions and then invoke an abstract method, performAction(), which must be defined in subclasses. Consequently, application-specific functionality that is common to all actions can be specified either in VrlsActionServlet or in VrlsBaseAction. All other things being equal, it is much less intrusive to override Action than it is to override ActionServlet. While creating a custom base Action class is encouraged, creating

Figure 10.4  Action class hierarchy (javax.servlet.http.HttpServlet, extended by org.apache.struts.action.ActionServlet, extended by biz.vrls.struts.action.VrlsActionServlet; the custom classes add determinePartner() : Partner and performAction(...) : ActionMapping)



public class VrlsActionServlet extends ActionServlet
        implements ApplicationConstants {

    protected void process(HttpServletRequest p_request,
                           HttpServletResponse p_response)
            throws IOException, ServletException {
        HttpSession session = p_request.getSession();
        if (session.getAttribute("partner") == null) {
            session.setAttribute("partner", determinePartner());
        }
        ...
        super.process(p_request, p_response);
    }

    protected Partner determinePartner()
            throws IOException, ServletException {
        ...
    }
}

Figure 10.5  Fragment of the VrlsActionServlet class

a custom ActionServlet class, though not recommended, is sometimes justified and is not actively discouraged. This application provided us with circumstances that warranted the creation of a custom ActionServlet class. While the process() method of the ActionServlet class is invoked for every request, the execute() method of the custom Action is not invoked when initially displaying an input form. This has very practical implications. If we decided to perform partner identification in the VrlsBaseAction's execute() method, then whenever a user visited one of the form pages directly before visiting the home page, there would be no chance to identify the partner prior to displaying the form. At best, this would result in a strange user experience: going from a default presentation to a partner-specific one after submitting the form. Consequently, we chose to perform partner identification in the VrlsActionServlet's process() method, so that proper partner identification would be performed for all requests (Figure 10.5).

Let us move on to a brief discussion of individual tasks and their action mappings:

/action/home: biz.vrls.struts.action.SuccessAlwaysAction

• Entry page for site.
• Always display welcome page.
• Forwards:
  ◦ success: /pages/main.jsp?name=home -> /pages/.../home.jsp



The performAction() method for the SuccessAlwaysAction class always returns the ActionForward associated with 'success.' It is used by actions that have a 'trivial' nature, e.g. always routing to the home page, or always presenting search results.

/action/authcheck: biz.vrls.struts.action.CustomerAuthCheckAction

• Invoked when customer selects 'log in' or 'log out' from navigation bar.
• If customer is already logged in, returns the 'logout' forward and goes to the logout page.
• If customer is not logged in, returns the 'login' forward and performs the login action.
• Forwards:
  ◦ logout: /pages/main.jsp?name=logout -> /pages/.../logout.jsp
  ◦ login: /action/login

The CustomerAuthCheckAction class is designed as a mediator between 'login' and 'logout' actions. If the customer is logged in, its performAction() method invalidates the session and returns the ActionForward associated with 'logout,' which is mapped to the logout notification page. If the customer is not logged in, the performAction() method returns the ActionForward associated with the name 'login,' which is mapped to /action/login.

/action/login: biz.vrls.struts.action.CustomerLoginAction

• Invoked either by CustomerAuthCheckAction or by actions not supported for anonymous users.
• For input, displays login page.
• User enters user id and password for authentication.
• Validates and checks credentials against user database.
• On authentication failure, redisplays login page with error message(s).
• Once authenticated:
  ◦ Constructs CustomerProfile from user database.
  ◦ Updates CustomerProfile to reflect the date of the last visit.
  ◦ Maintains CustomerProfile object until logout.
  ◦ On success, invokes reroute() method from VrlsBaseAction, which forwards either to the home page (when invoked by CustomerAuthCheckAction), or to the original target (for actions not supported for anonymous users).
• Forwards:
  ◦ failure: /pages/main.jsp?name=login -> /pages/.../login.jsp

The CustomerLoginAction class uses the CustomerLoginForm form bean, referenced on the login.jsp page, to support specifying user name and password. A



user may initiate this action by following a link to 'authcheck,' or the application may perform this action if an anonymous user tries to use a function limited to signed-in registered customers. The input attribute of the action is set to 'failure,' which points to the login page. This page is presented when the action is first invoked, and again as long as the entered credentials do not match those of a registered user. Once the credentials match, the reroute() method inherited from the VrlsBaseAction class is invoked, to perform the originally intended task that required authentication.

/action/profile: biz.vrls.struts.action.CustomerProfileAction

• Invoked when customer selects 'sign up' or 'profile' from navigation bar.
• For input, displays profile page.
• Allows unregistered visitors to sign up by entering a new customer profile.
• Allows signed-in registered customers to modify their profiles.
• Provides a blank form to unregistered visitors.
• Provides pre-populated forms containing CustomerProfile data for signed-in customers.
• Forwards:
  ◦ failure: /pages/main.jsp?name=profile -> /pages/.../profile.jsp
  ◦ success: /pages/main.jsp?name=profileConfirm -> /pages/.../profileConfirm.jsp

The CustomerProfileAction class serves a dual purpose: to allow new users to enter profile information so that they can become registered customers, and to allow already registered customers to modify their profiles. The profile page contains conditional logic that causes it to present itself differently for each of these situations. The input attribute of the action is set to 'failure,' which routes to the profile page when the action is first invoked, and repeats this presentation until the entered information passes validation. At that point, the action's performAction() method returns the ActionForward associated with the name 'success,' which results in the presentation of a confirmation page.

/action/search: biz.vrls.struts.action.CustomerSearchAction

• Invoked when customers select 'search' from navigation bar.
• Provides data entry form for search criteria.
• If no results found, returns to the input page.
• If multiple results found, routes to 'results' action.



• If one result found, routes directly to the action for displaying details about a single listing.
• Forwards:
  ◦ failure: /pages/main.jsp?name=search -> /pages/.../search.jsp
  ◦ many: /action/results
  ◦ one: /action/details

The CustomerSearchAction class allows users to enter selection criteria for searching the listing database. The input attribute of the action is set to 'failure,' which routes to the search page when the action is first invoked, and again if the search returns no results. If the query produces multiple results, the action's performAction() method returns the ActionForward associated with the name 'many,' which is mapped to /action/results. If the query produces a single result, the action's performAction() method returns the ActionForward associated with the name 'one,' which is mapped to /action/details, bypassing the results page.

/action/details: biz.vrls.struts.action.CustomerSearchDetailsAction

• Invoked when customers follow a link from results.jsp.
• Also invoked when search produces a single result.
• Displays details about a particular listing.
• Uses a request parameter to identify the listing.
• Not supported for anonymous visitors.
• Forwards:
  ◦ success: /pages/main.jsp?name=details -> /pages/.../details.jsp
  ◦ failure: /pages/main.jsp?name=error -> /pages/.../error.jsp
  ◦ notauthorized: /action/login (global forward)

The CustomerSearchDetailsAction class is responsible for displaying details about a particular property listing. It may be referenced either explicitly or through the 'search' action. The action's performAction() method returns the ActionForward associated with the name 'success' if the user is a signed-in registered customer, and if the provided listingId parameter corresponds to a valid listing. If the user is not signed in, the action's performAction() method returns the global ActionForward associated with the name 'notauthorized,' which is mapped to the URL /action/login.

/action/images: biz.vrls.struts.action.ImageDisplayAction

• Invoked through image tags on the details.jsp page to display images for a particular listing.



• Only available to registered users (i.e. a registered user should not be able to send a URL for one of these images to a non-registered person and allow them to see it).
• Displays an image containing an error message to anonymous users.

/action/contact: biz.vrls.struts.action.CustomerContactAction

• Invoked when customers select 'contact us' from the navigation bar (or the listing details page).
• Provides input form for identifying users and sending messages to realtors.
• Once the form is filled in, e-mail is sent to the partner's e-mail contact address.
• Forwards:
  ◦ failure: /pages/main.jsp?name=contact -> /pages/.../contact.jsp
  ◦ success: /pages/main.jsp?name=emailConfirm -> /pages/.../emailConfirm.jsp

The CustomerContactAction class allows users to contact the broker to express interest in a particular listing, or simply to request further information about the broker. The input attribute of the action is set to 'failure,' which routes to the contact page when the action is first invoked, and redisplays the contact page on failed validation (i.e. if the e-mail address entered in the form is improperly formatted). If the user is a signed-in registered customer, the e-mail address field on the form is pre-populated with the e-mail address from the CustomerProfile. If the user was looking at details for a particular listing when this action was invoked, the subject field is pre-populated with a reference to the listing's ID. Once a valid address has been entered, the action's performAction() method sends e-mail to the referring partner's contact e-mail address and displays a confirmation page.
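The structure shared by all of the actions above can be sketched in plain Java: a base class whose execute() performs work common to every action and then delegates to an abstract performAction(). Struts types are replaced with plain strings here so the fragment compiles on its own, and all names and the partner-detection rule are illustrative:

```java
// Self-contained sketch of the VrlsBaseAction template-method structure.
abstract class BaseAction {
    /** Invoked by the controller for every mapped request. */
    public final String execute(String requestPath) {
        String partner = identifyPartner(requestPath);  // common pre-processing
        return performAction(partner, requestPath);     // action-specific work
    }

    protected String identifyPartner(String requestPath) {
        // Toy rule standing in for the application's real partner inference.
        return requestPath.startsWith("/abc") ? "abc" : "default";
    }

    /** Each concrete action supplies only its own logic. */
    protected abstract String performAction(String partner, String requestPath);
}

class HomeAction extends BaseAction {
    protected String performAction(String partner, String requestPath) {
        return "/pages/" + partner + "/home.jsp";   // forward target
    }
}

public class ActionDemo {
    public static void main(String[] args) {
        System.out.println(new HomeAction().execute("/abc/action/home"));
        // prints /pages/abc/home.jsp
        System.out.println(new HomeAction().execute("/xyz/action/home"));
        // prints /pages/default/home.jsp
    }
}
```

Because execute() is defined once in the base class, shared concerns stay in one place while each subclass remains a small, focused performAction() implementation.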

10.4.2 View: JSP Pages and ActionForms

We need the ability to present a number of different pages throughout the application. The home page, search results page and listing details page are designed to display information, while the login, profile, search, and contact pages are interactive forms (Figure 10.6). We also need the ability to present partner-specific versions of these pages, with each set of pages having a 'look and feel' that reflects that of the partner's own branded Web site.

Design alternatives for supporting partner-specific presentations vary greatly. They range from the sharing of all page templates, to maintaining separate sets of page templates for each partner. Sharing templates between partners limits the degree of customization to style sheets (and possibly custom images such as corporate logos). We made the choice to maintain separate sets of templates in



Figure 10.6  ActionForm hierarchy and associated JSPs (subclasses of org.apache.struts.action.ActionForm, with pages such as login.jsp and home.jsp)

order to achieve the greatest flexibility, and to enable partners to create and upload their own templates. All of these pages have a lot in common: they reference the same JSP taglibs, display the same navigation bar (with variations), and have the same general look and feel. Clearly, something needs to be done to reduce redundancy. Our first step in this direction was to create an include.jsp page (Figure 10.8), which is included in all other pages. This page defines tag libraries and initializes shared variables that define commonly referenced URLs and paths used for other shared page components such as the navigation bar. To support co-branding, our original intent was to create a set of ‘root’ pages, each of which would embed its corresponding custom page within a directory containing custom pages for a specific partner. The VrlsActionServlet would have, by this point, determined who the active partner was and placed the appropriate Partner object in the session. In other words, /pages/home.jsp would embed ${sessionScope.partner.code}/home.jsp. However, as we discovered during the course of implementing this strategy, these root pages were practically identical, and the only difference between them was the name of the custom page to be embedded. Thus, we re-factored our design



Figure 10.7  Shared main.jsp file

to use one common root page, main.jsp, which determined which custom page should be embedded via the query string parameter name. In other words, a request to /pages/main.jsp?name=profile would embed /pages/abc/profile.jsp, assuming abc was the code associated with the active partner.

The main.jsp page is shown in Figure 10.7. As mentioned previously, it embeds include.jsp. Using JSTL tags, it selects and embeds a partner-specific page based on session information and request parameters, i.e. ${sessionScope.partner.code}/${param.name}.jsp. It also references a partner-specific stylesheet found in the partner directory (${sessionScope.partner.code}/vrls.css).

Every custom page should include a navigation bar, but apart from that, we leave it up to each partner to design their page templates as they see fit. The navigation bar should include the following links:

1. /action/home for the home page,
2. /action/authcheck for the authorization action that eventually routes to the login or logout page (depending on whether the customer is logged in or not),
3. /action/profile for the profile data entry action that routes to the profile page for both new customer profile entry and existing customer profile modification,
4. /action/search for the search form page, and
5. /action/contact for the customer contact page.

It does not make sense for the action that displays search results, /action/results, to be included in the navigation bar, since it should only be accessible through the search page. By convention, each page should not include a link to itself in the navigation bar (which is one reason a common include page or tag for the navigation bar was not implemented).
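Under these conventions, the heart of main.jsp can be sketched as follows. The choice of JSTL tags and the surrounding markup are our assumptions; the book's actual listing in Figure 10.7 may differ:

```jsp
<%-- Sketch of the shared root page; tag choices are assumptions. --%>
<%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>
<%@ include file="include.jsp" %>
<html>
  <head>
    <%-- Partner-specific stylesheet from the partner's directory --%>
    <link rel="stylesheet"
          href="<c:url value='${sessionScope.partner.code}/vrls.css'/>"/>
  </head>
  <body>
    <%-- Embed the partner page named by the query string parameter --%>
    <c:import url="${sessionScope.partner.code}/${param.name}.jsp"/>
  </body>
</html>
```

A request such as /pages/main.jsp?name=home would thus pull in abc/home.jsp for a session whose partner code is abc.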



Figure 10.8  Shared include.jsp file

The labels associated with links in the navigation bar appear to be obvious: 'home' for the home page, 'login' for the login page, 'profile' for the profile page, etc. However, these labels should depend on the visitor's 'state': logged in or logged out. When dealing with an anonymous visitor (someone who has not signed in), it is appropriate for the authorization link to have a label of 'login,' but the profile link should say 'sign up,' since the visitor is not modifying an existing profile (they don't have one yet), but rather signing up for the first time. Likewise, the label for the authorization link should say 'log in' for an anonymous customer who has not logged in, but 'log out' for a logged-in customer. Thus, the set of links in the navigation bar is static, but the set of labels is not.

As you can see in Figure 10.8, the include.jsp page defines the set of links using the JSTL <c:url> tag, storing them as a set of session attributes. This tag is intelligent enough to create a context-relative link, and to append appropriate parameters to support URL rewriting when cookie support is not available from the browser. The set of labels is defined elsewhere, in the VrlsActionServlet class. A custom object, an instance of the biz.vrls.util.AppTextLabels class, is stored as a session attribute with the name 'navbar.' This class implements the java.util.Map interface, and maintains two sets of label mappings: one for the logged-in state, and another for the anonymous state. The set of labels is retrieved from the ApplicationResources.properties file, where Struts also looks for error message mappings and other localized application properties. Labels that are supposed to have different values depending on the customer state are defined twice, once using a 'vanilla' name (e.g. app.navbar.profile), and again adding the suffix .auth for logged-in customers (e.g. app.navbar.profile.auth), like this:



app.navbar.home=home
app.navbar.authcheck=log in
app.navbar.authcheck.auth=log out
app.navbar.profile=sign up
app.navbar.profile.auth=profile
app.navbar.search=search
app.navbar.contact=contact us
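One way to build the two sets of label mappings from such a properties file can be sketched as follows. This is a simplified stand-in: the book's AppTextLabels implements java.util.Map, whereas this sketch exposes a simple lookup method.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Sketch: ".auth" entries override the vanilla labels for logged-in
// customers; all other entries apply to both visitor states.
public class AppTextLabels {
    private final Map<String, String> anonymousLabels = new HashMap<>();
    private final Map<String, String> loggedInLabels = new HashMap<>();

    public AppTextLabels(Properties props) {
        // First pass: vanilla labels apply to both states.
        for (String key : props.stringPropertyNames()) {
            if (!key.endsWith(".auth")) {
                anonymousLabels.put(key, props.getProperty(key));
                loggedInLabels.put(key, props.getProperty(key));
            }
        }
        // Second pass: ".auth" variants override the logged-in labels.
        for (String key : props.stringPropertyNames()) {
            if (key.endsWith(".auth")) {
                String base = key.substring(0, key.length() - ".auth".length());
                loggedInLabels.put(base, props.getProperty(key));
            }
        }
    }

    /** Returns the label for a link, given the visitor's login state. */
    public String get(String key, boolean loggedIn) {
        return (loggedIn ? loggedInLabels : anonymousLabels).get(key);
    }
}
```

With the properties above, get("app.navbar.profile", false) yields "sign up" while get("app.navbar.profile", true) yields "profile".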

From these specifications, two Maps are built and maintained by the AppTextLabels class, one for the logged-in state, and one for the anonymous state. The isLoggedIn() method in the biz.vrls.utils.SessionUtils class is invoked to determine which set of label mappings should be displayed.

Let us move on to the discussion of default templates provided as part of the application:

home.jsp

• Acts as a welcome page.
• Personalized to display customer name.
• Contains navigation bar to link to other important application functions.

This simple page displays static information and does not contain a form for interactive data entry.

login.jsp and CustomerLoginForm

• Simple form for authenticating users by entering user name and password.
• Contains the navigation bar to link to other important application functions.

A fragment of this page is shown in Figure 10.9. The CustomerLoginForm bean does not directly correspond to a Model component, but refers indirectly to the CustomerProfile object. The CustomerLoginForm's validate() method is invoked on form submission. The page is redisplayed on validation failure. Validation error messages, if any, are displayed through the <html:errors> tag. Note how the struts-config.xml file (Figure 10.3) associates the CustomerLoginForm class with the /login action, which uses the login.jsp page for input.

Note that in our sample application we send these credentials 'in the clear' (over a non-secure connection). In practice, this action should always operate over a secured connection, especially if sensitive personal information is included in the transmission.



Figure 10.9  Template page fragment for login.jsp (a form with User ID and Password fields)
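A hedged reconstruction of the kind of markup this fragment contains, using the Struts HTML taglib, might look like the following. The taglib URI, field names, and submit label are our assumptions:

```jsp
<%-- Sketch of a Struts login form; names are illustrative. --%>
<%@ taglib uri="http://jakarta.apache.org/struts/tags-html" prefix="html" %>
<html:errors/>
<html:form action="/action/login">
  User ID:  <html:text property="username"/><br/>
  Password: <html:password property="password"/><br/>
  <html:submit value="Log in"/>
</html:form>
```

The html:text and html:password tags bind their property attributes to the matching getters and setters on the CustomerLoginForm bean, and html:errors renders any messages collected by validate().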

profile.jsp and CustomerProfileForm

• Structure is similar to login.jsp and CustomerLoginForm.
• Form for entering personal information as part of user registration.
• Also used for modifying personal information in existing customer profiles.
• CustomerProfileForm does directly correspond to the CustomerProfile object, but does not include getters and setters for all of its attributes.
• Contains navigation bar to link to other important application functions.

Note how the struts-config.xml file (see Figure 10.3) associates the CustomerProfileForm class with the /profile action, which uses the profile.jsp page for input. Remember that the label text associated with the link to this action depends on whether or not the visitor is logged in.




profileConfirm.jsp

• The structure is similar to home.jsp.
• Displayed upon 'success' of /profile action: confirms successful entry of personal information.
• Contains navigation bar to link to other important application functions.

search.jsp and CustomerSearchForm

• The structure is similar to login.jsp and CustomerLoginForm.
• Form for entering search criteria for browsing the property listings database.
• Contains navigation bar to link to other important application functions.

Note how the struts-config.xml file (see Figure 10.3) associates the CustomerSearchForm class with the /search action, which uses the search.jsp page for input.

results.jsp

• Designed to display search results based on search criteria entered on the search.jsp page, by iterating over a List of biz.vrls.listing.Listing objects placed in the session by the CustomerSearchAction class (associated with the /search action).
• Each displayed listing provides a link to the /details action, which is implemented by CustomerSearchDetailsAction, for individual results.
• Contains navigation bar to link to other important application functions.

details.jsp

• Designed to display details about individual properties.
• Queries an instance of biz.vrls.listing.Listing placed on the session by the CustomerSearchDetailsAction class (associated with the /details action) to display attributes of a particular real estate property.
• Contains navigation bar to link to other important application functions.

contact.jsp, CustomerContactForm, and emailConfirm.jsp

• Mechanism for customers to contact brokers with general questions or queries about specific listings.



• emailConfirm.jsp serves as confirmation page.
• The structure is similar to profile.jsp, CustomerProfileForm, and profileConfirm.jsp.
• If customer is logged in, e-mail address is pre-populated; otherwise anonymous visitors can enter their e-mail addresses manually.
• If reached from a link on the listing details page, subject is pre-populated with mention of specific listing ID.
• Contains navigation bar to link to other important application functions.

Note how the struts-config.xml file (see Figure 10.3) associates the CustomerContactForm class with the /contact action, which uses the contact.jsp page for input. The CustomerContactForm bean does not directly correspond to a Model component.

10.4.3 Model: JavaBeans and Auxiliary Service Classes

Our model (shown in Figure 10.10) includes beans that implement interfaces associated with the three main classes of objects in our application: CustomerProfile, Partner, and Listing. Each of these interfaces extends three common interfaces: Identifiable, Describable, and Logged.

The CustomerProfile bean is designed to store information about a customer. Much of this information comes from user input provided during the signup process. The most common use case for this process has a visitor following the 'profile' link in the navigation bar, which routes to /action/profile, ultimately displaying the profile.jsp page. As you can tell by looking at the struts-config.xml file, the 'profile' action is associated with the CustomerProfileForm bean.

It may seem reasonable to use the same class for both the model and the form bean, but it is not a good idea. If the CustomerProfile bean were used as the form bean, hostile users could figure out the names of bean properties that are not exposed to the outside world, and construct HTTP requests that reset these properties and jeopardize the integrity of our application. In a way, CustomerProfileForm acts as a 'firewall' for CustomerProfile: users do not have direct access to setter and getter methods on the CustomerProfile bean. Without this separation, we would be relying on 'security through obscurity,' which is a very dangerous practice.

The Partner bean stores partner-specific information, including the partner id and URL prefix that are necessary for inferring partner affiliation for new visitors. Many of its properties are populated through the PartnerDataForm bean (which is part of the administrative interface left as an exercise for our readers).

The Listing bean is designed to represent individual real estate properties. Details about individual homes are populated interactively, through the ListingForm bean



Figure 10.10  Model class hierarchy (the Identifiable, Describable, and Logged interfaces are extended by the CustomerProfile, Partner, and Listing interfaces; auxiliary service interfaces supply persistence methods, i.e. retrieve and persist operations for each bean type, and low-level data access, i.e. DataSource lookup, connections, and query execution)

that is exposed through the administrative interface. (As we already mentioned, the implementation of the administrative interface is left as an exercise for our readers, though a brief discussion of its design is provided later in this chapter.) Our design supports model persistence, implemented through auxiliary service classes— DataAccessService and DomainService —that are discussed in the
following section. On successful login, an instance of CustomerProfile is populated from the database and stored on the session. This caches customer information for the duration of the browser session. A modification to one or more attributes of the CustomerProfile instance causes a ‘write through’ to the database to maintain data integrity. The Partner bean is populated from the database when a new session is established. The VrlsActionServlet has a method (determinePartner()) that figures out partner association. This association may be modified on login if the initial inference about partner identity was incorrect. Modifications are handled in the same way as with CustomerProfile. Strictly speaking, it is not necessary to store the Listing bean on a session because of its transient nature. We do it to simplify processing in the details.jsp page, which refers to this session attribute to determine which listing should be displayed. We also make use of it in the contact.jsp page: if this attribute exists and is non-null, its ID is included in the subject of the e-mail to be sent to the partner.

10.5 DESIGN DECISIONS In the course of building this application, we made a number of critical design decisions. Here we discuss the rationale behind them. In some cases, we also list alternatives and areas where there is room for improvement.

10.5.1 Abstracting functionality into service classes In designing this application, we abstracted various functions into service classes in the biz.vrls.services package. The DataAccessService class encapsulates the acquisition of database connections and the execution of SQL queries (for both selection and update). The DomainService class provides methods to retrieve and persist the Model components (CustomerProfile, Partner, Listing) associated with the application. The EmailService class obtains a javax.mail.Session through the Web server’s JNDI lookup facilities, and uses it to process e-mail. Service classes exploit the singleton pattern, which ensures that only one instance of a class is present in a system. In Java, this pattern is implemented by providing a static method, getInstance(), that is the only way to access an instance of the class. There are no public constructors for the class; the getInstance() method returns a static member variable (instantiated using a private constructor) that represents the single instance of the class. Applications use methods found in service classes by calling the static getInstance() method and invoking instance methods on the returned object, e.g.:



DomainService ds = DomainService.getInstance();
Listing listing = ds.retrieveListingById(1234);

or

Listing listing = DomainService.getInstance().retrieveListingById(1234);
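As a sketch of the idiom itself (this stand-in DomainService returns canned data instead of performing a JDBC lookup, and the retrieveListingTitleById method is purely illustrative):

```java
// Minimal sketch of the singleton pattern used by the VRLS service classes.
// The single instance is held in a static member created via the private
// constructor; getInstance() is the only way to obtain it.
public class DomainService {

    private static final DomainService INSTANCE = new DomainService();

    private DomainService() {
        // private constructor: no outside instantiation
    }

    public static DomainService getInstance() {
        return INSTANCE;
    }

    // Illustrative stand-in for a real retrieval method backed by a database.
    public String retrieveListingTitleById(int id) {
        return "Listing #" + id;
    }

    public static void main(String[] args) {
        // Every call to getInstance() yields the same shared object.
        System.out.println(DomainService.getInstance()
                == DomainService.getInstance()); // prints "true"
    }
}
```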

BENEFITS

1. Code simplification—for domain objects, retrieval or persistence is accomplished through a single method call within the DomainService class. For other database functions, queries can be executed directly via methods in the DataAccessService class, with a CachedRowSet (a disconnected, cacheable implementation of the RowSet interface, which extends java.sql.ResultSet) returned for processing by the calling class. Developers do not need to know the details of how persistence and retrieval take place.

2. Flexibility—the Model components are designed explicitly around interfaces rather than concrete implementation classes, and the return values for the methods in these classes are interfaces. This makes the application independent of specific implementations of the Model components.

3. Extensibility—since the code for these functions is in one place, instead of being scattered throughout the application, maintenance of this code is simplified. Any or all of these service classes can be replaced with a new version that performs its operations differently. This opens the door for versions that make use of more sophisticated persistence mechanisms (e.g. EJB, JDO).

ALTERNATIVES/IMPROVEMENTS

1. We could have defined our services as interfaces, and used the Factory pattern to create instances of classes that implement each interface. The particular implementation class to be used can be specified in a properties file, providing flexibility in configuring the application at deployment time.

2. We also could have used Fulcrum, a Jakarta services framework that used to be part of the Velocity template engine but is now a project in its own right. For those implementing complex services, Fulcrum has already laid down a foundation and done a large part of the work.

3. Finally, we could have added smart caching functionality to the DomainService class. We discuss this further in Section 10.6.4.
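The factory alternative might be sketched as follows; the ServiceFactory class, the DomainServiceIf interface, and the domain.service.class property are hypothetical names for illustration, not part of the book's code:

```java
import java.util.Properties;

// A service defined as an interface...
interface DomainServiceIf {
    String retrieveListingTitleById(int id);
}

// ...with a default implementation (a real one would perform database access).
class DefaultDomainService implements DomainServiceIf {
    public String retrieveListingTitleById(int id) {
        return "Listing #" + id;
    }
}

// The factory instantiates whichever implementation class the properties
// file names, so the binding can be changed at deployment time.
public class ServiceFactory {

    public static DomainServiceIf createDomainService(Properties config)
            throws Exception {
        String className = config.getProperty("domain.service.class",
                                              "DefaultDomainService");
        return (DomainServiceIf) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties(); // empty: fall back to the default
        DomainServiceIf ds = createDomainService(config);
        System.out.println(ds.retrieveListingTitleById(42)); // prints "Listing #42"
    }
}
```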



10.5.2 Using embedded page inclusion to support co-branding The VRLS application supports presentation co-branding based on the referring partner. This means that if a visitor comes to the VRLS application through a particular partner broker’s Web site, the pages will have a look and feel that conforms to the layout of that site. The method is simple. There is one root JSP page, /pages/main.jsp, which contains instructions for including common JavaScript files and CSS stylesheets. The ‘name’ parameter included in the query string specifies the action being invoked—‘home,’ ‘login,’ etc. The code associated with the partner (who has already been identified by the VrlsActionServlet) provides the name of the directory in which custom pages are to be found. Thus, a visitor who came to the site through a partner whose code is ‘partner1,’ attempting to access the URL http://host/context/action/home, would be routed to http://host/context/pages/main.jsp?name=home which would embed http://host/context/pages/partner1/home.jsp. Since the root page acts as a ‘wrapper’ for the embedded page, the latter is assumed to contain only HTML body content (i.e. not the head part of the HTML document).
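The inclusion logic might be sketched as follows; the session attribute name partnerCode and the exact markup are assumptions for illustration, not the book's actual main.jsp:

```jsp
<%-- Hypothetical sketch of /pages/main.jsp: wrap the partner-specific,
     body-only page selected by the partner code and the 'name' parameter. --%>
<%
    String partnerCode = (String) session.getAttribute("partnerCode");
    String pageName = request.getParameter("name");
%>
<html>
  <head>
    <%-- common JavaScript includes and CSS stylesheet links go here --%>
  </head>
  <body>
    <jsp:include page='<%= "/pages/" + partnerCode + "/" + pageName + ".jsp" %>'
                 flush="true" />
  </body>
</html>
```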

BENEFITS

1. Simplicity—our approach uses a single root JSP page that embeds a custom partner page given a partner directory name and a function. The URL /pages/main.jsp?name=home refers directly to /pages/partnerName/home.jsp.

2. Extensibility—the addition of new partners does not require any modifications to the application. It is simply a matter of adding a partner to the database, creating a directory to hold partner JSPs and images, and creating or uploading those JSPs and images.

ALTERNATIVES/IMPROVEMENTS

1. We could have eliminated virtually all the HTML in the root page and had it simply embed an entire page. This would mean that all inclusion and common functions would need to be replicated in every custom page.

2. Alternatively, we could have created one set of common JSP pages and put placeholders in them to facilitate co-branding customization. The placeholders could have been used to change a small set of common elements like page background color, URL for the logo image, etc., using substitution parameters provided in a partner configuration file or in attributes of the Partner object itself. This approach would indeed work if all we wanted was this sort of limited customization capability.



partner.bgcolor=#999999
partner.logo=http://partner1.com/images/logo.gif
...

3. One other possibility is to use Tiles, another Jakarta framework that partners with Struts to provide JSP template functionality. The Tiles framework lets you create JSP templates that define the general page layout as a set of components (known as tiles). Other JSPs can use this layout, and specify which external resources should be used to fill in the template components, by including a <tiles:insert> tag (functionally similar to a <jsp:include> tag). When used in conjunction with Struts, Tiles layouts (and the URL paths they are associated with) can be configured directly in an XML file. The article referenced at the end of this chapter provides a good introduction to Tiles.

10.5.3 A single task for creation and modification of customer profiles There are two discrete tasks in this application that were combined into one: the creation of a new customer profile by an anonymous user (during signup), and the modification of an existing customer profile by a registered user. While these tasks are similar, they are obviously not equivalent. For example, the signup process allows users to select a login identifier, but the modification process does not allow them to change this identifier once it has been selected. We could have chosen to treat each function as its own task, with its own Action class and its own view components (JSP pages and ActionForm view helper classes). Instead, we chose to concentrate on what these two functions have in common, and create one Action class and one set of view components. The CustomerProfileAction class determines whether the user is logged in. If so, it considers this a task that modifies an existing profile. Otherwise, it considers it a task to create a new one. If the task is to modify an existing profile, the HTML form is populated with current profile values, and conditional logic within the JSP page presents the login name as a fixed text field rather than a form input field. If the task is to create a new profile, the form is initially displayed in an unpopulated state, including a form input field for a user-selected login name.

BENEFITS

1. Minimizes redundancy—if we created separate tasks and separate sets of view components, there would have been substantial duplication, e.g. two form pages with mostly redundant fields, and two ActionForm beans. Our approach emphasizes what the two processes have in common rather than focusing on what makes them different. In doing so, the number of objects that must be created and maintained is reduced.

ALTERNATIVES

1. We could create separate Action classes, ActionForm beans, and JSP pages for each of these functions. Whether this is a good idea depends on how much difference there is between the views and processing for each task. Other applications may have requirements that cross this threshold.

10.6 ENHANCEMENTS No application is ever complete. Even if you successfully build an application that fulfills all the specified requirements, you can bet that these requirements will change after the application is deployed (probably even before!). This application is no exception. Since it was designed as a tutorial, some of the requirements were deliberately not implemented, or were implemented only partially. The suggestions in this section cover enhancements over and above the original requirements, as well as the steps necessary to implement the requirements that were left unimplemented or only partially implemented.

10.6.1 Administrative interface Although we strongly emphasized that an application is not complete without an administrative interface, we did not make the implementation of the administrative interface available for download. The application package includes SQL queries for adding partners and listings to the database, but no mechanism (other than manual execution of SQL queries) for updating the database to add partners or listings. The administrative interface should have its own authentication scheme. In other words, the mechanism used to identify and authenticate administrators must be separate from the one used to identify and authenticate customers. If you want an interface that employs one fixed administrative password shared by all administrators, you can include that (preferably encrypted) in the application resources file. It would be more thorough (and more secure) to provide a database table (similar to VRLS_CUSTOMER_PROFILE_DATA) that is used for administrator authentication. Ideally, the administrative interface should be a separate application, installed with a distinct servlet context that is not associated with the main application. It would naturally require its own set of actions and view components. Figure 10.11 offers a sample struts-config.xml file for the administrative interface. It defines actions to perform login, partner additions and modifications,



[Figure 10.11 Sample struts-config.xml file for the administrative interface (listing not reproduced here)]

and listing additions and modifications. There should be mechanisms for selecting partners or listings that need to be modified. For selecting partners (assuming the number of partners was not huge), there could be a form with a dropdown box to allow partner selection. Since the number of listings can be expected to be much larger, a more sophisticated mechanism is needed for listing selection. The simplest approach would be to allow the administrator to type in a listing ID manually, but this would be prone to error. The same mechanisms provided for searching in the main application can be transplanted into the administrative interface to serve this purpose. Administrators could enter search criteria via a form on the search page. If the search produced many results, the /results action would present the results page, with links to the /details action for each listing. If the search produced only one result, processing would be routed directly to the /details action which would present the details page for that listing. In contrast to the main application, where the details page is a read-only presentation of information about a listing, the administrative details page is a form that could be used for either modifying an existing listing or entering a new one. The administrative interface should employ its own base Action class, similar to the VrlsBaseAction class associated with the main application but tailored to the needs of the administrative interface. Since virtually all actions in the administrative



interface (with the exception of the login function) require that the administrator be logged in, this base class could return an ‘unauthorized’ forward whenever a user who is not logged in attempts access, using a global forward to route directly to an error page.

10.6.2 Enhancing the signup process through e-mail authentication One of the requirements associated with this application was the ability for new users to sign up and create new customer profiles. Our implementation allows users to enter all the information needed to populate a CustomerProfile object, including their chosen login name and password. However, application requirements specify that new users should not be allowed to specify their passwords. They should be able to enter all required information, including e-mail addresses, but this information should not include passwords. Instead, once the form has passed validation, the application should send a confirmatory e-mail to the address that was entered, containing a random password automatically generated by the application. The user, upon receiving the e-mail at the specified address, would return to the application, entering their chosen login name and the provided password. Once they have successfully logged in, only then could they change the password to one of their own choosing, using the application’s profile modification functionality. This is a more secure method of enrolling new users than the one we have built into the application. In its current state, the application allows someone to enter an invalid e-mail address, or someone else’s valid e-mail address, with impunity. Requiring that new users enter a valid e-mail address, where they will receive a message containing the password they need to log in, ensures that we have verified the identity of our enrolling user. Currently, the application includes a form field on the signup page in which new users can enter their password of choice. They can then proceed to the login page to provide their login name and password. To make this process more secure, the following steps need to be taken: 1. Remove the password field (and the password confirmation field) from the signup page. 
Note that these fields are still needed when modifying profiles for existing customers, so the profile.jsp page must be modified appropriately to support this. 2. The profile confirmation page that users are routed to after the profile form has been validated (profileConfirm.jsp) should provide an indication to the new user that they should expect an e-mail containing their password, which they can change after they connect to the application.



3. After successful form validation (but before routing to the profile confirmation page), the application should use the EmailService class to send an e-mail to the provided address. The e-mail should contain the random password generated by the application.

Proper Profile Password Processing Note also that the right way to perform customer profile modification would be to require the current password to be entered, making it a pre-condition for the processing of changes. Thus, three password fields (all initially blank) would be required on the form for profile modification: one for the current password, one for the desired new password, and one more to provide confirmation of the desired new password. If the current password field is incorrect, the form should fail validation. Otherwise, if the two ‘new password’ fields are blank, the password should not be changed, but other changes should be processed. Changes to the password should only be processed when the current password is entered correctly and when both new password fields are filled in and match each other.
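These rules can be captured in a small validation routine; the class and method names below are hypothetical, and a plain-string comparison stands in for the password-hash check a real implementation would perform:

```java
// Hypothetical sketch of the password-change rules for profile modification.
// A real implementation would compare hashes, not plain strings.
public class PasswordRules {

    public enum Outcome { REJECTED, UNCHANGED, CHANGED }

    public static Outcome validate(String actualCurrent,
                                   String enteredCurrent,
                                   String newPassword,
                                   String newPasswordConfirm) {
        // Pre-condition: the current password must be entered correctly.
        if (!actualCurrent.equals(enteredCurrent)) {
            return Outcome.REJECTED;
        }
        // Both new-password fields blank: keep the old password,
        // but let other profile changes proceed.
        if (newPassword.isEmpty() && newPasswordConfirm.isEmpty()) {
            return Outcome.UNCHANGED;
        }
        // Otherwise both fields must be filled in and must match.
        if (newPassword.isEmpty() || !newPassword.equals(newPasswordConfirm)) {
            return Outcome.REJECTED;
        }
        return Outcome.CHANGED;
    }

    public static void main(String[] args) {
        System.out.println(validate("secret", "secret", "newpw", "newpw")); // prints "CHANGED"
        System.out.println(validate("secret", "wrong", "newpw", "newpw"));  // prints "REJECTED"
        System.out.println(validate("secret", "secret", "", ""));           // prints "UNCHANGED"
    }
}
```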

10.6.3 Improving partner recognition through a persistent cookie One of the problems with the current design is that customers returning to the application may not have their partner affiliations recognized until after they sign in. Some approaches to partner identification make this process easier (e.g. using a sub-domain strategy to always identify the partner for requests to http://rrr.vrls.biz as rrr). Using the referring URL for partner identification is more problematic, especially on subsequent visits when the customer comes directly to the site (i.e. there is no referring URL). Saving a persistent cookie to identify the partner would alleviate this problem. This could be accomplished by having the VrlsActionServlet include a Set-Cookie header in its generated responses once the user has successfully signed up with a particular partner. This cookie should identify the application’s domain and set a value of partnerCode=abc (assuming the partner’s code is ‘abc’) and an expiration date far into the future (e.g. six months to a year). Subsequent requests from this customer (from the same browser on the same computer) will include a Cookie header providing this name/value pair. The VrlsActionServlet’s determinePartner() method should be enhanced so that it looks first for this cookie before trying to determine the partner via the Referer header. Note that other information can be persisted in this cookie, including the customer’s login name and password, which would cause the customer to be identified and logged in automatically. We can do this, but best practices dictate that we do not do so, unless customers have explicitly elected (via a checkbox on the profile



page) to keep this information in a persistent cookie so that they can be logged in automatically the next time they visit the site.
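The cookie-first lookup might be sketched as follows, with raw header strings standing in for the servlet request; the class name, the cookie parsing, and the host-prefix fallback rule are all illustrative assumptions:

```java
// Hypothetical sketch of the enhanced determinePartner() logic: check the
// persistent partnerCode cookie first, then fall back to the Referer header.
public class PartnerResolver {

    // Parse "partnerCode=abc" out of a raw Cookie header, if present.
    static String partnerFromCookieHeader(String cookieHeader) {
        if (cookieHeader == null) return null;
        for (String pair : cookieHeader.split(";")) {
            String[] kv = pair.trim().split("=", 2);
            if (kv.length == 2 && kv[0].equals("partnerCode")) {
                return kv[1];
            }
        }
        return null;
    }

    // Fallback: infer the partner from the referring URL's host, e.g.
    // "http://partner1.com/..." -> "partner1" (illustrative rule only).
    static String partnerFromReferer(String refererHeader) {
        if (refererHeader == null) return null;
        try {
            String host = java.net.URI.create(refererHeader).getHost();
            return (host == null) ? null : host.split("\\.")[0];
        } catch (IllegalArgumentException e) {
            return null;
        }
    }

    public static String determinePartner(String cookieHeader, String refererHeader) {
        String code = partnerFromCookieHeader(cookieHeader);
        return (code != null) ? code : partnerFromReferer(refererHeader);
    }

    public static void main(String[] args) {
        System.out.println(determinePartner("JSESSIONID=1; partnerCode=abc", null)); // prints "abc"
        System.out.println(determinePartner(null, "http://partner1.com/listings"));  // prints "partner1"
    }
}
```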

10.6.4 Adding caching functionality to the DomainService class Many different kinds of caching can be used in a Web application. The kind of caching we are talking about here is not Web caching but object caching, using the DomainService class. Methods that store and retrieve instances of CustomerProfile, Partner, and Listing make a database call every time, which may be very costly. It is relatively simple to cache all retrieved objects in a Map that is maintained as an instance variable in the DomainService class. The key associated with an entry in the Map is the identifying field used to retrieve the object, and the value is the object itself. (For retrieveById methods, the ‘id’ field is generally an integer, so the key must be converted into an object using the Integer wrapper class, since the java.util.Map interface requires that a key be an object and not a primitive type.) Each retrieval method in the DomainService class should be modified as in the example in Figure 10.12. Similarly, the persistence methods should update the Map whenever objects are modified by the application. It should be noted, however, that the underlying data source could be modified independently of the application (e.g. through direct database updates). A decision must be made as to whether the application should tolerate this discontinuity or provide a mechanism for clearing the cache to allow updated objects to be refreshed from the database.

public CustomObject retrieveCustomObjectById(int p_id) {
    Integer idKey = new Integer(p_id);
    if (m_customObjectCacheMap.containsKey(idKey)) {
        return (CustomObject) m_customObjectCacheMap.get(idKey);
    } else {
        CustomObject customObj = null;
        // perform database retrieval functions from original method
        ...
        if (customObj != null) {
            m_customObjectCacheMap.put(idKey, customObj);
        }
        return customObj;
    }
}

Figure 10.12 Object caching example



To make this whole process work, the following steps must be taken: 1. Modify the retrieval methods in the DomainService class according to the example in Figure 10.12. 2. Similarly modify the persistence methods to update the Map when persisting new or modified objects. 3. Provide public methods that can be invoked to clear object caches (individually or collectively). The administrative interface should provide a mechanism to invoke these methods directly, so that the cache can be cleared on request when necessary.

10.6.5 Paging through cached search results using the Value List Handler pattern The result set returned from a search query can be quite large. Rather than displaying the entire result set on one page, the number of results displayed per page should have a predefined limit, and the application should provide a mechanism for customers to page through the discrete result subsets. Sun refers to this as the Value List Handler pattern (one of the Core J2EE Patterns). It is also known as the Paged List or Page-by-Page Iterator pattern. We have already laid the groundwork for implementing this pattern. Query execution methods in the DataAccessService class return a CachedRowSet, which is a disconnected cacheable implementation of the RowSet interface (which extends the ResultSet interface). Normally, ResultSet objects produced by database queries are inextricably tied to the database Connection. In other words, they are destroyed once the Connection has been closed, and since well-behaved applications close Connections as part of their task cleanup, these ResultSets cannot ‘live’ across multiple HTTP requests. CachedRowSets are ‘disconnected’ (not tied to the Connection), thus they can be used across requests, provided they are stored in the HTTP session. Fortunately, the application already does this. To implement this pattern in our application, the following steps must be taken: 1. Provide a mechanism for defining how many results should be displayed per page. Using a property in the ApplicationResources.properties file makes this parameter configurable. 2. Modify the SearchResultsAction class to acknowledge a request parameter, ‘page,’ that indicates which page number should be displayed (defaulting to 1). 
Use this parameter to calculate sequence numbers of the first and last results that should appear on the page, and store these values as page attributes named ‘begin’ and ‘end.’ In addition, set two boolean page attributes named ‘atBegin’ and ‘atEnd’ that indicate whether this is the first or last page in the result set.



3. Modify the <c:forEach> tag on the results.jsp page(s) so that it displays only the specified range of results. This is accomplished by adding these attributes: • begin="${pageScope.begin}" • end="${pageScope.end}" 4. Add links to the page to allow forward/backward traversal to the next/previous page (e.g. http://host/context/action/results?page=${request.page+1}). Place these links within conditional constructs (e.g. <c:if> or <c:choose> tags) so that the next page link does not appear if this is the last page, and the previous page link does not appear if this is the first page.
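The calculation in step 2 might look like the following sketch (class and method names are illustrative); here ‘begin’ and ‘end’ are zero-based, inclusive result indices, matching the way JSTL forEach begin and end attributes are interpreted:

```java
// Illustrative helper for the Value List Handler paging described above.
public class PagingHelper {

    // Zero-based, inclusive indices of the first and last results on a page.
    public static int[] range(int page, int pageSize, int totalResults) {
        int begin = (page - 1) * pageSize;
        int end = Math.min(begin + pageSize, totalResults) - 1;
        return new int[] { begin, end };
    }

    public static boolean atBegin(int page) {
        return page == 1; // no 'previous' link on the first page
    }

    public static boolean atEnd(int page, int pageSize, int totalResults) {
        return page * pageSize >= totalResults; // no 'next' link on the last page
    }

    public static void main(String[] args) {
        // 23 cached results, 10 per page: page 3 shows results 20..22.
        int[] r = range(3, 10, 23);
        System.out.println(r[0] + ".." + r[1]); // prints "20..22"
        System.out.println(atEnd(3, 10, 23));   // prints "true"
    }
}
```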

10.6.6 Using XML and XSLT for view presentation The most forward-looking approach to presenting application views is to use XML and XSLT. An incremental approach to adding XSLT functionality would be to modify the existing HTML templates so that they are in XHTML. Once they are in an XML-compliant format, they can be used as an XML source to which XSLT transformations can be applied. These XSLT transformations would serve as a post-processing step that performs final customizations on presented views. This approach is the least intrusive but ultimately the most costly in terms of performance. A more robust alternative is to construct the model as an XML document and to use XSLT stylesheets to transform the model into an appropriate view. The beauty of this approach is its inherent flexibility. The application can choose a target format (e.g. HTML, WML, SMIL, VoiceXML) based on the type of device or program making the request. It can then select a specific presentation by choosing a custom XSLT stylesheet appropriate for that target format. This technique realizes the promise of MVC fully: the model is completely decoupled from the view, and the number of views that can be made available is limited only by the number of stylesheets that developers can construct.[1] Two shortcomings to this approach have impeded its acceptance in the Web application development community. The first is the sluggish performance associated with XSLT transformation; the second is the overall complexity historically associated with XML and XSLT processing. XSLT performance has been dramatically improved through mechanisms that allow the compilation and caching of stylesheets. Still, there is room for improvement, but performance is no longer the impediment to XSLT acceptance that it once was.

[1] Even this number is not a true upper limit, since XSLT stylesheets can be constructed from embedded fragments, thus exponentially increasing the number of possible combinations.



The complexity of both XML and XSLT processing has been reduced significantly. Early adopters of XML and XSLT had to deal with cumbersome configuration issues and inconsistent APIs. Now virtually all commercial frameworks and server products provide native support for XML processing, and native support for XSLT is not far behind. In addition, frameworks like Cocoon simplify the building of applications that publish content using XSLT as their presentation layer. Perhaps the most radical simplification in XML and XSLT processing can be found in JSTL’s XML tags. These tags not only provide the ability to parse XML documents into a DOM tree, but also the ability to perform direct XSLT transformation on a constructed or imported XML document. Assuming that the controller component has constructed an XML document and put it into the session, and assuming that this component (or some other component) has either imported or constructed an appropriate XSLT style sheet and put it into the session, then the controller could route processing to a JSP page as follows:

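A minimal sketch of such a routing page, assuming JSTL's <x:transform> tag and hypothetical session attribute names (modelDocument, viewStylesheet):

```jsp
<%@ taglib prefix="x" uri="http://java.sun.com/jstl/xml" %>
<%-- Apply the stylesheet chosen by the controller to the model document;
     both are assumed to have been stored in the session beforehand. --%>
<x:transform xml="${sessionScope.modelDocument}"
             xslt="${sessionScope.viewStylesheet}" />
```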
XSLT is an extremely powerful mechanism for transforming XML documents into human-readable presentations, but it is also rather complex. It was hoped that Web designers would ultimately be the ones who create XSLT style sheets. Unfortunately, most of their tools do not yet support interactive stylesheet creation, and most designers have not taken it upon themselves to learn XSLT (which, given its complexity, is not surprising). Yet another alternative is to build a DOM tree using the JSTL <x:parse> tag, and access individual elements (or sets of elements) using the <x:out>, <x:set>, and <x:forEach> tags. The select attribute in each of these tags can be set to an XPath expression that may return an individual element or (with the <x:forEach> tag) a node set. Using this approach, the results.jsp page could be rewritten as in the fragment in Figure 10.13. In this fragment, a variable called listings is constructed as an XML DOM tree from the listingsAsXml session attribute using the <x:parse> tag. (In a properly segmented MVC application, a controller component would have built this XML document and stored it in the session.) The <x:forEach> tag selects each element matching the XPath expression /listings/listing and processes it, displaying the values of elements found within it. As you can see, there are a number of options available for using XML, XSLT, and XPath functionality to make the selection and generation of application views more dynamic and flexible.



[Figure 10.13 Example of parsing XML documents using the JSTL <x:parse> tag. The JSP fragment is not reproduced here; it iterates over the parsed listings and displays each listing's Property Type, Offer Type, and Region.]

10.6.7 Tracking user behavior Keeping track of user actions and recording them for later analysis is another capability that can be added to our application. For example, when customers visit the search page and enter criteria for browsing the listing database, their entries could be recorded implicitly. When a customer views the details of a particular listing, this fact could also be saved (in a log or in a database table) for future reference. This recorded data could be used by our application, and other applications, to perform a number of tasks: 1. Sending targeted e-mails based on tracked behaviors about the availability of properties of a particular type. 2. Reminding customers who have not logged in for an extended time about the existence of the application, through a reminder e-mail (especially useful for subscription Web sites, e.g. to notify inactive customers that their subscription or free trial has lapsed or is about to lapse). 3. Personalizing customer home pages by showing thumbnails of new property listings that satisfy their past search criteria. 4. Keeping anonymous statistics about the popularity of individual listings (based on the results of searches, detailed views, and broker inquiries). To accomplish this, the Action classes need to be modified to record information about tracked events. The least intrusive approach would be simply to record the
events we want to track into a log file. Each log entry would need to contain all the relevant information, including timestamps, customer ids (or an indicator denoting an anonymous visitor), listing ids, and source (search, targeted e-mail, etc.). The format of log records should be standardized enough that those records could be browsed and searched later. Our LoggingService class simplifies the process of writing records to a log. A more methodical approach would record tracked events in a database. This means creating tables that would contain records of well-defined events that need to be tracked, e.g. ‘customer login,’ ‘listing view,’ ‘customer search.’ The main advantage of this approach is that it is much easier to perform analysis by querying a database than by parsing log entries from a text file.
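As an illustration, a standardized record for the log-file approach might be formatted like this; the pipe-delimited layout and field order are our own choices, not prescribed by the application:

```java
import java.time.Instant;

// Illustrative sketch of a standardized tracked-event log record: timestamp,
// customer id (or "anonymous"), event type, listing id (or "-"), and source,
// in a fixed pipe-delimited layout that is easy to parse later.
public class EventLogFormat {

    public static String format(Instant timestamp, Integer customerId,
                                String event, Integer listingId, String source) {
        return String.join("|",
                timestamp.toString(),
                customerId == null ? "anonymous" : customerId.toString(),
                event,
                listingId == null ? "-" : listingId.toString(),
                source);
    }

    public static void main(String[] args) {
        System.out.println(format(Instant.parse("2003-06-01T12:00:00Z"),
                null, "listing-view", 1234, "search"));
        // prints "2003-06-01T12:00:00Z|anonymous|listing-view|1234|search"
    }
}
```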

10.7 SUMMARY

Our goal in this chapter was to walk through the process of designing and implementing a Web application. On our Web site, we offer this application for non-commercial tutorial purposes, in a package that includes source code, database schema, and configuration instructions. We think that providing a complete working application is a better starting point than having readers build it themselves from the ground up. The enhancements described in Section 10.6 provide readers with the opportunity to start with a working application and build on it.

10.8 QUESTIONS AND EXERCISES

1. Install and deploy the sample application. Follow the instructions found on the Web site, at http://www.WebAppBuilders.com/. . . (You will need to register and sign in to download the application package.)

2. What changes need to be made to the DataAccessService class to allow the application to work in environments that do not support JNDI lookup for datasources? How could this be done in a way that would still provide runtime configuration options (i.e. without hard-coding database connectivity parameters)?

3. In Section 10.4.2, we mention that we did not implement a custom tag or an includable page to display the navigation bar, because the links presented on the navigation bar would be different on every page (i.e. a page should not present a link to ‘itself’). Remember, though, that a parameter in the request URL identifies the page that is currently being displayed. Thus, it is possible to build a reusable mechanism for presenting the navigation bar that uses conditional processing to skip the label and link associated with the current page. Implement this functionality using either an included JSP page or a custom tag, and modify embedded pages to refer to it.

4. Formulate a plan for building the administrative interface described in Section 10.6.1. Include a separate database table containing credentials for administrators.



5. Modify the application to provide the enhancements described in Section 10.6.2 (and in its associated footnote). What changes need to be made to the profile.jsp page to support this? What other components need to be modified to enable this functionality?

6. Modify the application to provide the enhancements described in Section 10.6.3 to use a persistent cookie for partner identification. Include the functionality that would also enable automatic login if customers indicate this as a preference in their profiles. What changes must be made to the profile.jsp page? What other components need to be modified to enable this functionality?

7. Modify the application to provide the enhancements described in Section 10.6.4, providing caching functionality within the service classes. Include the functionality that would clear the cache on request. Which part of the application should expose this function?

8. Modify the application to provide the enhancements described in Section 10.6.5 to provide a mechanism for paging through large result sets using discrete subsets.

9. What difficulties are likely to arise in maintaining an application that makes use of the ‘less intrusive’ XML support strategy described in the beginning of Section 10.6.6?

10. Modify the application to implement the model as an XML document, and to use XSLT stylesheets to transform the model into an appropriate view, as described in Section 10.6.6. What are the maintainability and performance improvements over the original approach, and over the less intrusive alternative described earlier in that section? What issues does this approach solve?



Emerging Technologies

Rapid expansion of Internet technologies has not come without cost. Technological incompatibilities and inconsistencies have put a strain on the Web application development process. Today, after more than a decade of exponential growth, Internet technologies are reaching the point where they are stable, robust, and part of the mainstream. We are seeing encouraging examples of technology convergence. XHTML is supplanting (if not replacing) HTML, WML is being redefined as an extension to XHTML, and the relationship between XSL, XSLT, XSL-FO, and other stylesheet specifications is coming into focus. The most recent specifications from the W3C and other standard-setting bodies (e.g. the WAP Forum, OASIS, etc.) concentrate on achieving improvements to accepted and emerging technologies, as well as convergence between them, as opposed to dramatic new directions. This chapter is devoted to a discussion of the most significant of the emerging technologies, including:

• Web Services, which represent an important architectural advancement in building distributed Web applications.

• The Resource Description Framework (RDF), which is currently the leading specification for machine-understandable metadata.

• XML Query, which supports the extraction of data from XML documents, closing an important gap between the Web world and the database world.

We shall also introduce one particular RDF application, Composite Capabilities/Preference Profiles (CC/PP), which is a promising platform for serving content across multiple devices and formats, followed by a brief overview of the Semantic Web, which may very well employ RDF as its foundation. Finally, we present our speculations and suggestions regarding the future of Web application development frameworks.



11.1 WEB SERVICES

Web Services are distributed Web applications that provide discrete functionality and expose that functionality in a well-defined manner over standard Internet protocols to other Web applications. In other words, they are Web applications that fit into the client-server paradigm, except that the clients are not people but other Web applications. The type of Web application we have discussed throughout the book has employed an architecture in which the data is provided to a human being—an end user—usually via a Web browser. Using a browser, end users submit HTTP requests (consisting of a URL plus query string parameters, headers, and an optional body) to Web servers. Web servers send back HTTP responses (consisting of headers and a body) for browsers to present to users. The body of the response is some human-readable content such as an HTML page, an image, or a sound. Web Services work similarly, except that the intended recipient of the response is another Web application. Since the recipient is a software program rather than a person, the response should be machine-understandable. Consequently, it must conform to protocols that machines (i.e. computers running Web applications) can understand. If you are writing an application that will be used only within a very limited environment, you can make your own decisions about how it operates. The goal of Web Services, however, is not only to provide inter-application communication, but to do so in a uniform, well-defined, open, and extensible manner. Using a broad definition, applications providing Web Services have been around for a long time. As we have mentioned earlier in the book, the ‘server side’ of a Web application can be a client that transmits its own requests to other applications. Responses generated by those other applications are consumed by the original Web application, which further processes them to deliver a response to the end user.
Today, the term ‘Web Services’ means something more: it refers to the set of protocols for defining standardized service descriptions, the mechanisms for publicizing their existence, and the construction, transmission, and processing of Web Service requests. Together, these protocols provide uniformity, extensibility, and interoperability, making it possible for Web Services to work across a variety of environments (including Sun’s J2EE and Microsoft’s .NET).

11.1.1 SOAP

The most popular protocols for Web Service requests and responses are XML-RPC (XML Remote Procedure Call) and SOAP (Simple Object Access Protocol), with SOAP having overtaken XML-RPC as the protocol of choice. SOAP is an application layer protocol for constructing and processing Web Service requests and



Figure 11.1  Example of a SOAP request with multiple parameters (values -73.0, 40.0, and en/us)

responses. It can use HTTP, SMTP, and a variety of other protocols (e.g. messaging protocols like JMS and MQ/MSMQ) to transport requests and responses. Figure 11.1 shows an example of a simple SOAP request with multiple parameters. The request is represented in XML format. In the example, the Web Service returns local weather information. The intent of the request is to retrieve information about New York weather. For that, you must provide the longitude and latitude of the region, as well as locale information (e.g. en/us), which determines not only the language but also the format for the temperature (Celsius or Fahrenheit) and wind velocity (MPH or km/h). Note the use of different namespaces to disambiguate SOAP envelope elements, references to XML data types, and service-specific elements. The SOAP envelope contains the body element (this example does not contain an optional header). The body contains a single element defining a remote procedure call by specifying a method (getWeather) and its arguments (degreeslong, degreeslat, and locale). The envelope is transported to a SOAP server over a protocol such as HTTP or SMTP. In the case of HTTP, the envelope comprises the body of the HTTP request, which follows the request line (e.g. POST /services HTTP/1.1) and associated headers, as shown in Figure 11.2. After receiving the request, the SOAP server invokes the specified method with the provided arguments, and generates a response for transmission back to the requesting application. In Figure 11.3, the response contains values for pre-defined response elements, which inform the requestor that the temperature in New York is 25° (Fahrenheit), the conditions are partly cloudy, and the wind is from the southeast at 5 MPH. Since the response is transmitted back to the requestor over HTTP, it includes the appropriate HTTP headers. A SOAP client can translate this response into a human-readable format.
This can be accomplished by using one of the available SOAP APIs or toolkits (e.g.



Figure 11.2  The same SOAP request transmitted over HTTP (as a POST to /services on www.intlweather.com, with Content-Type: text/xml)

Figure 11.3  The response to the previous SOAP request (HTTP/1.1 200 OK; temperature 25 F, partly cloudy, wind 5 MPH SE)



Microsoft SOAP Toolkit, JAXM/SAAJ) or by transforming the body of the response using XSLT into a human-readable format (e.g. HTML, WML, or VoiceXML).
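To make the wire format concrete, the exchange discussed above might look as follows. The SOAP envelope and encoding namespace URIs are the standard SOAP 1.1 values, but the service namespace (bound here to the w: prefix) and the names and shape of the response elements are assumptions for illustration; only the method name, argument names, and values come from the figures.

```xml
POST /services HTTP/1.1
Host: www.intlweather.com
Content-Type: text/xml; charset="utf-8"
Content-Length: . . .

<?xml version="1.0"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <!-- the service namespace URI is an assumption -->
    <w:getWeather xmlns:w="urn:intlweather:weather">
      <degreeslong>-73.0</degreeslong>
      <degreeslat>40.0</degreeslat>
      <locale>en/us</locale>
    </w:getWeather>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

A response carrying the values described above might then be returned as:

```xml
HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Content-Length: . . .

<?xml version="1.0"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <!-- response element and child names are assumptions -->
    <w:getWeatherResponse xmlns:w="urn:intlweather:weather">
      <temperature units="F">25</temperature>
      <conditions>partly cloudy</conditions>
      <wind speed="5" units="MPH" direction="SE"/>
    </w:getWeatherResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```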

11.1.2 WSDL

Defining a SOAP-based Web Service is only a partial step toward true interoperability. Constructing a SOAP request requires knowledge about the service: the name of the method to be invoked, its arguments and their datatypes, as well as the response semantics. This knowledge could be available through human-readable documentation, but this falls short of the goal of true interoperability. Since Web Services are meant to be machine-understandable, their semantics (and even their existence) should be exposed to Web applications, so that they can discover and make use of them without human intervention. WSDL and UDDI are designed to close the interoperability gap. WSDL (Web Services Description Language) provides a common language for defining a Web Service and its communication semantics. (UDDI, which stands for Universal Description, Discovery, and Integration, serves as a mechanism for registering and publishing Web Services. It is covered in the following section.) Let us examine the Web Service definition shown in Figure 11.4 from the bottom up. The <service> element contains a <documentation> element (to give the service a human-readable description) and a <port> element, bound to the <binding> element with the name WeatherServiceBinding, which is in turn associated (through its type attribute) with the <portType> element with the name WeatherServicePortType. The <port> element contains a <soap:address> element whose location attribute defines the URL that can be used to invoke the service. The <binding> element defines the transport mechanism (SOAP over HTTP) and the names of operations that may be performed using the service as <operation> elements. In this case, there is just one operation, getWeather (which we saw in the previous section). Specifications for the encoding format of the input and output bodies are included here. The <portType> element lists the names of operations as <operation> elements. Since there is only one operation associated with the service, there is only one <operation> element. It contains <input> and <output> elements, which in turn refer to <message> elements (getWeatherInput and getWeatherOutput). The <message> elements specify references to complex data types, weatherRequest and weatherResponse, respectively. The definitions of these complex types (which make use of XML Schema Datatypes, covered in Section 7.1.3) specify components for both input messages (requests) and output messages (responses). This may seem like overkill for a simple Web Service, and indeed, it is. There are a number of ways to simplify this definition. Our example avoids shortcuts in order to mention some of the more complex aspects of WSDL and demonstrate the available options.
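The overall shape of such a WSDL 1.1 definition can be sketched as follows. This is not the book's exact listing: the targetNamespace URIs and the contents of the complex types are assumptions, while the service, binding, portType, operation, and message names follow the discussion above.

```xml
<definitions name="WeatherService"
    targetNamespace="http://www.intlweather.com/weather.wsdl"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://www.intlweather.com/weather.wsdl"
    xmlns:xsd1="http://www.intlweather.com/weather.xsd"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <!-- complex types for requests and responses (details omitted) -->
  <types>
    <xsd:schema targetNamespace="http://www.intlweather.com/weather.xsd">
      <xsd:complexType name="weatherRequest"> ... </xsd:complexType>
      <xsd:complexType name="weatherResponse"> ... </xsd:complexType>
    </xsd:schema>
  </types>

  <message name="getWeatherInput">
    <part name="body" type="xsd1:weatherRequest"/>
  </message>
  <message name="getWeatherOutput">
    <part name="body" type="xsd1:weatherResponse"/>
  </message>

  <portType name="WeatherServicePortType">
    <operation name="getWeather">
      <input message="tns:getWeatherInput"/>
      <output message="tns:getWeatherOutput"/>
    </operation>
  </portType>

  <binding name="WeatherServiceBinding" type="tns:WeatherServicePortType">
    <!-- SOAP over HTTP transport -->
    <soap:binding style="rpc"
        transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="getWeather">
      <input><soap:body use="encoded"/></input>
      <output><soap:body use="encoded"/></output>
    </operation>
  </binding>

  <service name="WeatherService">
    <documentation>My first service</documentation>
    <port name="WeatherServicePort" binding="tns:WeatherServiceBinding">
      <soap:address location="http://www.intlweather.com/services"/>
    </port>
  </service>
</definitions>
```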



Figure 11.4  Sample WSDL definition for the Weather Service



Figure 11.4  (continued)

11.1.3 UDDI

While WSDL provides a common standard for defining Web Service semantics, UDDI provides the last piece of the interoperability puzzle through its mechanisms for registering and advertising Web Services. UDDI servers provide two functions: inquiry and publishing. Inquiry allows users to look for Web Services that fit into specific categories (e.g. business name, service name, service type) and match specified search criteria. The inquiry request in Figure 11.5 is a query for businesses whose names contain the word ‘weather.’ You can see that this UDDI request is also a SOAP request (albeit somewhat simpler than the one found in our original Web Service example). The results from such inquiries are (naturally) SOAP responses (as shown in Figure 11.6). SOAP clients can parse them to derive information about available Web Services that match the provided search criteria. The clients can then choose a Web Service (from those located by the inquiry), access its WSDL definition, and invoke it. Note that, for brevity, we have omitted the businessKey and serviceKey attributes. These are identifying keys assigned by a UDDI registrar when adding a business or service to the registry.
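As a sketch, the inquiry of Figure 11.5 and a fragment of the response of Figure 11.6 might look as follows. The generic attribute and namespace correspond to the UDDI version 2 API (an assumption on our part), the SOAP envelope is omitted from the response fragment, and the key attributes are omitted as noted above.

```xml
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <find_business generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>weather</name>
    </find_business>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

The body of the corresponding response would contain a list of matching businesses and their services:

```xml
<businessList generic="2.0" xmlns="urn:uddi-org:api_v2">
  <businessInfos>
    <businessInfo> <!-- businessKey omitted for brevity -->
      <name>International Weather</name>
      <description>Weather information</description>
      <serviceInfos>
        <serviceInfo> <!-- serviceKey omitted for brevity -->
          <name>InternationalWeatherService</name>
        </serviceInfo>
      </serviceInfos>
    </businessInfo>
    <businessInfo>
      <name>Weatherwax Dog Kennel</name>
      <!-- ... -->
    </businessInfo>
  </businessInfos>
</businessList>
```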




Figure 11.5  Example of a UDDI request from a SOAP client

Figure 11.6  Fragment of a UDDI response (listing businesses such as ‘International Weather’ and ‘Weatherwax Dog Kennel’)

The publishing component of UDDI lets Web Service providers register their services in a UDDI registry (Figure 11.7). It accepts SOAP requests to add an entry to the registry, allowing the service provider to specify the service name, description, access point (i.e. the URL), and a reference to the tModel



Figure 11.7  Example of a request to publish a Web Service in a UDDI registry (service name ‘International Weather,’ access point http://www.internationalweather.com/services)

(service type) associated with this service. Note again that we have omitted various key attributes for brevity: businessKey, serviceKey, bindingKey, and finally tModelKey, a key that identifies a specific service type definition also maintained in the UDDI registry.

The Chicken or the Egg?

You may have already noticed that there is a Catch-22 situation with respect to UDDI servers: once you know about them, it is easy to make use of them, but how do you discover the existence of UDDI servers in the same way you use UDDI to discover new Web Services? The problem is not unlike the ‘chicken and the egg’ situation associated with DNS (Domain Name System): for your system to use DNS to translate domain names into IP addresses, your network configuration must know the IP address of a DNS server. Knowing the name of your DNS server (e.g. dns.myprovider.com) is useless, since you need DNS to determine the IP address to which this name resolves.



Analogously, there is no way to ‘know’ about new UDDI servers without already ‘knowing’ about them. Currently, there are only a small number of centralized UDDI servers, so this problem does not yet manifest itself. As the number of Web Services grows, it will become very difficult (if not impossible) for a small number of servers to support UDDI inquiries. Research is already under way to extend the possibilities associated with UDDI services, using distributed servers like those used for DNS and federated servers like those employed in P2P networks.

In a broad sense, any layered Web application makes use of ‘Web Services’: back-end systems provide loosely defined ‘services’ to other application components. The importance of the SOAP, WSDL, and UDDI specifications lies in providing more formalized and rigorous definitions of Web Services, and methods for locating, accessing, and utilizing them. This makes them structured, modular, and reusable for a wide variety of applications. Web Services functionality provides a platform that will allow many other emerging technologies to flourish.

11.2 RESOURCE DESCRIPTION FRAMEWORK

The next wave of technological advances may very well be powered by machine-understandable metadata. Metadata technologies have been slow to gain momentum, but now that the base technologies are consolidating, this is already changing. RDF is a standard that was designed to support machine-understandable metadata, and to enable interoperability between metadata-based applications. Early applications of RDF address real problems in the areas of resource discovery, intelligent software agents, content rating, mobile devices, and privacy preferences. RDF is used to construct metadata models that may be understood by processing agents. Strictly speaking, RDF is not an XML application, even though XML is used to encode and transport RDF models. XML is not the exclusive mechanism for representing RDF models; other representation mechanisms may be available in the future. (Natively, RDF models are defined as sets of triples, as described in the next section.)

11.2.1 RDF and Dublin Core

The Dublin Core (DC) metadata standard predates RDF. It was proposed as an element set for describing a wide range of networked resources. DC’s initial design goals were very ambitious, and not all of them materialized. What emerged was a simple set of fifteen elements, the semantics of which have been established through



long and painful negotiations within the international, cross-disciplinary group that included librarians and computer scientists. The DC elements cover such core notions as ‘Title,’ ‘Creator,’ ‘Publisher,’ ‘Date,’ ‘Language,’ ‘Format,’ and ‘Identifier.’ Together with qualifiers, the nouns corresponding to these key concepts can be arranged into simple statements, which enable simple ‘pidgin-level’ communications. DC elements are easy to use but are not up to the task of communicating complex concepts. The emergence of RDF breathed new life into the DC specification. RDF provides the formal mechanism for describing DC concepts. More importantly, the DC specification provides the necessary ‘semantic grounding’ for RDF models through its atomic concepts that were designed for describing networked resources. The most basic RDF concept is that of a resource, which is any entity represented with a URI. An RDF triple is the combination of a subject, an object, and a property (also referred to as a predicate). Both subjects and properties are RDF resources, while objects may be either resources or literals (constants). Our example in Figure 11.8 is a simplified RDF model for this book. The meaning of the model is obvious: it describes the book by specifying its authors and publisher. RDF models are designed to be machine-understandable: their meaning may be interpreted and acted upon by computer programs that do not have any built-in knowledge of the subject matter (in this case, publishing). The book resource, which is the object of all three triples in Figure 11.8, is identified by its URI, http://purl.org/net/shklar/wabook. The resource identified with http://purl.org/net/shklar represents the first author of the book and is the subject of one of the triples; the ‘Creator’ property is represented with http://purl.org/dc/elements/1.1/creator.
Similarly, the resource identified with http://www.neurozen.com represents the second author of the book and is the subject of the second triple (the property is the same). The subject of the final







Figure 11.8  Sample RDF model




Figure 11.9  XML representation of the RDF model in Figure 11.8

triple is the resource representing the ‘John Wiley & Sons’ publishing company; the ‘Publisher’ property is represented with http://purl.org/dc/elements/1.1/publisher. The XML representation of the model is shown in Figure 11.9. The DC vocabulary of atomic concepts is identified by its URI, http://purl.org/dc/elements/1.1/, and the semantic grounding of RDF properties is achieved by mapping them to the DC concepts ‘Creator’ and ‘Publisher.’ The triples in Figure 11.8 all relate to the common object represented by the <rdf:Description> element. The <dc:creator> and <dc:publisher> elements represent RDF properties, and the content of these elements represents the subjects of their respective triples. By the nature of XML, the structure in Figure 11.9 is hierarchical, which creates an obvious impedance mismatch problem for arbitrary RDF models. In this example, the hierarchical nature of XML works to our advantage, resulting in a very compact representation. It gets a lot more complicated for complex models, where the same resource may be the subject of one triple and the object of another. In Figure 11.11, we show the representation of the original model with two additional triples that specify creation dates for both the book and the publisher. The creation date for the book is its publication date, ‘2003-05,’ while the creation date for the publisher is the founding date of the company, ‘1807.’ Notice that the publisher resource, http://www.wiley.com, is the subject of both the dc:publisher triple and the dc:created triple. The structure of the XML representation in Figure 11.11 did not change all that much compared to the XML document in Figure 11.9. However, you can recognize the early signs of trouble: the resource identified with http://www.wiley.com is referenced in two different places, which is not the case in the model (Figure 11.10). In XML representations of complex models, there may be numerous references to different resources, which is an indication of the impedance mismatch problem we mentioned earlier.
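A sketch of the serialization that Figure 11.9 describes follows. The rdf and dc namespace URIs are the standard ones; reading it per the model above, the content of each property element is the subject of its triple, while the rdf:about resource is the common object.

```xml
<?xml version="1.0"?>
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- the book resource, common to all three triples -->
  <rdf:Description rdf:about="http://purl.org/net/shklar/wabook">
    <dc:creator>http://www.neurozen.com</dc:creator>
    <dc:creator>http://purl.org/net/shklar</dc:creator>
    <dc:publisher>http://www.wiley.com</dc:publisher>
  </rdf:Description>
</rdf:RDF>
```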
The hierarchical nature of XML makes it impractical to process XML representations of RDF models directly. Instead, RDF processors use such representations




Figure 11.10  Modified RDF model from Figure 11.8 (adding created triples, ‘2003-05’ for the book and ‘1807’ for the publisher)

Figure 11.11  Adding information about the publisher to the model in Figure 11.9

as input for constructing RDF graphs. In other words, RDF is more than another XML application. It is a separate specification based on an entirely different, non-hierarchical model. RDF serves the purpose of expressing complex interrelationships between Internet resources. Its long-term goal is to enable automated reasoning in the space of resources, their properties, and relationships.

Persistent URLs

You may have noticed that addresses of Dublin Core elements refer to purl.org. Remember, back in Chapter 3 we discussed that a URI (Uniform Resource Identifier)



may be either a URL (Uniform Resource Locator) or a URN (Uniform Resource Name). By definition, URNs do not change when pages move. URNs used to be a theoretical notion, but purl.org is an early attempt to make it practical. The name of the site stands for ‘Persistent URL.’ It is a public service site, which makes it possible to assign persistent names to Internet resources. (Think of it as an open source/public domain equivalent to AOL’s proprietary ‘keywords’ if you like.) This is exactly what we did with the book resource, http://purl.org/net/shklar/wabook. At the moment, this URI maps to http://wiley.com/WileyCDA/WileyTitle/productCd-0471486566.html, which is a transient address. Not to worry: if Wiley decides to reorganize their site, all we need to do is change the mapping on the purl.org site and the URN would continue to work. As long as we distribute the URN and not the physical address of the page, and make sure that it stays current, we will be all right.

11.2.2 RDF Schema

The RDF Schema specification aims at constraining the model context by introducing the notion of model validity. This is quite different from the validity of XML documents: a valid XML document may represent an invalid RDF model. Remember, XML is just one of many representation vehicles for RDF models. RDF Schema enables the definition of new resources as specializations of ones that already exist. This makes it possible to define new concepts by semantically grounding them in existing specifications. For example, we can take advantage of the Dublin Core ‘creator’ property and its associated semantic concept in defining two new properties, firstAuthor and secondAuthor (Figure 11.12). As you can see, both new properties are defined as specializations of the dc:creator property through rdfs:subPropertyOf, which is defined in the RDF Schema specification. The rdfs:necessity property (not part of the standard RDF Schema vocabulary, but convenient for this example) serves to express occurrence constraints: a book always has at least one author, but may have two or more, and only one author may appear in a particular position on the cover. Note that the order of triples representing the dc:creator property in Figures 11.9 and 11.11 is arbitrary and not semantically meaningful. Now that we have defined our new properties and specified the authoritative location of the new schema (rdfs:isDefinedBy), we can modify the model in Figure 11.11 to take advantage of the new specification (Figure 11.13). As you see, we introduce an additional namespace, ‘book,’ and use it to qualify the new properties. To repeat the obvious, RDF Schema is not an alternative to XML Schema. Figures 11.13 and 11.14 both represent valid XML documents, but the model in Figure 11.14 is not valid: it violates the rdfs:necessity constraint imposed on



Figure 11.12  Sample RDF schema (defining firstAuthor, ‘The author whose name appears first on the book cover,’ and secondAuthor, ‘The author whose name appears second on the book cover’)

Figure 11.13  Taking advantage of the schema in Figure 11.12



Figure 11.14  An invalid version of the model in Figure 11.13

the firstAuthor property in Figure 11.12. RDF Schema provides the same service for RDF models as XML Schema does for XML documents: it enables specialized applications.
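The schema in Figure 11.12 might be sketched as follows. The rdf and rdfs namespace URIs are standard, the labels and comments come from the figure, and the specialization is expressed with rdfs:subPropertyOf (the standard way to specialize a property); the rdfs:necessity values shown are our assumptions about the figure's occurrence constraints.

```xml
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <rdf:Property rdf:ID="firstAuthor">
    <rdfs:label>First Author</rdfs:label>
    <rdfs:comment>The author whose name appears first
      on the book cover.</rdfs:comment>
    <rdfs:subPropertyOf
        rdf:resource="http://purl.org/dc/elements/1.1/creator"/>
    <!-- occurrence constraint, as discussed above; value assumed -->
    <rdfs:necessity>1</rdfs:necessity>
  </rdf:Property>
  <rdf:Property rdf:ID="secondAuthor">
    <rdfs:label>Second Author</rdfs:label>
    <rdfs:comment>The author whose name appears second
      on the book cover.</rdfs:comment>
    <rdfs:subPropertyOf
        rdf:resource="http://purl.org/dc/elements/1.1/creator"/>
    <rdfs:necessity>0..1</rdfs:necessity>
  </rdf:Property>
</rdf:RDF>
```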

11.3 COMPOSITE CAPABILITY/PREFERENCE PROFILES

One of the early applications of RDF is the Composite Capabilities/Preference Profiles (CC/PP) specification, which is a joint effort of the W3C and the Wireless Application Protocol (WAP) Forum. The idea is quite simple: let devices and user agents describe themselves and make those descriptions available to smart services that tailor their responses accordingly. User agents that run on different platforms may expect different content types (e.g. XML, WML, HTML, etc.) and structures (e.g. different arrangement into tables and cards or different use of graphics). The flexibility of RDF makes it possible to create self-describing device specifications based on screen size, keyboard (if any), display characteristics, etc. Devices are represented as composites of features, and properly constructed services do not need to be modified every time a new device comes out. Services can combine information about devices and user agents with information about connection bandwidth and use it dynamically to customize output. Targeted output transformations lend themselves to the application of XSLT technology: XSLT stylesheets are composed from parameterized, feature-specific components. An efficient server would optimize stylesheet construction by caching components as well as intermediate composites; for example, device-specific stylesheets constructed from device profiles could be cached and combined with stylesheet components determined by the operating system, user agent software, and connection bandwidth.



Figure 11.15  Sample device description (vendor XYZ Corp., model 123, CPU ‘Dual: XYZ Special,’ keyboard PhoneKeypad, screen 200x240, not image capable)

Figure 11.15 contains a CC/PP-compliant description of the device ‘123’ from the ‘XYZ Corporation.’ In this example, the rdf and prf prefixes are bound to URIs for the ‘RDF Syntax’ and the WAP Forum’s ‘User Agent Profile’ namespaces, respectively. The first element of the specification is rdf:Description for our device; it contains only one specification component, which describes the device hardware. The rdf:type element references the schema element that identifies the hardware platform. Next, prf:CPU defines the default CPU, prf:ScreenSize defines the default screen size, etc. Figure 11.16 contains CC/PP-compliant descriptions of software for the same device. It includes two separate components that describe the device operating system and the user agent. The OS is our hypothetical ‘XYZ-OS,’ and acceptable content is limited to text/plain and text/vnd.wap.wml (prf:OSName and prf:CcppAccept). The user agent is a particular version of Mozilla that supports tables (prf:BrowserName, prf:BrowserVersion, and prf:TablesCapable). The semantic grounding of CC/PP concepts is based on existing specifications. For example, acceptable values for prf:CcppAccept are MIME types, and acceptable values for prf:Vendor and prf:Model come from industry registries. This ‘by reference’ approach to defining semantics is well suited for the real world. There are a growing number of CC/PP specifications for wireless devices developed and maintained by real-world equipment manufacturers. Individual devices and user agents often differ from default configurations. For example, my personal version of the ‘XYZ 123’ device may have an optional


Emerging Technologies

Figure 11.16

Sample description of device software
[Markup lost in extraction; recoverable values: OS name 'XYZ-OS', accepted content types 'text/plain' and 'text/vnd.wap.wml', browser name 'Mozilla', 'Symbian', and tables capable 'Yes']

screen, and may be image capable. Fortunately, it is possible to incorporate default configurations by reference, as in Figure 11.17. The prf:Defaults element references the default profile from Figure 11.15, which we assume to be available from http://www.xyz-wireless.com/123Profile. Here, prf:ScreenSize and prf:ImageCapable override the default properties of the device. The resulting profile would be uploaded to the profile registry either when the device first goes online, or after it is modified. The server-side agent that controls automated assembly of the XSLT stylesheet would interpret the profile. Ideally, this stylesheet would produce markup that takes advantage of optional features added to the device. CC/PP, in combination with XML and XSLT, enables applications that can serve content to a wide variety of desktop and wireless devices. Most importantly,



Figure 11.17

Individual hardware profile
[Markup lost in extraction; the profile incorporates the default profile by reference via prf:Defaults and overrides prf:ScreenSize ('220x280') and prf:ImageCapable ('Yes')]

properly constructed applications would require minimal or no modification to expand support to new and modified devices and software platforms.
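The default-plus-overrides resolution shown in Figure 11.17 amounts to a simple merge. As a rough sketch (the dictionary representation and merge function are our own illustration; only the attribute names come from the UAProf vocabulary):

```python
# Sketch of resolving an effective device profile: start from the defaults
# referenced via prf:Defaults, then apply per-device overrides, as in
# Figure 11.17. The merge logic is an illustration, not part of CC/PP.
DEFAULT_PROFILE = {"ScreenSize": "200x240", "ImageCapable": "No"}

def resolve_profile(defaults, overrides):
    effective = dict(defaults)   # copy the referenced default profile
    effective.update(overrides)  # device-specific values win
    return effective

device = resolve_profile(DEFAULT_PROFILE,
                         {"ScreenSize": "220x280", "ImageCapable": "Yes"})
```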

11.4 SEMANTIC WEB

The Semantic Web is the major new effort on the part of the World Wide Web Consortium. It aims to create the next-generation Internet infrastructure, in which information has well-defined meaning, making it possible for people and programs to cooperate with each other. The critical part is to associate data with meaning, and here an important role belongs to RDF and its descendants. In the long term, we expect that RDF, in conjunction with other standards such as XML, WSDL, SOAP, and UDDI, will serve as the foundation for Semantic Web applications. When RDF first came out a few years ago, people often thought of the Semantic Web as a collection of RDF applications. New RDF-based specifications, including the DARPA Agent Markup Language (DAML) and the Ontology Inference Layer (OIL), strengthened the belief that RDF would be the foundation of the Semantic Web. However, it soon became clear that it would be some time before DAML and OIL applications became practical. As a result, more and more people are taking a wider view of the Semantic Web, including the possibility of using existing standards in conjunction with RDF models for building advanced Web services.



Applications that benefit from the use of machine-understandable metadata range from information retrieval to system integration. Machine-understandable metadata is emerging as a new foundation for component-based approaches to application development. Web services represent the latest advancement in the context of distributed component-based architectures. Whether applications make use of RDF, or try to achieve similar goals by using XML, WSDL, UDDI, SOAP, and XSL, they create fertile ground for the future.

11.5 XML QUERY LANGUAGE

As the scope and variety of XML applications have grown, so have the integration requirements associated with those applications, and with them the necessity to query XML-structured information. The convergence and consolidation of XML specifications made it practical to define uniform query facilities for extracting data from both real and virtual XML documents, with the ultimate goal of accessing and querying XML information as a distributed database. The challenge is that XML documents are very different from relational databases. Instead of tables where almost every column has a value, we have to deal with distributed hierarchies, and with optional elements that may or may not be present in a particular document. Relational query languages such as SQL are thus not suited to XML data. The XML Query language, XQuery, is still being designed by the W3C. Even so, there are already numerous implementations based on the early specifications. XQuery combines the notions of query and traversal. The traversal component serves to define the query context, which is determined by the current XML element and its location in the DOM tree. The query component serves to evaluate conditions along different axes (element, attribute, etc.) in the query context. Both components are involved in evaluating an expression. For example, consider the sample XML document (sample.xml) from Chapter 7 (Figure 7.1). We modified this example to support unique identifiers for individual books by adding the isbn element (Figure 11.18). We shall make use of the unique identifiers later in this section to demonstrate multi-document queries. All XQuery expressions in Figure 11.19 are designed to select books written by Rich Rosen. The first expression uses the full syntax to define traversal from the document root down to the author element. Notice that the traversal expression is composed of slash-separated axis-selection criteria. Here, 'child::' selects the element axis, 'child::*' means that any element edge should be investigated, and 'child::author' limits the traversal paths to those that lead to author elements. The predicate enclosed in the pair of square brackets establishes selection conditions, limiting acceptable author elements to those that have 'firstName' and 'lastName' attributes set to 'Rich' and 'Rosen,' respectively.
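Since XQuery path expressions are closely related to XPath, the abbreviated attribute-predicate form can be tried out with any XPath processor. Below is a sketch using Python's ElementTree, whose limited XPath support covers chained attribute predicates; the inline document is a small stand-in for sample.xml.

```python
# Evaluate an XPath analogue of the author-selection expressions in
# Figure 11.19 over a stand-in for the sample.xml book catalog.
import xml.etree.ElementTree as ET

doc = """<books>
  <book>
    <title>Web Application Architecture</title>
    <isbn>0471486566</isbn>
    <author firstName="Leon" lastName="Shklar"/>
    <author firstName="Rich" lastName="Rosen"/>
  </book>
</books>"""

root = ET.fromstring(doc)
# Explicit steps through book/author, with two chained attribute
# predicates standing in for the 'and' condition of the XQuery predicate:
matches = root.findall("./book/author[@firstName='Rich'][@lastName='Rosen']")
```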



Figure 11.18

Modified sample.xml file from Figure 7.1
[Markup lost in extraction; the figure shows the Figure 7.1 book catalog with an added isbn element: the book 'Web Application Architecture: Principles, Protocols and Practices' carries ISBN 0471486566, described as 'An in-depth examination of the basic concepts and general principles associated with Web application development.']

document("sample.xml")/child::*/child::*/child::author
    [attribute::firstName = "Rich" and attribute::lastName = "Rosen"]

document("sample.xml")/*/*/author[@firstName = "Rich" and @lastName = "Rosen"]

document("sample.xml")//author[@firstName = "Rich" and @lastName = "Rosen"]

document("sample.xml")/books/book/author[@firstName = "Rich" and @lastName = "Rosen"]

Figure 11.19

Sample XQuery expressions

The second expression is identical to the first, except that it makes use of the abbreviated syntax, in which the element axis is selected by default. The third expression, while producing identical results for 'sample.xml,' has somewhat different semantics: it evaluates all paths along the element axis that originate at the document root and lead to an author element, regardless of the number of hops. Finally, the fourth expression contains very explicit instructions for the evaluation engine to consider only element edges that pass through books and book element nodes. It is easy to see that, as far as performance is concerned, this is the least




Figure 11.20

Daily sales log in XML format
[Figure content lost in extraction; as described in the text, the root element carries a 'date' attribute and each record element carries an 'isbn' attribute identifying the book sold]

expensive option for the evaluation engine, while the third expression is potentially the most expensive. Notice that the syntax and semantics of XQuery expressions are closely related to XPath, the simple, specialized query and traversal language that we discussed briefly in the context of XSLT (Section 7.4.1). This is not a coincidence: the XQuery designers made every effort to be consistent with existing XML specifications. Of course, there has to be more to an XML query language than path expressions. The language has to provide ways to express complex conditions that involve multiple documents and path expressions, and to control the format of the result. For example, suppose we want to analyze book sales logs that are collected daily in XML format (Figure 11.20). The sales log, which is stored in the file sales_log.xml, has a very simple format: the root element contains the 'date' attribute, and every individual record element corresponds to a single sale and contains a single isbn attribute that serves as an ISBN reference. We need a report on books that sell at least one hundred copies per day. The query to generate such a sales report is shown in Figure 11.21. The 'for' clause iterates through book elements in sample.xml. Here, the '$i' variable is always bound to the current book element in the set. The 'let' clause defines the join and produces the binding between the book element and

for $i in document("sample.xml")/*/book
let $s := document("sales_log.xml")/*/record[@isbn = $i/isbn]
where count($s) > 99
return
    { $i/title, $i/isbn, {count($s)} }
sortby (sales) descending

Figure 11.21

Sample query



record elements in the sales_log.xml file. The count($s) > 99 condition in the 'where' clause eliminates all bindings that do not include at least one hundred

sales records. The 'return' clause, which determines the output format, is executed once for every binding that was not eliminated by conditions in the 'where' clause. It produces units of output that are sorted according to the 'sortby' clause. For every execution of the 'return' clause, the '$i' variable is bound to the current book element, which provides the context for path expressions in the 'return' clause ('$i/title' and '$i/isbn'). XQuery is a very complex language, and we have barely scratched the surface in this brief discussion. There are many kinds of expressions that were not covered, as well as the whole issue of XQuery types and their relationship to XML Schema. Still, our objective was to provide a flavor of language expressions and query construction.
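To make the FLWOR semantics concrete, here is a rough procedural equivalent of the Figure 11.21 query in Python; the inline documents are small stand-ins for sample.xml and sales_log.xml, and the element and attribute names follow the figures.

```python
# Join books with their daily sales records, keep those selling at least
# one hundred copies, and sort descending by sales count, mirroring the
# for/let/where/return/sortby clauses of the sample XQuery.
import xml.etree.ElementTree as ET
from collections import Counter

books = ET.fromstring(
    "<books><book><title>Web Application Architecture</title>"
    "<isbn>0471486566</isbn></book>"
    "<book><title>Other</title><isbn>1111111111</isbn></book></books>")
log = ET.fromstring(
    "<sales_log date='2003-05-01'>" +
    "<record isbn='0471486566'/>" * 120 +
    "<record isbn='1111111111'/>" * 5 + "</sales_log>")

# 'let': group sales records by ISBN (the join key).
sales = Counter(r.get("isbn") for r in log.findall("record"))

report = sorted(
    ((b.findtext("title"), b.findtext("isbn"), sales[b.findtext("isbn")])
     for b in books.findall("book")
     if sales[b.findtext("isbn")] > 99),       # the 'where' clause
    key=lambda row: row[2], reverse=True)      # 'sortby (sales) descending'
```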

11.6 THE FUTURE OF WEB APPLICATION FRAMEWORKS

Although Web application frameworks have come a long way during the course of the last few years, there are still outstanding issues with currently available frameworks that aggravate existing problems in Web application development and deployment.

11.6.1 One more time: separation of content from presentation

Foremost among the outstanding issues is the difficulty that arises when content and presentation are mixed. When this mixture occurs, collisions are bound to occur between those responsible for the development and maintenance of a Web application, namely Web page designers and programmers. There is still no framework that enforces module-level separation of responsibility. This means that input from both designers and programmers is required to create and modify Web application modules. Depending on the framework, either designers must provide input for code modules, or programmers must provide input for page view modules. Even MVC-compliant frameworks like Struts do not have all the answers. The 'view' component in these frameworks is supposed to be the responsibility of page designers, but the technologies used (e.g. JSPs) are still too complex for non-programmers to handle on their own. The inefficiency that arises from this situation cannot be overstated. The current state of affairs dictates that designers first come up with a page layout based on creative input, which programmers then modify to embed programming constructs. In all probability, such a module becomes a hodgepodge of design elements and



programming constructs that neither programmer nor designer has control over. Efforts employed by various frameworks to separate these elements within a module have not been entirely successful. In theory, designers could make direct changes to the module. In practice, a seemingly simple change in page layout made by a designer may require a complete reworking of the module. Programmers must either embed the constructs they need in the new layout all over again, or try to fit new and modified design elements into the existing module. Since we have been bringing up the issue of separating content from presentation throughout the book, it is probably about time we proposed some solutions. A solution to this problem would require a two-pronged approach. First, the next generation of Web application frameworks would need to enforce a cleaner separation between programming logic modules (application code), which represent the Controller in the MVC model, and presentation modules (‘pages’), which represent the View. Application code should establish a page context that determines the set of discrete display components that could be selected for presentation on the page. Secondly, the tools used to develop Web applications, especially front-end design tools, need to catch up with the rest of the technology, and support integration and cooperation with these frameworks. As we suggested earlier, pages should be structured as templates that use markup languages such as HTML, WML, VoiceXML, and SMIL. Stylesheet technologies like CSS and XSLT should be used to provide flexible formatting. Page templates should be ‘owned’ and maintained by page designers, who use display components found in the page context. Programmers would be responsible for making these components available to designers through standard uniform interfaces. 
The display components should support a simple limited set of programmatic constructs for dynamic content generation through iteration, conditional logic and external resource inclusion. The simplicity is important because it is designers, not programmers, who own and maintain the templates. Designers should be the ones who decide how to present the results, and they should be able to make these decisions without requiring programmer assistance. Making a clean separation in focus between application code and presentation views helps ensure that this will happen.
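The division of labor proposed above can be illustrated with a deliberately tiny sketch: application code populates a page context, and a designer-owned template merely substitutes discrete display components. The names and the use of Python's string.Template here are illustrative, not a framework prescription.

```python
# Sketch of the proposed separation: the Controller side builds a page
# context of discrete display components; the designer-owned template
# contains no logic, only substitution slots.
from string import Template

def controller():
    # Application code: establish the page context.
    return {"state": "completed", "order_number": "12345"}

# Designer-owned template: substitution only, no business logic.
page = Template("Order $order_number is $state.")
rendered = page.substitute(controller())
```

A designer can rearrange or reword the template freely without touching the controller, which is exactly the property argued for above.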

Keeping Complex Processing Out of the Page

We want to repeat that the page is not the appropriate place for complex business logic, or even for deciding which of several possible views should be presented. If such processing is required, its place is within the application code. When an application module produces a discrete atomic result, it can be included in the page context as an explicit display component that the designer can use in the presentation. For example, a processing component that determines the 'state' of a transaction performed by the application (e.g. 'completed' or 'in progress') should set



the value of a display component in the page context that reports this state. Designers can map different states to messages of their choosing (e.g. 'This transaction is still in progress. Please wait...') and make use of this in the page layout. (Note that the actual message is not in the realm of application code, so modifying it does not require a code change.) On the other hand, when the difference between the possible results of such processing is dramatic enough, the application code should make the choice as to which page should present the results, rather than deferring the decision between coarse-grained presentation alternatives to the page itself. In other words, if the presentation to be employed when the state is 'completed' is radically different from the one desired when the state is 'in progress', it may not be advisable to expose the state to page designers within the page context. Doing so would lead to several complete alternate presentations embedded in one page. Instead, the application code should choose between different templates. Each presentation is then an individual template that is the designer's responsibility. All the aforementioned suggestions are feasible but not enforced within existing frameworks. We hope to see that change in next-generation frameworks.
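Choosing between coarse-grained presentations, by contrast, stays in application code. A minimal sketch (the template file names are hypothetical):

```python
# Sketch: when alternative presentations differ radically, application
# code selects the template; each template remains a single
# designer-owned view. Template names are hypothetical.
TEMPLATES = {
    "completed": "completed_view.html",
    "in progress": "progress_view.html",
}

def select_template(state):
    # A coarse-grained choice like this belongs in application code,
    # not in conditional logic embedded in one sprawling page.
    return TEMPLATES[state]
```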

11.6.2 The right tools for the job

The second facet of the solution is to make the tools used by page designers functional within these new frameworks. Today, page designers rarely code HTML by hand; they use automated page design tools such as Macromedia Dreamweaver and Microsoft FrontPage. For these tools to work properly within the kind of frameworks described above, they would need to support the dynamic constructs that can be embedded within a page. For example, support for iterative constructs in these tools could allow creation of a foreach block whose substitution variables are populated with data dynamically generated by the tool, so that designers can get an idea of what the final result would look like. A more sophisticated approach would be to integrate the tool with the development environment, so that the tool had knowledge of which substitution variable names were available from the page context, what their data types and likely sizes are, and so on. Similar support could be provided for constructs used for conditional logic and external resource inclusion. Figure 11.22 shows a page fragment using JSTL tags for iterative and conditional processing. Designers need to see the results of their work as they progress, but attempting to preview the JSP fragment shown in Figure 11.22 would not work: browsers would ignore the JSP tags and display the substitution variables (e.g. ${transactiondata.order_number}) as is (see Figure 11.24). To enable tighter integration between front-end design tools and back-end application frameworks, the design tools should have a browser preview function that understands the iterative and conditional constructs used in the underlying framework. At



<table>
  <tr>
    <th>OrderNumber</th><th>CustomerID</th><th>TotalAmount</th><th>Completed?</th>
  </tr>
  <c:forEach var="transactiondata" items="${transactions}">
    <tr>
      <td>${transactiondata.order_number}</td>
      <td>${transactiondata.customer_id}</td>
      <td>${transactiondata.total_amount}</td>
      <td><c:if test="${transactiondata.completed}">*</c:if></td>
    </tr>
  </c:forEach>
</table>

Figure 11.22

Example of page fragment using JSTL ‘foreach’ and ‘if’ tags

the very least, they should mock up an appropriate page layout, based on heuristic definitions provided for each of the display components, presenting 'dummy' data of appropriate type and length so that the browser preview is meaningful to the designer. Figure 11.23 is a mockup of a dialog box that could be used in a page design tool to specify how results from the 'foreach' tag should be presented when the designer previews the page. Sample browser previews of results (with and without framework integration) are shown in Figures 11.24 and 11.25. In a more integrated environment, the design tool might have access to application configuration information so that it can derive these heuristic definitions directly. Advances in Web application frameworks indicate that we are moving in the right direction. Smarter frameworks already provide intelligence in their iteration constructs (e.g. the 'foreach' directives/tags in Velocity and JSTL) to make them class-agnostic. In other words, they do not care what kind of object is returned from a data request, as long as it is an object that they can somehow iterate over (e.g. array, Enumeration, Iterator, Collection, Vector). This may enable more seamless integration between front-end design tools and the application framework. It would be nice to see these class-agnostic constructs extended to support tabular objects as well (e.g. RowSets, collections of JavaBeans, Lists of Map objects).
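A class-agnostic iteration construct requires nothing of its input beyond iterability. A sketch of the idea (the foreach helper is our own illustration, not the Velocity or JSTL implementation):

```python
# Sketch of a class-agnostic 'foreach': the construct only requires that
# the supplied object be iterable, mirroring how Velocity and JSTL
# iterate over arrays, Collections, Iterators, and so on.
def foreach(items, render):
    return "".join(render(item) for item in items)

row = lambda t: f"<tr><td>{t}</td></tr>"
# The same construct accepts a list, a tuple, or a generator:
out_list = foreach(["a", "b"], row)
out_gen = foreach((x for x in ["a", "b"]), row)
```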

11.6.3 Simplicity

Finally, the Web application development world is faced with a more global problem. As developers and designers, perhaps foremost among our cardinal sins is our failure



Figure 11.23

Dialog box for specifying content to be displayed by an iterative page construct

Figure 11.24

Browser preview of the page fragment without framework integration



Figure 11.25

Browser preview of the page fragment with framework integration (mockup)

to keep the simple things simple. We want to ensure maximum flexibility, especially when developing APIs and tools that others will use, but we often do so at the cost of providing simple ways of doing simple things. This can make the tools difficult for anyone to use, except perhaps the developers themselves! This has become a major concern in the Web application development community. There have even been online debates among developers about 'civil disobedience' against overly complicated specifications that make it difficult to do simple things in a simple way. Part of the problem in the Web application framework space is the difference in perspective between developers and the intended audience of the presentation-level tools, the page designers. Developers see it as 'no big deal' to 'simply' write a program or script, insert a code snippet, or compile and build a tool. This is why programmatic approaches appeal more to them than to page designers. But the price of using programmatic approaches is that programmers must 'get involved' in the maintenance of the view component. This means that programmers, not designers, must implement even the most trivial design changes. This is an inefficient use of staff resources. Moreover, programmers do not like recurring requests to 'tweak' page layouts. Layout changes requested by creative and business staff may be subtle and significant (to them), but to programmers those changes seem trivial, especially when they are called upon to make them repeatedly. When the nature of the application development approach requires that programmers be the ones who make such changes, the flexibility and viability of the application suffer, since seemingly trivial tasks wind up being inordinately complex. Page designers (or even site administrators) should be able to make these changes themselves, without programmer intervention.



Using programmers to perform trivial tasks is not just a misapplication of staff resources; it is also an inordinate waste of time for the entire organization. When programmers make changes to code, the code must go through an entire 'build' cycle, in which it is recompiled, unit tested, integration tested, packaged for deployment, deployed to a 'test' environment for functional and regression testing, and finally deployed to the production environment. Changes made at the presentation level (or through an administrative interface) can bypass the recompilation, packaging, and unit testing phases. Designers can preview them locally, approve them, and deploy to production.

One More Reason

This brings us back to another reason why we do not want programmers creating markup as part of the application code. Imagine an application that invokes the Weather Service we described earlier in the chapter. Suppose this application provides all the information obtained from the Weather Service in a page fragment that includes formatting, as shown in Figure 11.26. If creative or business people decided that this formatting should be changed (e.g. the temperature should stand out by itself in large red text), this would require a change to the code rather than to the page template. By inserting the individual data items associated with the Weather Service into the page context as discrete display components, we bring the layout completely under the control of the page designers, allowing them to change it themselves (as shown in Figures 11.27 and 11.28).
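The Weather Service example can be reduced to a sketch that makes the point concrete: because the layout lives in designer-owned templates, swapping the presentation requires no code change. The context keys mirror Figures 11.27 and 11.28; the use of Python's string.Template is illustrative.

```python
# Sketch: the Weather Service inserts discrete values into the page
# context; two different designer templates render the same context
# without touching application code.
from string import Template

context = {"temp": "28", "tempscale": "F", "conditions": "Sunny",
           "winddirection": "SE", "windvelocity": "5",
           "windvelocityunits": "MPH"}

original = Template("Temp: $temp° $tempscale Conditions: $conditions")
redesigned = Template("$conditions $temp° $tempscale")

a = original.substitute(context)
b = redesigned.substitute(context)
```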

Designers want ready-made, user-friendly tools that do not require them to become programmers. Perhaps the root of this problem lies in the question: “Why are people who don’t care about simplicity, ease-of-use, and user-friendliness building interfaces and tools for people who do?”

Temp: 28° F
Conditions: Sunny
Wind: SE 5 MPH

Figure 11.26

Directly embedded page fragment generated by the Weather Service



Temp: ${weather.temp}° ${weather.tempscale}
Conditions: ${weather.conditions}
Wind: ${weather.winddirection} ${weather.windvelocity} ${weather.windvelocityunits}

Figure 11.27

Same as Figure 11.26 but using discrete display components in the page context

${weather.conditions} ${weather.temp}° ${weather.tempscale}
${weather.winddirection} ${weather.windvelocity}

Sunny 28° F SE 5 MPH

Figure 11.28

An alternate specification of the same page fragment using discrete display components



Many people in the open source community have a disdain for Microsoft and its products. But those products have been successful in the marketplace, at least in part, because they were designed to keep simple things simple (e.g. through user-friendly interfaces called 'wizards'). The argument that these products can only do the 'simple things' has some merit. Fortunately, this is not an either-or situation. Tools, APIs, and application development frameworks can be designed to make simple things simple for those who design, develop, deploy, and administer Web applications, while still providing flexibility for those who need more complex functionality. It is not unlike the dichotomy between those who prefer command-line interfaces and those who prefer GUIs. There is no reason both cannot exist side by side, in the same environment, with individuals free to choose which mode they want to work in. It is easier to keep simple things simple in a Web application when the framework is designed to support such simplicity. Even in the absence of such a framework, it is still the responsibility of Web application architects to employ good design practices, so that tasks that ought to be easy to perform actually are. They should rigorously analyze and document application requirements up front, including use case analysis to determine the tasks likely to be performed. The design should facilitate adding support for as many of those tasks as possible without requiring that the entire application be rebuilt. Proper application of these practices should ensure that the application is flexible, extensible, and viable. Such goals can be accomplished within existing frameworks (e.g. Struts), but only by following solid application design and development practices. Existing frameworks do not enforce good design practices; the best of them simply provide a platform that enables good design.
Hopefully, the next generation frameworks will make it a trivial task to follow these practices, so that Web applications can be more flexible and be developed more quickly.

11.7 SUMMARY

Current trends in the world of Web application development are extremely promising. Recent XML specifications, including XSL, XSL-FO, and XQuery, further the objective of making XML a mainstream technology. Support for Web Services is now an integral part of many commercial products. Due to its complexity, RDF has been relatively slow to gain momentum. However, recent developments show the growing acceptance of this technology. The first RDF applications are already taking hold (e.g. CC/PP), and more are under development. It is too early to say whether RDF will be the main power behind the Semantic Web, but it deserves to be watched very closely. In the J2EE world, JSTL tags make both XML and SQL processing simpler, and represent a huge step towards making JSPs accessible to page designers. JSP



2.0 raises the bar even higher, incorporating the Expression Language directly into the JSP syntax and opening the door for declarative definition of custom tags. Alternative approaches to page presentation exist, both open-source (like Velocity) and proprietary (like Macromedia Cold Fusion and Microsoft ASP.NET). The more flexible approaches strive to fit into the widely accepted MVC paradigm, serving as a possible View component architecture for frameworks like Struts. The next generation of Web application development frameworks is likely to employ the technologies described in this chapter, and should solve many of the pressing problems that currently face developers and designers of Web applications. We hope this book has prepared you, not only to understand the current generation of Web technology, but also to play a part in the development of the next.

11.8 QUESTIONS AND EXERCISES

1. What is SOAP? If SOAP is a protocol, what does it mean that SOAP is an XML application? What is the relationship between SOAP and HTTP? Is it possible to use SOAP with SMTP? Explain.
2. What is a Web Service? What specification is used to define Web service semantics?
3. What is the role of WSDL and UDDI? Why do we need both specifications? How do WSDL, UDDI, and SOAP together support Web Services?
4. What is RDF? What is the purpose of introducing the RDF specification?
5. What is the relationship between RDF and XML? Since an RDF model can be represented in XML, is it not enough to use XML Schema to impose constraints on the model? Why do we need an RDF Schema?
6. What is the relationship between RDF and Dublin Core?
7. What is the purpose of CC/PP? What is the relationship between CC/PP and RDF?
8. Does your cell phone support Web browsing? Can you find or define a CC/PP-compliant description for your cell phone?
9. Let us go back to the CarML markup language and the XML documents that resulted from your exercises in Chapter 7. Define an XQuery-compliant query to retrieve all red cars that have two doors and whose model year is no older than 2000.
10. Suppose that you have access not only to documents describing cars, but to the owner records as well (you can make assumptions about the structure of these records). Can you define a query to retrieve all red cars that have a 6-cylinder engine and that are owned by a person who is less than 25 years old?
11. What future advances do you consider the most important? Explain.



Accept header (HTTP) 117, 137 Actions in Struts 207–08, 264–66, 275–88, 301–03 Accessibility, content 155–6 Address resolution, server 66, 69, 93–4 aliasing 93–4 mapping 66, 69–70 Aggregation, content 204, 231–33 Apache Foundation 263, 264 Jakarta project 87, 252, 265, 275 Web server 46, 48, 53, 54, 92–6, 101, 250 Approaches, web application development 245–70 hybrid—see Hybrid approaches programmatic—see Programmatic approaches template—see Template approaches see also MVC/Model-View Controller Architecture application 201–43 browser 103–40 sample 271–312 server 65–102 ARPANET 13 As-is pages 66, 69, 71, 95–6 ASP/Active Server Pages 7, 8, 253, 254, 255–56, 261, 268, 270 Attributes in CSS 160, 165, 167–8 in HTML tags 155, 159–60, 161–62, 165 in SGML 147, 149–50 in XML 174–6, 178–81, 186, 188–9, 191–4, 198–9

Authorization 45, 47, 53–6, 66–7, 104, 106, 107, 109–10, 113, 118 challenge and response 45, 54 HTTP header 47, 54–5 Authentication 55, 66–7, 69, 104, 106, 107, 109–10 automatic 212–6 basic 47, 53–5, 118 forms-based 55–6, 210 secure 56, 98, 209–11 Best practices 222–3, 231, 235, 237, 241–2 content access 216–31 customization and personalization 232–5 data sources 222–3 database processing 237–42 logging 235–7 user classification 215–6 Browser, HTTP—see Web browser Cache-Control header (HTTP) 51–3, 111, 122, 127–8 Caching 33, 41–2, 46, 48, 51–3, 61–2 design 61–2, 90–1, 125–8 database queries 263, 298, 306–8 and HTTP 42 and Cache-Control header 52–3, 111, 122, 127–8 and Pragma header 53, 111, 122, 127 by Web browsers 125–28 by Web servers 125 Cascading Style Sheets—see CSS



CC/PP 313, 328–31 and mobile devices 328–9 and RDF 328 CGI/Common Gateway Interface 5, 7, 38, 65–6, 69, 71–2, 246–7 advantages 73–4, 78, 267 deficiencies 72, 247, 267 FastCGI 81–2, 247 Perl 7, 73–4, 76, 77 Chunked transfers 89–90, 121 Client-server paradigm 14–15 fat clients vs. thin clients 15, 202–3 proprietary protocols 202 Co-branding, content 205, 232–3, 234, 243 Cold Fusion 7, 8, 82–3, 206, 207, 249, 250–2 compared to JSTL 252 Command line interfaces 14–16, 18–19, 20–1 vs. GUIs 14–15 Common Gateway Interface—see CGI Connection header (HTTP) 39, 50, 46, 50, 62–3, 68, 88 Content-Encoding header (HTTP) 49, 121, 138 Content-Length header (HTTP) 70–2, 75, 78, 101 Content-Transfer-Encoding header (HTTP) 89, 101, 111, 118–19, 121, 138, 153 Content-Type header in e-mail 19–20, 35, 48, 49, 136 in HTTP 36, 40, 48–51, 70–74, 75, 77, 79, 89, 94–95, 101, 104, 111, 117, 118, 121, 135, 136, 156, 162, 169, 206 and MIME 38, 48, 49, 50, 62, 71, 94–95, 111, 207, 247 values application/x-www-form-urlencoded 40, 74–5, 118–19, 157 image/* 122, 126–7 multipart/* 50–1, 62, 64, 89, 118–19, 156–7 text/html 49, 70–71, 74, 77, 86, 94–95, 136, 162, 247

    text/plain 49–50, 70–71, 94–95, 136, 157
Cookies 34, 56–9, 104, 106, 113, 125, 128–9, 131–2, 137–8
  for authentication 210–14
  and Cookie header 58–9, 107, 109, 117, 129
  domain 57, 129
  lifetime 57
  path 57, 129
  persistent 212–214, 306–7
  as session identifiers 56, 93, 210
  and Set-Cookie header 39, 40, 56–8, 110, 113, 122
  and URL rewriting 212, 291
CSS/Cascading Style Sheets 154, 158–61
  and HTML 158–61
  and layering 167–8
  and mouseovers 165
  and XSL 189–90, 198
CSV/Comma Separated Values data format 217–18
DAO/Data Access Objects 238–42, 263–4
DataAccessService and DomainService classes 296–8
Databases, relational 214, 216–17, 219–23, 225–6, 229, 235, 237–43, 275–6, 278
  design (database schema) 216, 217, 221, 241
  and DataSources 222, 223, 241–2
  and JDBC 268
  MySQL 238, 275
  queries 216, 225, 226, 235, 237, 241, 250–1, 253, 255, 297
  ResultSets (and RowSets) 226, 235, 237, 261, 263, 298
  and SQL 237, 241, 301
  transactions 205, 220, 239–41
Date header (HTTP) 46, 117, 137
Design patterns 231, 263–4
  DAO/Data Access Object 238–42, 263–4
  Dispatcher View 263
  Factory 298



Front Controller 207, 263, 264 Intercepting Filter 263 Master-Detail 225, 231, 237, 243 Many-One-None 228–9, 231 MVC/Model-View-Controller 252, 260–1, 264, 269 Page by Page Iterator 227, 231 Service-To-Worker 263, 264 Singleton 297 Value List Handler 227, 231, 243, 263, 307–8 DHTML/Dynamic HTML 164–8 and CSS 167–8 for form validation 165–7 and JavaScript 164–8 layering 167–8 mouseover 164–5 Dispatcher View (design pattern) 263 DTDs/Document Type Definitions in SGML 143, 146–50, 152 in XML 171–2, 174, 175–6 vs. XML Schema 177–9 Dublin Core 322–3 metadata 322 and RDF 323 Dynamic content 65–7, 69–70, 71–87, 219–35 aggregation 231–3 from database queries 216–7, 220–1, 222–3 personalization 233–4 presenting results 224–9 syndication 231–3 Dynamic HTML—see DHTML

Elements 147–9, 172–82
  and attributes 149–50
  definitions 147–9
  HTML 151–7, 182–3
  XML 172–82
Encoding
  model used in HTTP 49
  see also Content-Encoding header, Content-Transfer-Encoding header

Electronic mail (E-mail) 16–24, 225, 228, 236–7, 274, 288, 294–5, 297, 304–5, 310–11
  agents 17
  attachments 20–1, 23–4, 50, 136
  and authentication 20, 24
  IMAP 22–4
  mailing list 16, 17, 20, 24
  and MIME 19–20, 48–50, 136
  message format 19, 21, 22, 33, 35
  POP3 20–22, 23, 228, 250, 252
  SMTP 17–20, 315, 344

HEAD element (HTML) 152–5
HEAD method (HTTP) 37, 41–2
Headers, HTTP—see HTTP headers
Host header (HTTP) 36, 38, 47, 61, 88, 101, 115, 116, 137
Hosting, virtual 38, 47, 60, 66, 88, 93, 95, 101
HTML 3, 7, 10, 30, 37, 45, 48, 68, 71, 82–3, 141ff, 150–161
  body 155–7
  forms 39, 74, 76
  dynamic—see DHTML

Factory (design pattern) 298
FastCGI 81–2, 247
Firewalls 98–9
Forms, HTML 39–41, 74–6, 154–6, 162–3, 210, 212
  and HTTP methods 98, 114–5, 116, 118, 155, 169
  and Struts 264–5, 277–8, 281, 284, 288, 293, 300, 303
  validation 161, 165–7
Front Controller (design pattern) 207, 263, 264, 277–8
FTP protocol 2, 7, 26–7
  anonymous 26–7, 97
  archive 4, 29
  server 26, 97, 112, 250
GET method (HTTP) 35–9
  vs. POST 41
Gopher 2, 4, 27, 33, 34
GUI/Graphical User Interfaces 14–15, 21, 27, 30, 107, 114
  vs. command line interfaces 16, 18–19



HTML (continued)
  evolution 151–2
  head 152–5
  and HTTP
  tags 71, 74, 76, 78, 82, 109, 134, 135
  and SGML 142–50
  and XHTML 182–3
HTTP headers 35–7, 40ff, 46–63, 69ff, 74, 76–8, 80, 88ff
  Accept, Accept-Charset, etc. 117, 137
  Authorization 47, 54–5
  Cache-Control 51–3, 111, 122, 127–8
  Cookie 58–9, 107, 109, 117, 129
  Connection 39, 46, 50, 62–3, 68, 88
  Content-Encoding 49, 121, 138
  Content-Length 70–2, 75, 78, 101
  Content-Transfer-Encoding 89–90, 121, 138
  Content-Type 36, 40, 48–51, 70–74, 75, 77, 79, 89, 94–95, 101, 104, 111, 117, 118, 121, 135, 136, 156, 162, 169, 206
  Date 46, 117, 137
  Host 36, 38, 47, 61, 88, 101, 115, 116, 137
  If-Modified-Since 61, 91, 102, 113, 126, 128, 137–8
  If-Unmodified-Since 61, 91, 102
  Last-Modified 48, 90
  Pragma 53, 111, 122, 127
  Referer 47, 93, 117, 271, 305
  Set-Cookie 39, 40, 56–8, 110, 113, 122
  User-Agent 47, 73, 116, 137
  WWW-Authenticate 47, 53–4, 130, 137–8
HTTP methods
  CONNECT 37, 95, 102
  DELETE 37, 95, 102
  GET 35–9, 75–6, 95, 102, 225
  HEAD 37, 41–2, 95, 102
  OPTIONS 37, 95, 102
  POST 40–1, 64, 75–6, 95, 100, 102, 225
  PUT 37, 95–6, 102, 116–8
  TRACE 37, 95, 102

HTTP protocol, versions
  differences between (0.9, 1.0, 1.1) 52–3, 59ff, 88, 90, 91, 95, 101–2
HTTP requests
  body 33, 35–7, 38, 40–2
  generation 69ff, 105, 107–9, 113–119, 126ff
  format 35
  queue 68
  processing 66–8
  routing 116
  transmission 119, 126
HTTP responses
  body 33, 35–7, 45
  generation 66, 69ff
  format 36
  and Content-Type header 48–51, 94–5, 104, 111, 121–2, 135–6, 138
  queue 68
  processing 120–5, 126ff
  status codes 36–7, 42–5, 70, 71, 77–8, 91, 104–5, 120–1
    1xx 88, 121
    2xx (e.g., 200) 43–4, 121–3
    3xx (e.g. 301, 302) 44–5, 113, 124
    4xx (e.g. 400, 401, 404) 45–6, 88, 110, 123
    5xx 46, 121
Hybrid approaches 254–9
  ASP/Active Server Pages 7, 8, 253, 254, 255–56, 261, 268, 270
  JSP/Java Server Pages 85–7, 207–8, 243, 256–9, 275–6, 278–295
  disadvantages 254, 268–9
Hypertext 2–3, 7, 27
  and HTML 29–30
ICMP protocol 13, 27
  and Ping 13
If-Modified-Since header (HTTP) 61, 91, 102, 113, 126, 128, 137–8
If-Unmodified-Since header (HTTP) 61, 91, 102
IMAP protocol 22–4
  and POP3 23
Instant Messaging 25
  and Talk protocol 25



Intercepting Filter (design pattern) 263
Internet Explorer 162
  and browser incompatibilities 164
ISAPI 81
Jakarta projects 9, 87, 252, 265, 275
  JSTL 9, 263, 275, 278, 290ff, 309ff
  Struts 9, 87, 207, 243, 264–6, 267, 269–70, 275ff, 300, 312
  Taglibs 263, 266, 269
  Tomcat 266, 275, 282, 312
  Velocity 249, 252–3, 264, 269, 275–6, 298
Java, language 6, 65, 69, 84–8, 208, 246, 262–3, 268, 275–6, 278ff
  applets 37, 74, 104, 113
  and JDBC 268
  and JSP 30, 65, 69, 85–7, 207–8, 243, 256–9, 275–6, 278–295
  and JSTL 9, 263, 275, 278
  and J2EE 8, 207, 262
  and servlets 30, 32, 38, 41, 65, 84–5, 247
JavaBeans 87, 235, 281, 338
  as Model in MVC 87, 264, 276, 278, 295–7
  and useBean JSP tag 257ff
JavaScript 5, 7, 106, 111, 122, 128, 138, 154, 155, 161–4, 183, 299
  and form validation 165–7
  and layering 167–8
  and mouseovers 164–5
  Rhino 162–3
  server-side 255
JDBC protocol
  and databases 268
  DataSources 222, 223, 241–2
  and ODBC 250, 255
  ResultSets vs. RowSets 226, 235, 237, 261, 263, 296, 298, 307
JSP/Java Server Pages 85–7, 207–8, 243, 256–9, 275–6, 278–295
  with embedded Java code 86, 257–9, 261, 278
  Model 2 87, 262–4
  tag libraries (taglibs) 87, 257, 263, 266, 268–9
JSTL/Java Standard Tag Library 9, 263

  and code reduction 275, 278
  and Cold Fusion 252
  core tags 278, 290, 291
  XML tags 309–10, 312
Languages
  markup—see Markup languages
  query—see Query languages
  programming—see Programming languages
Last-Modified header (HTTP) 48, 90
LDAP 204, 214, 225, 263
Logging 205, 235–7, 263
Many-One-None (design pattern) 228–9, 231
Markup languages
  HTML 3, 7, 10, 30, 37, 45, 48, 68, 71, 82–3, 141ff, 150–161
  SMIL 141, 195, 198, 235, 263, 266, 308, 336
  VoiceXML 317, 336
  WML 183–6
  WSDL 198, 317–19, 322, 331–2, 344
  XHTML 182–3
  XML 171–200
Master-Detail (design pattern) 225, 231, 237, 243
Metadata 21, 41, 46, 48, 61, 127, 153, 220, 313, 322, 332
Message forums 2, 11, 16, 24–6, 224, 243
  Netnews 2, 24–5, 33
META element (HTML) 147, 153–4, 160–1, 290
MIME 19–20, 21, 35, 48–50, 62, 64, 70–71, 88, 92, 94–95, 101, 111, 117, 118, 121, 135–6, 138, 139, 157, 159, 207, 247, 329
  and Content-Type header 48–50, 70–71, 74, 79, 89, 94–95, 101, 111, 117, 118, 121, 135–6, 138, 157, 206
Model 9, 49, 84, 87, 203, 223, 252–3, 260–2, 262–4, 266, 269, 276, 278, 281, 292, 295–7, 298, 308, 312, 323–8, 336, 343
  data 203, 223, 253, 260, 266



Model (continued)
  and JavaBeans 276, 278, 281, 292, 295–7
  in MVC/Model-View-Controller 9, 84, 87, 252–3, 260–2, 262–4, 266, 269, 276, 278, 281, 292, 295–7, 298, 308, 312, 336
  in RDF 322–8, 343
Model-View-Controller—see MVC
MRA/Mail Retrieval Agent 17
MTA/Mail Transfer Agent 17
MUA/Mail User Agent 17, 28
MVC/Model-View-Controller 9, 84, 87, 252–3, 260–2, 262–4, 266, 267, 269, 276, 278, 281, 292, 295–7, 298, 308, 312, 335–6, 344
  and content 83–4, 87, 253, 259–63, 269, 335–6
  and JSP Model 2 87, 262–4
  and Struts 9, 87, 264–6, 267, 269, 270, 275–95, 300–3, 335, 344
  and presentation 83–4, 87, 259–63, 266, 269, 276, 308–9, 335–6
MySQL database management system 238, 275
  limitations 275–6
  and referential integrity 275
Netnews 2, 24–5, 33
  and NNTP 25
  and Usenet 24
Netscape 255
  and browser incompatibilities 155
  Messenger (e-mail client) 20, 25
  Navigator (web browser) 57, 103, 132, 134, 136, 154–5, 162–3, 167
  web server 81, 255
  web site 50
NSAPI 81
OSI 14
  vs. TCP/IP


Page-By-Page Iterator (design pattern) 227, 231, 307
Perl 7, 73–8, 81, 95, 246–7, 253, 254
  and CGI 7, 73–8, 81, 246–7
  and SSI 78–80

Personalization, content 87, 205, 232, 233–5, 236, 260, 263, 292, 310
PHP 7, 9, 65, 69, 82, 206–8, 254–5, 256, 261, 267
Ping 13
  and ICMP 13, 27
POP3 protocol 20–3, 27, 28, 34, 228, 250, 252
  and IMAP 23
Ports, TCP/IP 15–20
  for HTTP 31, 84, 92–3, 99, 101
POST, HTTP method 35, 37, 40–1, 56, 64, 75, 76, 79, 85, 89, 95, 100, 101, 102, 115, 116–7, 118, 131, 139, 155, 162, 169, 225, 315–6
  vs. GET 40–1, 75, 76, 85, 155, 225
Pragma header (HTTP) 51–3, 111, 122, 127
  and caching 51–3, 111
Presentation 7, 49, 83–4, 87, 111, 123, 128, 134–5, 159, 161, 163, 164, 169, 183, 189, 191, 195, 198, 205, 217, 229–31, 231–3, 235, 245, 249, 250, 254, 259–62, 276, 284, 288, 299, 303, 308–9, 335–7, 340–1, 344
  with JSP Model 2 87, 262
  with MVC 84, 87, 260, 262, 276, 335–6, 344
  paged results 189, 231, 307
  separation from content 83–4, 159, 163, 169, 217, 231–3, 235, 245, 249, 254, 259–62, 309, 335–7
  with Struts 87, 264, 276, 335, 343–4
Profile
  and CC/PP 313, 328–31
  hardware 328–9, 331
  software 329–30
  user 224, 230, 273, 278, 286, 288, 290–5, 300–1, 304–5
Programmatic approaches 37, 65ff, 246ff, 340
  CGI 7, 9, 37–8, 65, 71–8, 79–82, 84, 85, 87, 91, 96, 97, 100, 101, 204, 205, 207, 212, 246–7, 249–50, 261, 267–8
  PHP 7, 9, 65, 69, 82, 206–7, 254–6, 261, 267



  Servlet API 7, 9, 38, 65, 67, 69, 73, 84–6, 87, 91–5, 100, 101, 140, 157, 204, 206–7, 211, 212, 232, 247, 251, 257, 262–4, 267–8, 275–8, 282, 301
Programming languages
  Java 6, 37, 65, 69, 84–7, 104, 113, 162, 164, 177, 206, 208, 246–7, 255–8, 261–2, 265–6, 268, 275–6, 278, 297
  JavaScript 5, 7, 106, 111, 122, 128, 155, 161–8, 169
  Perl 7, 9, 73–8, 81, 95, 246–7, 253–4
  PHP 7, 9, 65, 69, 82, 206–7, 254–5, 256, 261, 267
Protocols 29–34ff, 65ff, 203, 314–15, 328, 344
  FTP 2, 4, 7, 26–7, 28, 29, 31, 32, 33, 34, 97, 112, 250
  HTTP 7, 8, 18, 29–64, 65–9, 70–8, 80, 82, 84–5, 88ff, 106, 112, 114, 116, 121, 128, 135, 139, 151, 153–5, 156–7, 160–2, 165, 168, 169, 171, 175, 197, 206, 210, 214, 235, 243, 281–2, 307, 315, 317
  ICMP 13, 27
  IMAP 7, 17, 22–4
  JDBC 268
  MIME 19–21, 37, 48–50, 62, 64, 70–1, 88, 92, 94–5, 101, 111, 117, 118, 121, 135ff, 157, 159, 207, 247, 329
  NNTP 25
  ODBC 250, 255, 268
  proprietary 12, 55, 62–3, 88, 103
  POP3 17, 20–3, 27, 28, 34
  SMTP 7, 15, 17–19, 27, 34
  SOAP 119, 141, 171, 198, 235, 314–7, 319–22, 331–2, 344
  stateless vs. stateful 18, 33, 34, 56, 62, 67, 106, 128, 206
  TCP/IP 8, 11–29, 32, 203, 224
  Telnet 7, 11, 15, 16, 28, 96
  UDP 13, 28
  WAP 235, 328
Proxies, HTTP—see Web proxies

Query languages 82–3, 189, 198, 225, 226, 237, 241, 250, 261, 266, 269, 309, 313, 332–5, 343, 344
  SQL 237, 241, 250, 261
  XQuery 9, 198, 313, 332–5, 343, 344
  XPath 189, 266, 269, 309, 334
Query string 31–2, 39–40, 72, 75–6, 85, 101, 115, 131, 135, 205, 225, 226, 251, 273, 281, 290, 299, 314
RDF 9, 313, 322–8, 328–31, 332, 343, 344
  applications 313, 328–31
  and Dublin Core 322–6
  model 322–8, 343
  schema 326–8
Relational databases—see Databases, relational
Referer header (HTTP) 47, 93, 117, 233, 243, 271, 305
Requests, HTTP—see HTTP requests
Responses, HTTP—see HTTP responses
ResultSet 83, 298, 307
  vs. RowSet 83, 298, 307
RFC/Request for Comments 12, 16, 17, 20, 23, 26, 27, 46, 49
  as Internet standards 12
RowSet 83, 298, 307, 338
  vs. ResultSet 83, 298, 307
Sample application 9, 208–9, 210–11, 213–14, 234–5, 271–311
  design decisions 297–301
  enhancements 9, 301–11, 312–13
  requirements 273–4, 282, 301, 304
Schema
  database 7, 216–17, 219–21, 241, 275–6, 311
  RDF 326–8, 329, 344
  XML 8, 171–2, 174, 177–8, 186, 188, 195, 197, 198–9, 317, 335, 344
Security 5, 24, 26, 53–6, 65, 80, 81, 87, 96–100, 103, 106, 118, 123, 129, 130, 209–10, 215–16, 219, 233, 238, 241–2, 256, 267, 282, 295



Security (continued)
  and authentication 53–6, 66–7, 69, 98, 103, 106, 118, 123, 129–31, 132, 137, 206, 209–12, 214–15, 216, 222–3, 243, 245, 263, 281–2, 301, 304
  and encryption 55, 98, 210
  FTP 26, 97
  HTTPS and SSL 98, 112, 129, 209, 216
  and IMAP 24
  through obscurity 215–6, 219, 295
Server, HTTP—see Web server
Server Side Includes—see SSI
Service classes 295–8, 305–7, 311–12
  and data access 296–8, 307, 311
  and Singleton design pattern 297
Servlets 7, 9, 30ff, 65–73, 82–7, 91–5, 100, 101, 139–40, 157, 204, 206–8, 211–12, 232, 247, 251, 253, 257, 262–8, 275–8, 282, 301
  API 84–5, 211, 232, 247, 251, 257, 262–4, 267–8, 275, 282
  configuration 264–5, 276–8, 301
  and JSPs 30, 85–7, 262–4, 266, 268, 275–6, 278, 281, 299–301
  and MVC 9, 84, 262–6
Sessions, HTTP 32, 34, 56–9, 67, 93, 104, 106, 113, 117, 122, 128–9, 131, 132, 205–6, 210–12, 216, 245, 247, 251, 257, 262, 272, 281, 285, 289, 291, 296–7, 307, 309
  and cookies 56–9, 93, 104, 113, 117, 122, 128–9, 132, 210, 211, 212–214, 291
  and beans 257, 289, 294, 296, 297
  and servlets 32, 67, 93, 211, 247, 262, 285, 289, 291, 296–7, 309
  and URL rewriting 212, 291
Set-Cookie header (HTTP) 39, 40, 56–9, 110, 113, 122, 129, 132, 137–8, 211
SGML 8, 141–150, 151–2, 168, 169
  attributes 147, 149–50
  applications 141–3, 145, 150, 168, 171–2, 174, 186
  concrete syntax 145–6, 147, 171–2
  elements 147–9, 150, 175

  entities 143, 147, 150, 175
  DTD 143, 146, 147–8, 150, 152, 168, 171–2, 174, 175
  and HTML 8, 141–150, 151–2, 168, 169, 171–2, 174, 186
  as precursor to XML 8, 141–3, 147, 150, 168, 171, 174, 175
SMIL/Synchronized Multimedia Integration Language 141, 195, 198, 235, 263, 266, 308, 336
SMTP protocol 7, 15, 17–19, 27, 34, 315, 344
SOAP protocol 119, 141, 171, 198, 235, 314–7, 319–22, 331–2, 344
  client 315, 319
  envelope 315
  message 198, 317
  and UDDI 317, 319–22
  and Web Services 235, 314–15, 317, 319–22
  and WSDL 317–19, 322, 331–2, 344
SQL/Structured Query Language 237, 241, 250, 261, 297, 301, 332, 343
SSI/Server Side Includes 69, 71–2, 78–81, 82, 83, 85, 87, 95, 97, 100, 101, 219, 249–50, 267
Status codes, HTTP—see HTTP responses
Struts 9, 87, 207, 264–6, 267, 269, 270, 275–95, 300–3, 335, 343–4
  Actions 207, 264–6, 276–8, 281, 282–8, 301, 303, 310
  ActionForms 264, 266, 276–8, 288–9, 300–1
  architecture 264–6, 276–8, 278–82
  configuration 87, 265, 276–82, 292–5, 298–303, 309
  controller 264, 276–8, 282–8, 309
  and JSP Model 2 87, 264
  and MVC 9, 264, 275–6, 308–9, 335, 344
  and taglibs 9, 87, 266, 275, 289
STYLE element (HTML) 154, 158, 159–60, 165, 169, 183
Stylesheets 106, 111, 151, 154, 158–9, 160–1, 164–5, 167, 169, 183, 186–9, 189–95, 205, 266, 269, 290, 299, 308–9, 312, 313, 328, 330, 336




  CSS 138, 154, 158–9, 160–1, 164–5, 167, 169, 189–90, 195, 198, 299, 336
  XSL/XSLT 8, 111, 122, 171–2, 186–95, 198–9, 205, 266, 269, 308–9, 312, 313, 328, 330, 336
  XSLFO 8, 189–95, 198, 313
Tags
  Cold Fusion 82–3, 250–2
  HTML 71, 78, 80, 82–3, 89, 90, 109, 116, 134–5, 145–8, 150–3, 160, 162–7, 175, 182–3, 287
  JSP custom tags 85, 251, 257, 259, 263, 268, 275, 278, 289, 337
  JSTL tag library 252, 263, 275, 278, 290–1, 309–10, 337–8, 343–4
  XML 85, 172–5, 182–3, 198, 257, 259, 309
TCP/IP protocols 8, 11–29, 32, 224
  and applications 11, 13, 15–17
  FTP 2, 4, 7, 26–7, 28, 29, 31, 32, 33, 34, 97, 112, 250
  HTTP 7, 8, 18, 29–64, 65–9, 70–1, 72–8, 80, 82, 84–5, 88–9, 90–1, 92, 93, 94, 96–8, 99–100, 101, 106, 112, 114, 116, 121, 128, 135, 139, 151, 153–5, 156–7, 160–2, 165, 168, 169, 171, 175, 197, 206, 210, 214, 235, 243, 281–2, 307, 315, 317
  ICMP 13, 27
  IMAP 7, 17, 22–4
  layers 11, 13–14, 17, 32, 98
  vs. OSI 14
  POP3 17, 20–3, 27, 28, 34
  ports 15, 16, 17, 18, 20, 31, 84, 92–3, 99, 101
  proprietary 12, 55, 62–3, 88, 103
  SMTP 7, 15, 17–19, 27, 34
  sockets 15, 18, 98
  Telnet 7, 11, 15, 16, 28, 96
  UDP 13, 28
Template approaches 8, 65, 69, 78, 82–3, 85, 206, 218–9, 226, 247–59, 264, 267–9, 275–6, 288ff, 298, 300, 308, 336–7, 341

  advantages 80, 83, 100, 267–9
  ASP 8, 65, 85, 206, 253, 255–6, 261, 268
  Cold Fusion 8, 65, 82–3, 206, 249, 250–2, 256, 261
  dangers/disadvantages 80, 83, 97, 100, 267–9
  SSI/Server Side Includes 69, 71–2, 78–81, 82, 83, 85, 87, 95, 97, 100, 101, 218, 249–50, 267
  tiles 300
  WebMacro/Velocity 249, 252–3, 264, 269, 275–6, 298
Tomcat 266, 275, 282
  configuration 282
  and data sources 276, 306, 311
UDDI 317, 319–22, 331–2, 344
  and SOAP 317, 319–22, 344
  and WSDL 317, 319, 331–2, 344
UDP 13, 28
  for streaming media 13
URL/Universal Resource Locator 2, 30–2, 35, 38, 39, 40, 41, 44, 45, 47, 52, 55, 57, 60–1, 64, 66–8, 69, 72, 73, 74, 76, 77, 79, 88, 92, 93, 94, 95, 97, 100, 101, 107–8, 109, 114–15, 117–19, 122–5, 129, 131–2, 135, 137–8, 139, 156, 323–326
  host 31–2, 93, 108, 115
  and HTTP 30, 35, 36, 38, 41, 44, 55, 60–1, 67–8, 72, 73, 88, 101, 115, 119, 137, 139
  path 31–2, 35, 56–9, 60–1, 66, 69, 70, 72, 94, 101, 108–9, 115, 129, 138
  port 31
  query string 31–2, 39, 40, 72, 75–6, 85, 115, 131, 135
  scheme 31–2
  vs. URI and URN 30–1, 323–326
User-Agent header (HTTP) 42, 47, 73, 75, 116, 137
  and browser incompatibilities
Value List Handler (design pattern) 227, 231, 243, 263, 307–308



Virtual hosting—see Hosting, virtual
VoiceXML 317, 336
WAP Forum 184, 186, 313, 328–9
Web applications 3–9, 11, 15, 19, 25, 29, 30, 34, 41, 46, 48, 52, 54, 63, 87, 103, 125, 164, 171, 201–42, 245, 249, 255–6, 260, 268, 275, 306, 308, 311, 331, 335–344
  J2EE 8, 207, 208, 227, 231, 262–3, 275–6, 307, 314, 343
  vs. Web site 4, 5, 206, 208, 213–14, 219, 221, 230ff
  WEB-INF directory 207, 264
  web.xml configuration file 207, 278, 282
Web browser 1, 5, 7, 8, 9, 15, 25, 29–64, 65, 66–8, 70–1, 74–6, 79–80, 84, 87, 88, 89–90, 91, 94, 98, 101, 103–40, 148–69, 175, 182–3, 186, 189, 195, 197, 202–3, 210ff, 219, 242, 291, 297, 305, 314, 329, 337–8
  address resolution 106, 108–9, 111, 114, 136
  authentication 53–6, 103, 106, 118, 123, 129–31, 132, 137, 210–12
  caching 41–2, 48, 51–3, 65, 90–1, 104, 106, 109, 111–12, 113, 122, 125–8, 131–2, 134, 137–8, 296–7
  and cookies 34, 57, 62, 104, 106, 109–10, 112–13, 117, 122, 125, 128–9, 131–2, 137–8, 210–13, 216, 291, 305–6
  content interpretation 49, 56, 104, 106–7, 111–12, 120, 122–3, 131, 133, 136–8, 162–3
  incompatibilities 151, 167
  Internet Explorer 57, 103, 128, 135, 162, 164
  Lynx 103, 134, 152, 156
  modules, processing 106, 110, 120, 128, 137–8, 151, 161, 175, 183, 186
  Mosaic 103, 134, 152
  Netscape 57, 103, 132, 134, 136, 155, 162, 167

  networking 106–7, 109–111, 113, 119–20, 137
  Opera 103
  and proxies 30, 33–4, 42, 51–3, 61–3, 66, 68, 71, 84, 88, 103, 106, 113, 115–16, 119, 137, 154
  rendering 37, 48–51, 54, 67, 70–1, 74, 82, 90, 94, 103–5, 108, 111–12, 121–4, 133, 134–5, 136, 151, 154–6, 157–61, 162, 164, 168–9, 175, 186, 189–91, 194
  state maintenance 34, 56, 104, 106, 110, 112–13, 117, 137, 205–6
  supporting data items 104–5, 112, 125, 133–4
  user interface 105–14, 122, 136–8
  see also HTTP requests, HTTP responses
Web proxy 8, 30, 33–4, 42, 51–3, 59–61, 62–3, 66, 68, 71, 88, 98–100, 101, 103, 106, 113, 115–16, 119, 137, 154
  compatibility 52–3, 59, 60–1, 115–16
  caching 33, 51–3, 61
  connections 33–4, 62–3, 68, 88, 99, 106, 113, 115
Web server 8, 15, 29–64, 65–102, 104, 106, 108, 113, 115–117, 121, 123, 124, 129–131, 139, 157, 161–2, 165–7, 169, 201–7, 216–19, 222–3, 236, 243, 245, 255, 267–8, 297, 314
  address resolution 66, 69, 93–4
  as-is pages 69, 71, 95–6
  chunked transfers 88, 89–90, 101
  configuration 31, 44, 56, 65, 66, 69, 70, 72, 75, 80, 81–2, 88, 91–6, 97, 98, 100, 101, 118, 123–4, 207, 214, 282
  content, static 35, 52, 66–7, 69–71, 74, 79, 80, 85, 90, 94, 101, 205–6, 216ff
  content, dynamic 65, 66–7, 69, 71, 72, 78, 80, 81–6, 90–1, 94, 201ff, 245ff, 336
  directory structure 92, 108, 129–30, 139, 206–7, 217–9, 222, 243



  modules 66–7, 75, 80, 82, 84, 93–4, 95, 204, 207–8, 261–2
  and proxies 8, 30, 33–4, 51–3, 59–61, 62–3, 66, 68, 88, 98–100, 101, 103, 106, 113, 115–16
  operation 8, 66–71, 93, 100
  security 55–6, 65, 80, 81, 87, 96–100, 118, 209–10, 256, 282
  state 32–4, 56, 67, 113, 122, 205–6, 212, 245, 247, 262
  virtual hosting 38, 47, 60–1, 66, 88, 93–5, 101, 115–6
  see also HTTP requests, HTTP responses
Web Services 9, 197–198, 235, 314–22, 344
  examples 315ff
  and SOAP 119, 198, 235, 314–17, 319–22, 331–2, 344
  and UDDI 317, 319–22, 331–2, 344
  and WSDL 198, 317–19, 322, 331–2, 344
web.xml configuration file 207, 278, 282
  and J2EE applications 207, 278, 282
  and Tomcat 282
WML/Wireless Markup Language 8, 74, 87, 141, 183–6, 195, 198, 199, 205, 235, 260, 263, 266, 308, 313, 317, 328, 336
World Wide Web Consortium—see W3C
WSDL/Web Services Definition Language 198, 317–19, 322, 331–2, 344
WWW-Authenticate header (HTTP) 47, 53–5, 130, 137–8
W3C/World Wide Web Consortium 30, 31, 63, 177, 195, 313, 328, 332
XHTML 8, 141, 150–1, 169, 172, 175, 182–3, 184–5, 186, 189, 191, 195, 198, 199, 266, 308, 313
  differences from HTML 141, 151, 182–3, 266, 308
XML 6, 7, 8, 30, 50, 69, 85, 87, 92, 94, 119, 138, 139–40, 141, 143, 146, 147, 150, 168, 169, 171–98, 205, 207, 233, 235, 253ff, 266–7, 269, 270, 308–10, 312, 313ff
  applications 8, 141, 150, 171–2, 175, 182–3, 186, 190, 195, 197–8, 322, 325, 332, 344
  attributes 174–6, 179–81, 188–9, 191, 198
  core 172–82
  DTD 143, 150, 171–2, 174, 175–6, 177, 179, 184–5, 186, 198
  elements 172–4, 175–6, 177–81, 182–3, 185–6, 188–9, 191, 194–5, 198
  and HTML 8, 141, 151, 182–3, 205, 233, 235, 260, 263, 266, 308, 313, 317, 328
  query 9, 189, 198, 313, 332–5, 343
  and RDF 322–8, 331–2, 344
  schema 8, 171–2, 174, 177–8, 186, 188, 195, 197, 198–9, 317, 326, 328, 335
  and SGML 8, 141–3, 147, 150, 168, 171, 174, 175
  and SMIL 141, 195, 198, 235, 263, 266, 308, 336
  and WML 8, 141, 183–6, 195, 198, 199, 205, 235, 260, 263, 266, 308, 313, 317, 328
  and XHTML 8, 141, 150–1, 169, 172, 175, 182–3, 184–5, 186, 189, 191, 195, 198, 199, 266, 308, 313
XQuery (XML query language) 9, 198, 332–5, 343–4
  examples 332–5
  and XPath 334
XPath 171–2, 189, 266, 269, 309, 334
  and XQuery 334
  and XSLT 171, 189, 266, 269
XSL and XSLT 119, 171–2, 186–95, 198–9, 205, 233, 261, 266–7, 269, 270, 308–9, 312, 313, 317, 328, 330ff
  and CSS 138, 154, 158–9, 160–1, 164–5, 167, 169, 189–90, 195, 198, 336
XSLFO 8, 189–95, 198, 313, 343