XHTML: XML On The Client Side

The XML developer doesn't have to be convinced of XML's strength. You've heard it a million times: it's all about the data. The same is true on the client side. XHTML strongly embraces the separation of content and presentation, and brings XML's syntactical rigor, as well as its extensibility, to the client-side table.

XHTML 1.0, a reformulation of HTML as an XML application, has been the W3C's Web markup recommendation (www.w3.org/TR/xhtml1/) since January 26, 2000. That's more than a year now, but client-side authors and the developers of popular authoring software have been slow on the uptake. Much of the problem lies in the fact that XHTML is misunderstood and not well publicized. Some client-side authors don't see it as having any special advantages, and many critics have claimed that XHTML simply won't be widely adopted. That may well prove true: XHTML has been almost completely missed by the vast majority of entry- and mid-level professionals.

Ignoring or overlooking XHTML is problematic for the professional developer. Whether it's a useful client-side methodology remains a personal question. However, knowing what it is, why it is, and how it may or may not effectively aid the work you do allows you to make an informed, empowered decision about the technologies you choose to employ.

XHTML: What and Why
In simple terms, writing documents in XHTML means that instead of authoring that old familiar HTML, you are in essence writing XML. In XHTML 1.0, XML employs HTML as its vocabulary, so elements and attributes are not arbitrary - they're drawn directly from HTML. Similarly, XML's syntax rules apply.

But how does this help client-side authors? The answer is simple. How many of you have honestly paid much attention to the HTML you generate? Some of you will certainly say you do, but most developers - like most Web designers - are guilty of a slapdash attitude toward HTML. It's not your fault. HTML has become sloppy, in part because it's been bent in many directions to accommodate the rapid growth of the Web. And browsers are extremely forgiving of poor markup. Nothing has demanded that you write clean documents because for the most part you haven't had to.

The problems resulting from this are manifold. First, there's no consistency in markup from one HTML author to the next. They've each got their own methodology - some write elements in uppercase, others in lowercase. Quotes are sometimes in use, sometimes not. Looking under the hood at even the most high-end site is usually not a pretty experience. So adding a little syntactical rigor to the mix via XHTML gets authors on the same page, if you'll pardon the pun. That can make for a much smoother workflow among teams.
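
As a quick sketch of that inconsistency (the file name here is hypothetical), the same link might be marked up three different ways across one site:

<A HREF=products.html>Products</A>
<a href="products.html">Products</a>
<A href='products.html'>Products</A>

All three render identically in a browser, but only the second is legal XHTML.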

XHTML 1.0 focuses heavily on getting markup cleaned up. But XHTML has another goal, too, and that's to extend to user agents beyond the Web browser: PDAs, smart phones, set-top boxes, and other alternative and wireless devices. Streamline and strengthen the markup, and you've got a stronger base from which to extend it. That's a logical and rational idea.

Another argument made in XHTML's defense - and it's a controversial one but I buy into it - is that it helps the client-side author who has XML phobia to begin moving into the XML arena via familiar means. I like this argument because as an educator, I've seen proof that it works. Take entry- or mid-level Web authors, teach them XHTML, and suddenly you can also teach them other XML applications: WML, SMIL, SVG. The light bulb goes on because they're operating in an environment that's familiar - HTML. The XML kind of sneaks in via document structure and syntactical rules.

Brass Tacks: XHTML Document Structure
To gain a better idea of how XHTML 1.0 works, let's first examine its document structure.
Ideally, an XML document begins with an XML declaration:

<?xml version="1.0" encoding="UTF-8"?>
But XHTML documents are most often viewed in popular Web browsers, some of which will render anything preceded by an XML declaration as plain text. So for XHTML 1.0, the W3C recommends (but does not require) that the XML declaration be included; in practice, most Web authors leave it off.

Next comes the DTD, which is required. With XHTML 1.0, you can choose from three public DTDs: strict, transitional, or frameset. Developers working with HTML 4.0 will be familiar with these DTDs and know that the strict DTD uses the most limited set of elements and attributes of the three, basing much of its selection on the idea that presentation and structure must be separate. So you won't find the font element in a strict document. Transitional documents, however, are more flexible, understanding that Web authors must make some accommodations in order to achieve the best interoperability possible. Frameset documents are limited to framesets and can employ elements from strict or transitional DTDs.

For a strict XHTML 1.0 document, you'll use the following declaration:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

If you want to write your document in accordance with the transitional DTD, you'll use:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
Finally, if you're authoring a frameset, you'll use this declaration:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
It's important to remember that there are no exceptions to the rule here. You must declare the proper DTD in your XHTML 1.0 document.

Now it's time to add the namespace to the root element. In XHTML 1.0 the root element is html. The root element and its namespace declaration are also required, and are written as follows:

<html xmlns="http://www.w3.org/1999/xhtml">
Listing 1 shows a strict document template using the XML declaration. In Listing 2 I show a transitional document template using a meta workaround for document encoding should you choose not to use the XML declaration.
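
As a rough sketch of what such a template contains (UTF-8 is assumed for the encoding), a minimal transitional document that omits the XML declaration and instead declares its encoding through the meta workaround looks something like this; the trailing slash on the meta element is explained in the syntax section below:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Document Title</title>
</head>
<body>
<p>Document content goes here.</p>
</body>
</html>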

Get Tough: XHTML Syntax
Now that you've got the document structure down, it's time to explore the syntactical rules that XHTML 1.0 embodies:

  • Must be well formed
  • Is case specific
  • Insists on closing tags for nonempty elements, and termination of empty elements with a trailing slash
  • Demands that all attribute values be quoted

Let's take a closer look.

Well-Formedness
Remember, Web browsers are built to forgive. That's one reason they're so bloated; they have to be able to interpret such a wide variety of markup styles. And most browsers will forgive ill-formed syntax. Try the following poorly formed bit of HTML in a browser:

<b><i>An ill-formed bit of HTML</b></i>
In common browsers such as MSIE and NN, this markup will appear in both bold and italics. However, if you examine the HTML, you'll see that the tags are improperly nested. If this markup were well formed, the tags would nest properly:
<b><i>A well-formed bit of HTML</i></b>
An XHTML 1.0 document must be well formed to be valid XHTML. A little trick I use to make sure I've nested my tags properly is to draw an imaginary line from each opening tag to its closing companion. If the lines don't intersect, the markup is properly nested and therefore well formed. Intersecting lines indicate improperly nested, ill-formed markup.

Case Specificity
As you're already aware, XML is case sensitive:

<PRODUCT>
</PRODUCT>
and
<product>
</product>
are two different tag sets.

HTML, on the other hand, is not case specific:

<P align="right">
</P>
is the same as:
<p ALIGN="right">
</p>
XHTML is case specific. In every instance, all element and attribute names must be lowercase:
<p align="right">
</p>
Note that attribute values can be in upper- or lowercase as necessary to accommodate file names, code strings, and URIs.
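
For instance, the mixed case in this (hypothetical) file name is perfectly acceptable; it's the element and attribute names that must stay lowercase:

<a href="Products.html">Our products</a>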

Element Handling
XHTML 1.0 adopts the XML method of closing all nonempty elements and terminating empty elements with a trailing slash. In HTML you can write the following:

<ul>
<li>list item 1
<li>list item 2
<li>list item 3
</ul>

but in XHTML, you must close the nonempty element:

<ul>
<li>list item 1</li>
<li>list item 2</li>
<li>list item 3</li>
</ul>
One of the more obvious places this occurs is with the paragraph <p> tag. You must close all nonempty elements, no exceptions.
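
As a quick sketch of the same rule applied to paragraphs, where HTML authors have long written:

<p>First paragraph
<p>Second paragraph

XHTML requires:

<p>First paragraph</p>
<p>Second paragraph</p>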

If an element is empty (no content), it must terminate. In XML this is done by using a trailing slash as follows:

<br/>
But many Web browsers will choke on this method and subsequently fail to render a page, or render it improperly. A workaround is to add a space before the slash, which allows empty elements to render properly in those browsers. A few examples:
<img />
<hr />
<meta />
<link />
As with nonempty elements, there are no exceptions to the rule. You must terminate the element accordingly.

Quoth the Attribute Value...
One of the more frustrating things about HTML - at least to my eye - is the arbitrariness of attribute value quoting. In HTML it's a now-you-see-it, now-you-don't phenomenon. So you can have:

<img src="my.jpg" border="1" width=400 height=200 alt="company logo">
or any combination of attribute value quotations you like. In most instances a browser will properly render the markup whether you've quoted the attribute value or not.

XHTML insists that you quote all attribute values, leaving nothing to chance:

<img src="my.jpg" border="1" width="400" height="200" alt="company logo" />

Not So Hard, Really
As you can now see, XHTML 1.0 is really no great challenge. Does it mean employing a little more care when creating documents? Yes. Does it mean watching your syntax? Absolutely. But with a few minor adjustments you can have clean markup that works in today's browsers with interoperability as close to perfect as HTML's, and that still complies with W3C recommendations.

Advancing Notions: Modularization of XHTML
So what's a little cleanliness, anyway? Critics of XHTML have pointed out that changing habits just to write cleaner documents doesn't provide much incentive. It's time consuming and why on earth would you want to go back and rewrite hundreds, possibly thousands, of Web documents just to comply with a W3C recommendation when those documents function perfectly well? I can't, and won't, argue this point. It's too strong an argument. But if you're interested in moving toward extensibility, want to create consistent documents organization-wide, and want to assist your client-side authors in expanding their markup horizons, working with XHTML makes sense.

While XHTML 1.0 offers little option for extensibility - you've got three set DTDs and a specific namespace - the modularization of XHTML does offer expansion. Modularization of XHTML, which allows for the use of XML DTDs and provides the means to create subsets and extensions to XHTML, takes XHTML 1.0 from its limited place closer to its goal of working for numerous user agents. As of this writing, modularization of XHTML is a Candidate Recommendation of the W3C (www.w3.org/TR/2000/CR-xhtml-modularization-20001020/).

Modularization of XHTML is a decomposition of HTML as we know it today. Instead of lumping together the markup for text, images, tables, forms, and so on, modularization breaks these facilities into separate modules. Then, using XML DTDs (an XML Schema implementation is also under discussion), authors can pull together a subset of XHTML using only those modules necessary to accomplish a given task.

If you put modularization in the context of alternative device design, the rationale for XHTML begins to make a lot of sense. Many alternative devices simply don't have the processing, RAM, and video power to handle HTML's original functions. So why have all the overhead? A streamlined markup language using only those modules necessary for the device means faster, customizable delivery to equally streamlined optimized user agents.

A perfect example of modularization exists in XHTML Basic (www.w3.org/TR/xhtml-basic/), a subset of XHTML 1.1 made up of specific modules that apply to wireless devices such as PDAs, smart phones, and smart pagers. These devices are limited in their processing power, so XHTML Basic supplies only those modules that make sense for them, such as text, links, images, very basic tables, and forms. Frames and scripting demand processing power, so they're left out of the subset. XHTML Basic, at this writing a Proposed Recommendation of the W3C, looks just like XHTML, but of course any element that falls into a module not set forth in the recommendation can't be used in a valid XHTML Basic document. However, you can extend XHTML Basic if you want to, which enables the creation of additional subsets and extensions.

Listing 3 shows a simple XHTML Basic page suitable for display on a small, wireless device such as a PDA. The listing clearly illustrates how XHTML Basic uses the structural elements set forth in XHTML 1.0, only this time the DTD that's declared is for XHTML Basic itself. The namespace is the same, as are the syntactical methodologies.
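
As a rough sketch along those lines (assuming the DTD identifiers from the XHTML Basic 1.0 specification, with a hypothetical link target), a minimal XHTML Basic document might look like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN" "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>Mobile Menu</title>
</head>
<body>
<p>A short page for a small screen.</p>
<p><a href="menu.html">Back to the menu</a></p>
</body>
</html>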

Bring It On Home
The developer who's empowered with knowledge can make better decisions. Whether you embrace client-side XML in the form of XHTML is up to you. But a careful survey of your needs and directions will help answer the question of whether XHTML will be useful in your unique situation. Being aware of what's happening with XHTML and its goals will keep you at the ready should your circumstances require you to develop not only for the Web of tomorrow, but for the wireless world and beyond.

More Stories By Molly E. Holzschlag

Molly E. Holzschlag is the executive editor of WebReview.com. She is the author of 16 books on Internet and Web design and development topics, including her most recent, Special Edition Using XHTML, from Que. You can visit her Web site at www.molly.com.
