The Tools Landscape

Web services products have matured rapidly over the past 12 months, to the point where it's become acceptable to utilize the technology in major projects. However, though improving, the currently available toolkits can't be considered complete. This article surveys the rapidly evolving development tools landscape and addresses what tools a fully featured environment should provide to the Web-services developer.

IDE Integration
Historically, developers have toiled with an array of utilities, editors, and command-line tools. The ability to effectively navigate this maze of programs is a source of pride to many of them; however, this maze introduces a steep learning curve and, in the long run, lowers productivity. Some of the costs associated with such a nonintegrated approach are time lost switching back and forth between programs, time required to learn a new user interface, and cost of owning and managing multiple applications. It's also been costly for the tools vendors; each has written many thousands of lines of code that could have been shared between implementations. This includes code for the user interface, file and project management, version control, and many other housekeeping tasks.

The past year has seen major changes in the development landscape. Proprietary environments have fast lost ground to the concept of a single development framework into which additional tools can be deployed via a plug-in architecture. Products like JBuilder and NetBeans have provided this type of framework for some time, but it's fair to say that an industry-wide paradigm shift has coincided with the arrival of two über-frameworks: Microsoft's Visual Studio .NET and the open-source Eclipse project.

IBM's reasons for donating Eclipse to the open-source community are no mystery. If the non-Microsoft community doesn't close ranks around a single framework, it will not be economically feasible to compete with the de facto IDE monopoly for Windows development: Visual Studio. Developing an IDE is an expensive business - IBM spent $40 million on Eclipse. This may seem a costly bit of software to give away, but having the developer community adopt its technology gives IBM a significant head start since IBM's tools already run in Eclipse, while other vendors must go to considerable lengths to move to a plug-in architecture. Tools vendors may not wish to swallow this bait, but the drive toward a single IDE platform is given unassailable momentum by two converging forces: the developer-side desire for an integrated experience, and the economics of proprietary development.

One of the consequences of IDE integration is that products built on the same framework will largely look the same. The mainstream toolsets will also have significant overlap in functionality, though vendors will try hard to differentiate their offerings. This commoditization of the tools market is bad news for smaller vendors. In a battle of commodities, the larger, better-resourced players tend to win. It's not all gloom though. As vendors strive to outshine the competition, we can expect to see higher quality and more innovative software. Another plus: open-source groups and creators of niche or boutique software can take advantage of the IDE frameworks to deliver their software in a first-rate package. Although developers will initially lose some choice due to the inevitable process of consolidation in the tools market, this will be more than offset by improved product sets from surviving vendors and a more diverse range of software from smaller software houses. Web-services developers in particular stand to gain, as there are a great number of companies targeting this space.

Basic Tools
Before we survey some of the more interesting tools that are becoming available, let's establish the basic functionality required to develop Web services. From within the IDE, the developer should be able to:

  • Generate Web Services Description Language (WSDL) from a component: This should be as simple as right-clicking com.acme.MyClass and selecting "Generate WSDL...". This command should bring up a dialog that allows the developer to specify which methods to expose, what to name the Web service, and other basic options (a minimal sketch of such a class appears after this list).
  • Generate stubs and skeletons from WSDL: It should be possible to generate both client-side stubs and server-side skeletons, preferably in a variety of languages. For Java, the developer should have the option of generating EJB or regular Java skeletons.
  • Deploy a Web service: Deployment can be an intricate process, so the IDE should provide a wizard to help assemble the classes, documents, and configuration data necessary to successfully deploy. Several products have introduced the concept of a Web Services Archive, similar to a JAR or WAR file. This can make it considerably easier to manage deployment, especially across multiple machines.
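
    As a simple illustration of the first of these commands, the sketch below shows the kind of plain class - the com.acme.MyClass of the example above - that a developer would right-click to generate WSDL from. The class is hypothetical and contains no Web-services-specific code; the methods simply mark out what a "Generate WSDL..." dialog would offer to expose.

        package com.acme;

        public class MyClass {

            // A candidate operation: the generated WSDL would map the String
            // parameter and return value to xsd:string.
            public String greet(String name) {
                return "Hello, " + name;
            }

            // A housekeeping method the developer would probably choose not to
            // expose when the dialog asks which methods to publish.
            public void reset() {
            }
        }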

    Note that the degree to which an IDE exposes WSDL varies. In Visual Studio, the developer adds WSDL to a project by invoking the "Add Web Reference..." command, which automatically generates client proxy code and adds it to the project; the WSDL itself is effectively hidden from the developer. Hiding the complexity of WSDL is fine if your application is merely a consumer of Web services, but server-side development requires more control. This is particularly true for Java, where Web services haven't been integrated into the JVM as tightly as Microsoft has integrated them into the .NET Common Language Runtime (CLR), so we should expect the toolset to grant the developer considerable control over the WSDL. In particular, it's important to be able to customize the mapping between WSDL and the native language.

    WSDL Mapping
    When WSDL is generated, the created schema types are typically named after classes, with elements named after the members of the class. Frequently, the developer wants to rename or omit a member, or otherwise change how the class is serialized. Why? Perhaps the Web service has to conform to a specific WSDL interface defined by a standards organization or commercial partner. Or it might be for more cosmetic reasons, such as changing unhelpful element names; or for business reasons, such as not exposing sensitive information.

    To serialize an object, a Web services platform uses a mapping layer that defines how the native representation (e.g., a Java class) is mapped to XML (see Figure 1). The method and degree of control over this layer varies by product, but there are three common ways to provide customization:


  • Configuration file: This will usually allow fairly limited customization, such as changing member names or omitting elements.
  • XSLT: The native object is first serialized per the default rules, and then an XSLT transform is performed, allowing greater control. However, developers can find XSLT difficult to work with, and the transform is processor intensive.
  • Custom serialization class: Instead of using reflection to inspect and serialize the object, the serialization process can be delegated to a helper class created by the developer (see the sketch after this list). This mechanism provides almost unlimited control, although modifying and managing the additional classes is cumbersome.
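
    As a rough sketch of the third option, the helper class below hand-codes the serialization of a hypothetical WeatherReport, renaming the unhelpful arg0 element to code and omitting a sensitive field. The actual interface such a helper must implement varies by product, so only the serialization logic is shown, and all names are illustrative.

        public class WeatherReportSerializer {

            // A stand-in for the class being exposed; the member names are
            // invented purely to show renaming and omission.
            public static class WeatherReport {
                public String getArg0()              { return "SFO";  }  // unhelpfully named
                public double getTemperature()       { return 17.5;   }
                public String getInternalStationId() { return "X-42"; }  // sensitive
            }

            // Serialize by hand instead of relying on default reflection rules:
            // rename arg0 to code, keep temperature, omit internalStationId.
            public static String toXml(WeatherReport report) {
                StringBuffer xml = new StringBuffer();
                xml.append("<weatherReport>");
                xml.append("<code>").append(report.getArg0()).append("</code>");
                xml.append("<temperature>").append(report.getTemperature()).append("</temperature>");
                xml.append("</weatherReport>");
                return xml.toString();
            }

            public static void main(String[] args) {
                System.out.println(toXml(new WeatherReport()));
            }
        }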

    Most platforms support at least one of these methods of directly editing the mapping layer, but tool support is still weak. In the near future, developers can expect to see software that provides full round-trip support for editing WSDL: such an editor will allow developers to modify elements in the WSDL and automatically propagate those changes to the mapping layer, and it should support the converse operation as well - modify the source component and have the changes propagated to the WSDL. Mapping is a concept developers will encounter frequently with Web services, and much of it rests on an under-appreciated enabling technology: XSLT.

    XSLT and Graphical Mapping
    XSLT is a technology that most developers haven't had significant contact with. It's often thought of in the context of generating HTML pages, but its utility extends far beyond that. As mentioned earlier, XSLT can be used to control the mapping between WSDL and a native language. Some other uses are:

    • Map requests from an older version of a Web service into the new version's format
    • Map non-SOAP XML documents into SOAP requests

    XSLT has a steep learning curve - for many developers the "side effect-free" programming model is far from intuitive - so good tools are essential. Thankfully, XSLT lends itself quite well to graphical editing, where two schemas are laid out side by side and related elements are linked by dragging one to the other (see Figure 2).


    Let's look at an example. AirportWeather is a publicly available Web service that reports on conditions at airport weather stations around the world. There's also a recently created successor Web service, GlobalWeather, which is faster and has a richer data model. Both of the services have an operation that takes one parameter, the weather station code (e.g., JFK or SFO), and returns a weather report for that location.

    We can use XSLT to transform a SOAP request for AirportWeather into a SOAP request for GlobalWeather. Instead of creating the XSLT by hand, we load the source schema (AirportWeather.wsdl) and the target schema (GlobalWeather.wsdl) side by side. Then we graphically link the two parameters - AirportWeather's nonintuitively named "arg0" and GlobalWeather's "code" - and the mapper will generate the necessary XSLT. In a similar manner, we can use XSLT to transform a non-SOAP XML document into a GlobalWeather request (see Figure 3).
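
    At run time, applying the generated stylesheet is straightforward with the standard javax.xml.transform API, as in the sketch below; the file names airport-to-global.xsl and airport-request.xml are placeholders for the mapper's output and for an old-style request.

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class RequestMapper {
            public static void main(String[] args) throws Exception {
                // Load the stylesheet produced by the graphical mapper
                // (placeholder file name).
                Transformer transformer = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource("airport-to-global.xsl"));

                // Rewrite an AirportWeather request (arg0) into a GlobalWeather
                // request (code) and print the result.
                transformer.transform(new StreamSource("airport-request.xml"),
                                      new StreamResult(System.out));
            }
        }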


    XSLT is an underused technology with great potential, in particular for integrating Web services and legacy XML applications. A good development environment will support the graphical creation of XSLT, and it should allow the easy deployment of XSLT documents into your Web-services platform.

    Testing
    Once a Web service is built and deployed, the next step is to test it. Testing can range from merely verifying that a service is operational to a full automated test suite. The most common testing approach is to generate a client proxy from the WSDL and write a test class to invoke the proxy. But there's a better way: the IDE can generate a test class which invokes the Web service's operations using default parameter values. For Java, the test cases can make use of the JUnit test framework and the Ant build system to make it easy to integrate Web service testing into existing automated test suites.
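
    A generated test case might look something like the sketch below. It assumes the toolkit has produced a client stub called GlobalWeatherClient with a getWeatherReport(String) operation; both names are hypothetical stand-ins for whatever classes are actually generated from the WSDL, and the "SFO" argument plays the role of the default parameter value. Because it's an ordinary JUnit test, it can be run from Ant's <junit> task along with the rest of the suite.

        import junit.framework.TestCase;

        // Sketch only: GlobalWeatherClient and WeatherReport stand in for the
        // client classes a toolkit would generate from GlobalWeather.wsdl.
        public class GlobalWeatherTest extends TestCase {

            public void testGetWeatherReport() throws Exception {
                GlobalWeatherClient client = new GlobalWeatherClient();
                WeatherReport report = client.getWeatherReport("SFO"); // default value
                assertNotNull("service returned no report", report);
            }
        }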

    Another approach is to generate a SOAP document from the schema information in the WSDL and send it directly to the Web service endpoint using HTTP. This isn't amenable to automated testing but is useful if the developer is interested in the lower-level details of the SOAP messages (a minimal sketch of this technique follows below). However, the friendliest approach is to generate an HTML Web client that's deployed on the Web server alongside the Web service. This Web client is typically based on JSP or ASP and consists of one page for each operation. The form fields correspond to the operation parameters, and submitting the form invokes the Web service. As well as being a fast means of verifying that a service is alive, this is an effective way of demonstrating a Web service that's still under development.
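
    Returning to the raw-SOAP approach, the sketch below posts a hand-built envelope to an endpoint with nothing more than java.net.HttpURLConnection and prints the response. The endpoint URL, SOAPAction header, and envelope body are placeholders; in practice they would be derived from the WSDL.

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RawSoapTester {
            public static void main(String[] args) throws Exception {
                // Placeholder request; a real one would be generated from the
                // schema information in the WSDL.
                String envelope =
                      "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                    + "<soap:Body><getWeatherReport><code>SFO</code></getWeatherReport></soap:Body>"
                    + "</soap:Envelope>";

                URL endpoint = new URL("http://localhost:8080/services/GlobalWeather"); // placeholder
                HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                conn.setRequestProperty("SOAPAction", "\"\"");

                OutputStream out = conn.getOutputStream();
                out.write(envelope.getBytes("UTF-8"));
                out.close();

                // Dump the raw response so the lower-level SOAP details are visible.
                // (A SOAP fault typically comes back as HTTP 500; read it via getErrorStream().)
                System.out.println("HTTP " + conn.getResponseCode());
                InputStream in = conn.getInputStream();
                for (int c = in.read(); c != -1; c = in.read()) {
                    System.out.print((char) c);
                }
                in.close();
            }
        }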

    Debugging
    Debugging a Web service is essentially the same as debugging any other server-side software, such as an EJB. The IDE's debugger attaches to the service process using the standard mechanisms for the platform, which usually involves starting the server in debug mode. But unlike an EJB, the raw messages between a client and a Web service are intelligible and of interest to the developer.

    There are two quite similar ways of monitoring SOAP messages. A TCP tunnel is a program that listens on a port and directs all traffic to a specified host and port (e.g., tunnel from localhost:7000 to server.acme.com:8000). The tunnel program can then display the messages in a GUI. To enable tunneling, the SOAP endpoint URL used by the client must be modified to point at the tunnel's listening port.
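
    A workable tunnel is only a few dozen lines of Java, along the lines of the sketch below: it listens on a local port, forwards each connection to the real endpoint, and echoes both directions of traffic to the console rather than a GUI. The host and port values are the example ones used above.

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class TcpTunnel {
            public static void main(String[] args) throws Exception {
                int localPort = 7000;                  // port the client's endpoint URL points at
                String remoteHost = "server.acme.com"; // the real Web service host (example)
                int remotePort = 8000;

                ServerSocket listener = new ServerSocket(localPort);
                System.out.println("Tunnelling localhost:" + localPort + " -> "
                        + remoteHost + ":" + remotePort);
                while (true) {
                    Socket client = listener.accept();
                    Socket server = new Socket(remoteHost, remotePort);
                    // Forward request and response traffic on separate threads,
                    // echoing the raw bytes (the SOAP messages) as they pass through.
                    pipe(client.getInputStream(), server.getOutputStream());
                    pipe(server.getInputStream(), client.getOutputStream());
                }
            }

            private static void pipe(final InputStream in, final OutputStream out) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            byte[] buf = new byte[4096];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                System.out.write(buf, 0, n); // show the traffic
                                System.out.flush();
                                out.write(buf, 0, n);
                                out.flush();
                            }
                        } catch (Exception e) {
                            // one side closed the connection; let the thread end
                        }
                    }
                }).start();
            }
        }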

    Like a TCP tunnel, an HTTP proxy listens on a port, but it has specific knowledge of the HTTP protocol. The proxy reads the "Host" field in the HTTP header and directs traffic to the indicated location. A proxy is more transparent than a tunnel in that proxy settings can generally be set on a process-wide basis and thus modification of individual endpoints is not required.

    Conclusion
    Monitoring tools were an indispensable item in the early days of Web services, when developers spent a lot of time working with raw SOAP messages. Although a development environment should still provide the ability to work at this level, many developers will never need to do so. It's a sign of the maturity of the technology that this is the case. Web services tools are entering the mainstream.

    About the Author
    Neil O'Toole is a Technology Evangelist at Cape Clear, where he is responsible for promoting the adoption and effective use of Web services by developers. He manages the CapeScience developer network (www.capescience.com), moderates the Cape Clear newsgroup, and works with the Web services development community to help define Cape Clear product direction. He is the author of the popular NetTool debugging utility, has written dozens of technical articles and papers, and is an accomplished public speaker. Before beginning his career in evangelism, Neil was a senior developer at Cape Clear. Prior to that he worked with Goldman Sachs (an investment bank), CR2 (banking software), and Esat Net, part of British Telecom. He holds a first-class degree in Computer Science from Trinity College Dublin.
