Portals and Web Services - When business issues are technical


We've all heard the terms: portals, gadgets, portlets, dashboarding. But what does it all mean? And what role do Web services play in this exciting new world of componentized content?

I would define "portalization" as the creation of an environment that gives the end user one-stop shopping for actionable content and collaboration. Portals don't create anything new; they merely give us comprehensive, personalized access to content that may reside in any number of repositories (see Figure 1). Moreover, portal environments benefit developers and implementers by providing a robust set of "out of the box" tools and features to ease application development and deployment. These tools include single sign-on (SSO) access to virtually any authentication data source, centralized authorization, logging and reporting, document versioning, and so on.

 

Content and Web Services
Web services are just one of the many sources from which portal content can come. What sets Web services apart from other content-providing mechanisms, however, is that they are a natural, synergistic fit with portal environments. From the dawn of Web services, the goal has been to deliver content in a standard, XML-based format that allows the consumers of that data to render or use it as they choose. Since personalization, localization, and customization are all standard fare in a portal environment, the presentation layer can be left to the developer of the portlet consuming the data. Sounds like the perfect job for good ol' Web services, doesn't it? It's also no secret that the great benefits of Web services are their reusability and their client-agnostic, protocol-independent nature. Given all this, the use of Web services in a portal environment is a no-brainer.
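As a minimal sketch of this division of labor (all class and method names here are hypothetical, not from any particular toolkit), consider a data-oriented service that returns bare XML and two different consumers that render the same payload in their own ways:

```java
// Sketch: the service returns data only; each consuming portlet decides
// how to present it. Names and the toy XML handling are illustrative.
public class WeatherService {

    // The "service": a standard XML payload, no markup or styling.
    public static String getForecastXml(String city) {
        return "<forecast><city>" + city + "</city><tempF>72</tempF></forecast>";
    }

    // One consumer renders the same data as an HTML fragment...
    public static String renderAsHtml(String xml) {
        return "<div class=\"weather\">" + extract(xml, "city")
                + ": " + extract(xml, "tempF") + "&deg;F</div>";
    }

    // ...another renders it as plain text for a different client.
    public static String renderAsText(String xml) {
        return extract(xml, "city") + " " + extract(xml, "tempF") + "F";
    }

    // Toy tag extraction, standing in for a real XML parser.
    private static String extract(String xml, String tag) {
        int start = xml.indexOf("<" + tag + ">") + tag.length() + 2;
        int end = xml.indexOf("</" + tag + ">");
        return xml.substring(start, end);
    }

    public static void main(String[] args) {
        String xml = getForecastXml("Boston");
        System.out.println(renderAsHtml(xml));
        System.out.println(renderAsText(xml));
    }
}
```

The point is simply that the service contract stops at the data; the portal environment and its portlets own everything about how that data looks.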

When to 'Portalize'
Does portalizing the weather along with a link to your company's homepage give you a good ROI on your portal software? I would guess probably not. But many other applications may make sense. Let's say we are a Web-engineering firm and at any given time our CEO wants to get:

  • Information on sales leads that are in the pipeline
  • Updates on development efforts that are in progress and status of client deliverables
  • Meeting information for the day
  • Access to the latest proposals with impending deadlines
  • And maybe even an employee vacation schedule

Access to this content in a centralized, concise, easy-to-use format is a powerful thing. It eliminates the need to open and swap among multiple applications. It lets our CEO log in to one SSO system and have access to all authorized resources rather than needing to log in to multiple systems. It also eliminates the hassle of crawling through a document server's directory structure to find a specific proposal document. Finally, it allows him to collaborate with other team members on documents, meetings, and so on. With its ability to improve project management and accelerate proposal and development efforts, this type of portal application can deliver substantial ROI indeed.

The portal above can be described as a "dashboard" specific to our CEO's needs. It gives a personalized set of relevant information to the current end user. The information is a collection of data subsets drawn from larger repositories; in this case, a sales system, a project system, a scheduling system, a file structure, and an HR system. Creating a portlet on top of an entire repository that exposes the complete functionality of that system may seem appealing, but in reality each end user wants only a slice of that data, customized for them. For example, at first glance exposing the entire sales system sounds like it could be meaningful for many employees. However, the sales team is only interested in lead-flow information and not billing information, while the admin department only needs to see billing information so it can generate invoices. In an ideal portal application, each of these data slices remains separate as part of a specific user's personalized dashboard.
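A rough sketch of the "data slice" idea, with an invented data model, might look like this: one shared repository, two role-specific views:

```java
import java.util.*;

// Sketch (hypothetical data model): each role's portlet sees only the
// slice of the sales repository relevant to it, never the whole system.
public class SalesSlices {
    static class Lead {
        final String client; final String stage; final boolean invoiced;
        Lead(String client, String stage, boolean invoiced) {
            this.client = client; this.stage = stage; this.invoiced = invoiced;
        }
    }

    // The full repository, as the sales system would hold it.
    static final List<Lead> REPOSITORY = Arrays.asList(
        new Lead("Acme", "proposal", false),
        new Lead("Globex", "closed", false),
        new Lead("Initech", "lead", false));

    // Sales portlet slice: pipeline (lead-flow) information only.
    static List<String> salesView() {
        List<String> out = new ArrayList<>();
        for (Lead l : REPOSITORY)
            if (!l.stage.equals("closed")) out.add(l.client + " (" + l.stage + ")");
        return out;
    }

    // Admin portlet slice: closed deals awaiting invoicing, nothing else.
    static List<String> adminView() {
        List<String> out = new ArrayList<>();
        for (Lead l : REPOSITORY)
            if (l.stage.equals("closed") && !l.invoiced) out.add(l.client);
        return out;
    }

    public static void main(String[] args) {
        System.out.println("Sales sees: " + salesView());
        System.out.println("Admin sees: " + adminView());
    }
}
```

Each view is a separate candidate portlet; neither role ever needs, or receives, the other's data.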

How Is It Done?
So how do we go about building a dashboard that is relevant and useful to each end user? By leveraging a combination of built-in portal environment functionality and custom development. The portal environment gives the developer a solid framework for common tasks such as authentication, authorization, content formatting, and portlet behavior. Custom development provides exposure of, and access to, application content and functionality, making that content "consumable" by the portal and by future applications. This custom development effort is very specific to the business need. If the business need allows, the content exposure is best developed using Web services. By using Web services to expose key business functionality, you are "future proofing" your application: a service may be consumed by a portlet today but by a third-party application tomorrow. Once the necessary content and/or functionality are exposed, they must be consumed. That is the job of a portlet.

Today developers are somewhat limited when building consuming portlets: they must adhere to the APIs defined by the portal environment, and portlet portability is not quite there yet, as standards like JSR 168 are still being finalized. If you expose your content through Web services, however, the development effort to consume that data is trivial in most portal environments. Most offer simple wizard interfaces that let a developer point at a Web service and have the portlet nearly build itself. Given the reusability, ease of deployment, and "future proofing" benefits, using Web services to expose functionality and content is a smart move.

Portal Environment: Build or Buy?
While today's portal software is impressive, it comes at a price. Portal software with implementation can run a company anywhere from the mid-five figures well into the high six figures and beyond, depending on the number of users, servers, and so on. Is it worth the investment? The answer depends on your business need. Giving your staff access to vacation day reports and links to HR policies within a branded intranet site may not warrant the sexy and expensive features of a full-blown portal environment. In that case, your efforts would be better spent handing the project to a junior developer with a month to bang it out, yielding far greater ROI than deploying a real portal. If, however, you truly intend to portalize your business functions as described in our CEO dashboard example, you will almost certainly want to buy rather than build: the time and money needed to match the features and functionality of a commercial portal environment are not well spent. In that instance, a good portal environment is worth the investment.

While all portal environments differ in the goodies they offer, most will include some flavor of SSO authentication and authorization; the ability to group users into communities with common interests and the associated content and applications; monitoring, management, and reporting; content aggregation, indexing, and searching; document check-in, check-out, and versioning; collaboration; portlet development APIs or SDKs; and much more. Since portal environments have been around for some time now, and most offer easy ways to develop portlets for their environment, giant libraries of portlets have grown up around them. Plumtree, for example, has hundreds of portlets in its library, from integrations with Lotus Notes, SAP, Siebel, PeopleSoft, and others, to AOL Instant Messenger. These portlet libraries provide a huge advantage over building all of those components yourself, as many of them are open source or relatively inexpensive.

What About Integration?
How well a portal environment integrates with Web services also depends heavily on the portal company you choose. While some environments certainly make the integration process less painful than others, this should not be the developer's main consideration. Developers should be more concerned with how well a portal vendor interoperates with other vendors and portlets; in other words, how good is this vendor's support for emerging portal and Web services standards? At the beginning of the portal revolution, it was acceptable for each vendor to operate in its own box: each portlet was proprietary to the environment in which it lived, as standards specific to portlets did not yet exist. Enter OASIS and the Java Community Process (JCP). With the recent approval of the Web Services for Remote Portlets (WSRP) standard by OASIS and the development of the JSR 168 Portlet Specification by the JCP, vendors now have the choice of whether or not to play nice together. Happily, several vendors have already integrated these standards into their products. This is a big step in the right direction and great news for those wishing to implement Web services as portlets.

Web Services Standards and Portals
WSRP is an interesting standard and somewhat of a paradigm shift from what we are used to in the Web services world. Until now, everything has been very data-centric: it is the nature of Web services not to worry about specific protocols or client displays. Web service calls either performed a function or accessed some data, and presentation was left in the hands of the consumer. The WSRP standard changes this for portlets. The idea is that portlet Web services are no longer "data-oriented," as OASIS calls it; they are now "presentation-oriented" (see Figure 2).

     

WSRP defines a Web service interface that allows much richer interaction than straight Web service calls. The producer of a WSRP-compliant service can expose description information and capabilities of the service, presentation markup that is now part of the service, an optional registration interface for creating a relationship between consumer and producer, and optional management interfaces. By creating WSRP-compliant portlets, producers can publish portlets developed in their familiar environments and be assured that consumers will receive the service as intended, from the application logic up to the presentation. It lets them reuse existing portlets, make them WSRP compliant, and expose them to other consumers. On the consumer side, no additional development is required to integrate a WSRP portlet, as there would be if a straight Web service were being consumed. So long as the consumer is also WSRP compliant, including the portlet is virtually development free.
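To make the contrast concrete, here is an illustrative sketch (these methods are simplifications, not the actual WSRP WSDL operations): a data-oriented service hands back raw data, while a presentation-oriented service hands back markup ready for aggregation:

```java
// Sketch only: two hypothetical services illustrating "data-oriented"
// versus "presentation-oriented" delivery of the same lead information.
public class PortletStyles {

    // Data-oriented: the consumer must build its own view from this.
    static String getLeadsData() {
        return "<leads><lead client=\"Acme\" stage=\"proposal\"/></leads>";
    }

    // Presentation-oriented (WSRP-style): markup travels with the service,
    // so a compliant consumer can drop it straight into the portal page.
    static String getLeadsMarkup() {
        return "<div class=\"portlet\"><h3>Leads</h3>"
                + "<ul><li>Acme (proposal)</li></ul></div>";
    }

    public static void main(String[] args) {
        System.out.println("Data-oriented payload:    " + getLeadsData());
        System.out.println("Presentation payload:     " + getLeadsMarkup());
    }
}
```

With the first style, every consumer writes its own rendering code; with the second, the producer controls presentation centrally and the consumer merely aggregates.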

The adoption of the WSRP standard will ultimately make portlet interoperability issues between portal vendors a problem of the past, opening up new opportunities for businesses to expose and market their developed portlets while maintaining full control over application and presentation logic. It will also benefit businesses looking to consume existing WSRP portlets by offering them a complete packaged portlet application, presentation and all, requiring virtually no development effort.

Though it is clear that there are many benefits to using Web services in a portal environment, there are still some drawbacks. Until the portal standards have been fully adopted, developers will need to continue to produce and consume Web services the "old-fashioned" way, which requires a good deal of additional development to create presentation and logic around the data you get back. Another problem with standard Web services is that you are constrained by the APIs you build. Let's say, for example, that you decide your fancy sales-lead Web service should now return fax numbers, which it didn't do before. You will need to first produce an updated Web service containing this new field. Then the new WSDL definition needs to be republished to a UDDI registry so new consumers of the service can develop against the new API. This is fine for new consumers, but your existing consumers are now out of sync.

Each existing consumer will need to develop against the new API to handle the fax data. You can see how this can very quickly become a maintenance nightmare. If controlling presentation, data, and other service attributes from a central location is a requirement, due to varied consumers and ease of deployment, you may want to choose a vendor who is on board with the new portal standards.
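One common way to soften this problem for data-oriented services, sketched below with entirely hypothetical names, is to add new fields as optional trailing elements so that consumers built against the old contract keep working unchanged:

```java
// Sketch: evolving a data-oriented service without breaking old clients.
// Version 2 appends a <fax> element rather than reshaping the document,
// so a v1-era consumer that reads only <client> and <phone> still works.
public class LeadServiceVersions {

    static String getLeadV1() {
        return "<lead><client>Acme</client><phone>555-0100</phone></lead>";
    }

    static String getLeadV2() {
        return "<lead><client>Acme</client><phone>555-0100</phone>"
                + "<fax>555-0199</fax></lead>";
    }

    // A consumer written against version 1: it pulls out only the
    // fields it was built for and ignores anything it does not know.
    static String v1Consumer(String xml) {
        return field(xml, "client") + " / " + field(xml, "phone");
    }

    // Toy tag extraction, standing in for a real XML parser.
    static String field(String xml, String tag) {
        int start = xml.indexOf("<" + tag + ">") + tag.length() + 2;
        return xml.substring(start, xml.indexOf("</" + tag + ">"));
    }

    public static void main(String[] args) {
        // The old consumer gets the same answer from either version.
        System.out.println(v1Consumer(getLeadV1()));
        System.out.println(v1Consumer(getLeadV2()));
    }
}
```

This only delays the pain, of course; the moment a consumer actually needs the fax number, it must be rebuilt against the new contract, which is exactly the synchronization burden described above.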

Implementing Web Services
So let's say you're willing to take the good with the bad and want in on using data-oriented Web services in a portal. How do you go about it? Like everything else, it depends on the environment you are in, but the basics are these: first, develop the business logic that needs to be exposed. Since we are in the Web services world, you can do this in any language and on any platform you like. Next, expose this business logic as a Web service. At this stage of the game, just about every IDE has some support for generating the files you need: the Web service wrapper code, the WSDL definition file, deployment descriptors, and so on. Deploy the service and publish its description to a UDDI server, and you have successfully produced a Web service.
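The producer-side steps above can be sketched as follows; the envelope format here is deliberately simplified (it is not real SOAP), and in practice your IDE would generate the wrapper code for you:

```java
// Sketch of the "expose business logic" step with a toy XML envelope.
// Class, operation, and envelope names are all hypothetical.
public class ProposalService {

    // 1. The business logic itself: plain code, no protocol awareness.
    static int daysUntilDeadline(int today, int deadline) {
        return deadline - today;
    }

    // 2. The service wrapper an IDE would typically generate: dispatch
    //    the named operation and marshal the result into an XML response.
    static String invoke(String operation, int today, int deadline) {
        if (!operation.equals("daysUntilDeadline"))
            return "<fault>unknown operation</fault>";
        int result = daysUntilDeadline(today, deadline);
        return "<response operation=\"" + operation + "\">"
                + "<result>" + result + "</result></response>";
    }

    public static void main(String[] args) {
        System.out.println(invoke("daysUntilDeadline", 100, 130));
    }
}
```

The real deliverables at this stage are the deployed endpoint plus its WSDL published to UDDI; the separation shown here, business logic untouched by the wrapper, is what keeps the logic reusable outside the portal.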

Now you must consume this service in your portal. The development kits that come with your portal software determine how easily you can integrate your newly deployed service. If you're lucky, you'll have a wizard-style interface that asks where the service is defined and, from the WSDL, creates all the necessary client proxy files to access it. That is still only one piece of the client work; in the Java world it is just the M of MVC (Model-View-Controller). You will still need to build the presentation layer and any logic necessary to interact with the service. In building the client code, you will have access to all the goodies your portal environment offers, such as session information, portlet-to-portlet communication, authorization information, and so on. Implementing portlets as Web services will take some work; how smoothly your implementation goes depends on your development platform, your portal vendor, and the in-house talent you can leverage.
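A minimal sketch of that consumer side (all names hypothetical): the generated proxy is only the model, and the portlet developer still writes the view that turns service data into markup for the portal page:

```java
// Sketch: the "V" a portlet developer writes on top of a wizard-generated
// client proxy. The proxy here is a stand-in returning canned data.
public class LeadPortlet {

    // Stand-in for the generated client proxy (the "M" in MVC);
    // in reality this would call the deployed Web service.
    static String[] fetchLeads() {
        return new String[] { "Acme (proposal)", "Initech (lead)" };
    }

    // Hand-written view: renders the model as a portlet HTML fragment.
    static String render() {
        StringBuilder html = new StringBuilder("<div class=\"leads-portlet\"><ul>");
        for (String lead : fetchLeads())
            html.append("<li>").append(lead).append("</li>");
        return html.append("</ul></div>").toString();
    }

    public static void main(String[] args) {
        System.out.println(render());
    }
}
```

In a real portlet, render() would also use the environment's session, authorization, and inter-portlet facilities; the point is that none of that view layer comes for free from the WSDL.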

Conclusion
Individually, Web services and portals are very powerful technologies, so one would think that together they would be unstoppable. In theory that's correct, but in reality there are still some limitations to seamless, portable, simple Web service portlet integration. The good news is that these are known problems being addressed by both the standards bodies and the portal vendors themselves. If your business needs dictate it, a portal can be a wise investment, and if the portal is used properly, your ROI will not take long to appear. And when all the portlet libraries have been searched without luck for that special application you require, take a stab at creating your portlet with a Web service. For all the current and future standards support, platform and protocol independence, reusability, interoperability, and "future proofing" benefits you get, you'll be glad you did.

About the Author
Director of Web engineering at Miller Systems, Alec Graziano is responsible for all of Miller Systems' Internet-based software development engagements and process methodology, including design, architecture, and programming. Alec has significant and broad experience in designing and managing the delivery of industry-standard, Web-based software, particularly in the Web services arena, in both J2EE and .NET frameworks.


