
Rogue Web Services

Risks and success strategies

Like the hero of a Greek tragedy, Web services is undone by its own virtues: its most compelling advantages are simultaneously its most serious dangers. Web services have passed the initial hype cycle. The convergence of industry support, ease of use, and the desire for cost-effective approaches to integration and service-oriented architectures (SOA) has made Web services a popular choice for architects, developers, and integration analysts, with numerous projects underway. Web services technologies are making inroads within organizations in much the same way Web site technologies once proliferated. However, the benefits of loose coupling, decentralized development, and support for heterogeneity - rapid grassroots development of Web services with flexible, agile architectures - introduce a multitude of new issues organizations must address to keep the negatives from outweighing the positives. Security, reliability, and performance all demand special management in a Web services environment. This article looks at "rogue Web services," already a growing concern in IT, particularly for organizations that have not applied top-down governance to their usage.

Rogue Web Services
A rogue Web service (RWS) is a Web service that's out of control. It might be perfectly benign, but unsanctioned by IT. Or it might be intentionally malicious - either attacking your systems or squatting and consuming your resources. It might even be an officially sanctioned service that unintentionally starts hammering other Web services due to a coding bug. Of course, even the most benign rogue service could turn up in the last category at any time - almost by definition it hasn't gone through the same QA or testing as production code.

Perhaps the most compelling reason RWS threaten to become a significant danger is the ease with which they can be created. Although veterans of earlier large-scale distributed technologies, such as DCOM and CORBA, frequently disparage Web services, those were very complex systems requiring a fair amount of knowledge and programming skill to deliver a functional application. In addition, distributed object technologies were never able to break out of their silos. The prime differentiators between these systems and Web services are the ease with which a Web service can be constructed to perform fairly sophisticated tasks, and the loosely coupled nature of Web services technologies. These significantly lower the barriers to entry for both the technical know-how for building a Web service and the time required to get a new service initiated or integrated with an existing service. And that significantly increases the number of people capable of building an RWS.

Unsanctioned internal Web services - clients especially, but servers as well - can arise on any computer accessible through HTTP. It takes relatively little programming skill to run a Perl or Python script from a command shell that listens for requests on a particular port, does some additional processing, and returns the results. From there, it's also possible to create a Web service client that builds messages for a variety of Web services and coordinates the results. These are often called "composite" Web services, but despite the buzzword they are scarcely more difficult to build than ordinary ones, especially if you don't worry about making them safe.
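
To make the point concrete, here is a minimal sketch of such a script in Python: a bare socket listener that accepts one HTTP request, does some trivial "processing" on the body, and replies. Everything here - the port, the upper-casing logic - is hypothetical; the point is only how little code an unsanctioned service requires.

```python
# Minimal sketch of an unsanctioned "service": listen on a port, do a bit of
# processing, return a result. The processing step is a placeholder.
import socket
import threading

def handle_request(body: str) -> str:
    # Stand-in for "additional processing": just upper-case the payload.
    return body.upper()

def serve_once(port: int, ready=None) -> None:
    # Accept a single HTTP-ish request on localhost and reply in plain text.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        if ready is not None:          # lets a caller know we are listening
            ready.set()
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(65536).decode("utf-8", errors="replace")
            body = data.split("\r\n\r\n", 1)[-1]   # crude body extraction
            result = handle_request(body)
            conn.sendall(
                ("HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                 % (len(result), result)).encode("utf-8"))
```

A dozen lines of socket code and a rogue endpoint exists - no framework, no deployment process, no entry in anyone's registry.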

The primary means of describing a Web service, WSDL, is a fairly easy-to-read interface definition language. Unlike with CORBA or ASN.1 stub generators, an astute programmer can generate a stub directly from the description, and generators are readily available for common programming languages. Even where one is not available, a message itself is often self-explanatory - a new message can be "cloned" from an old one just by replacing bits and pieces.

The barrier is even lower when Web services are easily integrated into the latest versions of popular desktop software, such as Word macros, Excel spreadsheets, and PowerPoint slides. There is explicit Web service support in MS Office 2003, but it is possible to access Web services through macros and extensions in earlier versions as well. Given an RPC-style service, a stub needs only a URL, a function name, and a list of parameter names, types, and values to create a SOAP message. For simple return values, little is needed beyond simple pattern matching to retrieve the answer. A PowerPoint slide set containing a Web services call, made publicly available, could generate a request every time a particular slide is viewed.
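
The stub described above really is that small. The following sketch builds an RPC-style SOAP 1.1 envelope from nothing but a function name and a parameter list, and pulls a scalar answer back out with a regular expression. The namespace, operation, and element names are hypothetical examples, not any particular service's interface.

```python
# Hedged sketch: constructing an RPC-style SOAP request by hand, and
# retrieving a simple return value with pattern matching.
import re

def build_soap_request(func: str, params: dict, ns: str = "urn:example") -> str:
    # Serialize each parameter as <name>value</name> inside the operation element.
    args = "".join("<%s>%s</%s>" % (k, v, k) for k, v in params.items())
    return (
        '<?xml version="1.0"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body><m:%s xmlns:m="%s">%s</m:%s></soap:Body>'
        '</soap:Envelope>' % (func, ns, args, func)
    )

def extract_result(response_xml: str, tag: str = "Result") -> str:
    # For simple scalar returns, a regex is often all the "parsing" needed.
    match = re.search(r"<%s>(.*?)</%s>" % (tag, tag), response_xml)
    return match.group(1) if match else ""
```

Nothing here requires a toolkit, a WSDL compiler, or even an XML parser - which is precisely why a macro buried in a slide deck can do it.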

Once a Web services message is prepared, it moves along one of the most ubiquitous and familiar protocols created - HTTP. Many programming languages already have libraries to create HTTP messages, but it is easy to create an HTTP message by hand and send it along a socket. From a programming perspective, it is a simple request/response requiring very little code.
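
As a rough illustration of that simplicity, the sketch below POSTs a prepared envelope using nothing but Python's standard `http.client`. The endpoint, path, and SOAPAction value are placeholders; the headers follow SOAP 1.1 convention.

```python
# Hedged sketch of sending a SOAP message over plain HTTP.
import http.client

def post_soap(host: str, path: str, envelope: str, action: str = "") -> str:
    # host may be "hostname" or "hostname:port"
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("POST", path, body=envelope.encode("utf-8"), headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": '"%s"' % action,   # SOAP 1.1 routes on this header
    })
    resp = conn.getresponse()
    body = resp.read().decode("utf-8", errors="replace")
    conn.close()
    return body
```

One function call, one request/response - the transport side of a Web services client is essentially free.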

So we see that Web services lower the barriers to entry for the construction of distributed applications for legitimate developers and users as well as for illegitimate ones.

Risks Associated with Rogue Web Services
Rogue Web services traffic is more difficult to protect against than random traffic because much of the danger lies in information at the application level that cannot be filtered at the IP level the way traditional firewalls operate. Rogue traffic can easily originate behind the firewall, from people in your own IT shop or even from end users. Nor can an RWS be identified just by source and destination IP - the message may be coming from an RWS at a partner location, so it's important to cut off just the aberrant user, not the entire site. The destination host may expose any number of Web services distinguished only by information not accessible at the IP or even Web server proxy level. While the server may recognize the URL, the actual identity of the operation being invoked is in the contents of the message, requiring a level of filtering capable of examining application-level information.
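
The point about application-level filtering can be sketched in a few lines: the operation being invoked is the first element inside the SOAP Body, so a filter has to parse the message to decide anything. The allow-list and operation names below are hypothetical.

```python
# Hedged illustration: IP filtering cannot see which operation a SOAP message
# invokes; the filter must inspect the message body itself.
import re

ALLOWED_OPERATIONS = {"GetQuote", "ListOrders"}  # hypothetical policy

def operation_of(envelope: str) -> str:
    # By RPC convention, the first child of the SOAP Body names the operation.
    m = re.search(r"<(?:\w+:)?Body[^>]*>\s*<(?:\w+:)?(\w+)", envelope)
    return m.group(1) if m else ""

def allow(envelope: str) -> bool:
    # Two messages to the same URL, from the same IP, can still differ here.
    return operation_of(envelope) in ALLOWED_OPERATIONS
```

A conventional firewall sees identical packets for an allowed `GetQuote` and a forbidden `DeleteAll`; only content inspection can tell them apart.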

RWS, even of the most benign sort, represent a threat to a company's ability to control its own destiny. Even avoiding, for the moment, the worst possible abuses, unknown Web services can create a considerable drain on network resources. Allowing unimpeded grassroots development of Web services without any centralized attempts at standardization can lead to significant duplication as well as many avoidable mismatches among Web services. While the flexibility of the Web services SOA makes it much easier to deal with independently developed Web services, a small investment in shared design can go a long way to avoiding extra work in the long run. Therefore, it is important for an organization to control the set of technologies used.

As many Web services are a thin layer over existing applications, once access to a Web service spreads beyond the approved users, the damage can be as bad as any other kind of intrusion. The intruder can have the same kind of impact as anyone who has logged into your system. As more functionality becomes accessible through Web services, such as management and provisioning, there won't be much that can't be done using Web services. Worse yet, if your security credentials, such as a private key, are stolen, then it is not just your internal systems that are compromised, but your expanded Web services environment as well, including fee-based services.

Success Strategies
Every organization is different. The most successful strategies depend not only on the technologies that are being used but also on the people and organizations involved. Organizationally, many IT groups deal with the rogue service issue through top-down governance, usually by an architecture and standards body. These groups define the ground rules for how services are created, what standards should be followed, and the rules that are required for corporate and industry compliance. In other organizations, governance of Web services is enforced by the CISO or associated security group. In still other organizations, it may be defined and enforced by the IT operations group. More often than not, all of these groups are somehow involved in defining the minimum security, monitoring, and management requirements for WS development, deployment, and management.

Many tools exist for detection, enforcement, and management of the XML Web service environment. A variety of sniffer tools are available for detecting XML and SOAP traffic, many of them free. Using simple rules, you can determine whether the traffic is unsanctioned and fire off the necessary alert. Firewalls and other proxies can also be configured to perform content inspection, although they may lack sophisticated rule sets and the performance for more robust environments. UDDI directories and other service directories can be used to store sanctioned Web services to help ease management. A newer class of product, XML firewalls and Web Services Management (WSM) platforms, addresses the security, monitoring, and management of services. These products are typically noninvasive and help detect and address RWS while providing a management framework and set of tools to enforce top-down governance requirements. Many analysts agree that a fully integrated XML firewall and WSM solution provides, among many other benefits, the best approach to enforcement and ongoing administration for RWS.
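
The "simple rules" a sniffer applies can be as plain as the sketch below: recognize traffic that looks like SOAP, then flag any endpoint that is not in the sanctioned-services registry. The registry contents and endpoint names are invented for illustration; a real deployment would consult a UDDI or similar directory.

```python
# Hedged sketch of a sniffer-style detection rule for unsanctioned SOAP traffic.
# The sanctioned registry here is a hard-coded placeholder.
SANCTIONED = {
    ("hr.example.com", "/payroll"),
    ("crm.example.com", "/orders"),
}

def looks_like_soap(headers: dict, body: str) -> bool:
    # Cheap heuristic: XML content type plus a SOAP Envelope in the payload.
    content_type = headers.get("Content-Type", "").lower()
    return "xml" in content_type and "Envelope" in body

def check_traffic(host: str, path: str, headers: dict, body: str):
    # Return an alert string for unsanctioned SOAP traffic, else None.
    if looks_like_soap(headers, body) and (host, path) not in SANCTIONED:
        return "ALERT: unsanctioned Web service traffic to %s%s" % (host, path)
    return None
```

Crude as it is, a rule like this run against mirrored traffic will surface rogue endpoints that no inventory spreadsheet knows about.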

Nevertheless, an important part of the value of Web services is lost in a regime that is too strictly maintained. Not all Web services are created equal, and infrastructures that don't appropriately distinguish between their varying requirements will veer unacceptably in one direction or another. An effective regime will distinguish between core and periphery, where the core represents the bottom tiers of the client/server architecture and the periphery represents Web service clients. Another important distinction is between services that cause dynamic updates to information or consume significant resources (such as money), and those that don't and may be simply informational. Rather than taking an overly restrictive stance, tools can be used to create policies that adaptively manage Web services traffic, so that important systems are accessible only from approved clients while others can be accessed in a more relaxed fashion, with content filters at the periphery to inspect outgoing information.

The proliferation of RWS is not necessarily a bad sign. In fact, it might be said to indicate the benefits that Web services provide organizations today. However, there are real risks when Web services traffic is not appropriately controlled. The proper procedures and controls, combined with the appropriate technologies, can enable any organization to realize the full value of Web services while minimizing the security and cost risks.

More Stories By Matthew Fuchs

Dr. Matthew Fuchs is a member of the technical staff at Westbridge Technology. Previously, he was chief scientist for XML Technologies at Commerce One, and pioneered the theory and practice of using domain-specific languages in XML and SGML for distributed applications and agent-oriented communication over the Internet. At Commerce One he developed a variety of XML technologies, including SOX, the first implemented, publicly available object-oriented schema language and parser for XML.

