
Any Means Possible: Tales from Penetration Testing

Problems centered on web service APIs can potentially be just as dangerous as an SQLi vulnerability

When we aren't fighting crime, taking over the world, or enjoying a good book by the fire, we here on the eEye Research team like to participate in the Any Means Possible (AMP) Penetration Testing engagements with our clients. For us, it's a great way to interact one-on-one with IT folks and really dig into the security problems they are facing. We can sharpen our skills with real-world scenarios and practice the academic techniques presented in the industry, all the while helping to connect better with our customers and identify their security needs. During these engagements, we target a number of attack surfaces, ranging from exposed external server interfaces to client-side attacks launched on individual workstations. What I would like to talk about today is centered purely on the web-based attack surface: a common problem we see consistently during our AMP engagements.

When talking about web vulnerabilities, you can't even begin to broach the subject without someone throwing out Cross-Site Scripting (XSS) or SQL Injection (SQLi). Unfortunately, poor little web services never seem to get any attention in the mix. Web service vulnerabilities are arguably just as widespread and dangerous as the aforementioned classes of vulnerabilities, but with so little talk and discussion around them, very rarely are these issues identified and remediated. Let's fix that.

Vulnerabilities in web services stem from the developer's line of thinking that says "I can trust the input from programs that I write." It's true that, in some situations, data coming from a known source that you wrote can be trusted. This is not, however, true when that data travels over an untrusted medium such as the Internet. A common mistake in web design and development that we still see frequently is a server relying on input that was parsed and filtered by the client's browser. An example would be JavaScript running in the browser that does all of the filtering for malicious characters. Surely the JavaScript has filtered out all characters that could allow an attacker to insert malicious SQL queries into the back-end database, right? Wrong: any data that the server receives from a client's browser can also be sent directly to the server by another, custom-written application. This means that an attacker can bypass client-side SQLi and XSS protections entirely by simply sending the queries directly to the server. Once traffic crosses the Internet, it becomes quite difficult to determine exactly how the data was sent; it may never have come from the application you intended it to come from. This makes exploitation of these vulnerabilities a bit more obscure, but still entirely possible. The same holds true for web service APIs used by client-side applications.
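
To make that concrete, here is a minimal sketch of sending input straight to the server, skipping whatever JavaScript filtering the page ships with. The URL and form-field names below are hypothetical, purely for illustration; any HTTP client would do the same job.

using System;
using System.Collections.Generic;
using System.Net.Http;

class DirectPost
{
    static void Main()
    {
        var client = new HttpClient();
        // Characters the in-browser filter would normally strip go straight through here.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "username", "admin' OR '1'='1" },
            { "comment", "<script>alert(1)</script>" }
        });
        // Hypothetical endpoint; in practice this is whatever the browser was observed posting to.
        var response = client.PostAsync("https://www.example.com/submit.aspx", form).Result;
        Console.WriteLine(response.StatusCode);
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}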

Figure 1: Demo Microsoft Silverlight application. On the left is a failed attempt to log in and reveal the user's secret data; on the right is a successful login.

Many in-browser applications, such as those built on Adobe Flash and Microsoft Silverlight, communicate back to the server programmatically using web services. These services are exposed interfaces on the server that can be called directly from custom-written applications. In many situations, they expose potentially sensitive and privileged information that would otherwise not be accessible. Figure 1 shows a Microsoft Silverlight application that was constructed for demonstration purposes. It is not vulnerable to XSS or SQLi and, to the average user, there is nothing about it that would allow someone without a password to access the legitimate user's secret data. However, what a lot of people don't seem to take into consideration is that you have access to anything that is running in your browser. Now, we can't pull the entire project down off the server, but we can reverse-engineer the application interface running in the browser to see if anything potentially sensitive is being exposed.

The first thing that should be done when auditing web sites is to make sure all requests are being logged through a local request proxy. For this example, I will be using Tamper Data (https://addons.mozilla.org/en-US/firefox/addon/tamper-data/) to log all of the requests that Firefox makes to our target Silverlight application. Right away, we see that the application requests an XAP file, shown in Figure 2. This is a fun thing to play around with, and I will come back to it later. As soon as we click the button on the page, we see the browser make a request to an SVC file; this is our web services interface and is also shown in Figure 2.

Figure 2: Browser makes requests for an XAP file and an SVC file. The XAP file is loaded immediately into the browser when the application is started and the SVC file is loaded as soon as the user attempts to submit data back to the server.

Now, when we find a site serving up an SVC web services file, it's usually game over for that particular site. The reason is that these interfaces are usually trusted by the developer. Developers will assume that the only thing calling these exposed interfaces is the client application that they wrote. However, browsing to the service file directly in your favorite web browser will usually show you the basic interface of the exposed web service. The next step is creating a custom application to interface with the web service directly. You can use any language that you want as long as it can interface with a web server, but I usually like to use C# in Visual Studio. Creating the application is quite easy - simply create a new C# project and add a service reference to the hosted SVC file. Visual Studio will automatically import all of the references to everything exposed by the service. Figure 3 shows what is exposed by the sample service.

Figure 3: Object Viewer's list of the imported Web Services interface.

This service exports two functions: GetUserSecret and Login. The interesting thing here is that GetUserSecret takes a string and gives back a string, likely representing the secret data associated with the provided user. Now, it's perfectly possible that there is some form of authentication check on the server side when this function is called, ensuring that no secrets are disclosed to unauthenticated clients. However, in many situations I have encountered, this is not the case. We can test whether the code properly checks for authentication by writing our own custom interface for the exposed web service. The following code snippet instantiates a client and queries for the secret data of two users without first authenticating with the server. Figure 4 shows the output from that program.

// Instantiate the generated service client and call GetUserSecret directly,
// without ever calling Login first.
LoginService.LoginServiceClient client = new LoginService.LoginServiceClient();
Console.WriteLine("eEyeResearch's secret: " + client.GetUserSecret("eEyeResearch"));
Console.WriteLine("admin's secret: " + client.GetUserSecret("admin"));

Figure 4: Output from the code written to call the example service directly.

The output from our code shows that this exposed service is callable directly, without requiring any authentication. The only information needed is the user's name and, as many of you know from attacks that have made the press over the past year, that information can be acquired quite easily through social engineering or brute-force-style attacks.
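
For what it's worth, Visual Studio's generated proxy isn't even required. A hand-written WCF channel built from the information the SVC page itself advertises works just as well; the sketch below assumes the demo service's default namespace and a plain BasicHttpBinding, so treat the contract and URL as placeholders.

using System;
using System.ServiceModel;

// Contract reconstructed by hand from the exposed service description.
[ServiceContract]
public interface ILoginService
{
    [OperationContract]
    string GetUserSecret(string userName);
}

class DirectCall
{
    static void Main()
    {
        // Point a channel factory at the hosted SVC file and call the operation directly.
        var factory = new ChannelFactory<ILoginService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://target.example.com/LoginService.svc"));
        ILoginService service = factory.CreateChannel();
        Console.WriteLine("admin's secret: " + service.GetUserSecret("admin"));
    }
}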

This vulnerability is quite straightforward, but I think many of you would be surprised how often we encounter issues very similar to this in real-world penetration testing scenarios. It's an easy mistake to assume that any malicious tampering with a web page would be done through a browser or front-end web application, but the simple truth is that this is not the case.

If you wanted to take this a bit further, you could examine the manifest files that are used by the client-side browser application. Remember the XAP file mentioned at the beginning? That file is actually a ZIP archive containing manifest information and binary executable files used by the Silverlight application. Examining these files will show you all of the web service APIs that the application can potentially call, even the authenticated ones. This information has proven to be quite useful on various engagements. One simple web application that wasn't vulnerable to XSS or SQLi revealed a manifest of previously unknown web services, which eventually allowed downloading all of the information hidden behind the login page. Because these services were only referenced after the user had authenticated through the login screen, they might never have been found in a purely unauthenticated audit had the manifest files not been checked for additional exposed interfaces.
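
If you want to peek inside one of these archives yourself, something along these lines is enough; the file name is a placeholder, and the listing will typically show the application manifest alongside the client-side assemblies worth searching for service URLs.

using System;
using System.IO.Compression;

class XapPeek
{
    static void Main(string[] args)
    {
        // A XAP is just a ZIP archive: open it directly and list what the
        // Silverlight application actually ships down to the browser.
        string path = args.Length > 0 ? args[0] : "app.xap";
        using (ZipArchive xap = ZipFile.OpenRead(path))
        {
            foreach (ZipArchiveEntry entry in xap.Entries)
            {
                Console.WriteLine(entry.FullName + " (" + entry.Length + " bytes)");
            }
        }
    }
}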

As if freely available manifest information weren't enough, the DLL files present in this archive can also prove to be a lot of fun. Ask any professional or hobbyist reverse engineer: languages such as Java or C# are quite easy to decompile. Due to the managed nature of such languages, there are freely available tools that do quite a good job of turning the compiled binaries back into the original (or very similar) high-level code. These DLLs represent only the client-side code that Silverlight executes in the browser, so you won't be getting the original server code out of this. However, a very common mistake made by programmers is to incorporate some of the application logic into the user interface as well. In these situations, a reversing session may yield valuable information about how the application works behind the scenes. In fact, this has been used in the past to gain all kinds of interesting information about target applications, including default credentials to the authenticated sections of the application, which were set in a button click-event handler of the application's user interface.

Though this entire article has focused purely on Silverlight, the same concept applies to most other client-side web applications out there. Often, these applications will rely on web services to communicate with the server, for unauthenticated and authenticated communications alike. Developers frequently rely on the client-side application to do all of the relevant filtering and data-integrity checking of the information being sent to these web services.

Along with performing authenticated actions from an unauthenticated application, we have used these service APIs to inject malicious data into hosted material. My favorite case was a Flash application, attacked as part of an AMP engagement, that called a web service API in the background. This API was used to lay text over greeting card images that were being hosted on the affected server. The Flash application filtered input to allow only alphanumeric characters, but calling the API directly allowed us to insert malicious JavaScript to sit on top of the images. Upon viewing the page or the image link directly, we gained the ability to execute arbitrary JavaScript in the user's browser or embed hidden iframes that could be used to host various exploits. The basic point here is that successful exploitation can yield a variety of things for the attacker. This isn't something that, when exploited, only dumps information or only changes the way a page is viewed; the limits of these vulnerabilities are determined only by the functionality of the web application.

Problems centered on web service APIs can potentially be just as dangerous as an SQLi vulnerability. It's somewhat unfortunate that SQLi has become so trendy, taking away any deserved fame or glory from the other interesting web vulnerabilities. It's important to keep in mind that, though this was a heavily Microsoft- and Silverlight-focused example, the same issues apply across the board in many different web application technologies. The issue is actually very easy to audit for, especially if you already know exactly what your application should and shouldn't be able to do at every level of authentication.

If you manage servers hosting websites, or the websites themselves, I recommend taking a few minutes to sit down and browse through each of these services. Be aware of exactly what is exposed on the external-facing interfaces. If anything looks out of place, try connecting directly to the service and see what information is exposed and available to your users. Try preventing the service from advertising its metadata by removing the mex endpoint binding and setting httpGetEnabled for service metadata to false in the web configuration file. This prevents users from reading the web service descriptions and makes it nontrivial to arbitrarily connect to and communicate with these services without prior knowledge of the internal workings of the application. These problems are quite easy to identify, potentially trivial to remediate, and can save an organization from a serious compromise if steps are taken to proactively identify and address them.
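
As a rough illustration, the lockdown described above looks something like this in a WCF-style web.config; the service, contract, and behavior names are placeholders for whatever your application actually uses.

<system.serviceModel>
  <services>
    <service name="LoginService" behaviorConfiguration="NoMetadata">
      <!-- The "mex" metadata endpoint has been removed; only the application endpoint remains. -->
      <endpoint address="" binding="basicHttpBinding" contract="ILoginService" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="NoMetadata">
        <!-- Stop publishing the service description over HTTP GET. -->
        <serviceMetadata httpGetEnabled="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>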

•   •   •

This article was written by Jared Day, a researcher with eEye's Research Team led by Marc Maiffret.

If you are interested in learning more about our AMP services, you can visit our page here (http://www.eeye.com/services/penetration-testing).

More Stories By Jared Day

Jared Day is a Security Research Engineer on the eEye Research Team. He joined the research team in 2010 and works primarily as a security advocate for eEye clients, participating in and leading Any Means Possible (AMP) penetration tests as well as conducting custom private research related to malware, threat, and patch mitigation analysis.
