SDN Journal: Blog Feed Post

So That DNS DDoS Thing Happened

Smurfs aren't just for ICMP anymore...

DNS, like any public service, is vulnerable. Not in the sense that it has vulnerabilities but vulnerable in the sense that it must, by its nature and purpose, be publicly available. It can't hide behind access control lists or other traditional security mechanisms because the whole point of DNS is to provide a way to find your corporate presence in all its digital forms.

It should therefore not come as a surprise that eventually it turned up in the news as the primary player in a global and quite disruptive DDoS attack.

The gory details, most of which have already been circulated, are nonetheless fascinating given the low technological investment required. You could duplicate the effort with about 30 friends, each with a 30Mbps connection (that means I'm out, sorry). As those who've been in the security realm for a while know, these attacks require very little on the attacking side; the desired effect comes from the unbalanced request-to-response ratio inherent in many protocols, DNS among them.

In the world of security taxonomies these are called "amplification" attacks. They aren't new; "Smurf attacks" (which exploited ICMP) were first seen in the 1990s and effected their disruption by taking advantage of broadcast addresses. DNS amplification works on the same premise, because queries are small but responses tend to be large. Both ICMP and DNS amplification attacks are effective because they use UDP, which does not require a handshake and is entirely uninterested in verifying whether or not the IP address in the request is the one from which the request was received. It's ripe for spoofing with much less work than a connection-oriented protocol such as TCP.
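To see why UDP-based DNS is so easy to abuse, it helps to look at how little a query actually is. Here's a minimal sketch that hand-builds a DNS query packet with Python's standard library; the transaction ID and name are illustrative, and a real attacker would simply stamp the victim's IP as the UDP source address before sending it:

```python
import struct

def build_dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal single-question DNS query packet (qtype 255 = ANY)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (trivially forgeable, no handshake)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question; no answer/authority/additional
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(len(label).to_bytes(1, "big") + label.encode()
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("ripe.net")
print(len(query))  # 26 — the entire spoofed payload fits in a few dozen bytes
```

Nothing in that packet proves who sent it; the resolver answers whatever source address appears on the UDP datagram, which is the entire basis of the reflection.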

To understand just how unbalanced the request-response ratio was in this attack, consider that the request was: “dig ANY ripe.net @<OpenDNSResolverIP> +edns0=0 +bufsize=4096”. That's 36 bytes. The responses were typically around 3K bytes, for an amplification factor of roughly 100. There were 30,000 open DNS resolvers involved in the attack, each sending 2.5Mbps of traffic, all directed at the target victim. CloudFlare has a great blog on the attack; I recommend a read. Another good resource on DNS amplification attacks is this white paper. Also fascinating is that this attack differed in that the target was sent a massive number of DNS responses - rather than queries - that it never solicited in the first place.
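The arithmetic behind those numbers is worth spelling out, because it's the whole economics of the attack. Using the figures above (a ~3,600-byte response against the 36-byte query, and 30,000 resolvers at 2.5Mbps each):

```python
# Amplification: tiny spoofed request, large reflected response
request_bytes = 36
response_bytes = 3600          # ~3K-byte responses reported for "dig ANY ripe.net"
amplification = response_bytes / request_bytes
print(amplification)           # 100.0

# Aggregate flood arriving at the victim
resolvers = 30_000
per_resolver_mbps = 2.5
aggregate_gbps = resolvers * per_resolver_mbps / 1000
print(aggregate_gbps)          # 75.0 Gbps hitting the target
```

The attacker only has to originate 1/100th of that traffic, spread across thousands of reflectors, which is why a handful of modest connections is enough.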

The problem is DNS is, well, public. Restricting responses could unintentionally block legitimate client resolvers, causing a kind of self-imposed denial of service. That's not acceptable. Transitioning to TCP to take advantage of handshaking - and thus improve the ability to detect and shut down attempted attacks - would certainly work, but at the price of performance. While F5's BIG-IP DNS solutions are optimized to avoid that penalty, most DNS infrastructure isn't, and that means a general slowdown of a process that's already considered "too slow" by many users, particularly those trying to navigate the Internet via a mobile device.

So it seems we're stuck with UDP and with being attacked. But that doesn't mean we have to sit back and take it. There are ways in which you can protect against the impact of such an attack as well as others lurking in the shadows.

1. DEPRECATE REQUESTS (and CHECK RESPONSES)

It is important to validate that the queries being sent by clients are ones that the DNS servers are interested in answering, and able to answer. A DNS firewall or other security product can be used to validate queries and allow only those the DNS server is configured for. When the DNS protocol was designed, a lot of features were built in that are no longer valid given the evolving nature of the Internet. This includes many DNS query types, available flags and other settings. One would be surprised at what types of parameters can be set on a DNS request and how they can be manipulated. For example, DNS type 17 is RP, the Responsible Person for that record. In addition, there are ways to disrupt DNS communications by putting bad data in many of these fields. A DNS firewall is able to inspect these DNS queries and drop requests that do not conform to DNS standards or that use parameters the DNS servers are not configured for.
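The core of that inspection step can be sketched very simply: parse the query type out of the raw packet and drop anything outside an allowlist derived from what the servers actually serve. The allowlist below is hypothetical, and a real DNS firewall inspects far more than QTYPE, but the shape of the check is the same:

```python
# Hypothetical allowlist; a real deployment derives this from server configuration
ALLOWED_QTYPES = {1, 2, 15, 28}   # A, NS, MX, AAAA

def qtype_of(packet: bytes) -> int:
    """Extract QTYPE from a single-question DNS query packet."""
    i = 12                         # skip the 12-byte DNS header
    while packet[i]:               # walk QNAME labels to the terminating zero byte
        i += 1 + packet[i]
    return int.from_bytes(packet[i + 1:i + 3], "big")

def accept(packet: bytes) -> bool:
    return qtype_of(packet) in ALLOWED_QTYPES

# An "ANY" (type 255) query for ripe.net, like the one used in the attack
query_any = (bytes.fromhex("123401000001000000000000")
             + b"\x04ripe\x03net\x00"
             + (255).to_bytes(2, "big") + (1).to_bytes(2, "big"))
print(accept(query_any))  # False: ANY queries are dropped at the edge
```

Dropping ANY queries alone would have blunted this particular attack, since the amplification depended on the oversized ANY response.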

But as this attack proved, it's not just queries you have to watch out for - it's also responses. F5 DNS firewall features include stateful inspection of responses, which means any unsolicited DNS responses are immediately dropped. While that won't change the impact on bandwidth, it will keep the server from becoming overwhelmed by processing unnecessary responses.
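Stateful response inspection boils down to remembering which queries you actually sent and refusing to parse anything that doesn't match. A minimal sketch of that bookkeeping, assuming the device sees both directions of traffic (function names here are illustrative, not F5 API calls):

```python
# Outstanding queries we originated: (transaction_id, resolver_address) pairs
outstanding = set()

def on_query_sent(txid: int, resolver: str) -> None:
    outstanding.add((txid, resolver))

def on_response(txid: int, resolver: str) -> bool:
    """Return True only for responses matching a query we actually sent."""
    key = (txid, resolver)
    if key in outstanding:
        outstanding.discard(key)   # one response consumes the pending entry
        return True
    return False                   # unsolicited: drop before wasting cycles on it

on_query_sent(0x1234, "192.0.2.53")
print(on_response(0x1234, "192.0.2.53"))    # True: solicited, pass it through
print(on_response(0x9999, "198.51.100.7"))  # False: reflected attack traffic
```

The unsolicited responses still consume inbound bandwidth, as noted above, but they're discarded after a cheap set lookup instead of a full DNS parse.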

F5’s DNS Services includes industry-leading DNS Firewall services

2. ENSURE CAPACITY

DNS query capacity is critical to delivering a resilient, available DNS infrastructure. Most organizations recognize this and put solutions in place to ensure high availability and scale of DNS. Often these are simply caching DNS load balancing solutions, which have their own set of risks, including vulnerability to attacks using random, jabberwocky host names. Caching DNS solutions only cache responses returned from authoritative sources, and thus, when presented with an unknown host name, must query the origin server. Given a high enough volume of queries, the origin servers can still be overwhelmed, regardless of the capacity of the caching intermediary.
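The failure mode is easy to demonstrate: because every never-before-seen name is a cache miss, a flood of unique labels passes straight through the cache to the origin. A toy simulation (the resolver and answer here are stand-ins, not any real implementation):

```python
cache = {}
origin_queries = 0

def resolve(name: str) -> str:
    """Toy caching resolver: every miss falls through to the origin server."""
    global origin_queries
    if name not in cache:
        origin_queries += 1          # the origin does the work on a miss
        cache[name] = "192.0.2.1"    # stand-in answer
    return cache[name]

# Legitimate traffic repeats a handful of names; the cache absorbs it
for _ in range(1000):
    resolve("www.example.com")

# A never-repeating "jabberwocky" flood defeats the cache entirely
for i in range(1000):
    resolve(f"x{i}.example.com")

print(origin_queries)  # 1001: one legitimate miss plus 1000 attack misses
```

Two thousand queries, but the origin absorbed over half of them, which is why an in-memory authoritative tier in front of the origin matters.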

A high performance in-memory authoritative DNS server such as F5 DNS Express (part of F5 BIG-IP Global Traffic Manager) can shield origin servers from being overwhelmed.

3. PROTECT AGAINST HIJACKING

The vulnerability of DNS to hijacking and poisoning is still very real. In 2008, researcher Evgeniy Polyakov showed that it was possible to poison the cache of a patched DNS server running current code within 10 hours. This is simply unacceptable in an Internet-driven world that relies, ultimately, on the validity and integrity of DNS. The best solution to this and other vulnerabilities that compromise the integrity of DNS information is DNSSEC. DNSSEC was introduced specifically to correct the open and trusting nature of the protocol’s original design. DNS responses are signed using keys that validate that the DNS answer was not tampered with and that it came from a reliable DNS server.
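The principle is that a forged or tampered answer fails signature validation and is discarded. As a deliberately loose illustration only: real DNSSEC uses asymmetric RSA/ECDSA signatures carried in RRSIG records and validated up a chain of trust, whereas this sketch fakes the idea with a hash over a shared zone key:

```python
import hashlib

# Toy stand-in for zone signing; real DNSSEC signs with a private key and
# validators check the signature against the zone's published public key (DNSKEY)
ZONE_KEY = b"example-zone-signing-key"

def sign(rrset: bytes) -> bytes:
    return hashlib.sha256(ZONE_KEY + rrset).digest()

def validate(rrset: bytes, signature: bytes) -> bool:
    return sign(rrset) == signature

record = b"www.example.com. IN A 192.0.2.10"
sig = sign(record)
print(validate(record, sig))                                # True: intact answer
print(validate(b"www.example.com. IN A 203.0.113.9", sig))  # False: poisoned answer
```

A poisoner who can guess transaction IDs can still inject a response, but without the zone's signing key the forged record fails validation and never enters the cache.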

F5 BIG-IP Global Traffic Manager (GTM) not only supports DNSSEC, but does so without breaking global server load balancing techniques.

As a general rule, you should verify that you aren't accidentally running an open resolver. Consider the benefits of implementing DNS with an ICSA certified and hardened solution that does not function as an open resolver, period. And yes, F5 is a good choice for that.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
