So That DNS DDoS Thing Happened

Smurfs aren't just for ICMP anymore...

DNS, like any public service, is vulnerable. Not in the sense that it has vulnerabilities, but in the sense that it must, by its nature and purpose, be publicly available. It can't hide behind access control lists or other traditional security mechanisms, because the whole point of DNS is to provide a way to find your corporate presence in all its digital forms.

It should therefore come as no surprise that DNS eventually turned up in the news as the primary player in a global and quite disruptive DDoS attack.

The gory details, most of which have already been circulated, are nonetheless fascinating given the low technological investment required. You could duplicate the effort with about 30 friends, each with a 30Mbps connection (that means I'm out, sorry). As those who've been in the security realm for a while know, these attacks require very little on the attacker's side; the desired effect comes from the unbalanced request-response ratio inherent in many protocols, DNS being one of them.

In the world of security taxonomies these are called "amplification" attacks. They aren't new; "Smurf attacks" (which exploited ICMP) were first seen in the 1990s and effected their disruption by taking advantage of broadcast addresses. DNS amplification works on the same premise, because queries are small but responses tend to be large. Both ICMP and DNS amplification attacks are effective because they use UDP, which does not require a handshake and is entirely uninterested in verifying whether or not the IP address in the request is the one from which the request was actually sent. It's ripe for spoofing, with much less work than a connection-oriented protocol such as TCP.

To understand just how unbalanced the request-response ratio was in this attack, consider that the request was: “dig ANY ripe.net @<OpenDNSResolverIP> +edns=0 +bufsize=4096”. That's 36 bytes. The responses are typically around 3,000 bytes, for an amplification factor of roughly 100. There were 30,000 open DNS resolvers involved in the attack, each sending 2.5Mbps of traffic, all directed at the target victim. CloudFlare has a great blog on the attack; I recommend a read. Another good resource on DNS amplification attacks is this white paper. Also fascinating is that this attack differed in that the target was sent a massive number of DNS responses - rather than queries - that it never solicited in the first place.
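If you want to see the ratio for yourself, here's a minimal sketch using Python's dnspython library that sends the same style of ANY query and compares the request and response sizes. The resolver address is a placeholder - point it only at a resolver you operate - and note that many modern resolvers refuse or truncate ANY queries over UDP, so your numbers will vary.

# Measure the request/response size ratio of an ANY query (dnspython).
# RESOLVER is a placeholder; substitute a resolver you operate.
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"

# The same style of query: ANY ripe.net with EDNS0 and a 4096-byte buffer.
query = dns.message.make_query("ripe.net", dns.rdatatype.ANY,
                               use_edns=0, payload=4096)
request_size = len(query.to_wire())

response = dns.query.udp(query, RESOLVER, timeout=5)
response_size = len(response.to_wire())

print(f"request:  {request_size} bytes")
print(f"response: {response_size} bytes")
print(f"amplification: {response_size / request_size:.1f}x")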

The problem is that DNS is, well, public. Restricting responses could unintentionally block legitimate client resolvers, causing a kind of self-imposed denial of service. That's not acceptable. Transitioning to TCP to take advantage of handshaking - and thus improve the ability to detect and shut down attempted attacks - would certainly work, but at the price of performance. While F5's BIG-IP DNS solutions are optimized to avoid that penalty, most DNS infrastructure isn't, and that means a general slowdown of a process that's already considered "too slow" by many users, particularly those trying to navigate the Internet via a mobile device.

So it seems we're stuck with UDP and with being attacked. But that doesn't mean we have to sit back and take it. There are ways in which you can protect against the impact of such an attack as well as others lurking in the shadows.

1. VALIDATE REQUESTS (and CHECK RESPONSES)

It is important to validate that the queries clients send are ones the DNS servers are interested in answering - and able to answer. A DNS firewall or other security product can be used to validate queries and allow only those the DNS server is configured to serve. When the DNS protocol was designed, many features were built into it that are no longer valid given the evolving nature of the Internet. This includes many DNS query types, flags, and other settings. One would be surprised at what types of parameters can be set on a DNS request and how they can be manipulated. For example, DNS type 17 is RP, the Responsible Person for a record. In addition, there are ways to disrupt DNS communications by putting bad data in many of these fields. A DNS firewall is able to inspect these DNS queries and drop requests that do not conform to DNS standards or that use parameters the DNS servers are not configured for.
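To make the idea concrete, here's a toy sketch of that kind of query filtering in Python (dnspython again). The allowlist is illustrative, not a recommendation, and a real DNS firewall does far more than this:

# Toy query filter: drop anything malformed or asking for a record type
# this zone doesn't actually serve. The allowlist is illustrative only.
import dns.flags
import dns.message
import dns.rdatatype

ALLOWED_QTYPES = {
    dns.rdatatype.A,
    dns.rdatatype.AAAA,
    dns.rdatatype.MX,
    dns.rdatatype.NS,
}

def should_drop(wire: bytes) -> bool:
    """Return True if the packet should be dropped before it reaches DNS."""
    try:
        msg = dns.message.from_wire(wire)
    except Exception:
        return True                      # malformed: not valid DNS at all
    if msg.flags & dns.flags.QR:
        return True                      # a response where we expect queries
    if len(msg.question) != 1:
        return True                      # standard queries carry one question
    return msg.question[0].rdtype not in ALLOWED_QTYPES  # e.g. ANY, RP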

But as this attack proved, it's not just queries you have to watch out for - it's also responses. F5 DNS firewall features include stateful inspection of responses, which means any unsolicited DNS responses are immediately dropped. While that won't change the impact on bandwidth, it will keep the server from becoming overwhelmed by processing unnecessary responses.
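The statefulness boils down to a simple rule: remember each outstanding query, and accept a response only if it matches one. Here's a toy model of the concept in Python - a sketch of the idea, not F5's implementation:

# Toy model of stateful response inspection: a response is legitimate only
# if it matches a query we sent and are still waiting on.
import dns.flags
import dns.message

# Outstanding queries, keyed by (server_ip, transaction_id, question_name).
pending: set[tuple[str, int, str]] = set()

def record_query(server_ip: str, wire: bytes) -> None:
    q = dns.message.from_wire(wire)
    pending.add((server_ip, q.id, str(q.question[0].name)))

def accept_response(server_ip: str, wire: bytes) -> bool:
    try:
        r = dns.message.from_wire(wire)
    except Exception:
        return False                     # not even parseable DNS
    if not (r.flags & dns.flags.QR) or not r.question:
        return False                     # not a response, or nothing to match
    key = (server_ip, r.id, str(r.question[0].name))
    if key in pending:
        pending.discard(key)             # consume state; replays drop too
        return True
    return False                         # unsolicited: drop it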

F5’s DNS Services includes industry-leading DNS Firewall services

2. ENSURE CAPACITY

DNS query capacity is critical to delivering a resilient, available DNS infrastructure. Most organizations recognize this and put in place solutions to ensure high availability and scale of DNS. Often these solutions are simply caching DNS load balancing solutions, which carry their own set of risks, including vulnerability to attacks using random, jabberwocky host names. Caching DNS solutions only cache responses returned from authoritative sources, so when presented with an unknown host name they must query the origin server. Given a high enough volume of queries, the origin servers can still be overwhelmed, regardless of the capacity of the caching intermediary.
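It's easy to see why random host names defeat the cache: every new label is a guaranteed miss, so every query falls through to the origin. A quick illustration in Python, with a plain dictionary standing in for the caching resolver:

# Why "jabberwocky" host names defeat a caching layer: every random
# label is a guaranteed cache miss, so each one hits the origin server.
import random
import string

cache: dict[str, str] = {}   # stands in for a caching DNS resolver
origin_queries = 0           # queries that fell through to the origin

def resolve(name: str) -> str:
    global origin_queries
    if name not in cache:
        origin_queries += 1          # cache miss: the origin must answer
        cache[name] = "192.0.2.1"    # placeholder answer
    return cache[name]

# An attacker never repeats a name, so the cache never helps.
for _ in range(10_000):
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    resolve(f"{label}.example.com")

print(f"origin saw {origin_queries} of 10000 queries")  # all of them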

A high-performance, in-memory authoritative DNS server such as F5 DNS Express (part of F5 BIG-IP Global Traffic Manager) can shield origin servers from being overwhelmed.

3. PROTECT AGAINST HIJACKING

The vulnerability of DNS to hijacking and poisoning is still very real. In 2008, researcher Evgeniy Polyakov showed that it was possible to poison the cache of a fully patched DNS server running current code within 10 hours. This is simply unacceptable in an Internet-driven world that relies, ultimately, on the validity and integrity of DNS. The best solution to this and other vulnerabilities that compromise the integrity of DNS information is DNSSEC. DNSSEC was introduced specifically to correct the open and trusting nature of the protocol's original design. DNS records are signed using keys that let a resolver validate that the answer was not tampered with and that it came from the legitimate authoritative source.
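To get a feel for what that validation looks like, here's a minimal sketch using dnspython that fetches a zone's DNSKEY RRset with the DNSSEC OK bit set and verifies its self-signature. The resolver address is a placeholder, dnspython needs a cryptography backend installed, and a real validator must also chase DS records up the chain of trust to the root - all omitted here.

# Minimal DNSSEC sketch: fetch a zone's DNSKEY RRset and verify its
# self-signature. A real validator also walks the chain of trust to the
# root via DS records; that step is omitted here.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"  # placeholder: a DNSSEC-aware resolver
zone = dns.name.from_text("ripe.net.")

# want_dnssec=True sets the DO bit so RRSIGs come back with the answer;
# TCP avoids UDP truncation on large DNSKEY responses.
req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
resp = dns.query.tcp(req, RESOLVER, timeout=5)

# Pull the DNSKEY RRset and its covering RRSIG out of the answer section.
dnskey = rrsig = None
for rrset in resp.answer:
    if rrset.rdtype == dns.rdatatype.DNSKEY:
        dnskey = rrset
    elif rrset.rdtype == dns.rdatatype.RRSIG:
        rrsig = rrset

# Raises dns.dnssec.ValidationFailure if the signature doesn't check out.
dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
print("DNSKEY RRset signature verified")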

F5 BIG-IP Global Traffic Manager (GTM) not only supports DNSSEC, but does so without breaking global server load balancing techniques.

As a general rule, you should verify that you aren't accidentally running an open resolver. Consider the benefits of implementing DNS with an ICSA-certified, hardened solution that does not function as an open resolver, period. And yes, F5 is a good choice for that.
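A quick spot-check is to probe the server from outside your network with a recursive query for a name it isn't authoritative for; if it answers, it's recursing for strangers. A sketch (the target address is a placeholder - only probe servers you're responsible for):

# Rough open-resolver check: ask the server, from outside the network,
# a recursive question for a name it isn't authoritative for.
import dns.exception
import dns.flags
import dns.message
import dns.query

TARGET = "192.0.2.53"  # placeholder: the server you want to check

query = dns.message.make_query("example.com.", "A")  # RD flag set by default
try:
    resp = dns.query.udp(query, TARGET, timeout=3)
except dns.exception.Timeout:
    print("no answer: not an open resolver (or filtered)")
else:
    recursed = bool(resp.flags & dns.flags.RA) and bool(resp.answer)
    print("open resolver!" if recursed else "refused to recurse")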


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
