Is Your Application Infrastructure Architecture Based on the Postal Service Delivery Model?


If it is, you might want to reconsider how you’re handling security, acceleration, and delivery of your applications before users “go postal” because of poor application performance.


Sometimes wisdom comes from the most unexpected places. Take Jason Rahm’s status update on Facebook over the holidays. He’s got what is likely a common complaint regarding the delivery model of the US postal service: the inefficiency of where postage due is determined. Everyone has certainly had the experience of sending out a letter (you know, those paper things) and having it returned a week or more later with a big stamp across it stating: Returned – Postage Due.

As Jason points out, the US postal service doesn’t determine whether postage may be due or not until the package arrives at its destination. If the addressee isn’t willing/able to pay that postage due, the package is of course returned via the delivery service, which incurs round-trip costs of transportation and handling at every point along the way.

If this sounds anything like your application infrastructure architecture, then you might want to reconsider how you’re handling the delivery of applications and where you’re applying policies that may affect the delivery process.


STRATEGIC POINTS of CONTROL

Every architecture has them: strategic points of control. These are points at which decisions can – and should – be made regarding the delivery of applications. Such points of control range from routing to admission control (security and identity management functions) to application-specific authorizations. Myriad policies govern access to and delivery of applications, and each one is most efficiently applied at a different point in the infrastructure. If every function – admission control, delivery optimization, application authorization – is applied at the application itself, the result is a postal service architecture in which the same costs (both monetary and in performance) are incurred for every request and response, regardless of whether the requests were actually fulfilled or even legitimate.
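The admission-control idea can be sketched in a few lines. This is a hypothetical, minimal illustration – the rules, pattern list, and `admit` function are invented for this example, not drawn from any particular product – showing a single yes/no decision made at the first point of entry, before any backend resources are spent:

```python
import re

# Hypothetical deny-list applied at the edge, before a request ever
# enters the delivery infrastructure. Patterns are illustrative only.
BLOCKED_PATTERNS = [
    re.compile(r"\.\./"),           # path-traversal attempt
    re.compile(r"<script", re.I),   # naive script-injection probe
]

def admit(request_path: str, authenticated: bool) -> bool:
    """Return True only if the request should enter the infrastructure."""
    if not authenticated:
        return False  # reject at the edge; no backend work is done
    return not any(p.search(request_path) for p in BLOCKED_PATTERNS)

# Decisions made once, at the first strategic point of control:
assert admit("/orders/42", authenticated=True)            # legitimate
assert not admit("/../etc/passwd", authenticated=True)    # rejected at the edge
assert not admit("/orders/42", authenticated=False)       # never reaches a server
```

The point is where the check runs, not what it checks: the same test executed at every application server would cost the full round trip through the infrastructure first.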

If the postal service were cost conscious, it would examine the package at the first strategic point of control, determine the cost from the destination and the package variables, and collect the proper postage before shipping that happy box of caffeine off – rather than returning it, days or weeks later, for lack of postage it could have determined in the first place.

The postal service – and you – likely have all the data available at the first point of entry into your application to determine whether the request is legitimate and what optimizations need to be applied before the package enters “the delivery system”, a.k.a. the infrastructure. Incurring costs associated with processing, storage, and risk by processing what could have already been detected as malicious or illegitimate seems a terrible waste of infrastructure on the scale of the waste associated with the postal service.

Why apply compression to data on the application server when that data may need to be examined by other components in the architecture on the way back to the user – and may, in fact, degrade performance rather than improve it? Why not apply compression at the last point possible: the strategic point of control that sits between your infrastructure and the “rest of the world”, i.e. the user and their network? Why are requests not examined for validity at the first possible strategic point of control? Why allow a potentially dangerous and malicious request to pass through the infrastructure, where it can be processed by every component in the architecture and potentially wreak havoc throughout the data center? Why not examine the request at the first possible point and accept or reject it before the organization incurs the associated processing costs and risks?
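The last-point-possible compression argument can be sketched as well. The `maybe_compress` function below is an invented simplification – real edge devices also weigh content type, payload size, and client quirks – but it shows the essential decision: the payload stays uncompressed (and therefore inspectable by intermediate components) until the final hop, and is compressed only if the client advertised support for it:

```python
import gzip

def maybe_compress(body: bytes, accept_encoding: str):
    """Compress at the edge, the last hop before the user's network.

    Everything upstream of this point sees the uncompressed payload,
    so intermediate components can still examine it.
    """
    headers = {"Content-Type": "text/html"}
    if "gzip" in accept_encoding.lower():
        headers["Content-Encoding"] = "gzip"
        body = gzip.compress(body)
    return body, headers

page = b"<html>" + b"hello world " * 100 + b"</html>"

# Client supports gzip: compress on the wire, smaller payload.
compressed, hdrs = maybe_compress(page, "gzip, deflate")
assert len(compressed) < len(page)
assert gzip.decompress(compressed) == page

# Client does not: pass the body through unchanged.
plain, hdrs2 = maybe_compress(page, "identity")
assert plain == page and "Content-Encoding" not in hdrs2
```

Had the application server compressed the response instead, every inspecting component in between would have had to decompress it first – the performance degradation the paragraph above describes.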

All this additional processing of illegitimate and malicious requests places a burden on the entire infrastructure. Especially in the case of web and application servers, that burden can translate into reduced performance for legitimate users, as well as additional costs in the form of unnecessary increases in the resource capacity required to support both illegitimate and legitimate requests.

You can’t eliminate all the costs, of course, but you can significantly reduce them when you apply application delivery policies at the most strategic point possible in your architecture. That means web application and e-mail scrubbing at the outer edges of your network, preventing spam and illegitimate requests from using up bandwidth and processing power on network, application network, storage, and application infrastructure. It means a reduction in the size of your logs, which makes correlation and reporting easier, faster, and less of a chore for the IT personnel who must comb through gigabytes of data daily, looking for needles in haystacks to help application developers track down errors in application code. It means reducing the overall costs associated with delivering applications to users and improving the performance and reliability of your entire architecture.

Very few IT architects would point to the US postal service as an ideal model of delivery. So if your infrastructure looks anything like the postal service, maybe it’s time to take another look at how you’re applying policies and processing requests and make some modifications to a more cost-effective, efficient service delivery model.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
