BYOD and the Death of the DMZ

It's context that counts, not corporate connections

BYOD remains a topic of interest as organizations grapple with the trend not only technologically but also politically. There are dire warnings that refusing to support BYOD will make it impossible to attract and retain up-and-coming technologists, and that ignoring the problems associated with BYOD will eventually result in some sort of karmic IT event painful for all involved.

Surveys continue to tell us organizations cannot ignore BYOD. A recent ITIC survey indicated a high level of BYOD adoption across the 550 global companies polled.

51% of workers use smartphones as their BYOD devices; another 44% use notebooks and ultrabooks, while 31% of respondents indicated they use tablets (most notably the Apple iPad) and 23% use home-based desktop PCs or Macs.

It's here, it's now, and it's in the data center. The question is no longer "will you allow it?" but "how will you secure, manage, and support it?" It's that first piece – securing it – that's causing some chaos and confusion. Just as we discovered with cloud computing early on, responsibility for anything shared is muddled. When asked who should bear responsibility for the security of devices in BYOD situations, respondents offered a nearly equal split between the company (37%) and the end-user (39%), with 21% stating it fell equally on both.

From an IT security perspective, this is not a bad split. Employees should be active participants in organizational security. Knowing is, as G.I. Joe says, half the battle, and if employees bringing their own devices to work are informed and understand the risks, they can actively participate in improving security practices and processes.

But relying on end-users for organizational security would be folly, and thus IT must take responsibility for the technological enforcement of security policies developed in conjunction with the business.

One of the first and most important things we must do to enable better security in a BYOD (and cloudy) world is to kill the DMZ.

[Pause for apoplectic fits]

By kill the DMZ I don't mean physically dismantle the underlying network architecture supporting it – I mean logically. The DMZ was developed as a barrier between the scary, dangerous Internet and sensitive corporate data and applications. That barrier must now extend inside the data center, to the LAN, where the long-standing assumption has been that devices and users accessing data center resources are inherently safe.

They are not (probably never have been, really).

Every connection, every request, every attempt to access an application or data within the data center must be treated as suspect, regardless of where it originated and without automatically granting certain devices privileges over others. A laptop on the LAN may or may not be BYOD, may or may not be secure, may or may not be infected. A laptop on the LAN is no more innately safe than a tablet or a smartphone.

SMARTER CONTROL

This is where the concept of a strategic point of control comes in handy. If every end-user is funneled through the same logical tier in the data center, regardless of network origin, policies can be centrally deployed and enforced to ensure appropriate levels of access based on the security profile of the device and the user.
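
To make that concrete, here is a minimal sketch (in Python, purely for illustration) of what such a single enforcement tier might look like. Everything in it is an assumption invented for this example – the AccessRequest fields, the posture flags, the application names – not F5 configuration or any particular product's API. The point is only that every request, LAN or WAN, funnels through one policy decision.

from dataclasses import dataclass

# Hypothetical request context assembled by the enforcement tier.
# Field names are illustrative assumptions, not a real product API.
@dataclass
class AccessRequest:
    user: str
    user_groups: set
    device_type: str          # "laptop", "tablet", "smartphone", ...
    device_managed: bool      # corporate-managed vs. BYOD
    device_posture_ok: bool   # e.g., patched, encrypted, AV running
    source_network: str       # "lan" or "wan" -- recorded, never trusted
    application: str

def enforce(request: AccessRequest) -> bool:
    """The single logical tier every request passes through.

    The decision rests on the security profile of the device and the
    user -- never on whether the request came from the LAN.
    """
    # Device posture is evaluated identically for corporate and BYOD devices.
    if not request.device_posture_ok:
        return False

    # Example policy: a (hypothetical) HR portal requires an HR user on a managed device.
    if request.application == "hr-portal":
        return "hr" in request.user_groups and request.device_managed

    # Default: any user on a healthy device may reach general applications.
    return True

# A BYOD tablet on the LAN gets no special treatment over a corporate laptop on the WAN.
print(enforce(AccessRequest("alice", {"hr"}, "tablet", False, True, "lan", "hr-portal")))  # False
print(enforce(AccessRequest("bob", {"hr"}, "laptop", True, True, "wan", "hr-portal")))     # True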

By applying the same access control to all devices, regardless of who purchased and manages them, policies can be tailored to focus on the application and the data, not solely on the point of origination.

While policies may trigger specific rules or inspections based on the device or its originating location, ultimately the question is: who can access a given application and its data, and under what circumstances? It's context that counts, not corporate connections.

Those questions must be asked regardless of whether the access attempt begins within the corporate network boundaries or not. Traffic coming from the local LAN should not be treated any differently than traffic entering via the WAN. The notion of "trusted" and "untrusted" network connectivity has simply been obviated by the elimination of wires and the rampant proliferation of malware and other destructive digital infections.
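
As a hedged illustration of "context, not connections," the rules themselves can be expressed as data keyed by application and circumstance rather than by network zone. The policy table, attribute names, and requirements below are hypothetical; the one deliberate design choice is that the source network never appears as a factor, so LAN and WAN requests evaluate identically.

# Hypothetical context-based policy table: who may access which application
# and under what circumstances. Note the absence of any "trusted LAN" rule.
POLICIES = {
    "finance-app": {"groups": {"finance"}, "require_managed_device": True, "require_mfa": True},
    "wiki": {"groups": set(), "require_managed_device": False, "require_mfa": False},
}

def allowed(app: str, user_groups: set, device_managed: bool, mfa_passed: bool) -> bool:
    """Evaluate a request purely on context; origin is never consulted."""
    policy = POLICIES.get(app)
    if policy is None:
        return False  # deny by default for unknown applications
    if policy["groups"] and not (policy["groups"] & user_groups):
        return False
    if policy["require_managed_device"] and not device_managed:
        return False
    if policy["require_mfa"] and not mfa_passed:
        return False
    return True

# The answer is the same whether the request arrived over the LAN or the WAN.
print(allowed("finance-app", {"finance"}, device_managed=True, mfa_passed=True))   # True
print(allowed("finance-app", {"finance"}, device_managed=False, mfa_passed=True))  # False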

In essence, the DMZ is being – and must be – transformed. It's no longer a zone of inherent distrust between the corporate network and the Internet; it's a zone of inherent distrust between corporate resources and everything else. Its design and deployment as a buffer is still relevant, but only in the sense that it stands between critical assets and access by hook, crook, or tablet.

The DMZ as we have known it is dead.

Trust no one.

About Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
