Getting at the Heart of Security in the Cloud

CloudPassage digs a bit deeper into the issue of security and public cloud computing and finds some interesting results

Security is a pretty big word. It’s used to represent everything from attack prevention to authentication and authorization to securing transport protocols. It’s used as an umbrella term for such a wide variety of concerns that it has become virtually meaningless when applied to technology.

For some time, purveyors of security studies have asked the market, “What’s stopping you from adopting cloud?” Invariably one of the most often cited show-stoppers is “security.” Pundits raced to tell us this, but rarely did they offer deeper insight into what, exactly, “security” meant.

So it was nice to see CloudPassage dig deeper into “security in the cloud” with a recent survey it conducted. You may recall that CloudPassage has more than a passing interest in the topic, as its focus is cloud-based security with an emphasis on host-based firewalls. Published in February 2012, the survey sheds some light on what IT professionals consider most important with respect to public cloud security.

Not surprisingly, “lack of perimeter defenses and/or network control” was the most often cited concern with respect to security in public cloud environments, with 25% of respondents indicating it was troubling. This response goes hand in hand with the 12% who cited an inability to leverage “enterprise security tools” in public cloud environments. It is no secret that duplicating security architectures and processes in the cloud is not something we’ve seen done at this juncture. Combine the inability to replicate security policy and process in the cloud, owing to incompatibilities of infrastructure and software, with the less than robust security service offerings in public cloud environments, and it makes sense that “lack of perimeter defenses and/or network control” tops the list.

[Figure: CloudPassage survey results – top security concerns in public cloud environments]

WHERE ARE WE GOING?

There are myriad surveys indicating that organizations are moving to public cloud computing despite these concerns, and one assumes this means they are finding ways to resolve these issues. Many organizations are turning back the clock and taking advantage of agent-based (host-deployed) solutions to secure their assets in public cloud environments, which affords much better protection than nothing at all, while others are leveraging the tried-and-true “checklist” method: manually securing servers based on best practices and corporate policy.
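
As a rough illustration of what codifying that checklist might look like, here is a minimal sketch that scripts two hypothetical host-hardening checks; the specific checks, file paths, and thresholds are assumptions for illustration, not any particular corporate policy:

```python
#!/usr/bin/env python3
"""A minimal sketch of the "checklist" approach: a couple of
host-hardening checks codified as a script instead of a manual
runbook. The checks themselves are illustrative assumptions."""

import subprocess
from pathlib import Path


def sshd_forbids_root_login(config_path="/etc/ssh/sshd_config"):
    """Hypothetical check: root SSH login should be disabled."""
    try:
        text = Path(config_path).read_text()
    except OSError:
        return False  # can't read the config; flag for manual review
    return any(
        line.split()[:2] == ["PermitRootLogin", "no"]
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )


def host_firewall_active():
    """Hypothetical check: the host firewall (iptables) has rules loaded."""
    try:
        result = subprocess.run(["iptables", "-S"],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return False  # no iptables binary; flag for manual review
    # More than the three default "-P" policy lines implies real rules.
    return result.returncode == 0 and len(result.stdout.splitlines()) > 3


if __name__ == "__main__":
    checks = {
        "sshd forbids root login": sshd_forbids_root_login(),
        "host firewall active": host_firewall_active(),
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
```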

Neither is optimal from an operational perspective. Nor is the use of cloud provider-offered services such as Amazon security groups, because the result is a disjointed set of security policies across multiple environments. Policy languages and implementations – not to mention capabilities – vary widely from service to service. Even firewalling, the most basic of protections and the easiest to codify, is expressed in a different policy language on each platform. These disconnects can lead to gaps in security policies that leave the organization’s assets open to attack. Inconsistent management and deployment processes spanning multiple environments leave open the possibility of human error and misconfiguration, an often-cited cause of outages and breaches in general.
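
To illustrate the gap, here is a small sketch of how the same “allow HTTPS from the office network” intent must be expressed twice: once in AWS security group terms and once as an iptables rule. The group ID and network address are hypothetical placeholders:

```python
"""A sketch of the policy-language gap: one firewall intent,
two dialects. Addresses and IDs are hypothetical placeholders."""

OFFICE_CIDR = "203.0.113.0/24"  # illustrative office network

# Cloud side: AWS security groups speak in IpPermissions structures.
aws_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": OFFICE_CIDR}],
}
# Applying it would take an API call, e.g. with boto3:
#   boto3.client("ec2").authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0",  # hypothetical group
#       IpPermissions=[aws_ingress_rule],
#   )

# Data center side: the "same" intent as a host firewall rule.
iptables_rule = f"iptables -A INPUT -p tcp --dport 443 -s {OFFICE_CIDR} -j ACCEPT"

print("AWS :", aws_ingress_rule)
print("host:", iptables_rule)
```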

[Figure: CloudPassage survey results – how organizations secure cloud servers today]

Where we are today is a disjointed set of options from which to choose, and the need to somehow cobble these disparate tools and services together into a comprehensive security strategy capable of consistently securing servers, applications, and other resources against attack, exploitation, and breach.

It is not really an inspiring view at the moment.

Vendors and providers need to work toward a common language and common services that enable consistent replication, and thus enforcement, of the policies governing access to and protection of all corporate resources, regardless of location. Whether that happens through standards initiatives, through brokerage of APIs, or through a better ability for organizations to deploy their own security solutions in both the data center and public cloud environments is not really the question. The question is how enterprises can better address their specific security concerns regarding public cloud deployments in a way that minimizes the risk of misconfiguration and gaps in policy enforcement, while providing operationally consistent processes that ensure the benefits of public cloud computing are not lost.

REVERSE INTEGRATION

One of the interesting trends we’re seeing is demand for consistency in infrastructure across environments, and this will eventually drive demand for integrating what are today “cloud only” solutions back into data center components. Vendors like CloudPassage, whose cloud-focused offerings deliver host-based security coupled with a SaaS management model, will eventually need to consider integration with “traditional” enterprise solutions as a means to deliver the consistency necessary to maintain cloud-related operational benefits.

Right now we’re seeing a move toward preserving operational consistency through replication of policy from within the data center out to the cloud. But as cloud-hosted solutions continue to mature and evolve, one would expect the ability to replicate policy in the other direction – from the cloud back into the data center. This is no trivial task: it requires the SaaS management component of such solutions to become what might be considered a policy broker; that is, the system becomes the point of policy creation and management, and it is through integration with both cloud and data center infrastructure that policies are deployed, updated, and managed.
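
To make the idea concrete, here is a minimal sketch of a policy broker, assuming a simple canonical firewall-policy record and two hypothetical translation targets (an AWS-style security group entry and an iptables command). It illustrates the pattern, not any vendor’s actual product:

```python
"""A minimal sketch of the "policy broker" idea: one canonical
policy record, translated into environment-specific forms.
Schema and targets are assumptions for illustration."""

from dataclasses import dataclass


@dataclass
class FirewallPolicy:
    name: str
    protocol: str   # e.g. "tcp"
    port: int
    source_cidr: str


class PolicyBroker:
    """Single point of policy creation; translates for each environment."""

    def to_aws_security_group(self, p: FirewallPolicy) -> dict:
        # Shape matches an AWS IpPermissions entry.
        return {
            "IpProtocol": p.protocol,
            "FromPort": p.port,
            "ToPort": p.port,
            "IpRanges": [{"CidrIp": p.source_cidr}],
        }

    def to_iptables(self, p: FirewallPolicy) -> str:
        return (
            f"iptables -A INPUT -p {p.protocol} "
            f"--dport {p.port} -s {p.source_cidr} -j ACCEPT"
        )

    def deploy(self, p: FirewallPolicy):
        # A real broker would push these via each target's API;
        # here we just show the translated artifacts.
        print("AWS:", self.to_aws_security_group(p))
        print("DC: ", self.to_iptables(p))


if __name__ == "__main__":
    broker = PolicyBroker()
    broker.deploy(FirewallPolicy("allow-https", "tcp", 443, "203.0.113.0/24"))
```

Because the broker owns the canonical record, a policy created in the cloud-side SaaS console could be pushed back into data center infrastructure through the same translation step, which is exactly the reverse-integration direction described above.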

This is why the notion of API-enabled infrastructure, a.k.a. Infrastructure 2.0, is so important. It’s not just about creating a vibrant and healthy ecosystem of solutions within the data center, but in the cloud and in between, as well. It is the glue that will integrate disparate systems and normalize policies across environments, and ultimately provide the market with a broader set of choices that can more efficiently and effectively address the specific security (and other operational) concerns that may be preventing organizations from fully embracing cloud computing.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
