
@CloudExpo: Blog Feed Post

Houston, We Have Cloud

The data centers of the future may look more like NASA ground control – governance inside, resources out

One theme has remained consistent throughout the evolution of cloud thus far - enterprise IT wants to retain control of both its data and access to it.

This is not an unreasonable demand. After all, it is enterprise IT - and its leadership - that will pay the price should customer data leak or regulations go unmet. Despite the growing view that cloud security is a joint, shared responsibility between customer and provider, it is enterprise IT that must put in place the mechanisms for both controlling and proving control over data and access, not cloud providers or integrators. The provider can offer services designed to provide that control, but it is not the one that must implement the policies or report on their effectiveness.

Amazon throws down the gauntlet for enterprise IT

While a collaboration and file-sharing app has been moved to AWS, access controls have to remain in-house, according to Oliver Alvarez, lead enterprise security architect for the World Bank's International Finance Corporation.

"We need to maintain control and custodianship of information," he said.

Access control by its nature must include identity management. Without the means to manage the credentials and map authorization of access to data and services to those credentials, control is lost. If customer data is the lifeblood of an organization, identity stores are the heart's valves, controlling when and where that data is moved and by whom.
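The relationship between an identity store and access control can be sketched in a few lines. This is a minimal, hypothetical illustration - the store, user names, and resources are all made up - but it shows the core mapping: credentials are verified first, and only then is authorization to a specific resource checked against the store.

```python
import hashlib

# Hypothetical in-house identity store: each credential maps to the
# set of data/services that identity is authorized to access.
IDENTITY_STORE = {
    "alice": {
        "password_sha256": hashlib.sha256(b"s3cret").hexdigest(),
        "authorized": {"crm-data", "file-share"},
    },
}

def authorize(user: str, password: str, resource: str) -> bool:
    """Grant access only if credentials match AND the identity is
    authorized for the requested resource."""
    record = IDENTITY_STORE.get(user)
    if record is None:
        return False
    if hashlib.sha256(password.encode()).hexdigest() != record["password_sha256"]:
        return False
    return resource in record["authorized"]

print(authorize("alice", "s3cret", "crm-data"))  # True
print(authorize("alice", "s3cret", "billing"))   # False - valid user, wrong resource
```

Lose the store and you lose both checks at once - which is why enterprises are reluctant to let it leave the building.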

TWO EMERGING ARCHITECTURES

Two architectures for control over identity and access are beginning to emerge, both having a common premise - identity stores are local, data and services are remote. In one architecture a provider - usually of a SaaS solution - deploys a virtual appliance on premise that brokers identity. This essentially enables LDAP/AD integration between the data center and the SaaS. In the second, a strategic control layer acting as a cloud services broker provides integration between environments using standard protocols, such as SAML, to enable control over authentication and authorization of cloud services.
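The appliance model boils down to delegation: the remote service never sees credentials or the directory, it only asks the on-premise broker for a decision. The sketch below simulates that flow - the directory, class names, and responses are all hypothetical stand-ins for a real LDAP/AD integration.

```python
# Stand-in for the on-premise LDAP/AD directory.
LOCAL_DIRECTORY = {"bob": "hunter2"}

class IdentityAppliance:
    """Virtual appliance deployed inside the data center; brokers
    identity on behalf of the SaaS provider."""
    def authenticate(self, user: str, password: str) -> bool:
        return LOCAL_DIRECTORY.get(user) == password

class SaaSService:
    """Remote SaaS; delegates every identity decision to the appliance
    rather than holding credentials itself."""
    def __init__(self, broker: IdentityAppliance):
        self.broker = broker

    def handle_login(self, user: str, password: str) -> str:
        if self.broker.authenticate(user, password):
            return "session-granted"
        return "denied"

saas = SaaSService(IdentityAppliance())
print(saas.handle_login("bob", "hunter2"))  # session-granted
print(saas.handle_login("bob", "wrong"))    # denied
```

The concern noted below follows directly from this shape: the appliance answers to the provider's code path, even though it runs inside the enterprise perimeter.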

The appliance model is an extension of agent-based services, merely expanded to the data center level. There are some concerns that go along with this model, chiefly that an external entity has control of an agent within the data center, but in general this model appears to enjoy market acceptance, especially in cases where a standards-based approach is unavailable.

The alternative, standards-based model uses the same brokering approach, but the broker is under the control of enterprise IT, not the provider. It relies on the same principles of abstraction we've come to recognize as beneficial to agility with virtualization and SDN in the network and data center: a layer of control between resources and users that enables more flexibility not just in access control and identity management but in making routing decisions with respect to those resources.
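In SAML terms, the enterprise-controlled broker authenticates locally and hands the cloud service a signed assertion; the service trusts the signature, not the user. Here is a simplified sketch of that flow using an HMAC-signed token in place of real SAML XML and X.509 certificates - the key, names, and token format are all hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Key trusted by both the enterprise broker and the cloud service.
# (Real SAML uses XML assertions signed with an X.509 certificate.)
SHARED_KEY = b"hypothetical-shared-key"

def issue_assertion(user: str, resource: str) -> str:
    """Enterprise-controlled broker: sign a claim that `user` is
    authenticated and authorized for `resource`."""
    claim = json.dumps({"user": user, "resource": resource}).encode()
    sig = hmac.new(SHARED_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_assertion(token: str):
    """Cloud service: accept the claim only if the signature verifies.
    Returns the claim dict, or None if the token was tampered with."""
    body, _, sig = token.rpartition(".")
    claim = base64.b64decode(body)
    expected = hmac.new(SHARED_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(claim)

token = issue_assertion("alice", "file-share")
print(verify_assertion(token))        # {'user': 'alice', 'resource': 'file-share'}
print(verify_assertion(token + "x"))  # None - tampered token is rejected
```

The point of the sketch is where the keys live: the enterprise signs, so the enterprise decides who gets in, even though the resource itself is remote.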

That layer of control within enterprise IT is unlikely to go away for the very reasons cited above: control (governance) is a legal and operational necessity for enterprise IT. Cloud providers who fail to recognize this need and move to provide services supportive of that necessity are merely shooting themselves in the foot with respect to gaining more traction with enterprise customers.

Cloud gateways and broker services are going to end up enabling this architecture on the enterprise side. It is in providers' best interests to make these architectures as painless to implement as possible.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
