Don't Stick Your Head in the Sand, Create a Proactive Security Strategy

Preventing data leakage from the cloud

In business, data is currency. It is the oil that keeps the commercial engine in motion, and databases are the digital banks that store and retrieve this valuable information. According to IDC, data is doubling every two years, and as the overall amount of data grows, so does the amount of sensitive and regulated data that enterprises must secure. Yet IDC also finds that only about a quarter of that data is properly protected today. Like all currency, data must be protected.

And herein lies a key issue. Too many executives see security as a cost center and are often reluctant to invest beyond the bare minimum: whatever keeps the nasty viruses out, whatever is absolutely necessary for compliance. Their thought process runs along the lines of "we haven't been attacked before" or "we don't have a high enough profile for hackers to care." I call this "ostriching": putting your head in the sand and hoping misfortune never darkens your door.

Reinforcing this attitude, many organizations look only toward premises-based protection that encrypts or monitors network traffic containing critical information. For the average company, this can be a budget buster and a significant resource drain... that is, until they look toward the cloud and explore cloud-based security options.

Yet regardless of deployment options, most security experts will agree the best defense is a proactive strategy.

Data leak prevention (DLP), like most security efforts, is a complex challenge. It is meant to prevent both the deliberate and the inadvertent release of sensitive information, yet too many companies are trying to cure the symptoms rather than prevent them in the first place.

Part of the protection equation is being overlooked: database management systems must also be a component of a proactive data security strategy. Like the bank vault, the database requires strong protection at its foundation. DLP is one part of a comprehensive enterprise data security program that includes security best practices for protecting mission-critical enterprise data repositories. That security must both foil attackers who are financially motivated and won't be deterred by minimalist defenses, and prevent the accidental release of data. Data security will go nowhere without robust, proactive database security.

To properly achieve these goals, organizations need to implement a variety of cooperating solutions. Used together, they let a company instantly discover who is doing what, and when, on the network; identify the potential impact; and take the necessary steps to prevent or allow access and usage. Just like a bank vault: security cameras watch to see who you are, you need a password to get into the vault itself (during business hours!), and you're only allowed to open your own safety deposit box (as long as you have the key). Here are four proactive measures you can take:

Intrusion detection (security information and event monitoring): The first step in protection is to know who is proverbially knocking on the door... or sneaking around the back entrance. Activity monitoring and blocking is the first line of defense at your firewall and beyond (this includes BYOD access). Vigilance on the front lines creates real-time correlation that detects patterns of traffic, spots usage anomalies and prevents internal or external attacks. SIEM provides the forensic analysis that determines whether access to the network is friendly/permissible, suspicious or threatening, and that analysis is the basis for the alerts that trigger appropriate action to prevent data leakage.
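To make the idea concrete, here is a minimal, illustrative sketch of the kind of correlation a SIEM performs: counting failed logins per source address over a sliding window and alerting when a threshold is crossed. The event format, window size and threshold are assumptions for illustration, not any particular product's behavior.

```python
# Illustrative SIEM-style correlation sketch (assumed event format and thresholds).
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300    # correlate events over a 5-minute sliding window
FAILED_LOGIN_LIMIT = 5  # alert after this many failures from one source

failed_logins = defaultdict(deque)  # source IP -> timestamps of recent failures

def ingest_event(event):
    """Correlate one event; return an alert dict if a suspicious pattern emerges."""
    if event.get("type") != "login_failure":
        return None
    now = time.time()
    history = failed_logins[event["source_ip"]]
    history.append(now)
    # Drop events that have aged out of the correlation window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= FAILED_LOGIN_LIMIT:
        return {
            "severity": "suspicious",
            "source_ip": event["source_ip"],
            "reason": f"{len(history)} failed logins in {WINDOW_SECONDS}s",
        }
    return None

# Example: the fifth failure from the same address triggers an alert.
alert = None
for _ in range(5):
    alert = ingest_event({"type": "login_failure", "source_ip": "203.0.113.7"})
if alert:
    print(alert)
```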

Traffic monitoring (log management): Once you know who's accessing the network, log management looks to make sense of the patterns and historical usage so you can identify suspicious IP addresses, locations and users as likely transgressors. If you can predict the traffic, you can create the rules to block sources, prevent access and create a reportable audit trail of activity. But to be proactive, it must be continuous and in real time. Looking at reams of machine logs days or weeks later might uncover breaches and careless users, but it can't prevent them. It is the proverbial equivalent of chasing the horse after it has left the barn.
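As a hedged illustration of continuous log analysis, the sketch below parses a batch of access-log lines, keeps a reportable audit trail, and emits block rules for sources whose request volume exceeds a hypothetical policy threshold. The log format and threshold are assumptions, not a specific product's configuration.

```python
# Illustrative log-pipeline sketch: tally requests per source and flag heavy hitters.
import re
from collections import Counter

LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})'
)
REQUESTS_PER_BATCH_LIMIT = 100  # hypothetical policy threshold

def process_batch(lines):
    """Parse a batch of access-log lines; return (audit_rows, block_rules)."""
    hits = Counter()
    audit_rows = []
    for line in lines:
        match = LOG_PATTERN.match(line)
        if not match:
            continue  # keep malformed lines out of the tallies
        row = match.groupdict()
        audit_rows.append(row)   # reportable audit trail
        hits[row["ip"]] += 1
    block_rules = [
        f"deny from {ip}  # {count} requests in this batch"
        for ip, count in hits.items()
        if count > REQUESTS_PER_BATCH_LIMIT
    ]
    return audit_rows, block_rules
```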

Provisioning (identity management): One of the best ways of ensuring users only access the data they are entitled to see or use is through proper delegation of user rights, handled through identity management provisioning. In all too many documented cases, a user (typically an employee) leaves the fold but never relinquishes access to sensitive information. Just as provisioning grants users certain rights, automatic de-provisioning keeps former employees and others away from restricted sections of your database. And when connected to SIEM and log management, if de-provisioned users try to use retired passwords or accounts, you know about it as it happens.
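The sketch below illustrates that de-provisioning idea under simplified assumptions: a departing user's account is disabled and stripped of entitlements, and any later login attempt against the retired account is flagged to the monitoring layer. The directory structure and alert hook are hypothetical placeholders, not a specific identity management API.

```python
# Illustrative de-provisioning sketch with a monitoring hook (hypothetical data model).
retired_accounts = set()

def deprovision(user_id, directory):
    """Disable a departing user's account and remember it for later checks."""
    account = directory[user_id]
    account["enabled"] = False
    account["groups"] = []          # strip entitlements immediately
    retired_accounts.add(user_id)

def on_login_attempt(user_id, source_ip, alert):
    """Called by the monitoring layer; flag any use of a retired account."""
    if user_id in retired_accounts:
        alert(f"Retired account {user_id} attempted login from {source_ip}")

# Example wiring with a toy directory.
directory = {"jdoe": {"enabled": True, "groups": ["finance-db-readers"]}}
deprovision("jdoe", directory)
on_login_attempt("jdoe", "198.51.100.20", alert=print)
```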

Authentication and credentialing (access management): This is more than password management (and making sure those codes are more substantial than "password123"); it means controlling access with at least two credentials (multi-factor authentication). For example, a hospital may require authorized personnel to present login credentials such as a password plus a unique variable code to access certain protected or sensitive areas of the network or database. In doing so, it gains additional protection against the use of lost or unauthorized credentials. It is another layer of protection that can deflect potential data leakage.
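For illustration only, here is a minimal two-factor check that combines a password comparison with a time-based one-time code (RFC 6238 TOTP). The user store and password hashing shown are simplified assumptions; a production system would use a dedicated password-hashing scheme and an established authentication service.

```python
# Minimal two-factor verification sketch: password check plus TOTP (RFC 6238).
import hashlib, hmac, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time code for a shared secret."""
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_login(user, password, submitted_code, user_store):
    """Both factors must pass: the stored password hash and the current TOTP code."""
    record = user_store[user]
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), record["password_hash"]
    )
    code_ok = hmac.compare_digest(totp(record["totp_secret"]), submitted_code)
    return password_ok and code_ok
```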

In this assessment there are at least four individual solutions that require implementation and monitoring. If executives were unwilling before, how can an IT department muster the leverage to find the money or the staffing to deploy this preventive strategy? The good news is they don't have to do either. A unified security model (real-time event and access correlation technology) delivered from the cloud combines the capabilities and functionality of each of these toolsets into a strong, cost-effective enterprise platform. It leverages the key features in a single cooperative, centralized source that enhances visibility throughout the enterprise. All the cost-saving benefits inherent in cloud computing are realized, and as security-as-a-service, the need for additional headcount is moot. Part of the service is live expert analysts watching over your virtual borders 24/7/365.

An additional benefit is the ability to fold existing investments into a REACT platform. If a company previously invested in a log management or single sign-on solution, it can easily integrate the other pieces of the puzzle to ensure a layered, holistic approach, so all the independent silos are monitored and covered. Because each of the solutions interacts and intersects with the others, the seamless communication creates a layered, responsive defense that anticipates, controls and alerts, as opposed to attempting to put the toothpaste back into the tube. The damage of a breach (whether through user carelessness, internal sabotage or direct attack) is more than just the compliance fines and the blowback on the data currency affected. Substantial and detrimental as those are, they can't touch the cost of broken trust. That, in itself, is a driving reason to get ahead of the issue with proactive security.

As enterprise systems are exposed to substantial risk from data loss, theft or manipulation, a unified security platform delivered from the cloud strikes that fine balance between data leakage prevention, protection of IP assets and maintenance of compliance standards on one side, and cost and resource responsibility on the other. It is an accountable way of becoming proactive.

Kevin Nikkhoo

CloudAccess

More Stories By Kevin Nikkhoo

With more than 32 years of experience in information technology and an extensive, successful entrepreneurial background, Kevin Nikkhoo is the CEO of the dynamic security-as-a-service startup CloudAccess. CloudAccess is at the forefront of the latest evolution of IT asset protection: the cloud.

Kevin holds a Bachelor of Science in Computer Engineering from McGill University, a Master of Computer Engineering from California State University, Los Angeles, and an MBA from the University of Southern California with an emphasis in entrepreneurial studies.
