The Impact of the Cloud on Digital Forensics - Part 1

Taking digital forensics beyond the traditional security perimeter into a cloud security perimeter.

Digital forensics is not an elephant; it is a process, and not just one process but a group of tasks and processes in an investigation. Examiners now perform targeted examinations using forensic tools and databases of known files, selecting specific files and data types for review while ignoring files of irrelevant type and content. Despite the application of sophisticated tools, the forensic process still relies on the examiner's knowledge of the technical aspects of the specimen and understanding of the case and the law. - Mark Pollitt

As various authors, myself included, have established, this rebranded model of computing now called cloud computing offers benefits that can improve productivity, harness high-speed systems capable of managing large data sets and systems implementations, and have a net positive impact on the operational budget (through scaling and elasticity) of some small and midsized enterprises.

Of course, there is the possibility that a private cloud for a small enterprise may not warrant its cost in comparison to harnessing the benefits of a public cloud offering.

For a larger enterprise with, say, multiple and/or international locations, a private cloud infrastructure, whilst not as cheap as a public cloud offering, can offset that cost variance through the improved risk profile of the systems moved into the private cloud, e.g., critical databases and transactional and/or processing systems, as well as by addressing potential compliance concerns.

If, however, an enterprise chooses to utilize a public cloud offering, there will be added complications for information security from both procedural and legal standpoints. This leads us to the point that, with a public cloud system, we no longer have the traditional, well-defined security perimeter.

This new cloud security perimeter can be any place, on any device, where people access an enterprise-provided network, resources and systems.

With regard to digital forensics and the e-discovery process, this new cloud security perimeter, stemming from the trend of data being accessed via the Internet and housed and consumed on multiple systems and devices internationally, will pose serious challenges (legal and technical) with the potential to complicate a security investigation, e.g., defining incident response, access rules and the policies governing access, as well as support processes.

Traditional network forensics metrics will not give a complete picture of what can occur within the cloud computing environment; for instance, they may be limited to a focus on data going into and out of the systems an enterprise has access to, and as we know, that visibility generally stops at the gateway into the cloud.

In terms of network forensics, packet capture and analysis are important; with the cloud ecosystem there is the real possibility of a vast increase in the amount of data that may need to be processed. This will only increase the workload on the digital investigator, who will most likely have more than a plateful of hex patterns, network metadata and logs to analyze, as is already the case with a traditional system analysis.
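
As a minimal illustration of that triage problem (not a tool from this article; the scapy library, the capture file name and the statistic chosen are my own assumptions), a Python sketch like the following can summarize a large capture by its busiest source/destination pairs before any deep analysis begins:

```python
# A minimal triage sketch, assuming the scapy library is installed
# (pip install scapy) and that a capture file exists at the
# hypothetical path below.
from collections import Counter

from scapy.all import rdpcap
from scapy.layers.inet import IP

packets = rdpcap("cloud_gateway.pcap")  # hypothetical capture taken at the cloud gateway

talkers = Counter()
for pkt in packets:
    if IP in pkt:
        talkers[(pkt[IP].src, pkt[IP].dst)] += 1

# Surface the chattiest source/destination pairs for closer review.
for (src, dst), count in talkers.most_common(10):
    print(f"{src} -> {dst}: {count} packets")
```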

This increased volume can severely cripple an investigation, more so if the forensic investigator does not completely understand the cloud ecosystem's architecture, the complex linkages that bridge cloud services and an enterprise's systems, and how those linkages create potential ingress points that can lead to systems compromise.

The cloud, while a boon to enterprise CapEx/OpEx, is also a gold mine for crackers, who can set up systems for attack with as little as $50. For example, with Amazon Web Services (AWS), an Amazon Machine Image (AMI), either Linux or Windows, can be launched as a virtual machine and set up to do whatever the end user wants with it, within the confines of the virtualized world; this environment is owned by the end user (a cracker in this case) from the operating system up.
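
To give a sense of how little friction that provisioning involves, here is a hedged sketch using the standard AWS boto3 SDK; the AMI ID, key pair name and region are hypothetical placeholders, and the same few lines serve a legitimate administrator and a cracker alike:

```python
# A minimal provisioning sketch using the AWS boto3 SDK; assumes valid
# AWS credentials are configured. The AMI ID and key name are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Linux AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",             # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```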

Of course, the IaaS layer and other hardware systems, IDS/IPS and firewalls remain under the control of, and belong to, the cloud service provider.

With regard to, say, conducting a forensic investigation on a virtualized server, there is the potential loss of data relevant to an investigation once an image is stopped or a virtualized server is shut down, with minimal chance of retrieving a specific image from its virtualized server afterwards.
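
One hedged preservation sketch, assuming the suspect server is an AWS EC2 instance with EBS-backed storage (the volume ID and case description below are hypothetical placeholders): snapshot the volume before the instance is torn down, so that at least the disk state survives for later analysis.

```python
# A preservation sketch using boto3; assumes an EBS-backed instance and
# configured AWS credentials. The volume ID and description are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume under investigation
    Description="Forensic preservation snapshot - hypothetical case reference",
)
print(snapshot["SnapshotId"], snapshot["State"])
```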

As mentioned, there are several merits to the case for adopting a cloud service; however, from a digital forensics point of view, the inherent limitations of such a system need to be clearly understood, and properly reviewed and scoped by an enterprise's IT security team, with regard to how such an implementation will fit their current security model. These considerations may vary based on the cloud provider the enterprise selects.

The data gathered can then assist enterprise security in mitigating the potential for compromise and other risks to the enterprise's operations stemming from this added environment. This in turn can potentially alleviate the pains of a digital forensics investigation with cloud computing overtones.

Digital forensics expert Nicole Beebe stated, "No research has been published on how cloud computing environments affect digital artifacts, and legal issues related to cloud computing environments."

Of note is the fact that among the top CSPs (Amazon, Rackspace, Azure) one can find common attributes against which a security manager can tune the enterprise's security policies.

Some things of note that will impact a forensic investigation within the cloud ecosystem are:

  1. A network forensics investigator is limited to tools on the box rather than the entire network; however, if a proper ISO is made of the machine image, then all the standard information in the machine image's ISO should be available, as it would be with any other server in a data center.
  2. Lack of access to network routers, load balancers and other networked components.
  3. No access to large firewall installations.
  4. There are challenges in mapping known hops from instance to instance, as these cannot be assumed to remain static across the cloud-routing schema.
  5. System administrators can build and tear down virtual machines (VMs) at will. This can influence an enterprise's security policy and plans, as new rules and procedures will have to be implemented for working with cloud servers and services that are suspected of being compromised.
  6. An enterprise's threat environment in the cloud ecosystem should be treated with the same mindset as for any exposed service offered across the Internet.
  7. With the cloud ecosystem, one advantage with regard to forensics is the ability for a digital investigator to store very large log files on a storage instance or in a very large database for easy data retrieval and discovery.
  8. An enterprise has to be open to the fact that there will be a risk of data being damaged, accessed, altered, or denied by the CSP.
  9. Routing information that is not already on "the box" will be difficult to obtain within this ecosystem.
  10. For encrypted disks, wouldn't it be theoretically feasible to spin up "n" cloud instances to help crack the encryption? According to Dan Morrill this can be an expensive process (the rough cost sketch after this list illustrates why).
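
As a back-of-the-envelope illustration of point 10, the sketch below models an exhaustive key search spread across n instances; every figure in it (keyspace, per-instance cracking rate, hourly price) is an assumed placeholder, not real provider pricing or a measured benchmark.

```python
# A rough cost model for brute-forcing a key with "n" cloud instances.
# All figures are assumptions for illustration only.

KEYSPACE = 2 ** 56          # e.g., a legacy 56-bit key; modern keys are far larger
KEYS_PER_SEC_PER_VM = 1e8   # assumed cracking rate per instance
PRICE_PER_VM_HOUR = 0.10    # assumed hourly price per instance (USD)

def brute_force_estimate(n_instances: int) -> tuple[float, float]:
    """Return (hours, total cost in USD) to exhaust the keyspace."""
    seconds = KEYSPACE / (KEYS_PER_SEC_PER_VM * n_instances)
    hours = seconds / 3600
    cost = hours * n_instances * PRICE_PER_VM_HOUR
    return hours, cost

for n in (10, 100, 1000):
    hours, cost = brute_force_estimate(n)
    print(f"{n:>5} instances: ~{hours:,.0f} hours, ~${cost:,.0f}")
```

Note that under these assumptions the total bill stays flat as n grows; parallelism shortens the wall-clock time, not the total spend, which helps explain why the process remains expensive.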

As those of us who are students and practitioners within the field of digital forensics know, advances in this area tend to be primarily reactionary in nature, most likely developed to respond to a specific incident or subset of incidents. This poses a major challenge even in traditional systems; one can only imagine what can occur when faced with a distributed cloud ecosystem.

In terms of digital forensics, any tool that will make an examiner's job easier, improve results, reduce false positives and generate data that is relevant, pertinent and admissible in a court of law will be of value.

As my firm's lead solutions researcher and consultant, I am always on the lookout for any new process, system or tool that will make my job, as well as that of my team, easier as we work with our clients. This led me to attend a webinar, The Case for Network Forensics, from a company called Solera Networks ...continued in Part 2.

Special thanks to Mark Pollitt for his valuable insight.

References

  1. Pollitt, M.M. Six Blind Men from Indostan. Digital Forensics Research Workshop (DFRWS); 2004.
  2. Nance, K., Hay, B., Bishop, M. Digital Forensics: Defining a Research Agenda. 2009; 978-0-7695-3450-3/09, IEEE.
  3. Morrill, D. 10 Things to Think About with Cloud Computing and Forensics.

More Stories By Jon Shende

Jon RG Shende is an executive with over 18 years of industry experience. He commenced his career in the medical arena, then moved into the oil and gas environment, where he was introduced to SCADA and network technologies, also becoming certified in industrial pump and valve repairs. Jon gained global experience over his career working within several verticals, including pharma, medical sales and marketing services, and the technology services environment, eventually becoming the youngest VP of an international enterprise. He is a graduate of the University of Oxford, holds a Master's certificate in Business Administration, as well as an MSc in IT Security, specializing in computer crime and forensics with a thesis on security in the cloud. Jon, well versed in the technology startup and midsized venture ecosystems, has contributed at the C and senior director level for former clients. As an IT security executive, Jon has experience with virtualization, strategy, governance, risk management, continuity and compliance. He was an early adopter of web services and web-based tools, and successfully beta tested remote-assistance and support software for a major telecom. Within the realm of sales, marketing and business development, Jon earned commendations for turnaround strategies within the services and pharma industries. For one pharma contract he was responsible for bringing low-performing districts up to number 1 rankings for consecutive quarters, as well as outperforming quotas from 125% up to 314%. Part of this was achieved by working closely with sales and marketing teams to ensure message and product placement were on point. Professionally he is a Fellow of the BCS Chartered Institute for IT, an HITRUST Certified CSF Practitioner, and holds the CITP and CRISC certifications. Jon Shende currently works as a Senior Director for a CSP. A recognized thought leader, Jon has been invited to speak for the SANS Institute, has spoken at Cloud Expo in New York, sat on a panel at Cloud Expo Santa Clara, and has been an Ernst and Young CPE conference speaker. His personal blog is located at http://jonshende.blogspot.com/view/magazine. "We are what we repeatedly do. Excellence, therefore, is not an act, but a habit."
