Malware Analysis | Part 1

How to use a number of tools to analyze a memory image file from an infected windows machine

Protecting your network environment with the latest virus protection, controlling what software is installed and allowed to run, restricting ingress and egress network access, protecting web browsing, limiting user account access, applying security patches, following change management practices, and so on are all critical efforts in a corporate environment. But they will all fall short if you don't have the proper monitoring in place to detect badness on your network and to respond quickly and effectively when it happens. When your network has the proper monitoring in place and knowledgeable engineers watching for outbreaks, you begin to have better visibility into how traffic flows in your environment, and when you understand how traffic flows on your network you can respond better when badness happens.

I will demonstrate how to acquire a memory image from a Windows machine that is currently running with a malware infection, and then walk through the process of analyzing that memory image using a number of tools.

Gathering an image file from an infected machine can be performed a number of ways. If you have an enterprise version of EnCase you can acquire evidence very quickly from various devices such as laptops, desktops, and mobile devices like smartphones and tablets. For most of us, though, the IT budget is limited and this option is not viable. Something like F-Response TACTICAL is a solution and requires only two USB sticks: one is labeled TACTICAL Subject and the other TACTICAL Examiner. You put the Examiner stick in the box you are using to research the malware, and the Subject stick in the box that is infected with the malware. Below I demonstrate how this is performed, with the Subject on a Windows box (infected with malware) and the Examiner installed on a Linux platform (the SANS SIFT workstation) to acquire the image.

Once the USB stick is loaded on the Windows box, run the program so it can listen on its external interface (see Figure #1).

Figure #1

Running the Subject program on the infected Windows box; remember to enable physical memory

On your SIFT workstation, insert the Examiner USB stick and make sure it shows up as loaded on your workstation (see Figure #2). Next, execute the program f-response-tacex-lin.exe using the following syntax (see Figure #3). Notice that it connects to the following targets:

  • iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-0
  • iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-1
  • iqn.2008-02.com.f-response.cr0wn-d00e37654:vol-c
  • iqn.2008-02.com.f-response.cr0wn-d00e37654:vol-e
  • iqn.2008-02.com.f-response.cr0wn-d00e37654:pmem

Figure #2

Make sure the Examiner USB is loaded on the SIFT workstation

Figure #3

Performing the connection between the SIFT workstation and the infected Windows box
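
If you want to double-check from the SIFT side which iSCSI targets the Subject is exporting, a standard open-iscsi discovery against the infected box should return the same IQNs listed above. This is a sketch; the portal address 192.168.1.129 is the Subject's IP in this lab (see Figure #4):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.129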

Next we are going to log in to iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-0 with the following command (see Figure #4):

# iscsiadm -m node --targetname=iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-0 --login

Figure #4

Successfully connected to the Windows box at 192.168.1.129

The iscsiadm command is the open-iscsi administration utility that allows discovery of and login to iSCSI targets, as well as access to and management of the open-iscsi database. The -m option specifies the mode, here node; it can also be set to discoverydb, fw, host, iface, or session. With the mode set to node, we use --targetname= to specify the target we want to log in to.
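
The same login can also be written with the short options, and when the examination is complete the session can be torn down with --logout. A sketch using the target and portal from this lab:

# iscsiadm -m node -T iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-0 -p 192.168.1.129 --login
# iscsiadm -m node -T iqn.2008-02.com.f-response.cr0wn-d00e37654:disk-0 -p 192.168.1.129 --logout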

After successfully connecting to the remote machine, run fdisk -l and you will see our new device located at /dev/sdd1 (see Figure #5).

Figure #5

Results after running fdisk -l
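
Device names can change between sessions, so before mounting anything it is worth confirming which block device the iSCSI login just added. A quick sketch (the /dev/sdd name is simply what this lab produced):

# dmesg | tail
# fdisk -l /dev/sdd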

Next we will mount the partition /dev/sdd1, shown in the screenshot above (Figure #5), using the following mount command.

# mount -o ro,show_sys_files,streams_interface=windows /dev/sdd1 /mnt/windows_mount

Here the mount command's -o option passes several NTFS mount options: ro mounts the file system read-only; show_sys_files shows all system (metadata) files as normal files; and streams_interface=windows controls how named data streams (alternate data streams) are exposed, making them accessible with the Windows-style file:stream syntax. This mounts the file system of our Windows box at /mnt/windows_mount. After changing into that directory and listing the files you will see the following (see Figure #6).

Figure #6

List of files after mounting the file system from our target Windows box, followed by logging in to the pmem location
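
One detail worth noting: the mount point must exist before the mount command will succeed, and it is a good habit to confirm that the volume really is attached read-only before moving on. A minimal sketch using the same device and path as above:

# mkdir -p /mnt/windows_mount
# mount -o ro,show_sys_files,streams_interface=windows /dev/sdd1 /mnt/windows_mount
# mount | grep windows_mount

When the file system examination is finished, release the volume with umount /mnt/windows_mount before logging the iSCSI session out.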

Now we need to log in to the physical memory of the target, which is the pmem location (see Figure #3, ‘F-Response Target = iqn.2008-02.com.f-response.cr0wn-d00e37654:pmem'). We will use the iscsiadm open-iscsi administration utility to perform this task with the following command:

# iscsiadm -m node --targetname=iqn.2008-02.com.f-response.cr0wn-d00e37654:pmem --login

Again we are using the iscsiadm utility, specifying node mode and the target name under which the pmem target is exported. Now we will run fdisk -l and see the partition tables (see Figure #7).

Figure #7

Results after running fdisk -l; notice the HPFS/NTFS partition at /dev/sdd1. This is the result after logging in to the pmem location.
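
At this point both the disk target and the pmem target should show up as active iSCSI sessions; listing them is a quick sanity check before imaging. A sketch using the standard open-iscsi session mode:

# iscsiadm -m session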

Now we can image the remote system's memory using dc3dd, which was developed by Jesse Kornblum at the DoD Cyber Crime Center. dc3dd is similar to dd but adds features for forensic work, allowing you to take hashes and split an image all from one command. Open up a terminal and type the following:

# dc3dd if=/dev/sde of=/cases/remote-system-memory8.img progress=on hash=md5 hashlog=/cases/remote-system-memory8.md5

Here is a breakdown of the command:

  • if=DEVICE or FILE - read input from a device or a file, in this case /dev/sde (see Figure #7, ‘Disk /dev/sde: 2466 MB, 2466250752 bytes')
  • of=FILE or DEVICE - write output to a file or device, in this case /cases/remote-system-memory8.img
  • progress=on - show progress on screen
  • hash=ALGORITHM - compute an ALGORITHM hash of the input and also of any outputs specified using hof=, hofs=, phod=, or fhod=, where ALGORITHM is one of md5, sha1, sha256, or sha512
  • hashlog=FILE - log total hashes and piecewise hashes to FILE

This performs a forensic copy of the Windows memory to your computer; you can see a screenshot of the progress (see Figure #8).

Figure #8

Performing a forensic copy of the Windows memory using dc3dd
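
Once the copy finishes, the hash that dc3dd wrote to the log can be compared against an independently computed hash of the image file to confirm the copy is intact. A minimal sketch, using the file names from the command above:

# cat /cases/remote-system-memory8.md5
# md5sum /cases/remote-system-memory8.img

The two MD5 values should match; if they do not, re-acquire the image before doing any analysis.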

Now that we have an image file of the Windows memory, we can analyze it for evidence of malware. There are a couple of tools that you can use. One, for the Windows platform, is Redline by Mandiant, which I will be going over in greater detail later. The second, which is open source, is Volatility, implemented in Python for the extraction of digital artifacts from volatile memory (RAM) samples. I will be discussing both on a very limited basis in this month's article.

Although this was a closed lab environment and I know what system the image came from, if the memory image was acquired from an unknown system you will need to identify the operating system using Volatility (see Figure #9).

Figure #9

Using Volatility to identify what operating system the dump came from

We use the imageinfo plugin for Volatility to find out which operating system the memory dump belongs to. Here we see in the suggested profile portion of the output that it is a WinXPSP2x86 system; you will need this information to perform further work on this memory image file with Volatility.
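
For reference, the call that produces output like Figure #9 looks like this; a sketch assuming vol.py is on your path and the image sits in the current working directory:

$ vol.py -f remote-system-memory8.img imageinfo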

To look at the running processes we use the following command:

$ vol.py --profile=WinXPSP2x86 pslist -f remote-system-memory8.img

You can also use the psscan plugin to scan the memory image for EPROCESS blocks with the following command:

$ vol.py --profile=WinXPSP2x86 psscan -f remote-system-memory8.img

psscan enumerates processes using pool tag scanning, which can find processes that previously terminated (inactive) and processes that have been hidden or unlinked by a rootkit (see Figure #10).

Figure #10

Volatility with the psscan plugin invoked
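
A practical way to use the two plugins together is to compare their PID lists: a PID that psscan reports but pslist does not is a candidate for a terminated or hidden process. The sketch below assumes the same profile and image file as above; the awk column number and the two header lines are assumptions about Volatility 2.x's text output, so adjust them if your version formats differently:

$ vol.py --profile=WinXPSP2x86 -f remote-system-memory8.img pslist > pslist.txt
$ vol.py --profile=WinXPSP2x86 -f remote-system-memory8.img psscan > psscan.txt
$ awk 'NR>2 {print $3}' pslist.txt | sort -u > pslist.pids
$ awk 'NR>2 {print $3}' psscan.txt | sort -u > psscan.pids
$ comm -13 pslist.pids psscan.pids

Any PID printed by the final command appears only in the pool tag scan and deserves a closer look.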

Now for a quick view of the Mandiant Redline application. We copy the Windows memory image off our SANS Investigative Forensic Toolkit (SIFT) workstation and onto a separate Windows workstation where Mandiant Redline is installed. Next you will analyze your memory image with Redline (see Figure #11).

Figure #11

Loading the memory image to be analyzed by Mandiant Redline, followed by choosing ‘I am Reviewing a Full Live Response or Memory Image'.

Mandiant Redline is a free tool that provides host investigative capabilities, finding signs of malicious activity through memory and file analysis in order to develop a threat assessment profile. After I infected the test Windows box with a known malware variant, I allowed the system to react; at the moment the machine wanted to restart, I acquired a memory image and loaded it into Redline. I then allowed the machine to reboot and took another memory image. The total processes running on the system are shown in Figure #12 (left: before reboot; right: after reboot).

Figure #12

Total number of processes running after installing the malware, followed by the list of processes running after reboot

After comparing the two lists we see that after reboot we have new processes running (jh, MRI score 61, PID 38533, and svchost.exe, MRI score 61, PID 1560). The MRI score is the result of Redline analyzing each process and memory section to calculate a Malware Risk Index (MRI) for that process.

Next month I will dive deeper into further information you can learn from analysis of memory images using both Mandiant Redline and Volatility.

More Stories By David Dodd

David J. Dodd is currently in the United States, holds a current 'Top Secret' DoD clearance, and is available for consulting on various Information Assurance projects. He is a former U.S. Marine with an avionics background in Electronic Countermeasures Systems. David has given talks at the San Diego Regional Security Conference and SDISSA, is a member of InfraGard, and contributes to Secure our eCity http://securingourecity.org. He works for Xerox as Information Security Officer, City of San Diego, and for pbnetworks Inc. http://pbnetworks.net, a Service Disabled Veteran Owned Small Business (SDVOSB) located in San Diego, CA, and can be contacted by emailing: dave at pbnetworks.net.
