Capture File Filtering with Wireshark

Needle in a Haystack

Intrusion detection tools that use the libpcap C/C++ library [1] for network traffic capture (such as Snort [2] and Tcpdump [1]) can output packet capture information to a file for later reference. The format of this capture file is known as pcap. By capturing packet data to a file, an investigator can return later to study the history of an intrusion attempt – or to turn up other important clues about clandestine activity on the network.
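Before filtering, it can be worth confirming that a file really is in pcap format. Classic pcap files begin with the 4-byte magic number 0xa1b2c3d4, stored as d4 c3 b2 a1 on little-endian systems. The sketch below fabricates a file containing only those magic bytes (the /tmp path and file name are made up for illustration) and dumps them with od:

```shell
# Fabricate a file containing only the little-endian pcap magic number,
# then dump its first four bytes; a real capture would start the same way.
printf '\324\303\262\241' > /tmp/sample.pcap
od -A n -t x1 -N 4 /tmp/sample.pcap
```

A gzip-compressed capture instead starts with the bytes 1f 8b, which tshark also reads transparently – hence the .gz dump files used later in this article.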

Of course, the traffic history data stored in a pcap file is much too vast to study by just viewing the file manually. Security experts use specialized filtering tools to search through the file for pertinent information. One way to look for clues in a pcap file is to use the Wireshark protocol analysis tool [3] and its accompanying command-line utility tshark.

Wireshark is included by default on many Linux distros, and if not, it is available through the leading package repositories. You can also download Wireshark through the project website [4]. In this article, I describe how to use Wireshark and tshark to search a pcap file for information on network activity. I will assume you already have a pcap file ready for investigation. For more on how to capture a pcap file with Tcpdump, see my article “Intruder Detection with Tcpdump,” which is available online at the ADMIN magazine website [5].

tshark at the Command Line
The tshark utility is a simple tool included with the Wireshark package that lets you filter the contents of a pcap file from the command line. To get a view of the most significant activity, I use the following command:

$ tshark -nr dumpfile1.gz -qz "io,phs" > details.txt

The -n switch disables network object name resolution, and -r indicates that packet data is to be read from the input file – in this case, dumpfile1.gz. The -z option generates statistics after tshark finishes reading the capture file, the -q flag specifies that only the statistics are printed, and the > redirection sends the output to the file called details.txt. Figure 1 shows the output of this command. To view tshark's help text, type:

$ tshark -h

And for a list of -z arguments, type:

$ tshark -z help

Figure 1: tshark statistics output.

Say you would like to know whether a particular IP address appeared in a packet dump and what port it was connecting on. The following command line checks the dump file for the IP address 98.139.126.21:

$ tshark -V -nr dumpfile.gz -Y "ip.src == 98.139.126.21" | grep "Source port" | awk '{print $3}' | sort -n | uniq
80

The resulting output on the line following the command shows that the packet dump file recorded IP address 98.139.126.21 having connections on port 80.
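The grep/awk/sort/uniq stages of that pipeline can be tried without a capture file. The canned lines below merely mimic the "Source port" lines that verbose tshark output contains (the port values are invented for illustration):

```shell
# Stand-in for verbose tshark output; each stage narrows it down to a
# numerically sorted list of unique source ports.
printf '    Source port: 80\n    Source port: 80\n    Source port: 443\n' |
  grep "Source port" | awk '{print $3}' | sort -n | uniq
```

This prints 80 and 443, one per line: grep keeps the relevant lines, awk extracts the third field (the port number), sort -n orders the ports numerically, and uniq removes the adjacent duplicates.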

If you were given a packet dump file and asked to find possible IRC traffic on your network, how would you do it? First, you would need to know what port numbers were associated with IRC traffic, and that could be accomplished with Google or by issuing the following command:

$ grep irc /usr/share/nmap/nmap-services | grep tcp

Figure 2 shows the results of the preceding command.

Figure 2: Locating IRC port numbers with grep.
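If the nmap-services file isn't installed, /etc/services carries similar entries in the same format. The heredoc below fakes a few lines of that format (the entries are abbreviated samples, not a real services file) so the effect of the two grep stages can be seen in isolation:

```shell
# Sample lines in services-file format; the first grep keeps entries whose
# name mentions irc, the second keeps only the TCP ones.
cat <<'EOF' | grep irc | grep tcp
irc             194/tcp     # Internet Relay Chat
irc             194/udp
ircd            6667/tcp    # IRC daemon
EOF
```

Only the two TCP lines survive; the UDP entry is filtered out by the second grep.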

Now I can search the packet dump and look for evidence of IRC traffic using the following commands:

$ tshark -nr dumpfile1.gz -Y 'ip.addr==172.16.134.191 and tcp.port >= 6667 and tcp.port <= 6670 and irc' | awk '{print $3,$4,$5,$6}' | sort -n | uniq -c


The breakdown of this command is shown in Table 1, and the output is in Figure 3.

Figure 3: IRC connections found in the packet dump.

In the GUI
The Wireshark GUI application is easier on the eyes, and it provides some options that aren’t available at the command line. You can start Wireshark from the Application menu or from the terminal.  To load a capture file, select Open in the startup window (Figure 4) or select File | Open from the menubar.  Once you have a packet capture file loaded, you can start searching packet dumps within the Wireshark interface.

Figure 4: The Wireshark startup window.

The Filter box below the Wireshark toolbar lets you enter criteria for the search. For instance, to search for all the Canonical Name records within the capture file, type in the following filter: dns.resp.type == CNAME (see Figure 5).  After you enter a filter, remember to clear the filter text to see the full file before starting a new search.

Figure 5

Table 1: Parts of a tshark Command

Option                          Description
'ip.addr==172.16.134.191        Matches the host under investigation
and tcp.port >= 6667            Start of the IRC port range
and tcp.port <= 6670            End of the IRC port range
and irc'                        Restricts matches to IRC traffic
awk '{print $3,$4,$5,$6}'       Prints the third through sixth fields of each matching line
sort -n                         Sorts lines according to numerical value
uniq -c                         Collapses duplicate lines, prefixing each with its count
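A note on the last two rows: uniq only collapses adjacent duplicate lines, which is why sort must run first. A quick demonstration with made-up IRC command names:

```shell
# Without sort, the repeated JOIN lines would not be adjacent and uniq -c
# would not merge them into a single count.
printf '%s\n' JOIN PRIVMSG JOIN PRIVMSG PRIVMSG | sort | uniq -c
```

The output shows a count of 2 for JOIN and 3 for PRIVMSG – the same counting trick the IRC search above uses to summarize repeated connections.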

Digging deeper, if I want to know how long a client resolver cached the IP address associated with the name cookex.amp.gapx.yahoodns.net (Figure 6), I would enter the following filter:

dns.resp.name == "cookex.amp.gapx.yahoodns.net"

Figure 6

The filter ip.addr == 10.37.32.97 gives information on all communications that involve 10.37.32.97 in the packet dump. If needed, use && to isolate to a specific protocol. The filter ip.dst == 10.37.32.97 or ip.src == 10.37.32.97 looks for a source or destination IP address.

How could I find the password used over Telnet between two IP addresses? For example, if a user at 172.21.12.5 is using Telnet to access a device at 10.37.140.160, I can enter:

ip.dst == 10.37.140.160 && ip.src == 172.21.12.5 && telnet.data contains "Password"

The preceding filter lists the connections that meet the search criteria; you can then right-click on a packet and select Follow TCP Stream to view the password. (See Figures 7 and 8.)
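The contains operator does a plain byte-substring match on the packet payload, much like grep on a dumped stream. The canned Telnet exchange below (the credentials are fake, for illustration only) shows the same idea:

```shell
# grep flags the line carrying the server's password prompt, just as the
# telnet.data contains "Password" filter flags the matching packet.
printf 'login: admin\r\nPassword: s3cret\r\n' | grep -c "Password"
```

grep -c reports 1, the number of lines that contain the literal string "Password" – and the byte that follows the prompt in the stream is what Follow TCP Stream ultimately reveals.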

Figure 7

Figure 8

Note: A much easier way to get the password on the network, if you were sniffing the traffic instead of reading from a capture file, would be to use ettercap, as follows:

$ ettercap -Tzq //23

To discover whether someone was viewing a suspicious web page, I can perform a filter search to find out what picture the person at IP address 10.225.5.107 was viewing at Yahoo (216.115.97.236) with the following filter:

ip.dst == 10.225.5.107 && ip.src == 216.115.97.236 && (image-jfif || image-gif)

Figure 9

Figure 9 shows the results. If you then right-click on a line in the output and select Follow TCP Stream, you get the results shown in Figure 10. The second line in the Follow TCP Stream output specifies that wynn-rovepolitico.jpg is the image this person was viewing on the site.

Figure 10

Conclusion
Wireshark can do more than just watch the wires in real time. If you save a snapshot of network activity in a capture file in the pcap format, you can use Wireshark to search through the file to look for clues about nefarious activity.

In this article, I described how to search for information in a capture file using Wireshark and the tshark command-line tool.

Info

[1] Tcpdump and Libpcap: http://www.tcpdump.org/
[2] Snort: http://www.snort.org
[3] Wireshark: http://www.wireshark.org/
[4] Wireshark Download: http://www.wireshark.org/download.html
[5] “Intruder Detection with Tcpdump” by David J. Dodd: http://www.admin-magazine.com/Articles/Intruder-Detection-with-tcpdump/

More Stories By David Dodd

David J. Dodd is currently in the United States, holds a current Top Secret DoD clearance, and is available for consulting on various information assurance projects. He is a former U.S. Marine with an avionics background in electronic countermeasures systems. David has given talks at the San Diego Regional Security Conference and SDISSA, is a member of InfraGard, and contributes to Secure our eCity (http://securingourecity.org). He works for Xerox as Information Security Officer for the City of San Diego and for pbnetworks Inc. (http://pbnetworks.net), a Service Disabled Veteran Owned Small Business (SDVOSB) located in San Diego, CA, and can be contacted by emailing: dave at pbnetworks.net.
