

Using tcpdump for SIP diagnostics

Tcpdump is a powerful command-line packet analyzer that can be used for sniffing and analyzing SIP messages. Tcpdump comes preinstalled on many Linux distributions and can also be installed directly from the Debian repository:

apt-get install tcpdump

Tcpdump can either write the captured traffic to a file or display it in real time. Its usage for SIP message analysis may look like this:
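The following commands are a minimal sketch; the interface name eth0 and the output file name are placeholders to be adjusted to your environment. To display SIP messages on the default port 5060 in real time, with packet contents printed in ASCII (-A):

tcpdump -i eth0 -n -s 0 -A port 5060

To write the same capture to a file for later analysis, for example in Wireshark:

tcpdump -i eth0 -n -s 0 port 5060 -w sip-capture.pcap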

Network traffic dataset PCAP anonymization

  • Author: Miroslav Kohútik

Sometimes you may need to provide PCAP files to third-party organizations or perhaps, as in our case, publish a network traffic dataset. In order not to reveal your network infrastructure and/or other sensitive data, you must anonymize these files before sharing them with anyone outside of your organization.

TraceWrangler

We use TraceWrangler for network data anonymization on OSI Layers 2 through 4. TraceWrangler is very easy to use and has an intuitive GUI:
TraceWrangler

TraceWrangler, however, isn’t perfect. First of all, the maximum size of a file that TraceWrangler can open is 2 GB. Since a typical network traffic dataset usually consists of PCAP/pcapng files that are several gigabytes in size, you will need to split the files in question into smaller, more digestible chunks.
To split up PCAP files, we use Wireshark’s editcap tool. Since editcap lacks a GUI, we need to use the Windows Command Prompt.
First, we need to change directory to Wireshark’s installation directory, where editcap is located; by default this is C:\Program Files\Wireshark:

cd "C:Program FilesWireshark"

A typical Windows command to split a file using editcap looks something like this:

editcap -c 300000 "C:\datasets\dataset.pcap" "C:\datasets\anon\dataset-split-.pcap"

The -c 300000 option defines the maximum number of packets in a single output file. “C:\datasets\dataset.pcap” is the path to the input file and “C:\datasets\anon\dataset-split-.pcap” contains the path and the name template of the output files.
Since TraceWrangler is still in beta and therefore has some bugs, such as random errors that occur during anonymization of files larger than 50 MB, we recommend setting the maximum number of packets per editcap output file to a value that produces files well under 2 GB, ideally even under 50 MB.
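For instance, lowering the packet count per output file (the value 50000 below is purely illustrative) produces correspondingly smaller chunks:

editcap -c 50000 "C:\datasets\dataset.pcap" "C:\datasets\anon\dataset-split-.pcap"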

After you open the files you are about to anonymize in TraceWrangler, click “anonymize files” to open the anonymization options menu. Before you begin, make sure to clear all default anonymization settings first, otherwise you will end up with heavily truncated files:
Anonymization options

If you want to anonymize a large number of IP addresses, it would be impractical to replace each one with a manually entered address. For this purpose you can check “Replace IP addresses by subnet” and pick “keep host part” from the list of options. Check “Recalculate CRC” and pick “Keep bad checksums bad” if needed.

IPv4 anonymization using TraceWrangler

Finally, in the Output settings you can pick the directory to which you want to save the files. If you set the filename to <filename>_anonymized, the resulting file’s name will be the original file’s name with the string _anonymized appended. Confirm the settings by clicking “Okay” and click “Run” to start the anonymization.

To merge the PCAP files back into one, we use another Wireshark tool: mergecap. Wireshark also provides file merging through its GUI; however, this supports only two files at a time. In our case this would be very time-consuming, so we used the command-line interface:

mergecap.exe -w "C:\datasets\dataset.pcap" "C:\datasets\dataset-split01-anonymized.pcap" "C:\datasets\dataset-split02-anonymized.pcap" "C:\datasets\dataset-split03-anonymized.pcap" "C:\datasets\dataset-split04-anonymized.pcap" "C:\datasets\dataset-split05-anonymized.pcap" "C:\datasets\dataset-split06-anonymized.pcap" "C:\datasets\dataset-split07-anonymized.pcap" "C:\datasets\dataset-split08-anonymized.pcap" "C:\datasets\dataset-split09-anonymized.pcap" "C:\datasets\dataset-split10-anonymized.pcap" "C:\datasets\dataset-split11-anonymized.pcap"

The -w option specifies the output file and all of the other paths specify the files to be merged. Files are merged chronologically according to their timestamps.

HxD

TraceWrangler is only capable of anonymizing OSI layers 2 through 4 and thus cannot sanitize URIs, e.g. http://192.168.4.2/index.php. To sanitize URIs, we use the HxD hex editor. Unlike TraceWrangler, HxD is capable of modifying files of any size, located on disk and in RAM alike.
HxD

Theoretically, you could use HxD to anonymize all layers without the need to use TraceWrangler. This would, however, result in incorrect checksums in all of the headers.
To anonymize L2 through L4 data, you can use search and replace using Hex values:
Search and replace using Hex
Be careful, though: the above example will replace the first two octets of addresses in the network 192.168.0.0/16 with 172.16., but it will also replace any two consecutive octets 192 and 168 in other addresses, e.g. 10.0.192.168 becomes 10.0.172.16. The more specific you are, the lower the risk of unwanted replacements: if you want to replace 192.168.1.1 with 192.0.0.1, be sure to replace 192.168.1. with 192.0.0., not just the latter two octets.
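For reference, these are the raw byte values behind the replacements discussed above; IPv4 addresses are stored in packet headers as binary octets, so the L2 through L4 search and replace works on the following hex values rather than on dotted-decimal text:

192.168.x.x  =  C0 A8
172.16.x.x   =  AC 10
192.168.1.x  =  C0 A8 01
192.0.0.x    =  C0 00 00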

Things are much easier on L7; here you can be much more specific with your replacements by using text string replacement:
Search and replace using text string

Depending on whether you are editing the file in RAM or on your disk, changes to the file may not be permanent, so always save your work when you are done:
Save file

KIS 2019 network traffic dataset

  • Authors: Jana Uramová, Tomáš Mokoš, Patrik Rodina, Peter Seemann, Miroslav Kohútik

This article describes the KIS 2019 network traffic dataset. If you wish to access this dataset, contact us by e-mail at dataset[AT]kis.fri.uniza.sk.

The KIS 2019 dataset was created by Tomáš Mokoš as part of his diploma thesis at the Department of Information Networks at the University of Žilina’s Faculty of Management Science and Informatics.

Moloch Upgrade

  • Authors: Tomáš Mokoš, Miroslav Kohútik

Upgrading Moloch to the latest version is not possible from every version. Some older versions require installing intermediate versions in a specific order first.

Upgrading to Moloch 1.1.0

The oldest version of Moloch we have had in active use was version 0.50.
Upgrading Moloch from version 0.50 to version 1.0 or higher requires reindexing of all session data due to the major changes introduced in version 1.0. Reindexing is done in the background after upgrading, so there is little downtime before the server is back online.
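A minimal sketch of the upgrade itself, assuming a package-based installation and Elasticsearch running on localhost (the package filename and the Elasticsearch address are assumptions, not values taken from our deployment):

sudo systemctl stop molochcapture.service molochviewer.service
sudo dpkg -i moloch_1.1.0-1_amd64.deb
sudo /data/moloch/db/db.pl http://localhost:9200 upgrade
sudo systemctl start molochcapture.service molochviewer.service

The db.pl upgrade step updates the database schema; the session reindexing mentioned above then continues in the background.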

Installation of Scirius CE

  • Author: Miroslav Kohútik
  • Operating system: Ubuntu 16.04

Scirius Community Edition is a web interface dedicated to Suricata ruleset management. It handles the rules file and updates of the associated files.

This guide will walk you through the installation of Scirius Community Edition on Ubuntu 16.04 operating system.
Before proceeding with the installation of Scirius CE, you need to have the Suricata IDS installed. An installation guide for Suricata can be found here.

Installation of Suricata

Installation and basic setup of Suricata

First, add the latest stable Suricata repository to APT:

sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update

Now you can either install Suricata with:

sudo apt-get install suricata 

or install the Suricata package with built-in (enabled) debugging:

sudo apt-get install suricata-dbg

Basic setup

Start by creating a directory for Suricata’s log files:

sudo mkdir /var/log/suricata

Next, create a directory for Suricata’s configuration files:

sudo mkdir /etc/suricata

The next step is to copy classification.config, reference.config and suricata.yaml from the base build/installation directory (e.g. when built from git, this is the oisf directory) to the /etc/suricata directory. Do so by entering the following:

sudo cp classification.config /etc/suricata
sudo cp reference.config /etc/suricata
sudo cp suricata.yaml /etc/suricata
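With the configuration files in place, a basic test run of Suricata can be started as shown below (a minimal sketch; the interface name eth0 is an assumption, so use your actual capture interface):

sudo suricata -c /etc/suricata/suricata.yaml -i eth0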

Auto setup

You can also use the available auto setup features of Suricata:

The make install-conf option will do the regular “make install” and then automatically create and set up all the necessary directories and suricata.yaml.

 ./configure && make && make install-conf

The make install-rules option will do the regular “make install” and then automatically download and set up the latest Emerging Threats ruleset available for Suricata.

./configure && make && make install-rules

The make install-full option combines everything mentioned above (install-conf and install-rules) and will present you with a ready-to-run (configured and set up) Suricata:

./configure && make && make install-full

Source:

Suricata – Ubuntu installation

Moloch/Arkime – Installation

Installation of Moloch/Arkime

  • Author: Miroslav Kohútik
  • Tested version: 1.7.0
  • Operating system: Ubuntu 16.04

Installation of Moloch is no trivial matter, which is why we have prepared this guide on how to set up the system in a cloud environment.

Setup before installation

Before installing Moloch itself, you need to install the Elasticsearch database and make the following changes to the operating system configuration.

Add Java repository

sudo add-apt-repository ppa:webupd8team/java 

Update the package list and upgrade installed packages to the latest versions

sudo apt-get update -y && sudo apt-get upgrade -y

Download and install the public GPG signing key

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add Elastic Repository

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Perform another package update

sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y 

Clean-up (Optional)

sudo apt-get autoremove

Disable swap

sudo swapoff -a
sudo nano /etc/fstab

Edit fstab – comment out the following:

#/dev/mapper/logs--vg-swap_1 none     swap   sw      0     0

or

#/dev/mapper/user--vg-swap_1 none     swap   sw      0     0

Install Java 8

sudo apt-get install oracle-java8-installer

Install Elasticsearch

sudo apt-get install elasticsearch

Install Moloch/Arkime

Install additional necessary packages

sudo apt-get install wget curl libpcre3-dev uuid-dev libmagic-dev pkg-config g++ flex bison zlib1g-dev libffi-dev gettext libgeoip-dev make libjson-perl libbz2-dev libwww-perl libpng-dev xz-utils libffi-dev

Download the newest version of Moloch/Arkime (Arkime downloads)

wget https://files.molo.ch/builds/ubuntu-16.04/moloch_1.7.0-1_amd64.deb

Install Moloch

Note: when asked whether or not to install Elasticsearch, choose no, since you have already installed Elasticsearch earlier and this script offers only the demo version.

sudo dpkg -i moloch_1.7.0-1_amd64.deb

Install dependencies (If the previous step halts due to errors)

sudo apt-get -f install

Configure Moloch/Arkime

Start Elasticsearch on startup

sudo systemctl enable elasticsearch.service

Configure Elasticsearch (OPTIONAL) – configure as needed; the maximum RAM allocation is 32 GB

It is recommended that Elasticsearch be installed on a separate machine.

sudo nano /etc/elasticsearch/jvm.options
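The heap size is controlled by the -Xms and -Xmx lines in jvm.options. For example, allocating 16 GB of heap (the exact value depends on your hardware; both values should match and stay at or below the 32 GB limit mentioned above) would look like this:

-Xms16g
-Xmx16g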

Start Elasticsearch

sudo systemctl start elasticsearch.service

Check Elasticsearch Status

sudo systemctl status elasticsearch.service
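You can also query Elasticsearch directly to confirm that the cluster responds (the default address localhost and port 9200 are assumed here):

curl "http://localhost:9200/_cluster/health?pretty"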

To configure Moloch, you can either download a configuration file from https://github.com/aol/moloch/wiki/Settings or you can configure Moloch yourself using the following two commands

Before configuring Moloch manually, delete the config.ini file from /data/moloch/etc/

sudo rm /data/moloch/etc/config.ini 

Configure Moloch as needed

sudo /data/moloch/bin/Configure

Initialize Elasticsearch Database

sudo /data/moloch/db/db.pl http://localhost:9200 init

Install and update npm

sudo apt install npm
npm update

Add Moloch User

sudo /data/moloch/bin/moloch_add_user.sh admin admin PASSWORDGOESHERE --admin

Start Moloch Capture Service

sudo systemctl start molochcapture.service

Check Moloch Capture Service status

sudo systemctl status molochcapture.service

Start Moloch Viewer Service

sudo systemctl start molochviewer.service

Check Moloch Viewer Service status

sudo systemctl status molochviewer.service

Provided you have done everything right so far, you should be able to access the web interface at http://IPADDRESSOFINTERFACE:8005
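As a quick check from the server itself, you can also verify that the viewer is listening on its port (8005 is assumed from the default configuration):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8005

Any HTTP status code in the response, e.g. 401 before logging in, confirms that the viewer service is up.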


Forensic analytic tools

  • Author: Tomáš Mokoš

NetworkMiner

NetworkMiner is a Network forensic analysis tool (NFAT) for Windows operating systems. NetworkMiner can be used as a passive network sniffer/packet capturing tool in order to detect operating systems, sessions, hostnames, open ports etc. NetworkMiner’s primary purpose is collection of data regarding network hosts, rather than data regarding network traffic. In addition to direct file capture, NetworkMiner can also parse PCAP files for off-line analysis and to regenerate/reassemble transmitted files and certificates from PCAP files. This function can be used for extraction and archiving of media files transferred through the network. Supported file extraction protocols are FTP, SMB and HTTP. Extracted user credentials (username and password) for supported protocols can be found in the Credentials tab. Other useful features include keyword search in the captured/archived data and Nmap MAC vendor lookup.

Xplico

Xplico is an open-source NFAT. The goal of Xplico is the extraction of application data contained in a capture sample of Internet traffic. For example, Xplico can export all e-mails (POP, IMAP and SMTP), HTTP contents, VoIP calls, FTP and TFTP files, etc.

Elastic Stack

Elastic Stack provides reliable and secure transfer of data of any format from any source, along with real-time searching, analysis and visualization. Elastic Stack consists of Kibana, Elasticsearch, Beats and Logstash. Elasticsearch is a search and analytics engine. Beats is a data-gathering platform. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize Elasticsearch data with charts and graphs.

Sguil

Sguil is built by network security analysts for network security analysts. Sguil’s main component is an intuitive GUI that provides access to real-time events, session data, and raw packet captures. Sguil facilitates the practice of network security monitoring and event-driven analysis. The Sguil client is written in Tcl/Tk and can be run on any operating system that supports Tcl/Tk (including Linux, BSD, Solaris, macOS, and Win32).

Sources:

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch (Complex system for attack detection and data archiving – Moloch)