

Moloch/Arkime – Installation

Installation of Moloch/Arkime

  • Author: Miroslav Kohútik
  • Tested version: 1.7.0
  • Operating system: Ubuntu 16.04

Installation of Moloch is no trivial matter, which is why we have prepared this guide on how to set up the system in a cloud environment.

Setup before installation

Before installing Moloch itself, you need to install the Elasticsearch database and make the following changes to the operating system configuration.

Moloch Upgrade

Moloch Upgrade

  • Authors: Tomáš Mokoš, Miroslav Kohútik

Upgrading Moloch to the latest version is not possible from every version. Some older versions require installing intermediate versions in a specific order.

Upgrading to Moloch 1.1.0

The oldest version of Moloch we have had in active use was version 0.50.
Upgrading Moloch from version 0.50 to version 1.0 and higher requires reindexing of all session data due to the major changes introduced in version 1.0. Reindexing is done in the background after upgrading, so there is little downtime before the server is back online.

Moloch – Hardware requirements

Hardware Requirements

The architecture of Moloch allows it to be distributed across multiple devices. For small networks, demonstrations or home deployment, it is possible to host all the necessary tools on a single device; however, for capturing large volumes of data at high transfer rates, it is recommended not to run Capture and Elasticsearch on the same machine. A demo version of Moloch can be tried directly on the project website. When storage space runs low, Moloch overwrites the oldest data with new data. Moloch can also replicate data, which effectively doubles storage space usage, so consider carefully whether you need this feature.

Elasticsearch and amount of nodes

The number of nodes (servers) needed depends on:

  • The amount of RAM available to each node
  • How many days the metadata (SPI data) will be stored
  • Disk speed
  • Size of the HTTP portion of traffic
  • Average transfer rate of all interfaces
  • Whether the connections are short-term or long-term
  • Required query response time
  • Estimated number of concurrent users

It must be taken into account that storing one day’s worth of Elasticsearch metadata (SPI data) at 1 Gbit/s requires roughly 200 GB of disk space. For example, to store 14 days’ worth of traffic at an average rate of 2.5 Gbit/s, the amount of storage needed is 14 * 2.5 * 200 GB, which amounts to roughly 7 TB.

The formula for approximating the number of Elasticsearch nodes needed is: ¼ * [average network traffic in Gbit/s] * [number of days to be archived]. For example, archiving 20 days’ worth of traffic at 1 Gbit/s would require 5 nodes. If Moloch is to be deployed on higher-performance machines, multiple Elasticsearch nodes can be run on a single device. Since deploying additional nodes is a simple task, we recommend starting with fewer nodes and adding new ones until the required query response time is reached.
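
These two rules of thumb can be captured in a short Python sketch (our own illustration, not part of Moloch); the 200 GB per day per 1 Gbit/s constant and the ¼ factor are the figures quoted above, so treat the results as rough planning estimates only.

import math

# Rules of thumb from the text above:
#   SPI storage  ~ 200 GB per day per 1 Gbit/s of average traffic
#   node count   ~ 1/4 * [traffic in Gbit/s] * [days to archive]

def spi_storage_gb(avg_traffic_gbps: float, retention_days: float) -> float:
    """Approximate disk space (GB) needed for Elasticsearch SPI data."""
    return 200 * avg_traffic_gbps * retention_days

def elasticsearch_nodes(avg_traffic_gbps: float, retention_days: float) -> int:
    """Approximate number of Elasticsearch nodes needed."""
    return max(1, math.ceil(0.25 * avg_traffic_gbps * retention_days))

# Examples from the text: 14 days at 2.5 Gbit/s, and 20 days at 1 Gbit/s
print(spi_storage_gb(2.5, 14))        # 7000.0 GB, i.e. roughly 7 TB
print(elasticsearch_nodes(1.0, 20))   # 5 nodes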

Capture

It should be noted that capturing 1 Gbit/s of traffic requires 11 TB of disk space per day for archiving the PCAP files alone. For example, to store 7 days’ worth of traffic at an average rate of 2.5 Gbit/s, the amount of storage needed is [ 7 * 2.5 * 11 ] TB, which amounts to 192.5 TB. Total bandwidth must include both directions of transfer; a 10G uplink can therefore generate 20 Gbit/s of capture data (10 Gbit/s in each direction). Considering this, it is recommended to have multiple uplinks connected to Moloch. For example, for a 10G uplink with 4 Gbit/s of traffic in both directions, it would be advisable to use two 10G uplinks for capture, since using a single 10G uplink runs the risk of packet loss.
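
The PCAP storage estimate can be sketched the same way; the 11 TB per day per 1 Gbit/s figure is the one quoted above, and the average traffic should count both directions of the uplink.

def pcap_storage_tb(avg_traffic_gbps: float, retention_days: float) -> float:
    """Approximate disk space (TB) needed to archive raw PCAP files.
    avg_traffic_gbps should include both directions of transfer."""
    return 11 * avg_traffic_gbps * retention_days

# Example from the text: 7 days at an average of 2.5 Gbit/s
print(pcap_storage_tb(2.5, 7))   # 192.5 TB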

To capture large amounts of data (several Gbit/s), we advise using the following hardware:

  • RAM: 64 GB to 96 GB
  • OS disks: RAID5 works best. SSDs are not required
  • Capture disks: 20 or more 4 TB or 6 TB SATA disks.
  • RAID: Hardware RAID card with at least 1GB of cache.
  • NIC: New Intel NICs are recommended, but most NICs should work fine.
  • CPU: at least 2 × 6 cores. The number of cores needed grows with average uplink traffic. Moloch allows load to be balanced across devices through mirroring.

Before purchasing additional SSDs or NICs, consider adding another monitoring device instead.

Sources

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch

Moloch – Components and architecture

Components

Moloch consists of three components:

  • Elasticsearch – the search engine powering the Moloch system. It is distributed under the terms of the Apache license. Requests are handled over HTTP and results are returned in JSON format. Elasticsearch supports database sharding, making it fast and scalable.
  • Capture – a C-based application for real-time network traffic monitoring. Captured data is written to disk in PCAP format. Alternatively, it can be used to manually import PCAP files for analysis and archiving through the command line. The application analyzes protocols at OSI layers three through seven and creates SPI data, which it sends to the Elasticsearch cluster for indexing.
  • Viewer – a component built on a number of node.js tools. Node.js is an event-based, server-side JavaScript platform with its own HTTP and JSON handling. Viewer runs on every device that runs the Capture module and provides a web UI for searching, displaying and exporting PCAP files. GUI and API calls are made through URIs, enabling integration with security information and event management (SIEM) systems, consoles, or the command line for retrieving PCAP files; a minimal query example is shown after this list.
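
As an illustration of URI-based access, the minimal sketch below queries the Viewer for session metadata over HTTP. It assumes a Moloch 1.x viewer listening on port 8005 with digest authentication and a sessions.json endpoint; the endpoint name, parameters and returned field names vary between versions, so consult the API documentation of your release.

import requests
from requests.auth import HTTPDigestAuth

VIEWER = "http://localhost:8005"              # assumed viewer address and port
AUTH = HTTPDigestAuth("admin", "password")    # replace with real credentials

# Ask the viewer for up to 10 sessions from the last 24 hours matching a
# Moloch search expression (same syntax as the search bar in the web UI).
response = requests.get(
    f"{VIEWER}/sessions.json",                # endpoint name assumed from Moloch 1.x
    params={
        "date": 24,                           # look back 24 hours
        "expression": "ip.dst == 192.168.10.50",  # example expression from this article
        "length": 10,                         # limit the number of returned sessions
    },
    auth=AUTH,
)
response.raise_for_status()

# Field names inside each session record differ between versions, so print raw records.
for session in response.json().get("data", []):
    print(session)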

Architecture

All of the components can be installed and run on a single node; however, this is not recommended for processing larger data flows. A sign that the data flow is too large is that requests take too long to return; in that case, a transition to a multi-node architecture is advised. The individual components have distinct requirements: Capture needs large amounts of disk space to store the received PCAP files, whereas Elasticsearch needs a large amount of RAM for indexing and searching. The Viewer has the smallest requirements of the three, allowing it to be placed anywhere.

[Figure: MolochS]

Moloch can be easily scaled out by running the Capture and Elasticsearch components on multiple nodes. One or more Capture instances can run on one or several nodes, all sending data to the Elasticsearch database. Similarly, one or more Elasticsearch instances can run on one or several nodes to increase the amount of RAM available for indexing. This type of architecture is therefore recommended for capturing and indexing traffic in real time.

[Figure: MolochM]

We recommend deploying Moloch behind a mirrored switch interface, in our case a Cisco SPAN port. See our article on port mirroring for more information.

Sources

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch

Moloch – Usage possibilities of Moloch

Usage possibilities of Moloch

  • Author: Tomáš Mokoš

Moloch offers many distinct usage possibilities. The list below is not exhaustive and can be expanded by individual users, provided they find other applications for this service:

    • DoS attacks – analysis of connections suspected of originating DoS attacks.
    • Geolocation – identification of a connection’s country of origin.
    • Access intelligence – helps with the analysis of authorized/unauthorized access to system resources, applications, servers, system operations and other functions. You can also perform in-depth analysis (using tagging) of a particular system, application or service running in the network.
    • Port connection usage – the number of connections on a particular port.
    • URL connection usage – the number of connections tied to a particular URL by requests.
    • Data volumes

As an example, we will demonstrate the use of Moloch for analysis of the CICIDS 2017 dataset, in which we analyze a DDoS Hulk attack. First, we filter the traffic: using the expression tags == CICIDS2017_WEDNESDAY && ip.dst == 192.168.10.50 we extract the traffic from the day of the attack with the web server’s IP as the destination address.

[Figure: Moloch1]

Afterwards, in the SPI Graph tab, we can look up the source IP addresses that communicated with this web server by setting SPI Graph to ip.src.

As we can see, the IP address 172.16.0.1 generated 84315 of the 85268 sessions, making it likely to be the address of the attacker.

[Figure: Moloch2]

In the SPI View tab, we can see that the network communication did not originate from just one port, but from several thousand ports, and that almost all of these connections were bound for port 80. Furthermore, we can see that most of the communication was bound for miscellaneous URIs, which is characteristic of a Hulk attack. By generating random URIs, the Hulk attack depletes the web server’s resources, making the server inaccessible.

[Figure: Moloch3]
[Figure: Moloch4]

Sources

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch

Moloch – Network interface configuration

Because packet loss is possible at high traffic rates, it is recommended that the packet capture interface NOT be the same as the interface connected to the internet (in our case, the interface assigned a static IP address). The server in our lab has two interfaces: one for packet capture and one for “outside” communication. To prevent packet loss, it is recommended to increase the capture interface’s ring buffers to the maximum and turn off most of the NIC’s offloading features using the following commands:

ethtool -G enp0s9 rx 4096 tx 4096
ethtool -K enp0s9 rx off tx off sg off tso off gso off

You can find out the maximum ring buffer sizes using the ethtool -g command; to list the NIC’s offloading features, use the ethtool -k command. Disable most of these features: you want to capture the traffic as it appears on the network rather than what the OS sees after offloading, so they are not going to be used anyway.
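
To verify that the capture interface is not dropping packets during operation, a small check like the sketch below can help. It only reads the standard Linux statistics counters under /sys/class/net; the interface name enp0s9 is the one used in the commands above.

from pathlib import Path

# Read the kernel's packet counters for the capture interface.
# Rising rx_dropped or rx_fifo_errors values suggest the NIC or kernel cannot keep up.
IFACE = "enp0s9"
STATS = Path("/sys/class/net") / IFACE / "statistics"

for counter in ("rx_packets", "rx_dropped", "rx_fifo_errors"):
    path = STATS / counter
    if path.exists():
        print(f"{counter}: {path.read_text().strip()}")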

Sources

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch

Moloch – Load Testing

  • Authors: Tomáš Mokoš, Marek Brodec

In our topology, the server running Moloch was connected to a 100 Mbps switch; therefore, even though the generated network traffic reached 140 Mbps, the flow was subsequently limited by the switch.

Single source to single destination test

At first, while generating packets with spoofed source IP addresses from the cloud to a lab PC, we ran into a problem with the cloud’s security policies. These policies prevent sending packets with a source IP address different from the one assigned to the hosting cloud instance; therefore, we only generated traffic from a single source IP address to a single destination IP address.

We observed the volume of incoming data and its effect on performance using the network monitoring tools mentioned in the previous chapters. The graphs below show that at the moment of the traffic spike, CPU usage spiked as well; however, it did not exceed 20%. The graph titled “Mem” shows the RAM usage of all running services, including the instances of Moloch, Snort and many others, plus cache memory.

[Figure: Bigdesk1]

The following graph shows utilization of the RAM allocated exclusively to Elasticsearch. Of the 25 GB of RAM allocated, Elasticsearch uses 4.2 GB in this instance. This test revealed that an unnecessarily large amount of RAM was allocated to Elasticsearch, so after the test we decreased the allocation to 18 GB.

[Figure: Bigdesk2]

N sources to single destination test

The previous packet generation was not ideal because the flows shared many characteristics. All test cases had identical IP addresses, packet counts and payloads, which made indexing much easier. To approximate real network traffic as closely as possible, we tried to address the issue of source IP address generation. The solution was to generate packets from our own laptop, with Kali Linux running in VirtualBox as the source of attacks. The laptop was connected to our switch and generated traffic towards the lab. The packets passed through the interface that was mirrored towards our server.

Test results showed CPU utilization rising to between 20 and 30%.

[Figure: Bigdesk3]

The “Heap Mem” graph shows that, of the 19 GB of RAM allocated, 6.3 GB is used.

[Figure: Bigdesk4]

The results revealed that the amount of allocated RAM could be decreased further, freeing up space for other processes. The graph titled “Mem” shows that all running processes use 31 GB of RAM; therefore, it would be advisable to stop all unnecessary processes before testing. This, however, was not an option for us, because other students were working on their bachelor’s theses in parallel with us.

Testing evaluation

Services were restarted before the single source to single destination test, but service instances were not restarted before the other tests. We have concluded that higher network traffic mainly influences CPU utilization. The graphs of both Elasticsearch RAM usage and overall RAM usage do not show any significant spikes during the arrival of the generated test traffic. The results show that during normal traffic (4.5 Mbps at the time), RAM usage in both cases above is higher than it was before the first test, even though the test traffic was 20 times higher than normal. These two graphs are mainly affected by the time elapsed since service start and possibly by the amount of captured data. The tests were performed in the same order as they are described above, with one week between the individual tests. Even though the graphs indicate utilization of 31 out of 32 GB of RAM, only one third of the 18 GB allocated to Elasticsearch is used. This can be addressed by reducing the amount of RAM allocated to Elasticsearch, thus freeing up space for other processes.

Cache can be cleared by the following command:

free && sync && echo 3 > /proc/sys/vm/drop_caches && free 

This drops RAM usage from 31 GB to 20 GB.

Sources

  • Report Project 1-2 – Marek Brodec

Moloch – CPU, RAM and HDD usage

  • Authors: Tomáš Mokoš, Marek Brodec

Considering that the formulas we used to calculate how many days of network traffic Moloch can archive and what hardware we should use are only approximate, we decided to measure some statistics to help refine these values.

From the Elasticsearch node quantity calculation formula: ¼ * [average network traffic in Gbit/s] * [number of days to be archived], we get that at 2 Mbit/s, one node should suffice.

Using the formula 86400 * [average network traffic in GB/s] * [number of days to be archived], we can calculate that 1 Gbit/s of traffic (0.125 GB/s) takes up roughly 11 TB of disk space daily; therefore, 2 Mbit/s of traffic will take up about 22 GB per day. At this rate we can archive approximately 113 days’ worth of raw data.
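
The same retention estimate can be sketched with the capture disk size as a parameter. The 11 TB per day per 1 Gbit/s constant comes from the formula above; the 2.5 TB disk size in the example is only illustrative (it roughly matches a server where 1.3 TB corresponds to 52% of the capture storage, as reported below).

def pcap_retention_days(capture_disk_gb: float, avg_traffic_gbps: float) -> float:
    """Approximate number of days of raw PCAP that fit on the capture disk."""
    daily_gb = 11_000 * avg_traffic_gbps   # ~11 TB per day per 1 Gbit/s, expressed in GB
    return capture_disk_gb / daily_gb

# Illustrative values: ~2.5 TB of capture space and 2 Mbit/s of average traffic
print(round(pcap_retention_days(2500, 0.002)))   # ~114 days, in line with the ~113 above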

Since our lab server is not under heavy load, only 7 GB (22%) of RAM is used on average. This is partly due to the fact that network traffic is minimal during the night. Non-uniform network traffic distorts the averages; therefore, long-term observation would be desirable.

[Figure: RAM usage]

Moloch by itself uses about 5% of total CPU utilization and 1.0 to 1.3 GB of RAM (3-3.5%).

Utilization of disk capacity was 340 GB (17%) in the first week, 220 GB (11%) in the second week and 140 GB (7%) in the third week.

[Figure: disk capacity usage]

Thanks to our use of data trimming, we managed to archive 6 months’ worth of traffic, although the actual value is closer to 4 months, since network traffic was very low during the two months of the exam period. The captured data took up 52% (1.3 TB) of storage.

[Figure: HDD usage]

Sources

  • Report Project 1-2 – Marek Brodec