
Moloch Upgrade

  • Authors: Tomáš Mokoš, Miroslav Kohútik

Upgrading Moloch directly to the latest version is not possible from every version. Some older versions require installing intermediate versions in a specific order.

Upgrading to Moloch 1.1.0

The oldest version of Moloch we have had in active use was version 0.50.
Upgrading Moloch from version 0.50 to version 1.0 and higher requires reindexing of all session data due to the major changes introduced in version 1.0. Reindexing is done in the background after upgrading, so there is little downtime before the server is back online.
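
The exact upgrade procedure depends on how Moloch was installed. As a rough, hedged sketch for a .deb-based install (the service names, package file name, paths and the db.pl step below are assumptions based on a standard packaged install, not taken from this article):

sudo systemctl stop molochcapture.service molochviewer.service   # assumed service names
sudo dpkg -i moloch_1.1.0-1_amd64.deb                            # new package, downloaded beforehand
/data/moloch/db/db.pl http://localhost:9200 upgrade              # upgrade the Elasticsearch schema
sudo systemctl start molochcapture.service molochviewer.service

The background reindexing of old session data mentioned above is a separate process and is not covered by this sketch.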

Configuration of L2TP over IPsec

Configuration of L2TP over IPsec tunnel connection with Cisco router as a server and MikroTik router as a client.

Configuration of Cisco server

(config)#int loopback 0 
(config-if)#ip address 192.168.1.1 255.255.255.255
(config-if)#exit
(config)#ip local pool l2tp-pool 192.168.1.5 192.168.1.10
(config)#vpdn enable
(config)#vpdn-group l2tp-group
(config-vpdn)#accept-dialin
(config-vpdn-acc-in)#protocol l2tp
(config-vpdn-acc-in)#virtual-template 1
(config-vpdn-acc-in)#exit
(config-vpdn)#no l2tp tunnel authentication
(config-vpdn)#exit
(config)#interface virtual-template 1
(config-if)#ip unnumbered loopback 0
(config-if)#peer default ip address pool l2tp-pool
(config-if)#ppp authentication ms-chap-v2
(config-if)#exit
(config)#crypto isakmp policy 1
(config-isakmp)#encryption aes 256
(config-isakmp)#hash sha512
(config-isakmp)#authentication pre-share
(config-isakmp)#group 2
(config-isakmp)#lifetime 3600
(config-isakmp)#exit
(config)#crypto isakmp key PRESHARED_KEY address 0.0.0.0 !or peer address 
(config)#crypto ipsec transform-set l2tp-ipsec-transport-esp esp-aes 256 esp-sha512-hmac
(cfg-crypto-trans)#mode transport
(cfg-crypto-trans)#exit
(config)#crypto dynamic-map my-dynamic-map 1
(config-crypto-map)#set nat demux
(config-crypto-map)#set transform-set l2tp-ipsec-transport-esp
(config-crypto-map)#exit
(config)#crypto map my-static-map 1 ipsec-isakmp dynamic my-dynamic-map
(config)#interface fastEthernet 4 ! Your WAN interface
(config-if)#crypto map my-static-map
(config-if)#exit

The router is now ready to accept L2TP/IPsec tunnel connections from clients.
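
The introduction mentions a MikroTik router acting as the client. A rough RouterOS sketch of the client side is shown below; the interface name, user name, password and the server address 203.0.113.1 are placeholders, and the ipsec-secret must match the PRESHARED_KEY configured on the Cisco router:

# Illustrative RouterOS L2TP/IPsec client configuration (placeholder values)
/interface l2tp-client add name=l2tp-to-cisco connect-to=203.0.113.1 \
    user=vpnuser password=vpnpassword use-ipsec=yes ipsec-secret=PRESHARED_KEY \
    disabled=no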

Elasticsearch cluster upgrade

Elasticsearch cluster upgrade from 5.5.1 to 6.8.1

  • Author: Miroslav Kohútik
  • Operating System: Ubuntu 16.04

In this guide we will show you how to upgrade an Elasticsearch cluster located on a single machine.
As an example we will use our Elasticsearch cluster that consists of five ES nodes.

All nodes need to be stopped before upgrading:

sudo systemctl stop elasticsearch_data1
sudo systemctl stop elasticsearch_data2
sudo systemctl stop elasticsearch_data3
sudo systemctl stop elasticsearch_ingest
sudo systemctl stop elasticsearch_master

Download the installation package for Elasticsearch version 6.8.1

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.1.deb

Install the new version

sudo dpkg -i elasticsearch-6.8.1.deb
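
You can verify the installed package version before continuing, for example:

dpkg -s elasticsearch | grep Version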

Elasticsearch should now be successfully updated to version 6.8.1. However, we cannot start up our cluster just yet. First, we need to update the Linux services for each node, since the service definition in version 6.x differs slightly from version 5.x.
The systemd service files of our cluster's nodes are located in /usr/lib/systemd/system/.

Here is an excerpt from /usr/lib/systemd/system/elasticsearch_master.service:

[Service]
Environment=ES_HOME=/usr/share/elasticsearch
Environment=CONF_DIR=/etc/master
Environment=DATA_DIR=/var/lib/elasticsearch/master
Environment=LOG_DIR=/var/log/elasticsearch/master
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/default/elasticsearch

WorkingDirectory=/usr/share/elasticsearch

User=elasticsearch
Group=elasticsearch

ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec

ExecStart=/usr/share/elasticsearch/bin/elasticsearch \
                                                -p ${PID_DIR}/elasticsearch.pid \
                                                -Edefault.path.logs=${LOG_DIR} \
                                                -Edefault.path.data=${DATA_DIR} \
                                                -Edefault.path.conf=${CONF_DIR}

Here is the same excerpt from the service file, updated for version 6.x:

[Service]
Environment=ES_HOME=/usr/share/elasticsearch
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/default/elasticsearch
LimitMEMLOCK=infinity
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_PATH_CONF=/etc/master

WorkingDirectory=/usr/share/elasticsearch

User=elasticsearch
Group=elasticsearch

ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet

Make sure that for every variable you have set in your elasticsearch_*.service files you have also commented out its equivalent in /etc/default/elasticsearch. Otherwise, values in the latter file will override the changes you have made in the former.
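
For example, if ES_PATH_CONF is set in a service file, the corresponding line in /etc/default/elasticsearch should end up commented out. This is only an illustration; the exact contents of the stock file may differ:

# /etc/default/elasticsearch (excerpt)
# Commented out so that the value from the systemd service file is used instead
#ES_PATH_CONF=/etc/elasticsearch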

Service files of the remaining nodes (in our case the following files: elasticsearch_ingest.service, elasticsearch_data1.service, elasticsearch_data2.service and elasticsearch_data3.service) need to be updated in a similar manner.
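
Typically only the configuration directory differs between these service files; for instance, the data1 node's unit file might contain the following line (the directory name /etc/data1 is an assumption for illustration):

Environment=ES_PATH_CONF=/etc/data1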

Each node's service also requires its own elasticsearch.yml file. This file must be located in the directory set in ES_PATH_CONF in the service file as seen above (in the case of the master node it is /etc/master/).
Here is an example of elasticsearch.yml located in /etc/master/. Note the attributes node.master, node.data, and node.ingest; these need to be set according to the role of the particular node and will differ for nodes of other types.

# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: elastic
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: master
# Add custom attributes to the node:
node.master: true
node.data: false
node.ingest: false
node.max_local_storage_nodes: 5
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /data/elasticsearch/data_master
# Path to log files:
path.logs: /var/log/elasticsearch/master
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.

Each node also uses a distinct pair of HTTP and TCP ports specified by attributes http.port and transport.tcp.port.

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 192.168.1.1
# Set a custom port for HTTP:
http.port: 9200
transport.tcp.port: 9300
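
For example, a data node might use the next pair of ports; the transport port matches the discovery list below, while the HTTP port value is an assumption for illustration:

# Network section of a data node's elasticsearch.yml (illustrative values)
http.port: 9201
transport.tcp.port: 9301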

The master node needs to be able to discover the other nodes in the cluster; therefore, the attribute discovery.zen.ping.unicast.hosts contains a list of IPs and transport ports of all the other nodes. On nodes other than the master it will contain only the master's IP and transport port ["192.168.1.1:9300"]:

# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.zen.ping.unicast.hosts: ["192.168.1.1:9301","192.168.1.1:9302","192.168.1.1:9303","192.168.1.1:9304"]
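
On the data and ingest nodes, the same setting points only at the master node:

# Discovery section of a non-master node's elasticsearch.yml
discovery.zen.ping.unicast.hosts: ["192.168.1.1:9300"]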

You should now be able to get the Elasticsearch cluster up and running. Since the service files have been edited, reload systemd first, then start the nodes:

sudo systemctl daemon-reload
sudo systemctl start elasticsearch_master
sudo systemctl start elasticsearch_ingest
sudo systemctl start elasticsearch_data1
sudo systemctl start elasticsearch_data2
sudo systemctl start elasticsearch_data3
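
Once the nodes are up, the cluster state can be checked through the HTTP API of any node, for example the master node configured above:

curl 'http://192.168.1.1:9200/_cluster/health?pretty'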

Moloch – Hardware requirements

Hardware Requirements

The architecture of Moloch enables it to be distributed across multiple devices. For small networks, demonstrations, or home deployments, it is possible to host all the necessary tools on a single device; however, for capturing large volumes of data at high transfer rates, it is recommended not to run Capture and Elasticsearch on the same machine. A demo version of Moloch can be tried directly on the official website. When storage space runs out, Moloch overwrites the oldest data with new data. Moloch can also perform replication, which effectively doubles storage space usage, so we advise thinking this feature through thoroughly before enabling it.

Elasticsearch and number of nodes

The number of nodes (servers) to be used depends on:

  • The amount of RAM available to each node
  • The number of days the metadata (SPI data) will be stored
  • Disk speed
  • Size of the HTTP portion of traffic
  • Average transfer rate of all interfaces
  • Whether the connections are short-term or long-term
  • Required response speed of queries
  • Estimated number of users querying the system at the same time

Keep in mind that storing one day's worth of Elasticsearch metadata (SPI data) at 1 Gbit/s requires roughly 200 GB of disk space. For example, to store 14 days' worth of metadata at an average network traffic of 2.5 Gbit/s, the amount of storage needed is 14 * 2.5 * 200 GB, which amounts to roughly 7 TB.

The formula to approximate the number of Elasticsearch nodes needed is: ¼ * [average network traffic in Gbit/s] * [number of days to be archived]. For example, to archive 20 days' worth of traffic at 1 Gbit/s, 5 nodes would be needed. If Moloch is deployed on higher-performance machines, multiple Elasticsearch nodes can be run on a single device. Since the deployment of additional nodes is a simple task, we recommend starting with fewer nodes and adding new ones until the required query response speed is reached.

Capture

Note that capturing 1 Gbit/s of traffic requires roughly 11 TB of disk space per day for archiving the pcap files alone. For example, to store 7 days' worth of traffic at an average speed of 2.5 Gbit/s, the amount of storage needed is [7 * 2.5 * 11] TB, which amounts to 192.5 TB. Total bandwidth must include both directions of transfer; therefore, a 10G uplink is capable of generating 20 Gbit/s of capture data (10 Gbit/s for each direction). Considering this, it is recommended to have multiple uplinks connected to Moloch. For example, for a 10G uplink with 4 Gbit/s of traffic in both directions, it would be advisable to use two 10G uplinks for capture, since using a single 10G uplink runs a risk of packet loss.
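
The rules of thumb above (roughly 200 GB of SPI data and 11 TB of pcap per day per 1 Gbit/s of traffic, and about ¼ * Gbit/s * days Elasticsearch nodes) can be combined into a quick back-of-the-envelope estimate, for example with a small shell snippet. The traffic rate and retention periods below are illustrative values only:

#!/bin/sh
# Rough Moloch capacity estimate based on the rules of thumb above
# (illustrative inputs: 2.5 Gbit/s average traffic, 14 days of SPI data, 7 days of pcap)
awk -v gbps=2.5 -v spi_days=14 -v pcap_days=7 'BEGIN {
    printf "SPI storage  : %.1f TB\n", spi_days  * gbps * 0.2   # 200 GB per day per Gbit/s
    printf "pcap storage : %.1f TB\n", pcap_days * gbps * 11    # 11 TB per day per Gbit/s
    printf "ES nodes     : %.1f (round up)\n", gbps * spi_days * 0.25
}'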

To capture large amounts of data (several Gbit/s), we advise using the following hardware:

  • RAM: 64 GB to 96 GB
  • OS disks: RAID5 works best. SSDs are not required
  • CAPTURE disks: 20 or more 4 TB or 6 TB SATA disks.
  • RAID: Hardware RAID card with at least 1GB of cache.
  • NIC: New Intel NICs are recommended, but most NICs should work fine.
  • CPU: at least 2*6 cores. The number of cores needed grows with average uplink traffic. Moloch allows for device load balancing through mirroring.

Rather than purchasing additional SSDs or NICs, consider adding another monitoring device instead.

Sources

  • CRZP Komplexný systém pre detekciu útokov a archiváciu dát – Moloch

Setup Kodi to use Tvheadend backend

If you have a working Tvheadend backend available, you can use Kodi as a frontend to watch live TV channels, browse the EPG, and set up and watch TV recordings.

This guide assumes that Kodi is already installed. You can obtain Kodi for Windows/macOS/Linux from the official Kodi webpage: https://kodi.tv/download. For Raspberry Pi, we recommend using the LibreELEC distribution available at: https://libreelec.tv/downloads_new