
Running a FortiGate firewall VM inside GNS3

In this post we describe how to run the FortiGate firewall VM appliance inside GNS3 (local or remote).

Prerequisites and environment

  • GNS3
    • In my case, version 2.1.1 running on a remote Linux server (physical hardware, not the GNS3 VM).
  • FortiGate VM image for KVM
    • In my case, FortiGate for the KVM platform, version 6.2.
    • Download from HERE using a Fortigate.ONE account (can be created for free).
  • GNS3 FortiGate appliance

    Note: FortiGate VM evaluation license

    FortiGate VM includes a limited embedded 15-day trial license that supports:

Multi-tabbed, multi-execution telnet/SSH client

When working on our practical networking lessons, our students, and we as their teachers, usually configure several remotely accessible routers and switches (sometimes up to ten). For this, we welcome the use of multi-tabbed and especially multi-execution clients.

Such a client allows us to organize the workspace efficiently and to run commands as one task on all connected network equipment (for example, to save the running config).
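Where such a GUI client is not at hand, a simple shell loop can approximate the multi-execution feature. This is a minimal sketch only; the addresses, the admin username and the write memory command are placeholders for Cisco-like devices that permit remote command execution over SSH:

# Hypothetical sketch: save the running config on several lab devices at once
# (addresses and credentials are placeholders)
for host in 192.0.2.1 192.0.2.2 192.0.2.3; do
  ssh admin@"$host" "write memory"
done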

Installation of Suricata


Installation and basic setup of Suricata

First, add the latest stable Suricata repository to APT:

sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update

Now you can either install Suricata with:

sudo apt-get install suricata 

or install the Suricata package with built-in (enabled) debugging:

sudo apt-get install suricata-dbg

Basic setup

Start with creating a directory for Suricata’s log information.

sudo mkdir /var/log/suricata

To prepare the system configuration directory, enter:

sudo mkdir /etc/suricata

The next step is to copy classification.config, reference.config and suricata.yaml from the base build/installation directory (e.g. for a git checkout this is the oisf directory) to the /etc/suricata directory. Do so by entering the following:

sudo cp classification.config /etc/suricata
sudo cp reference.config /etc/suricata
sudo cp suricata.yaml /etc/suricata
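With the configuration files in place, you can quickly verify the installation. The first command prints the version and build options; the second starts Suricata against the copied configuration (eth0 is a placeholder, substitute the interface you want to listen on):

suricata --build-info
sudo suricata -c /etc/suricata/suricata.yaml -i eth0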

Auto setup

You can also use the available auto setup features of Suricata:

The make install-conf option will do the regular “make install” and then automatically create/setup all the necessary directories and suricata.yaml.

 ./configure && make && make install-conf

The make install-rules option will do the regular “make install” and it automatically downloads and sets up the latest ruleset from Emerging Threats available for Suricata.

./configure && make && make install-rules

The make install-full option combines everything mentioned above (install-conf and install-rules) and will present you with a ready-to-run (configured and set up) Suricata.

./configure && make && make install-full

Source:

Suricata – Ubuntu installation

Setup Kodi to use Tvheadend backend

If you have a working Tvheadend backend available, you can use Kodi as a frontend to watch live TV channels, browse the EPG, and set up and watch TV recordings.

This guide assumes that Kodi is already installed. You can obtain Kodi for Windows/macOS/Linux from the official Kodi webpage (https://kodi.tv/download). For Raspberry Pi, we recommend the LibreELEC distribution (https://libreelec.tv/downloads_new).

Tclsh script examples: how to generate router loop interfaces with IPv4 addresses

This example shows how to generate 256 loopback interfaces with assigned IPv4 addresses 172.16.0.1/24 up to 172.16.255.1/24. The code is:

enable
tclsh
for {set i 0} {$i < 256} {incr i} {
    ios_config "int loop $i" "ip address 172.16.$i.1 255.255.255.0"
}
ios_config "end"
tclquit

You may simply copy and paste it into a Cisco router CLI. Note that tclsh must be run from privileged EXEC mode, hence the initial enable command.
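To check that the interfaces were created, list them afterwards from privileged EXEC mode:

show ip interface brief | include Loopback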

Moloch v1.7.0 – Installation


Installation of Moloch

  • Author : Miroslav Kohútik
  • Tested version : 1.7.0
  • Operating system : Ubuntu 16.04

Installation of Moloch is no trivial matter; that is why we have prepared this guide on how to set up the system in a cloud environment.

Setup before installation

Before installing Moloch itself, you need to install the Elasticsearch database and make the following changes to the configuration of the operating system.

Add Java repository

sudo add-apt-repository ppa:webupd8team/java 

Update the package lists and upgrade the installed packages to the latest versions

sudo apt-get update -y && sudo apt-get upgrade -y

Download and install the public GPG signing key

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add Elastic Repository

echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Perform another package update

sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y 

Clean-up (Optional)

sudo apt-get autoremove

Disable swap

sudo swapoff -a
sudo nano /etc/fstab

Edit fstab – comment out the following:

#/dev/mapper/logs--vg-swap_1 none     swap   sw      0     0

or

#/dev/mapper/user--vg-swap_1 none     swap   sw      0     0
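After editing fstab, you can verify that no swap devices remain active (the command prints nothing when swap is fully disabled):

sudo swapon --show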

Install Java 8

sudo apt-get install oracle-java8-installer

Install Elasticsearch

sudo apt-get install elasticsearch

Install Moloch

Install additional necessary packages

sudo apt-get install wget curl libpcre3-dev uuid-dev libmagic-dev pkg-config g++ flex bison zlib1g-dev libffi-dev gettext libgeoip-dev make libjson-perl libbz2-dev libwww-perl libpng-dev xz-utils libffi-dev

Download Moloch (https://molo.ch/#downloads)

wget https://files.molo.ch/builds/ubuntu-16.04/moloch_1.7.0-1_amd64.deb

Install Moloch

Note: when asked whether or not to install Elasticsearch, choose no, since you have already installed Elasticsearch earlier and the script offers only the demo version.

sudo dpkg -i moloch_1.7.0-1_amd64.deb

Install dependencies (If the previous step halts due to errors)

sudo apt-get -f install

Configure Moloch

Enable Elasticsearch to start at boot

sudo systemctl enable elasticsearch.service

Configure Elasticsearch (optional) – adjust as needed; the maximum recommended JVM heap allocation is 32 GB

It is recommended to install Elasticsearch on a separate machine

sudo nano /etc/elasticsearch/jvm.options
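In jvm.options, the heap size is set via the -Xms and -Xmx options; both should be set to the same value, and to no more than half of the available RAM. A sketch with placeholder values:

# /etc/elasticsearch/jvm.options – example heap settings (values are placeholders)
-Xms8g
-Xmx8g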

Start Elasticsearch

sudo systemctl start elasticsearch.service

Check Elasticsearch Status

sudo systemctl status elasticsearch.service
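You can also confirm that Elasticsearch responds on its default port:

curl http://localhost:9200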

To configure Moloch, you can either download a configuration file from https://github.com/aol/moloch/wiki/Settings or you can configure Moloch yourself using the following two commands

Before configuring Moloch manually, delete the config.ini file from /data/moloch/etc/

sudo rm /data/moloch/etc/config.ini 

Configure Moloch as needed

sudo /data/moloch/bin/Configure

Initialize Elasticsearch Database

sudo /data/moloch/db/db.pl http://localhost:9200 init
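If the initialization succeeded, the newly created Moloch indices should be visible via the standard Elasticsearch cat API:

curl "http://localhost:9200/_cat/indices?v"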

Install and update npm

sudo apt install npm
npm update

Add Moloch User

sudo /data/moloch/bin/moloch_add_user.sh admin admin PASSWORDGOESHERE --admin

Start Moloch Capture Service

sudo systemctl start molochcapture.service

Check Moloch Capture Service status

sudo systemctl status molochcapture.service

Start Moloch Viewer Service

sudo systemctl start molochviewer.service

Check Moloch Viewer Service status

sudo systemctl status molochviewer.service

Provided you have done everything right so far, you should be able to access the web interface at http://IPADDRESSOFINTERFACE:8005
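If the page does not load, check that the viewer is actually listening on its default port (8005):

sudo ss -tlnp | grep 8005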


Converting Windows Server 2019 Evaluation to Volume

The Evaluation version of Windows Server 2019 can be obtained directly from the Microsoft Evaluation Center. But what should you do with an already installed Evaluation version if you obtain a license? You do not need to reinstall from a non-evaluation ISO; it is possible to convert the Evaluation edition to a Volume edition using these steps:

  1. Get a generic Volume key from Microsoft: https://docs.microsoft.com/en-us/windows-server/get-started/kmsclientkeys or prepare your own key.
  2. Launch Command prompt or PowerShell as Administrator.
  3. Enter the following command (applies for Windows Server 2019 Datacenter):
DISM /online /Set-edition:ServerDatacenter /ProductKey:WMDGN-G9PQG-XVVXX-R3X43-63DFG /AcceptEula

You should reboot the system after the command completes successfully.
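To confirm the conversion, you can query the currently installed edition (DISM can also list the editions a system can be converted to, via /Get-TargetEditions):

DISM /online /Get-CurrentEdition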

Analysis of the ISCX dataset from June 15th


Dataset 2012 – ISCX – Elsevier

In this article we take a closer look at the ISCX IDS 2012 dataset created by the Canadian Institute for Cybersecurity.

Network datasets serve the purpose of training network security systems, namely IDS and IPS.

Analysis of the ISCX dataset from June 15th in Moloch

The size of the PCAP data from this day is 24.5 GB. The dataset is described in three XML files, with the attack captured in the file TestbedTueJun15-3Flows. The description implies a DDoS attack using an IRC botnet.

According to the PCAP data, the most intense part of the attack lasted for one hour, from 21:05 to 22:05. In the XML file, the attack is recorded at 16:04; there is therefore a 5-hour offset between the two data sources.

The attack originated from infected devices in a private network, with the target being the device with IP address 192.168.5.122. Other sessions marked as attacks were of too low intensity to be visibly displayed. According to the XML description, the attack commenced roughly one hour before the start of the most intense part and lasted for five more hours after its end.

This is an illustration of the most intense part of the attack. The IP address with the highest traffic, representing the device under attack, is located in the center of the graph. Network communication that was not part of the attack is displayed as well.

This illustration shows only the originating addresses of sessions with destination address 192.168.5.122 – target of the attack.

Analysis of the dataset in IDS Suricata

Right away, Suricata detected a high volume of P2P BitTorrent traffic (this does not necessarily imply an attack, but rather a violation of network usage policy).

In the early morning hours (3 a.m. to 4 a.m.) of the following day (June 16th), several brute-force attack attempts on the aforementioned IP address (192.168.5.122) were detected. Several attempts of the same attack in the opposite direction were also detected (an internal IP address was attempting to reach an external IP address via SSH).

In addition, Suricata detected several possible Trojans and other malware (about 60), e.g. Blue Botnet, used for attack generation, and Sality, which infects files on Windows. In the afternoon and evening hours, access to the website Regnow.com was also detected. This site is linked to the typical scammers from India who pose as MS support and demand your credit card number for the cleaning of a purportedly infected computer.

Suricata failed to directly detect the ongoing DDoS attack; its only sign was the “STREAM 3way handshake with ack in wrong dir” alert generated between IP addresses 192.168.5.122 and 192.168.4.120 about 150 times per second. However, since the alert always involved the same pair of IP addresses, we were more likely dealing with a DoS attack than a DDoS attack. The underlying TCP anomaly occurred for unknown reasons; were it not for it, there would have been no sign of an attack at all.

Suricata supports rule thresholding, which can be used to detect DDoS attacks. Thresholds have parameters that define the number of matches, the timeframe, and whether the count is tracked by source or destination IP address. A signature for detecting DDoS attacks using thresholding can be found online. However, the test performed on this dataset was unsuccessful even after tuning the aforementioned parameters. Dataset analysis revealed that the malicious packets carry the TCP flags PUSH and ACK, while the signature expected packets with the SYN flag (TCP SYN flood detection). Removing the flags condition made the signature unusable, since its threshold was met even by normal traffic. I therefore edited the rule's TCP flags to PUSH and ACK and tested its functionality.
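A minimal sketch of what such a modified signature might look like; the $HOME_NET target, the count and the sid are placeholder assumptions (the 8500 packets-per-second figure comes from the measurements described below), not the exact rule used in the experiment:

alert tcp any any -> $HOME_NET 80 (msg:"Possible DoS - PSH+ACK flood"; flags:PA; threshold: type both, track by_dst, count 8500, seconds 1; classtype:attempted-dos; sid:1000001; rev:1;)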

It was necessary to find a packets-per-second threshold low enough to trigger the alert for a potential DDoS attack, yet high enough not to trigger it for common traffic, and to make sure that the alert is triggered only for IP addresses involved in the attack.

My work was complicated by the fact that alerts were produced not only by packets incoming from the attacking IP addresses, but also by packets flowing in the opposite direction, because the attacked server tried to respond to all those HTTP requests. I could have restricted the signature to match the server's IP as the destination address, but this would render the signature useless for any other purpose. The solution was to consider only the source IP addresses of the alerts during text post-processing (elimination of duplicates and other superfluous data) and to ignore the destination addresses.
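A sketch of such post-processing, assuming the default fast.log alert format in which the last three fields are SRC:PORT -> DST:PORT; the alert message is a placeholder:

# extract unique source IP addresses from matching alerts
awk '/Possible DoS/ {split($(NF-2), a, ":"); print a[1]}' /var/log/suricata/fast.log | sort -u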

While counting packets by destination IP, I needed to get at least one alert. I concluded that exactly one alert is generated when traffic exceeds 8500 packets per second. The minimal traffic involving only the attacking IP addresses was ca. 800 packets per second.

In contrast, while counting packets by source IP, it is necessary to generate at least one alert for each attacking IP address. The maximum rate at which all the attacking IP addresses were detected was ca. 200 packets per second; the minimum rate at which no attacking IP addresses were detected was ca. 100 packets per second.

Summary of attacks (excluding DDoS attacks) detected by IDS Suricata:

PCAP time Attack Src. IP and port Dest. IP and port
05:58:34.34 Blue Botnet 192.168.3.115:3204 72.32.84.3:80
06:52:16.11 Blue Botnet 192.168.3.115:3745 72.32.84.3:80
07:00:26.37-07:01:49.75 Blue Botnet 192.168.2.108 68.178.178.33:80
07:26:06.70 SSH Scan 192.168.2.107:4611 112.203.155.205:22
09:13:55.36 Sality 192.168.2.107:1193 208.87.32.68:80
09:52:16.48 Sality 192.168.2.112:4139 208.87.32.68:80
11:52:36.59-12:20:50.57 Blue Botnet 192.168.4.119:3056 68.178.178.33:80
12:22:13.68 Blue Botnet 192.168.4.119:3071 68.178.178.33:80
13:42:37.50 Blue Botnet 192.168.3.115:3965 69.20.70.155:80
13:43:17.58 Blue Botnet 192.168.3.115:4147 72.32.84.3:80
13:43:26.02 Sality 192.168.3.115:4165 208.87.32.68:80
13:50:04.74 Blue Botnet 192.168.1.102:1033 67.113.14.176:80
13:51:41.43 Blue Botnet 192.168.1.102:1070 67.113.14.176:80
13:52:37.86 Blue Botnet 192.168.1.105:60212 174.137.114.60:80
13:52:40.68 Blue Botnet 192.168.1.105:60230 174.137.114.60:80
13:52:41.26 Blue Botnet 192.168.1.102:1035 67.113.14.176:80
13:53:53.14 Sality 192.168.4.121:59808 208.87.32.68:80
13:54:08.33 Blue Botnet 192.168.1.105:60267 174.137.114.60:80
13:55:32.63 Sality 192.168.1.105:60415 208.87.32.68:80
13:57:59.03 Sality 192.168.1.105:60542 208.87.32.68:80
14:01:57.07 Sality 192.168.2.113:4080 208.87.32.68:80
14:30:42.14 Sality 192.168.1.102:2012 208.87.32.68:80
15:00:18.78 Sality 192.168.4.120:1197 208.87.32.68:80
15:52:56.48 Sality 192.168.1.105:5486 208.87.32.68:80
16:19:14.08 Sality 192.168.4.120:1825 208.87.32.68:80
16:36:36.10 Blue Botnet 192.168.1.102:1116 76.74.254.123:80
16:39:18.07 Blue Botnet 192.168.1.102:1280 76.74.254.120:80
16:54:38.75 Blue Botnet 192.168.2.111:1629 68.178.178.33:80
16:55:15.07 Sality 192.168.1.102:2409 208.87.32.68:80
17:21:56.68 Blue Botnet 192.168.4.118:1087 60.199.247.118:80
17:56:24.79 Sality 192.168.3.116:2900 208.87.32.68:80
18:54:46.43 Sality 192.168.4.120:1365 208.87.32.68:80
19:17:27.72 Sality 192.168.1.105:18681 208.87.32.68:80
20:27:20.45- 22:06:47.20 IRC messages * *
22:18:00.45 Regnow.com 192.168.4.118:1868 209.87.178.183:80
23:12:26.69 Sality 192.168.1.105:31055 208.87.32.68:80
23:13:43.47- 23:14:04.25 Blue Botnet 192.168.1.101:2190 68.178.178.97:80
23:18:04.48 Regnow.com 192.168.1.102:4394 209.87.178.183:80
00:03:06.58 Sality 192.168.1.105:31329 208.87.32.68:80
03:56:02.22- 03:57:15.40 SSH Scan 217.76.44.243 192.168.5.122:22
04:01:24.27 MSIL/Karmen Rans. 192.168.4.119:2376 208.122.215.180:80
04:36:30.46 SSH Scan 217.76.44.243:57117 192.168.5.122:22

* IRC sessions between the following endpoints:

192.168.1.103:4889 -> 192.168.2.112:6667
192.168.1.105:22348 -> 192.168.2.112:6667
192.168.2.109:2969 -> 192.168.2.112:6667
192.168.2.110:3311 -> 192.168.2.112:6667
192.168.2.112:6667 -> 192.168.1.103:4889
192.168.2.112:6667 -> 192.168.1.105:22348
192.168.2.112:6667 -> 192.168.2.109:2969
192.168.2.112:6667 -> 192.168.2.110:3311
192.168.2.112:6667 -> 192.168.2.113:2581
192.168.2.112:6667 -> 192.168.4.118:3761
192.168.2.112:6667 -> 192.168.4.120:4784
192.168.2.113:2581 -> 192.168.2.112:6667
192.168.4.118:3761 -> 192.168.2.112:6667
192.168.4.120:4784 -> 192.168.2.112:6667

Dataset analysis from XML file and PDF article (JU)

  • Test period: 00:01:06 Friday, 11.06.2010 – 00:01:06 Friday, 18.06.2010

  • We have analyzed Tuesday, 15.06.2010
    • The time difference between XML and PCAP is 5 hours
      (i.e. 16:00 in XML = 21:00 in PCAP)

    How we proceeded:

    1. Information gathering and reconnaissance (passive and active)
    2. Vulnerability identification and scanning
    3. Gaining access and compromising a system
    4. Maintaining access and creating backdoors
    5. Covering tracks

Scenario 1: infiltrating the network from the inside

• We used a DNS request to discover the mail server's IP address. Messages containing a virus were sent to e-mail addresses obtained from the server, which enabled the attackers to access the network. Using the Metasploit tool and a reverse TCP shell on port 5555, the attackers created connections to the devices inside the network.
  • The misused devices belonged to the network 192.168.3.0/24

Scenario 2: HTTP DoS

• The Slowloris tool was used to overload the web server by sending incomplete HTTP requests in order to keep sockets open. Since the number of sockets available to the web server is finite, it is only a matter of time until they are exhausted and the server becomes inaccessible.

Scenario 3: DDoS using an IRC botnet (Internet Relay Chat)

• The bot was sent to users as an update message
• Subsequently, an HTTP GET DoS attack was launched from each infected device, creating hundreds of requests
• The attack lasted for 60 minutes

• AppName marked in XML: HTTPweb
• IP address of the targeted web server: 192.168.5.122
• Beginning and end of the attack:
  • PCAP time: approximately 21:00 – 22:00
  • XML time: 16:04:42 – 17:05:49

First and last sessions that were part of the attack:

• 16:04:42 – 16:04:43
  • IPsource: 192.168.2.113 /Ps: 2677
  • IPdest: 192.168.5.122 /Pd: 80
  • Duration: 1 s
  • Inbound packet count (source): 103
  • Outbound packet count (dest.): 270

• 17:05:48 – 17:05:49
  • IPsource: 192.168.2.110 /Ps: 4131
  • IPdest: 192.168.5.122 /Pd: 80
  • Duration: ?
  • Inbound packet count (source): 78
  • Outbound packet count (dest.): 170

Scenario 4: Brute Force SSH

• Dictionary attack on users

How to kill ESET AV process

ESET AV sometimes prevents some applications from running or being installed, for example the µTorrent client or the VirtualBox extension pack. If pausing the AV does not help, there is the option of killing the AV process in the standard way (using the Task Manager). However, ESET AV has the Self-defense feature enabled by default, which prevents exactly that.

Therefore, to be able to kill the process, this feature has to be disabled. To do that, follow these steps:

Forensic analytic tools



  • Author : Tomáš Mokoš

NetworkMiner

NetworkMiner is a Network Forensic Analysis Tool (NFAT) for Windows operating systems. It can be used as a passive network sniffer/packet capture tool to detect operating systems, sessions, hostnames, open ports, etc. NetworkMiner's primary purpose is the collection of data about network hosts rather than data about network traffic. In addition to live capture, NetworkMiner can also parse PCAP files for off-line analysis and regenerate/reassemble transmitted files and certificates from them. This function can be used for the extraction and archiving of media files transferred through the network; the supported file-extraction protocols are FTP, SMB and HTTP. Extracted user credentials (usernames and passwords) for supported protocols can be found in the Credentials tab. Other useful features include keyword search in the captured/archived data and Nmap MAC vendor lookup.

Xplico

Xplico is an open-source NFAT. The goal of Xplico is the extraction of application data contained in a capture sample of Internet traffic. For example, Xplico can export all e-mails (POP, IMAP and SMTP), HTTP contents, VoIP calls, FTP and TFTP files, etc.

Elastic Stack

Elastic Stack provides reliable and secure transfer of data of any format from any source, together with real-time searching, analysis and visualization. Elastic Stack consists of Kibana, Elasticsearch, Beats and Logstash. Elasticsearch is a search and analytics engine. Beats is a data-gathering platform. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize the data stored in Elasticsearch with charts and graphs.

Sguil

Sguil is built by network security analysts for network security analysts. Its main component is an intuitive GUI that provides access to real-time events, session data and raw packet captures. Sguil facilitates the practice of Network Security Monitoring and event-driven analysis. The Sguil client is written in Tcl/Tk and can be run on any operating system that supports Tcl/Tk (including Linux, BSD, Solaris, macOS and Win32).

Sources:

  • CRZP – Komplexný systém pre detekciu útokov a archiváciu dát (Complex system for attack detection and data archiving) – Moloch