merge with dev and fix conflicts

m0duspwnens
2020-06-04 09:59:12 -04:00
155 changed files with 11668 additions and 2406 deletions

View File

@@ -1,32 +1,34 @@
-## Hybrid Hunter Beta 1.2.1 - Beta 1
+## Hybrid Hunter Beta 1.3.0 - Beta 2
 ### Changes:
-- Full support for Ubuntu 18.04. 16.04 is no longer supported for Hybrid Hunter.
-- Introduction of the Security Onion Console. Once logged in you are directly taken to the SOC.
-- New authentication using Kratos.
-- During install you must specify how you would like to access the SOC ui. This is for strict cookie security.
-- Ability to list and delete web users from the SOC ui.
-- The soremote account is now used to add nodes to the grid vs using socore.
-- Community ID support for Zeek, osquery, and Suricata. You can now tie host events to connection logs!
-- Elastic 7.6.1 with ECS support.
-- New set of Kibana dashboards that align with ECS.
-- Eval mode no longer uses Logstash for parsing (Filebeat -> ES Ingest)
-- Ingest node parsing for osquery-shipped logs (osquery, WEL, Sysmon).
-- Fleet standalone mode with improved Web UI & API access control.
-- Improved Fleet integration support.
-- Playbook now has full Windows Sigma community ruleset builtin.
-- Automatic Sigma community rule updates.
-- Playbook stability enhancements.
-- Zeek health check. Zeek will now auto restart if a worker crashes.
-- zeekctl is now managed by salt.
-- Grafana dashboard improvements and cleanup.
-- Moved logstash configs to pillars.
-- Salt logs moved to /opt/so/log/salt.
-- Strelka integrated for file-oriented detection/analysis at scale
-### Known issues:
+- New Feature: Codename: "Onion Hunt". Select Hunt from the menu and start hunting down your adversaries!
+- Improved ECS support.
+- Complete refactor of the setup to make it easier to follow.
+- Improved setup script logging to better assist on any issues.
+- Setup now checks for minimal requirements during install.
+- Updated Cyberchef to version 9.20.3.
+- Updated Elastalert to version 0.2.4 and switched to alpine to reduce container size.
+- Updated Redis to 5.0.9 and switched to alpine to reduce container size.
+- Updated Salt to 2019.2.5
+- Updated Grafana to 6.7.3.
+- Zeek 3.0.6
+- Suricata 4.1.8
+- Fixes so-status to now display correct containers and status.
+- local.zeek is now controlled by a pillar instead of modifying the file directly.
+- Renamed so-core to so-nginx and switched to alpine to reduce container size.
+- Playbook now uses MySQL instead of SQLite.
+- Sigma rules have all been updated.
+- Kibana dashboard improvements for ECS.
+- Fixed an issue where geoip was not properly parsed.
+- ATT&CK Navigator is now it's own state.
+- Standlone mode is now supported.
+- Mastersearch previously used the same Grafana dashboard as a Search node. It now has its own dashboard that incorporates panels from the Master node and Search node dashboards.
+### Known Issues:
+- The Hunt feature is currently considered "Preview" and although very useful in its current state, not everything works. We wanted to get this out as soon as possible to get the feedback from you! Let us know what you want to see! Let us know what you think we should call it!
+- You cannot pivot to PCAP from Suricata alerts in Kibana or Hunt.
 - Updating users via the SOC ui is known to fail. To change a user, delete the user and re-add them.
 - Due to the move to ECS, the current Playbook plays may not alert correctly at this time.
 - The osquery MacOS package does not install correctly.

View File

@@ -1 +1 @@
-1.2.2
+1.3.0

View File

@@ -37,7 +37,9 @@ log_file: /opt/so/log/salt/master
 #
 file_roots:
   base:
-    - /opt/so/saltstack/salt
+    - /opt/so/saltstack/local/salt
+    - /opt/so/saltstack/default/salt
 # The master_roots setting configures a master-only copy of the file_roots dictionary,
 # used by the state compiler.
@@ -53,7 +55,8 @@ file_roots:
 pillar_roots:
   base:
-    - /opt/so/saltstack/pillar
+    - /opt/so/saltstack/local/pillar
+    - /opt/so/saltstack/default/pillar
 peer:
   .*:
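A note on semantics: Salt's fileserver searches the directories listed for an environment in order and serves the first match, so a state file in `local/salt` now shadows the identically named file in `default/salt`; the same layering is applied to pillar data. The effective layout after this change (a sketch, not the full master config):

```yaml
# local/ is listed first, so a file present in both trees is served
# from local/; default/ is only consulted as a fallback.
file_roots:
  base:
    - /opt/so/saltstack/local/salt
    - /opt/so/saltstack/default/salt
pillar_roots:
  base:
    - /opt/so/saltstack/local/pillar
    - /opt/so/saltstack/default/pillar
```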

View File

@@ -1,7 +1,8 @@
 #!/usr/bin/env bash
 # This script adds sensors/nodes/etc to the nodes tab
+default_salt_dir=/opt/so/saltstack/default
+local_salt_dir=/opt/so/saltstack/local
 TYPE=$1
 NAME=$2
 IPADDRESS=$3
@@ -15,7 +16,7 @@ MONINT=$9
 #HOTNAME=$11
 echo "Seeing if this host is already in here. If so delete it"
-if grep -q $NAME "/opt/so/saltstack/pillar/data/$TYPE.sls"; then
+if grep -q $NAME "$local_salt_dir/pillar/data/$TYPE.sls"; then
   echo "Node Already Present - Let's re-add it"
   awk -v blah=" $NAME:" 'BEGIN{ print_flag=1 }
   {
@@ -31,27 +32,29 @@ if grep -q $NAME "/opt/so/saltstack/pillar/data/$TYPE.sls"; then
   if ( print_flag == 1 )
     print $0
-  } ' /opt/so/saltstack/pillar/data/$TYPE.sls > /opt/so/saltstack/pillar/data/tmp.$TYPE.sls
-  mv /opt/so/saltstack/pillar/data/tmp.$TYPE.sls /opt/so/saltstack/pillar/data/$TYPE.sls
+  } ' $local_salt_dir/pillar/data/$TYPE.sls > $local_salt_dir/pillar/data/tmp.$TYPE.sls
+  mv $local_salt_dir/pillar/data/tmp.$TYPE.sls $local_salt_dir/pillar/data/$TYPE.sls
   echo "Deleted $NAME from the tab. Now adding it in again with updated info"
 fi
-echo " $NAME:" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " ip: $IPADDRESS" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " manint: $MANINT" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " totalcpus: $CPUS" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " guid: $GUID" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " rootfs: $ROOTFS" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-echo " nsmfs: $NSM" >> /opt/so/saltstack/pillar/data/$TYPE.sls
+echo " $NAME:" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " ip: $IPADDRESS" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " manint: $MANINT" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " totalcpus: $CPUS" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " guid: $GUID" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " rootfs: $ROOTFS" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " nsmfs: $NSM" >> $local_salt_dir/pillar/data/$TYPE.sls
 if [ $TYPE == 'sensorstab' ]; then
-  echo " monint: $MONINT" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-  salt-call state.apply common queue=True
+  echo " monint: $MONINT" >> $local_salt_dir/pillar/data/$TYPE.sls
+  salt-call state.apply grafana queue=True
 fi
 if [ $TYPE == 'evaltab' ]; then
-  echo " monint: $MONINT" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-  salt-call state.apply common queue=True
-  salt-call state.apply utility queue=True
+  echo " monint: $MONINT" >> $local_salt_dir/pillar/data/$TYPE.sls
+  if [ ! $10 ]; then
+    salt-call state.apply grafana queue=True
+    salt-call state.apply utility queue=True
+  fi
 fi
 #if [ $TYPE == 'nodestab' ]; then
-#  echo " nodetype: $NODETYPE" >> /opt/so/saltstack/pillar/data/$TYPE.sls
-#  echo " hotname: $HOTNAME" >> /opt/so/saltstack/pillar/data/$TYPE.sls
+#  echo " nodetype: $NODETYPE" >> $local_salt_dir/pillar/data/$TYPE.sls
+#  echo " hotname: $HOTNAME" >> $local_salt_dir/pillar/data/$TYPE.sls
 #fi
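The awk fragments above implement a "delete the node's existing block, then append fresh values" pattern. A self-contained sketch of that pattern (the `remove_node` helper, file layout, and node names here are invented for illustration — this is not the script's exact awk program):

```shell
#!/usr/bin/env bash
# Remove a two-space-indented node entry and its deeper-indented children,
# mimicking the delete-then-re-add flow used when a node already exists.
remove_node() {
  local name=$1 file=$2
  awk -v node="  ${name}:" '
    $0 == node { skip = 1; next }   # start skipping at the node line
    skip && /^    / { next }        # drop the indented child lines
    { skip = 0; print }             # any other line ends the skip
  ' "$file" > "${file}.tmp" && mv "${file}.tmp" "$file"
}

f=$(mktemp)
printf '%s\n' 'sensorstab:' '  sensor1:' '    ip: 10.0.0.1' '  sensor2:' '    ip: 10.0.0.2' > "$f"
remove_node sensor1 "$f"
cat "$f"
```

After the call, `sensor1` and its `ip` line are gone while `sensor2` is untouched, so the caller can append the node's updated block without producing duplicates.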

View File

@@ -1 +0,0 @@
evaltab:

View File

@@ -1 +0,0 @@
mastertab:

View File

@@ -1 +0,0 @@
nodestab:

View File

@@ -1 +0,0 @@
sensorstab:

View File

@@ -1,13 +1,13 @@
 #!/usr/bin/env bash
 # This script adds ip addresses to specific rule sets defined by the user
+local_salt_dir=/opt/so/saltstack/local
 POLICY=$1
 IPADDRESS=$2
-if grep -q $2 "/opt/so/saltstack/pillar/firewall/$1.sls"; then
+if grep -q $2 "$local_salt_dir/pillar/firewall/$1.sls"; then
   echo "Firewall Rule Already There"
 else
-  echo " - $2" >> /opt/so/saltstack/pillar/firewall/$1.sls
+  echo " - $2" >> $local_salt_dir/pillar/firewall/$1.sls
   salt-call state.apply firewall queue=True
 fi
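The grep-before-append above is what makes repeated calls safe. A quick standalone illustration of that idempotent append (the temp file and IP are made up, and the `salt-call` step is omitted):

```shell
# Append an IP to a YAML allow-list only if it is not already present.
f=$(mktemp)
echo 'analyst:' > "$f"
add_ip() {
  if grep -q "$1" "$f"; then
    echo "Firewall Rule Already There"
  else
    echo "  - $1" >> "$f"
  fi
}
add_ip 192.168.1.50
add_ip 192.168.1.50   # second call only prints the "already there" message
cat "$f"
```

Like the original, this matches substrings with an unanchored `grep`, which is good enough to keep the pillar free of exact duplicates.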

View File

@@ -0,0 +1,5 @@
healthcheck:
  enabled: False
  schedule: 300
  checks:
    - zeek

View File

@@ -5,6 +5,7 @@ logstash:
   - so/0900_input_redis.conf.jinja
   - so/9000_output_zeek.conf.jinja
   - so/9002_output_import.conf.jinja
+  - so/9034_output_syslog.conf.jinja
   - so/9100_output_osquery.conf.jinja
   - so/9400_output_suricata.conf.jinja
   - so/9500_output_beats.conf.jinja

View File

@@ -2,7 +2,7 @@ base:
   '*':
     - patch.needs_restarting
-  '*_eval or *_helix or *_heavynode or *_sensor':
+  '*_eval or *_helix or *_heavynode or *_sensor or *_standalone':
     - match: compound
     - zeek
@@ -40,6 +40,18 @@ base:
     - healthcheck.eval
     - minions.{{ grains.id }}
+  '*_standalone':
+    - logstash
+    - logstash.master
+    - logstash.search
+    - firewall.*
+    - data.*
+    - brologs
+    - secrets
+    - healthcheck.standalone
+    - static
+    - minions.{{ grains.id }}
   '*_node':
     - static
     - firewall.*

View File

@@ -13,21 +13,100 @@ socore:
     - createhome: True
     - shell: /bin/bash
 # Create a state directory
 statedir:
   file.directory:
     - name: /opt/so/state
     - user: 939
     - group: 939
     - makedirs: True
-# Install packages needed for the sensor
-sensorpkgs:
-  pkg.installed:
-    - skip_suggestions: False
-    - pkgs:
-      - wget
-      - jq
-{% if grains['os'] != 'CentOS' %}
-      - apache2-utils
-{% else %}
-      - net-tools
-      - tcpdump
-      - httpd-tools
-{% endif %}
+salttmp:
+  file.directory:
+    - name: /opt/so/tmp
+    - user: 939
+    - group: 939
+    - makedirs: True
+# Install epel
+{% if grains['os'] == 'CentOS' %}
+epel:
+  pkg.installed:
+    - skip_suggestions: True
+    - pkgs:
+      - epel-release
+{% endif %}
+# Install common packages
+{% if grains['os'] != 'CentOS' %}
+commonpkgs:
+  pkg.installed:
+    - skip_suggestions: True
+    - pkgs:
+      - apache2-utils
+      - wget
+      - ntpdate
+      - jq
+      - python3-docker
+      - docker-ce
+      - curl
+      - ca-certificates
+      - software-properties-common
+      - apt-transport-https
+      - openssl
+      - netcat
+      - python3-mysqldb
+      - sqlite3
+      - argon2
+      - libssl-dev
+      - python3-dateutil
+      - python3-m2crypto
+      - python3-mysqldb
+      - git
+heldpackages:
+  pkg.installed:
+    - pkgs:
+      - containerd.io: 1.2.13-2
+      - docker-ce: 5:19.03.9~3-0~ubuntu-bionic
+    - hold: True
+    - update_holds: True
+{% else %}
+commonpkgs:
+  pkg.installed:
+    - skip_suggestions: True
+    - pkgs:
+      - wget
+      - ntpdate
+      - bind-utils
+      - jq
+      - tcpdump
+      - httpd-tools
+      - net-tools
+      - curl
+      - sqlite
+      - argon2
+      - mariadb-devel
+      - nmap-ncat
+      - python3
+      - python36-docker
+      - python36-dateutil
+      - python36-m2crypto
+      - python36-mysql
+      - yum-utils
+      - device-mapper-persistent-data
+      - lvm2
+      - openssl
+      - git
+heldpackages:
+  pkg.installed:
+    - pkgs:
+      - containerd.io: 1.2.13-3.2.el7
+      - docker-ce: 3:19.03.11-3.el7
+    - hold: True
+    - update_holds: True
+{% endif %}
 # Always keep these packages up to date

View File

@@ -1,5 +1,6 @@
 {% set docker = {
   'containers': [
+    'so-filebeat',
     'so-nginx',
     'so-telegraf',
     'so-dockerregistry',

View File

@@ -5,7 +5,7 @@
     'so-telegraf',
     'so-soc',
     'so-kratos',
-    'so-acng',
+    'so-aptcacherng',
     'so-idstools',
     'so-redis',
     'so-elasticsearch',

View File

@@ -4,7 +4,7 @@
     'so-telegraf',
     'so-soc',
     'so-kratos',
-    'so-acng',
+    'so-aptcacherng',
     'so-idstools',
     'so-redis',
     'so-logstash',

View File

@@ -1,6 +1,5 @@
 {% set docker = {
   'containers': [
-    'so-nginx',
     'so-telegraf',
     'so-steno',
     'so-suricata',

View File

@@ -18,7 +18,7 @@
   }
 },grain='id', merge=salt['pillar.get']('docker')) %}
-{% if role == 'eval' %}
+{% if role in ['eval', 'mastersearch', 'master', 'standalone'] %}
 {{ append_containers('master', 'grafana', 0) }}
 {{ append_containers('static', 'fleet_master', 0) }}
 {{ append_containers('master', 'wazuh', 0) }}
@@ -28,30 +28,14 @@
 {{ append_containers('master', 'domainstats', 0) }}
 {% endif %}
-{% if role == 'heavynode' %}
+{% if role in ['eval', 'heavynode', 'sensor', 'standalone'] %}
+{{ append_containers('static', 'strelka', 0) }}
+{% endif %}
+{% if role in ['heavynode', 'standalone'] %}
 {{ append_containers('static', 'broversion', 'SURICATA') }}
 {% endif %}
-{% if role == 'mastersearch' %}
-{{ append_containers('master', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_master', 0) }}
-{{ append_containers('master', 'wazuh', 0) }}
-{{ append_containers('master', 'thehive', 0) }}
-{{ append_containers('master', 'playbook', 0) }}
-{{ append_containers('master', 'freq', 0) }}
-{{ append_containers('master', 'domainstats', 0) }}
-{% endif %}
-{% if role == 'master' %}
-{{ append_containers('master', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_master', 0) }}
-{{ append_containers('master', 'wazuh', 0) }}
-{{ append_containers('master', 'thehive', 0) }}
-{{ append_containers('master', 'playbook', 0) }}
-{{ append_containers('master', 'freq', 0) }}
-{{ append_containers('master', 'domainstats', 0) }}
-{% endif %}
 {% if role == 'searchnode' %}
 {{ append_containers('master', 'wazuh', 0) }}
 {% endif %}

View File

@@ -0,0 +1,21 @@
{% set docker = {
  'containers': [
    'so-nginx',
    'so-telegraf',
    'so-soc',
    'so-kratos',
    'so-aptcacherng',
    'so-idstools',
    'so-redis',
    'so-logstash',
    'so-elasticsearch',
    'so-curator',
    'so-kibana',
    'so-elastalert',
    'so-filebeat',
    'so-suricata',
    'so-steno',
    'so-dockerregistry',
    'so-soctopus'
  ]
} %}

View File

@@ -0,0 +1,9 @@
{% set docker = {
  'containers': [
    'so-strelka-coordinator',
    'so-strelka-gatekeeper',
    'so-strelka-manager',
    'so-strelka-frontend',
    'so-strelka-filestream'
  ]
} %}

View File

@@ -17,6 +17,9 @@
 . /usr/sbin/so-common
+default_salt_dir=/opt/so/saltstack/default
+local_salt_dir=/opt/so/saltstack/local
 SKIP=0
 while getopts "abowi:" OPTION
@@ -80,10 +83,10 @@ if [ "$SKIP" -eq 0 ]; then
 fi
 echo "Adding $IP to the $FULLROLE role. This can take a few seconds"
-/opt/so/saltstack/pillar/firewall/addfirewall.sh $FULLROLE $IP
+$default_salt_dir/pillar/firewall/addfirewall.sh $FULLROLE $IP
 # Check if Wazuh enabled
-if grep -q -R "wazuh: 1" /opt/so/saltstack/pillar/*; then
+if grep -q -R "wazuh: 1" $local_salt_dir/pillar/*; then
   # If analyst, add to Wazuh AR whitelist
   if [ "$FULLROLE" == "analyst" ]; then
     WAZUH_MGR_CFG="/opt/so/wazuh/etc/ossec.conf"

View File

@@ -1,11 +1,12 @@
 #!/bin/bash
+local_salt_dir=/opt/so/saltstack/local
 bro_logs_enabled() {
-  echo "brologs:" > /opt/so/saltstack/pillar/brologs.sls
-  echo " enabled:" >> /opt/so/saltstack/pillar/brologs.sls
+  echo "brologs:" > $local_salt_dir/pillar/brologs.sls
+  echo " enabled:" >> $local_salt_dir/pillar/brologs.sls
   for BLOG in ${BLOGS[@]}; do
-    echo " - $BLOG" | tr -d '"' >> /opt/so/saltstack/pillar/brologs.sls
+    echo " - $BLOG" | tr -d '"' >> $local_salt_dir/pillar/brologs.sls
   done
 }

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
+#
 # Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
 #
 # This program is free software: you can redistribute it and/or modify
@@ -17,4 +17,5 @@
 . /usr/sbin/so-common
-/usr/sbin/so-restart cortex $1
+/usr/sbin/so-stop cortex $1
+/usr/sbin/so-start thehive $1

View File

@@ -17,4 +17,4 @@
 . /usr/sbin/so-common
-/usr/sbin/so-start cortex $1
+/usr/sbin/so-start thehive $1

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
+#
 # Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
 #
 # This program is free software: you can redistribute it and/or modify

View File

@@ -0,0 +1,112 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
got_root(){
  if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run using sudo!"
    exit 1
  fi
}
master_check() {
  # Check to see if this is a master
  MASTERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
  if [ $MASTERCHECK == 'so-eval' ] || [ $MASTERCHECK == 'so-master' ] || [ $MASTERCHECK == 'so-mastersearch' ] || [ $MASTERCHECK == 'so-standalone' ] || [ $MASTERCHECK == 'so-helix' ]; then
    echo "This is a master. We can proceed"
  else
    echo "Please run soup on the master. The master controls all updates."
    exit 1
  fi
}
update_docker_containers() {
  # Download the containers from the interwebs
  for i in "${TRUSTED_CONTAINERS[@]}"
  do
    # Pull down the trusted docker image
    echo "Downloading $i"
    docker pull --disable-content-trust=false docker.io/soshybridhunter/$i
    # Tag it with the new registry destination
    docker tag soshybridhunter/$i $HOSTNAME:5000/soshybridhunter/$i
    docker push $HOSTNAME:5000/soshybridhunter/$i
  done
}
version_check() {
  if [ -f /etc/soversion ]; then
    VERSION=$(cat /etc/soversion)
  else
    echo "Unable to detect version. I will now terminate."
    exit 1
  fi
}
got_root
master_check
version_check
# Use the hostname
HOSTNAME=$(hostname)
BUILD=HH
# List all the containers
if [ $MASTERCHECK != 'so-helix' ]; then
  TRUSTED_CONTAINERS=( \
    "so-acng:$BUILD$VERSION" \
    "so-thehive-cortex:$BUILD$VERSION" \
    "so-curator:$BUILD$VERSION" \
    "so-domainstats:$BUILD$VERSION" \
    "so-elastalert:$BUILD$VERSION" \
    "so-elasticsearch:$BUILD$VERSION" \
    "so-filebeat:$BUILD$VERSION" \
    "so-fleet:$BUILD$VERSION" \
    "so-fleet-launcher:$BUILD$VERSION" \
    "so-freqserver:$BUILD$VERSION" \
    "so-grafana:$BUILD$VERSION" \
    "so-idstools:$BUILD$VERSION" \
    "so-influxdb:$BUILD$VERSION" \
    "so-kibana:$BUILD$VERSION" \
    "so-kratos:$BUILD$VERSION" \
    "so-logstash:$BUILD$VERSION" \
    "so-mysql:$BUILD$VERSION" \
    "so-navigator:$BUILD$VERSION" \
    "so-nginx:$BUILD$VERSION" \
    "so-playbook:$BUILD$VERSION" \
    "so-redis:$BUILD$VERSION" \
    "so-soc:$BUILD$VERSION" \
    "so-soctopus:$BUILD$VERSION" \
    "so-steno:$BUILD$VERSION" \
    "so-strelka:$BUILD$VERSION" \
    "so-suricata:$BUILD$VERSION" \
    "so-telegraf:$BUILD$VERSION" \
    "so-thehive:$BUILD$VERSION" \
    "so-thehive-es:$BUILD$VERSION" \
    "so-wazuh:$BUILD$VERSION" \
    "so-zeek:$BUILD$VERSION" )
else
  TRUSTED_CONTAINERS=( \
    "so-filebeat:$BUILD$VERSION" \
    "so-idstools:$BUILD$VERSION" \
    "so-logstash:$BUILD$VERSION" \
    "so-nginx:$BUILD$VERSION" \
    "so-redis:$BUILD$VERSION" \
    "so-steno:$BUILD$VERSION" \
    "so-suricata:$BUILD$VERSION" \
    "so-telegraf:$BUILD$VERSION" \
    "so-zeek:$BUILD$VERSION" )
fi
update_docker_containers
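For clarity, the image references the loop above pulls and pushes are assembled from `BUILD` and `VERSION`; a minimal dry run of that string construction (the hostname and version values are examples, and nothing is actually pulled):

```shell
# Build the source and destination image references the way the script does.
BUILD=HH
VERSION=1.3.0        # normally read from /etc/soversion
HOSTNAME=master      # normally $(hostname)
i="so-nginx:$BUILD$VERSION"
src="docker.io/soshybridhunter/$i"
dst="$HOSTNAME:5000/soshybridhunter/$i"
echo "$src -> $dst"
```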

View File

@@ -166,8 +166,7 @@ cat << EOF
 What elasticsearch index do you want to use?
 Below are the default Index Patterns used in Security Onion:
-*:logstash-*
-*:logstash-beats-*
+*:so-ids-*
 *:elastalert_status*
 EOF

View File

@@ -15,12 +15,13 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
+default_salt_dir=/opt/so/saltstack/default
 ELASTICSEARCH_HOST="{{ MASTERIP}}"
 ELASTICSEARCH_PORT=9200
 #ELASTICSEARCH_AUTH=""
 # Define a default directory to load pipelines from
-ELASTICSEARCH_TEMPLATES="/opt/so/saltstack/salt/logstash/pipelines/templates/so/"
+ELASTICSEARCH_TEMPLATES="$default_salt_dir/salt/logstash/pipelines/templates/so/"
 # Wait for ElasticSearch to initialize
 echo -n "Waiting for ElasticSearch..."

View File

@@ -15,10 +15,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
 . /usr/sbin/so-common
-VERSION=$(grep soversion /opt/so/saltstack/pillar/static.sls | cut -d':' -f2|sed 's/ //g')
+local_salt_dir=/opt/so/saltstack/local
+VERSION=$(grep soversion $local_salt_dir/pillar/static.sls | cut -d':' -f2|sed 's/ //g')
 # Modify static.sls to enable Features
-sed -i 's/features: False/features: True/' /opt/so/saltstack/pillar/static.sls
+sed -i 's/features: False/features: True/' $local_salt_dir/pillar/static.sls
 SUFFIX="-features"
 TRUSTED_CONTAINERS=( \
   "so-elasticsearch:$VERSION$SUFFIX" \

View File

@@ -1,4 +1,7 @@
 #!/bin/bash
+local_salt_dir=/opt/so/saltstack/local
 got_root() {
   # Make sure you are root
@@ -10,13 +13,13 @@ got_root() {
 }
 got_root
-if [ ! -f /opt/so/saltstack/pillar/fireeye/init.sls ]; then
+if [ ! -f $local_salt_dir/pillar/fireeye/init.sls ]; then
   echo "This is nto configured for Helix Mode. Please re-install."
   exit
 else
   echo "Enter your Helix API Key: "
   read APIKEY
-  sed -i "s/^ api_key.*/ api_key: $APIKEY/g" /opt/so/saltstack/pillar/fireeye/init.sls
+  sed -i "s/^ api_key.*/ api_key: $APIKEY/g" $local_salt_dir/pillar/fireeye/init.sls
   docker stop so-logstash
   docker rm so-logstash
   echo "Restarting Logstash for updated key"

salt/common/tools/sbin/so-kibana-config-export Normal file → Executable file
View File

@@ -1,6 +1,35 @@
 #!/bin/bash
-KIBANA_HOST=10.66.166.141
+#
+# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
+#
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+KIBANA_HOST={{ MASTER }}
 KSO_PORT=5601
-OUTFILE="saved_objects.json"
-curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": "index-pattern", "type": "config", "type": "dashboard", "type": "query", "type": "search", "type": "url", "type": "visualization" }' -o $OUTFILE
+OUTFILE="saved_objects.ndjson"
+curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
+# Clean up using PLACEHOLDER
+sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE
+# Clean up for Fleet, if applicable
+# {% if FLEET_NODE or FLEET_MASTER %}
+# Fleet IP
+sed -i "s/{{ FLEET_IP }}/FLEETPLACEHOLDER/g" $OUTFILE
+# {% endif %}

View File

@@ -0,0 +1,57 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
default_salt_dir=/opt/so/saltstack/default
clone_to_tmp() {
  # TODO Need to add a air gap option
  # Make a temp location for the files
  mkdir /tmp/sogh
  cd /tmp/sogh
  #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
  git clone https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
  cd /tmp
}
copy_new_files() {
  # Copy new files over to the salt dir
  cd /tmp/sogh/securityonion-saltstack
  git checkout $BRANCH
  rsync -a --exclude-from 'exclude-list.txt' salt $default_salt_dir/
  rsync -a --exclude-from 'exclude-list.txt' pillar $default_salt_dir/
  chown -R socore:socore $default_salt_dir/salt
  chown -R socore:socore $default_salt_dir/pillar
  chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh
  rm -rf /tmp/sogh
}
got_root(){
  if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run using sudo!"
    exit 1
  fi
}
got_root
if [ $# -ne 1 ] ; then
  BRANCH=master
else
  BRANCH=$1
fi
clone_to_tmp
copy_new_files

View File

@@ -32,5 +32,5 @@ fi
 case $1 in
   "all") salt-call state.highstate queue=True;;
   "steno") if docker ps | grep -q so-$1; then printf "\n$1 is already running!\n\n"; else docker rm so-$1 >/dev/null 2>&1 ; salt-call state.apply pcap queue=True; fi ;;
-  *) if docker ps | grep -q so-$1; then printf "\n$1 is already running\n\n"; else docker rm so-$1 >/dev/null 2>&1 ; salt-call state.apply $1 queue=True; fi ;;
+  *) if docker ps | grep -E -q '^so-$1$'; then printf "\n$1 is already running\n\n"; else docker rm so-$1 >/dev/null 2>&1 ; salt-call state.apply $1 queue=True; fi ;;
 esac
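One wrinkle worth noting about the anchored pattern above: shell parameters do not expand inside single quotes, so `'^so-$1$'` stays a literal string. For the anchors to apply to the requested container name, the pattern needs double quotes, as in this standalone sketch of exact-name matching (the container list is invented):

```shell
# Exact-match check for a container name: ^ and $ anchor the pattern so
# "steno" does not match "so-stenographer". Sample data is invented.
running=$'so-steno\nso-suricata\nso-stenographer'
is_running() {
  # Double quotes let ${1} expand while keeping the anchors in place.
  printf '%s\n' "$running" | grep -E -q "^so-${1}$"
}
is_running steno && echo "steno is running"
is_running sten || echo "sten is not running"
```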

View File

@@ -15,7 +15,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 {%- from 'common/maps/so-status.map.jinja' import docker with context %}
-{%- set container_list = docker['containers'] %}
+{%- set container_list = docker['containers'] | sort %}
 if ! [ "$(id -u)" = 0 ]; then
   echo "This command must be run as root"

View File

@@ -0,0 +1,21 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
/usr/sbin/so-stop thehive-es $1
/usr/sbin/so-start thehive $1

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
/usr/sbin/so-start thehive $1

View File

@@ -0,0 +1,20 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
/usr/sbin/so-stop thehive-es $1

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
+#
 # Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
 #
 # This program is free software: you can redistribute it and/or modify

View File

@@ -1,5 +1,5 @@
 #!/bin/bash
+#
 # Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
 #
 # This program is free software: you can redistribute it and/or modify

View File

@@ -0,0 +1,39 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Show Zeek stats (capstats, netstats)
show_stats() {
echo '##############'
echo '# Zeek Stats #'
echo '##############'
echo
echo "Average throughput:"
echo
docker exec -it so-zeek /opt/zeek/bin/zeekctl capstats
echo
echo "Average packet loss:"
echo
docker exec -it so-zeek /opt/zeek/bin/zeekctl netstats
echo
}
if docker ps | grep -q zeek; then
show_stats
else
echo "Zeek is not running! Try starting it with 'so-zeek-start'." && exit 1;
fi

View File

@@ -1,12 +1,8 @@
-{% if grains['role'] == 'so-node' %}
-
-{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
-
-{% elif grains['role'] == 'so-eval' %}
-{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
-{%- endif %}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
+{%- endif -%}
 ---
 # Remember, leave a key empty if there is no value. None will be a string,
@@ -28,9 +24,8 @@ actions:
     disable_action: False
     filters:
     - filtertype: pattern
-      kind: prefix
-      value: logstash-
-      exclude:
+      kind: regex
+      value: '^(logstash-.*|so-.*)$'
     - filtertype: age
       source: name
       direction: older

View File

@@ -1,11 +1,7 @@
-{% if grains['role'] == 'so-node' %}
-
-{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
-
-{% elif grains['role'] == 'so-eval' %}
-{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
 {%- endif %}
 ---
 # Remember, leave a key empty if there is no value. None will be a string,
@@ -24,8 +20,8 @@ actions:
     disable_action: False
     filters:
     - filtertype: pattern
-      kind: prefix
-      value: logstash-
+      kind: regex
+      value: '^(logstash-.*|so-.*)$'
    - filtertype: space
      source: creation_date
      use_age: True

View File

@@ -1,17 +1,13 @@
-{% if grains['role'] == 'so-node' %}
-
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
-
-{% elif grains['role'] == 'so-eval' %}
-
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
-{%- endif %}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
+{%- endif -%}
 #!/bin/bash
 #
@@ -37,17 +33,17 @@ LOG="/opt/so/log/curator/so-curator-closed-delete.log"
 # Check for 2 conditions:
 # 1. Are Elasticsearch indices using more disk space than LOG_SIZE_LIMIT?
-# 2. Are there any closed logstash- indices that we can delete?
+# 2. Are there any closed logstash-, or so- indices that we can delete?
 # If both conditions are true, keep on looping until one of the conditions is false.
 while [[ $(du -hs --block-size=1GB /nsm/elasticsearch/nodes | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]] &&
-      curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep "^ close logstash-" > /dev/null; do
+      curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E "^ close (logstash-|so-)" > /dev/null; do
 # We need to determine OLDEST_INDEX.
-# First, get the list of closed indices that are prefixed with "logstash-".
+# First, get the list of closed indices that are prefixed with "logstash-" or "so-".
 # For example: logstash-ids-YYYY.MM.DD
 # Then, sort by date by telling sort to use hyphen as delimiter and then sort on the third field.
 # Finally, select the first entry in that sorted list.
-OLDEST_INDEX=$(curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep "^ close logstash-" | awk '{print $2}' | sort -t- -k3 | head -1)
+OLDEST_INDEX=$(curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E "^ close (logstash-|so-)" | awk '{print $2}' | sort -t- -k3 | head -1)
 # Now that we've determined OLDEST_INDEX, ask Elasticsearch to delete it.
 curl -XDELETE {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
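The oldest-index selection above relies on `sort -t- -k3`, which splits index names on hyphens and compares everything from the date suffix onward. A small Python sketch (with hypothetical index names, not taken from a live grid) shows the same selection logic:

```python
# Hypothetical closed-index names; the real list comes from the _cat/indices query.
indices = ['logstash-ids-2020.03.01', 'logstash-firewall-2020.01.15', 'so-zeek-2020.02.10']

# sort -t- -k3 orders on the third hyphen-separated field onward, i.e. the
# YYYY.MM.DD suffix, so the lexicographic minimum is the oldest index.
oldest = min(indices, key=lambda name: name.split('-', 2)[2])
print(oldest)  # logstash-firewall-2020.01.15
```

Because the date format is zero-padded YYYY.MM.DD, plain lexicographic ordering matches chronological ordering, which is what lets the shell script get away with a textual `sort`.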

View File

@@ -1,11 +1,7 @@
-{% if grains['role'] == 'so-node' %}
-
-{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
-
-{% elif grains['role'] == 'so-eval' %}
-{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
+{% if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
+{% elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
 {%- endif %}
 ---

View File

@@ -1,6 +1,6 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
-{% if grains['role'] == 'so-node' or grains['role'] == 'so-eval' %}
+{% if grains['role'] in ['so-searchnode', 'so-eval', 'so-node', 'so-mastersearch', 'so-heavynode', 'so-standalone'] %}
 # Curator
 # Create the group
 curatorgroup:
@@ -89,7 +89,7 @@ curdel:
 so-curatorcloseddeletecron:
   cron.present:
-    - name: /usr/sbin/so-curator-closed-delete
+    - name: /usr/sbin/so-curator-closed-delete > /opt/so/log/curator/cron-closed-delete.log 2>&1
     - user: root
     - minute: '*'
     - hour: '*'
@@ -99,7 +99,7 @@ so-curatorcloseddeletecron:
 so-curatorclosecron:
   cron.present:
-    - name: /usr/sbin/so-curator-close
+    - name: /usr/sbin/so-curator-close > /opt/so/log/curator/cron-close.log 2>&1
     - user: root
     - minute: '*'
     - hour: '*'
@@ -109,7 +109,7 @@ so-curatorclosecron:
 so-curatordeletecron:
   cron.present:
-    - name: /usr/sbin/so-curator-delete
+    - name: /usr/sbin/so-curator-delete > /opt/so/log/curator/cron-delete.log 2>&1
     - user: root
     - minute: '*'
     - hour: '*'

View File

@@ -2,7 +2,7 @@
 {% set esport = salt['pillar.get']('master:es_port', '') %}
 # This is the folder that contains the rule yaml files
 # Any .yaml file will be loaded as a rule
-rules_folder: /etc/elastalert/rules/
+rules_folder: /opt/elastalert/rules/
 # Sets whether or not ElastAlert should recursively descend
 # the rules directory - true or false

View File

@@ -1,107 +0,0 @@
# -*- coding: utf-8 -*-
# HiveAlerter modified from original at: https://raw.githubusercontent.com/Nclose-ZA/elastalert_hive_alerter/master/elastalert_hive_alerter/hive_alerter.py
import uuid

from elastalert.alerts import Alerter
from thehive4py.api import TheHiveApi
from thehive4py.models import Alert, AlertArtifact, CustomFieldHelper


class TheHiveAlerter(Alerter):
    """
    Use matched data to create alerts containing observables in an instance of TheHive
    """
    required_options = set(['hive_connection', 'hive_alert_config'])

    def get_aggregation_summary_text(self, matches):
        text = super(TheHiveAlerter, self).get_aggregation_summary_text(matches)
        if text:
            text = '```\n{0}```\n'.format(text)
        return text

    def create_artifacts(self, match):
        artifacts = []
        context = {'rule': self.rule, 'match': match}
        for mapping in self.rule.get('hive_observable_data_mapping', []):
            for observable_type, match_data_key in mapping.items():
                try:
                    artifacts.append(AlertArtifact(dataType=observable_type, data=match_data_key.format(**context)))
                except KeyError as e:
                    print(('format string {} fail cause no key {} in {}'.format(e, match_data_key, context)))
        return artifacts

    def create_alert_config(self, match):
        context = {'rule': self.rule, 'match': match}
        alert_config = {
            'artifacts': self.create_artifacts(match),
            'sourceRef': str(uuid.uuid4())[0:6],
            'title': '{rule[name]}'.format(**context)
        }
        alert_config.update(self.rule.get('hive_alert_config', {}))
        for alert_config_field, alert_config_value in alert_config.items():
            if alert_config_field == 'customFields':
                custom_fields = CustomFieldHelper()
                for cf_key, cf_value in alert_config_value.items():
                    try:
                        func = getattr(custom_fields, 'add_{}'.format(cf_value['type']))
                    except AttributeError:
                        raise Exception('unsupported custom field type {}'.format(cf_value['type']))
                    value = cf_value['value'].format(**context)
                    func(cf_key, value)
                alert_config[alert_config_field] = custom_fields.build()
            elif isinstance(alert_config_value, str):
                alert_config[alert_config_field] = alert_config_value.format(**context)
            elif isinstance(alert_config_value, (list, tuple)):
                formatted_list = []
                for element in alert_config_value:
                    try:
                        formatted_list.append(element.format(**context))
                    except (AttributeError, KeyError, IndexError):
                        formatted_list.append(element)
                alert_config[alert_config_field] = formatted_list
        return alert_config

    def send_to_thehive(self, alert_config):
        connection_details = self.rule['hive_connection']
        api = TheHiveApi(
            connection_details.get('hive_host', ''),
            connection_details.get('hive_apikey', ''),
            proxies=connection_details.get('hive_proxies', {'http': '', 'https': ''}),
            cert=connection_details.get('hive_verify', False))
        alert = Alert(**alert_config)
        response = api.create_alert(alert)
        if response.status_code != 201:
            raise Exception('alert not successfully created in TheHive\n{}'.format(response.text))

    def alert(self, matches):
        if self.rule.get('hive_alert_config_type', 'custom') != 'classic':
            for match in matches:
                alert_config = self.create_alert_config(match)
                self.send_to_thehive(alert_config)
        else:
            alert_config = self.create_alert_config(matches[0])
            artifacts = []
            for match in matches:
                artifacts += self.create_artifacts(match)
                if 'related_events' in match:
                    for related_event in match['related_events']:
                        artifacts += self.create_artifacts(related_event)
            alert_config['artifacts'] = artifacts
            alert_config['title'] = self.create_title(matches)
            alert_config['description'] = self.create_alert_body(matches)
            self.send_to_thehive(alert_config)

    def get_info(self):
        return {
            'type': 'hivealerter',
            'hive_host': self.rule.get('hive_connection', {}).get('hive_host', '')
        }

View File

@@ -1,6 +1,8 @@
 {% set es = salt['pillar.get']('static:masterip', '') %}
 {% set hivehost = salt['pillar.get']('static:masterip', '') %}
 {% set hivekey = salt['pillar.get']('static:hivekey', '') %}
+{% set MASTER = salt['pillar.get']('master:url_base', '') %}
 # hive.yaml
 # Elastalert rule to forward IDS alerts from Security Onion to a specified TheHive instance.
 #
@@ -15,7 +17,7 @@ timeframe:
 buffer_time:
   minutes: 10
 allow_buffer_time_overlap: true
-query_key: ["rule.signature_id"]
+query_key: ["rule.uuid"]
 realert:
   days: 1
 filter:
@@ -23,10 +25,11 @@ filter:
     query_string:
       query: "event.module: suricata"
-alert: modules.so.thehive.TheHiveAlerter
+alert: hivealerter
 hive_connection:
-  hive_host: https://{{hivehost}}/thehive/
+  hive_host: http://{{hivehost}}
+  hive_port: 9000/thehive
   hive_apikey: {{hivekey}}
   hive_proxies:
@@ -37,9 +40,9 @@ hive_alert_config:
   title: '{match[rule][name]}'
   type: 'NIDS'
   source: 'SecurityOnion'
-  description: "`NIDS Dashboard:` \n\n <https://{{es}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
+  description: "`Hunting Pivot:` \n\n <https://{{MASTER}}/#/hunt?q=event.module%3A%20suricata%20AND%20rule.uuid%3A{match[rule][uuid]}%20%7C%20groupby%20source.ip%20destination.ip%20rule.name> \n\n `Kibana Dashboard - Signature Drilldown:` \n\n <https://{{MASTER}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))> \n\n `Kibana Dashboard - Community_ID:` \n\n <https://{{MASTER}}/kibana/app/kibana#/dashboard/30d0ac90-729f-11ea-8dd2-9d8795a1200b?_g=(filters:!(('$state':(store:globalState),meta:(alias:!n,disabled:!f,index:'*:so-*',key:network.community_id,negate:!f,params:(query:'{match[network][community_id]}'),type:phrase),query:(match_phrase:(network.community_id:'{match[network][community_id]}')))),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
   severity: 2
-  tags: ['{match[rule][signature_id]}','{match[source][ip]}','{match[destination][ip]}']
+  tags: ['{match[rule][uuid]}','{match[source][ip]}','{match[destination][ip]}']
   tlp: 3
   status: 'New'
   follow: True

View File

@@ -14,24 +14,13 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
-{% if grains['role'] == 'so-master' %}
-  {% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-  {% set esip = salt['pillar.get']('master:mainip', '') %}
-  {% set esport = salt['pillar.get']('master:es_port', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-  {% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-  {% set esip = salt['pillar.get']('master:mainip', '') %}
-  {% set esport = salt['pillar.get']('master:es_port', '') %}
+{% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+  {% set esalert = salt['pillar.get']('master:elastalert', '1') %}
+  {% set esip = salt['pillar.get']('master:mainip', '') %}
+  {% set esport = salt['pillar.get']('master:es_port', '') %}
 {% elif grains['role'] == 'so-node' %}
   {% set esalert = salt['pillar.get']('node:elastalert', '0') %}
 {% endif %}
 # Elastalert
@@ -55,35 +44,35 @@ elastalogdir:
   file.directory:
     - name: /opt/so/log/elastalert
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True
 elastarules:
   file.directory:
     - name: /opt/so/rules/elastalert
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True
 elastaconfdir:
   file.directory:
     - name: /opt/so/conf/elastalert
     - user: 933
-    - group: 939
+    - group: 933
    - makedirs: True
 elastasomodulesdir:
   file.directory:
     - name: /opt/so/conf/elastalert/modules/so
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True
 elastacustmodulesdir:
   file.directory:
     - name: /opt/so/conf/elastalert/modules/custom
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True
 elastasomodulesync:
@@ -91,7 +80,7 @@ elastasomodulesync:
     - name: /opt/so/conf/elastalert/modules/so
     - source: salt://elastalert/files/modules/so
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True
 elastarulesync:
@@ -99,7 +88,7 @@ elastarulesync:
     - name: /opt/so/rules/elastalert
     - source: salt://elastalert/files/rules/so
     - user: 933
-    - group: 939
+    - group: 933
     - template: jinja
 elastaconf:
@@ -107,7 +96,7 @@ elastaconf:
     - name: /opt/so/conf/elastalert/elastalert_config.yaml
     - source: salt://elastalert/files/elastalert_config.yaml
     - user: 933
-    - group: 939
+    - group: 933
     - template: jinja
 so-elastalert:
@@ -118,16 +107,9 @@ so-elastalert:
     - user: elastalert
     - detach: True
     - binds:
-      - /opt/so/rules/elastalert:/etc/elastalert/rules/:ro
+      - /opt/so/rules/elastalert:/opt/elastalert/rules/:ro
       - /opt/so/log/elastalert:/var/log/elastalert:rw
       - /opt/so/conf/elastalert/modules/:/opt/elastalert/modules/:ro
-      - /opt/so/conf/elastalert/elastalert_config.yaml:/etc/elastalert/conf/elastalert_config.yaml:ro
+      - /opt/so/conf/elastalert/elastalert_config.yaml:/opt/config/elastalert_config.yaml:ro
-    - environment:
-      - ELASTICSEARCH_HOST: {{ esip }}
-      - ELASTICSEARCH_PORT: {{ esport }}
-      - ELASTALERT_CONFIG: /etc/elastalert/conf/elastalert_config.yaml
-      - ELASTALERT_SUPERVISOR_CONF: /etc/elastalert/conf/elastalert_supervisord.conf
-      - RULES_DIRECTORY: /etc/elastalert/rules/
-      - LOG_DIR: /var/log/elastalert
 {% endif %}

View File

@@ -22,3 +22,7 @@ transport.bind_host: 0.0.0.0
 transport.publish_host: {{ nodeip }}
 transport.publish_port: 9300
 {%- endif %}
+cluster.routing.allocation.disk.threshold_enabled: true
+cluster.routing.allocation.disk.watermark.low: 95%
+cluster.routing.allocation.disk.watermark.high: 98%
+cluster.routing.allocation.disk.watermark.flood_stage: 98%

View File

@@ -4,7 +4,7 @@
     {
       "geoip": {
         "field": "destination.ip",
-        "target_field": "geo",
+        "target_field": "destination.geo",
         "database_file": "GeoLite2-City.mmdb",
         "ignore_missing": true,
         "properties": ["ip", "country_iso_code", "country_name", "continent_name", "region_iso_code", "region_name", "city_name", "timezone", "location"]
@@ -13,7 +13,7 @@
     {
       "geoip": {
         "field": "source.ip",
-        "target_field": "geo",
+        "target_field": "source.geo",
         "database_file": "GeoLite2-City.mmdb",
         "ignore_missing": true,
         "properties": ["ip", "country_iso_code", "country_name", "continent_name", "region_iso_code", "region_name", "city_name", "timezone", "location"]
@@ -38,11 +38,11 @@
     { "rename": { "field": "module", "target_field": "event.module", "ignore_missing": true } },
     { "rename": { "field": "dataset", "target_field": "event.dataset", "ignore_missing": true } },
     { "rename": { "field": "category", "target_field": "event.category", "ignore_missing": true } },
-    { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
+    { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } },
     {
       "remove": {
-        "field": [ "index_name_prefix", "message2"],
-        "ignore_failure": false
+        "field": [ "index_name_prefix", "message2", "type" ],
+        "ignore_failure": true
       }
     }
   ]

View File

@@ -24,8 +24,14 @@
 { "rename": { "field": "message3.columns.pid", "target_field": "process.pid", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.parent", "target_field": "process.ppid", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.community_id", "target_field": "network.community_id", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.local_address", "target_field": "local.ip", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.local_port", "target_field": "local.port", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.remote_address", "target_field": "remote.ip", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } },
-{ "set": { "if": "ctx.message3.columns.data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } },
+{ "set": { "if": "ctx.message3.columns?.data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } },

View File

@@ -6,6 +6,8 @@
 { "rename":{ "field": "message2.alert", "target_field": "rule", "ignore_failure": true } },
 { "rename":{ "field": "rule.signature", "target_field": "rule.name", "ignore_failure": true } },
 { "rename":{ "field": "rule.ref", "target_field": "rule.version", "ignore_failure": true } },
+{ "rename":{ "field": "rule.signature_id", "target_field": "rule.uuid", "ignore_failure": true } },
+{ "rename":{ "field": "rule.signature_id", "target_field": "rule.signature", "ignore_failure": true } },
 { "pipeline": { "name": "suricata.common" } }
   ]
 }
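The two added renames share the same source field, `rule.signature_id`, so only the first can succeed; `ignore_failure` lets the second silently no-op once the field has moved. A toy model of the ingest `rename` processor (a hypothetical helper, not part of the pipeline) illustrates that behavior:

```python
def rename(doc, field, target, ignore_failure=False):
    """Toy model of the Elasticsearch ingest 'rename' processor."""
    try:
        doc[target] = doc.pop(field)  # move the value to the new key
    except KeyError:
        if not ignore_failure:
            raise  # without ignore_failure, a missing source field is an error

rule = {'signature_id': 2010935}
rename(rule, 'signature_id', 'uuid', ignore_failure=True)       # succeeds
rename(rule, 'signature_id', 'signature', ignore_failure=True)  # source gone, no-ops
print(rule)  # {'uuid': 2010935}
```

The net effect in the real pipeline is that `rule.signature_id` lands in `rule.uuid`, and the fallback rename to `rule.signature` only fires if an earlier processor left the field in place.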

View File

@@ -0,0 +1,17 @@
{
  "description" : "syslog",
  "processors" : [
    {
      "dissect": {
        "field": "message",
        "pattern" : "%{message}",
        "on_failure": [ { "drop" : { } } ]
      },
      "remove": {
        "field": [ "type", "agent" ],
        "ignore_failure": true
      }
    },
    { "pipeline": { "name": "common" } }
  ]
}

View File

@@ -7,14 +7,15 @@
 { "dot_expander": { "field": "id.orig_p", "path": "message2", "ignore_failure": true } },
 { "dot_expander": { "field": "id.resp_h", "path": "message2", "ignore_failure": true } },
 { "dot_expander": { "field": "id.resp_p", "path": "message2", "ignore_failure": true } },
+{"community_id": {"if": "ctx.network?.transport != null", "field":["message2.id.orig_h","message2.id.orig_p","message2.id.resp_h","message2.id.resp_p","network.transport"],"target_field":"network.community_id"}},
 { "rename": { "field": "message2.id.orig_h", "target_field": "source.ip", "ignore_missing": true } },
 { "rename": { "field": "message2.id.orig_p", "target_field": "source.port", "ignore_missing": true } },
 { "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
 { "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
 { "set": { "field": "client.ip", "value": "{{source.ip}}" } },
-{ "set": { "if": "ctx.source.port != null", "field": "client.port", "value": "{{source.port}}" } },
+{ "set": { "if": "ctx.source?.port != null", "field": "client.port", "value": "{{source.port}}" } },
 { "set": { "field": "server.ip", "value": "{{destination.ip}}" } },
-{ "set": { "if": "ctx.destination.port != null", "field": "server.port", "value": "{{destination.port}}" } },
+{ "set": { "if": "ctx.destination?.port != null", "field": "server.port", "value": "{{destination.port}}" } },
 { "set": { "field": "observer.name", "value": "{{agent.name}}" } },
 { "date": { "field": "message2.ts", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "ignore_failure": true } },
 { "remove": { "field": ["agent"], "ignore_failure": true } },
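The `community_id` processor added above computes the Community ID flow hash, which is what ties Zeek, Suricata, and osquery records for the same connection together. A minimal sketch of the v1 scheme for IPv4 flows, assuming the default seed of 0 (the processor itself also handles IPv6 and ICMP):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(src_ip, src_port, dst_ip, dst_port, proto=6, seed=0):
    """Community ID v1 for IPv4 flows (proto 6 = TCP, 17 = UDP)."""
    src, dst = socket.inet_aton(src_ip), socket.inet_aton(dst_ip)
    sport, dport = struct.pack('!H', src_port), struct.pack('!H', dst_port)
    # Order the endpoints so both directions of a flow hash to the same ID.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    data = struct.pack('!H', seed) + src + dst + struct.pack('!BB', proto, 0) + sport + dport
    return '1:' + base64.b64encode(hashlib.sha1(data).digest()).decode('ascii')

# Both directions of the same flow produce one ID, so events from different
# sensors can be joined on network.community_id.
forward = community_id_v1('192.0.2.1', 34855, '198.51.100.2', 80)
reverse = community_id_v1('198.51.100.2', 80, '192.0.2.1', 34855)
assert forward == reverse
```

The endpoint ordering step is the key design choice: because the lexicographically smaller (address, port) pair always hashes first, a query for one connection log's Community ID also matches host events observed from the other side.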

View File

@@ -17,10 +17,24 @@
 { "rename": { "field": "message2.orig_ip_bytes", "target_field": "client.ip_bytes", "ignore_missing": true } },
 { "rename": { "field": "message2.resp_pkts", "target_field": "server.packets", "ignore_missing": true } },
 { "rename": { "field": "message2.resp_ip_bytes", "target_field": "server.ip_bytes", "ignore_missing": true } },
-{ "rename": { "field": "message2.tunnel_parents", "target_field": "connection.tunnel_parents", "ignore_missing": true } },
+{ "rename": { "field": "message2.tunnel_parents", "target_field": "log.id.tunnel_parents", "ignore_missing": true } },
 { "rename": { "field": "message2.orig_cc", "target_field": "client.country_code","ignore_missing": true } },
 { "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } },
 { "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } },
+{ "script": { "lang": "painless", "source": "ctx.network.bytes = (ctx.client.bytes + ctx.server.bytes)", "ignore_failure": true } },
+{ "set": { "if": "ctx.connection.state == 'S0'", "field": "connection.state_description", "value": "Connection attempt seen, no reply" } },
+{ "set": { "if": "ctx.connection.state == 'S1'", "field": "connection.state_description", "value": "Connection established, not terminated" } },
+{ "set": { "if": "ctx.connection.state == 'S2'", "field": "connection.state_description", "value": "Connection established and close attempt by originator seen (but no reply from responder)" } },
+{ "set": { "if": "ctx.connection.state == 'S3'", "field": "connection.state_description", "value": "Connection established and close attempt by responder seen (but no reply from originator)" } },
+{ "set": { "if": "ctx.connection.state == 'SF'", "field": "connection.state_description", "value": "Normal SYN/FIN completion" } },
+{ "set": { "if": "ctx.connection.state == 'REJ'", "field": "connection.state_description", "value": "Connection attempt rejected" } },
+{ "set": { "if": "ctx.connection.state == 'RSTO'", "field": "connection.state_description", "value": "Connection established, originator aborted (sent a RST)" } },
+{ "set": { "if": "ctx.connection.state == 'RSTR'", "field": "connection.state_description", "value": "Established, responder aborted" } },
+{ "set": { "if": "ctx.connection.state == 'RSTOS0'","field": "connection.state_description", "value": "Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder" } },
+{ "set": { "if": "ctx.connection.state == 'RSTRH'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator" } },
+{ "set": { "if": "ctx.connection.state == 'SH'", "field": "connection.state_description", "value": "Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was 'half' open)" } },
{ "set": { "if": "ctx.connection.state == 'SHR'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator" } },
{ "set": { "if": "ctx.connection.state == 'OTH'", "field": "connection.state_description", "value": "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)" } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }
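The `set` processors above translate Zeek `conn_state` codes into human-readable descriptions. A minimal Python sketch of the same lookup (state names and wording taken from the processors above; the function name is illustrative):

```python
# Mapping of Zeek conn_state codes to the descriptions applied by the
# ingest processors above (a subset; wording copied from the pipeline).
CONN_STATE_DESCRIPTIONS = {
    "S0": "Connection attempt seen, no reply",
    "S1": "Connection established, not terminated",
    "SF": "Normal SYN/FIN completion",
    "REJ": "Connection attempt rejected",
    "RSTO": "Connection established, originator aborted (sent a RST)",
    "RSTR": "Established, responder aborted",
    "OTH": "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)",
}

def describe_state(conn_state: str) -> str:
    # Returns the raw code when no description is defined; the pipeline
    # itself simply leaves connection.state_description unset in that case.
    return CONN_STATE_DESCRIPTIONS.get(conn_state, conn_state)
```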


@@ -4,9 +4,9 @@
{ "remove": { "field": ["host"], "ignore_failure": true } }, { "remove": { "field": ["host"], "ignore_failure": true } },
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } }, { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } }, { "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } },
{ "rename": { "field": "message2.named_pipe", "target_field": "named_pipe", "ignore_missing": true } }, { "rename": { "field": "message2.named_pipe", "target_field": "dce_rpc.named_pipe", "ignore_missing": true } },
{ "rename": { "field": "message2.endpoint", "target_field": "endpoint", "ignore_missing": true } }, { "rename": { "field": "message2.endpoint", "target_field": "dce_rpc.endpoint", "ignore_missing": true } },
{ "rename": { "field": "message2.operation", "target_field": "operation", "ignore_missing": true } }, { "rename": { "field": "message2.operation", "target_field": "dce_rpc.operation", "ignore_missing": true } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }


@@ -15,7 +15,7 @@
{ "rename": { "field": "message2.domain", "target_field": "host.domain", "ignore_missing": true } }, { "rename": { "field": "message2.domain", "target_field": "host.domain", "ignore_missing": true } },
{ "rename": { "field": "message2.host_name", "target_field": "host.hostname", "ignore_missing": true } }, { "rename": { "field": "message2.host_name", "target_field": "host.hostname", "ignore_missing": true } },
{ "rename": { "field": "message2.duration", "target_field": "event.duration", "ignore_missing": true } }, { "rename": { "field": "message2.duration", "target_field": "event.duration", "ignore_missing": true } },
{ "rename": { "field": "message2.msg_types", "target_field": "message_types", "ignore_missing": true } }, { "rename": { "field": "message2.msg_types", "target_field": "dhcp.message_types", "ignore_missing": true } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }


@@ -23,6 +23,7 @@
{ "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } }, { "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
{ "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } }, { "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } }, { "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },
{ "pipeline": { "if": "ctx.dns.query.name.contains('.')", "name": "zeek.dns.tld"} },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }


@@ -0,0 +1,13 @@
{
"description" : "zeek.dns.tld",
"processors" : [
{ "script": { "lang": "painless", "source": "ctx.dns.top_level_domain = ctx.dns.query.name.substring(ctx.dns.query.name.lastIndexOf('.') + 1)", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query_without_tld = ctx.dns.query.name.substring(0, (ctx.dns.query.name.lastIndexOf('.')))", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.parent_domain = ctx.dns.query_without_tld.substring(ctx.dns.query_without_tld.lastIndexOf('.') + 1)", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.subdomain = ctx.dns.query_without_tld.substring(0, (ctx.dns.query_without_tld.lastIndexOf('.')))", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.highest_registered_domain = ctx.dns.parent_domain + '.' + ctx.dns.top_level_domain", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.subdomain_length = ctx.dns.subdomain.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.parent_domain_length = ctx.dns.parent_domain.length()", "ignore_failure": true } },
{ "remove": { "field": "dns.query_without_tld", "ignore_failure": true } }
]
}
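The Painless scripts in the new `zeek.dns.tld` pipeline carve a DNS query name into TLD, parent domain, and subdomain using `lastIndexOf('.')`. The same substring logic as a Python sketch (function name illustrative; like the pipeline, this is naive label-splitting, not public-suffix-list aware):

```python
def split_dns_name(query: str) -> dict:
    # Everything after the last dot is the TLD, the label before it is
    # the parent domain, and whatever remains is the subdomain.
    tld = query[query.rfind(".") + 1:]
    without_tld = query[:query.rfind(".")]
    parent = without_tld[without_tld.rfind(".") + 1:]
    # Single-label parents yield an empty subdomain; the Painless version
    # relies on ignore_failure to skip that field instead.
    subdomain = without_tld[:max(without_tld.rfind("."), 0)]
    return {
        "top_level_domain": tld,
        "parent_domain": parent,
        "subdomain": subdomain,
        "highest_registered_domain": parent + "." + tld,
        "subdomain_length": len(subdomain),
        "parent_domain_length": len(parent),
    }
```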


@@ -29,6 +29,7 @@
{ "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } }, { "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.useragent_length = ctx.useragent.length()", "ignore_failure": true } }, { "script": { "lang": "painless", "source": "ctx.useragent_length = ctx.useragent.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.virtual_host_length = ctx.virtual_host.length()", "ignore_failure": true } }, { "script": { "lang": "painless", "source": "ctx.virtual_host_length = ctx.virtual_host.length()", "ignore_failure": true } },
{ "set": { "field": "network.transport", "value": "tcp" } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }


@@ -6,7 +6,7 @@
{ "rename": { "field": "message2.fuid", "target_field": "log.id.fuid", "ignore_missing": true } }, { "rename": { "field": "message2.fuid", "target_field": "log.id.fuid", "ignore_missing": true } },
{ "rename": { "field": "message2.mime", "target_field": "file.mimetype", "ignore_missing": true } }, { "rename": { "field": "message2.mime", "target_field": "file.mimetype", "ignore_missing": true } },
{ "rename": { "field": "message2.desc", "target_field": "file.description", "ignore_missing": true } }, { "rename": { "field": "message2.desc", "target_field": "file.description", "ignore_missing": true } },
{ "rename": { "field": "message2.proto", "target_field": "network.protocol", "ignore_missing": true } }, { "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "message2.note", "target_field": "notice.note", "ignore_missing": true } }, { "rename": { "field": "message2.note", "target_field": "notice.note", "ignore_missing": true } },
{ "rename": { "field": "message2.msg", "target_field": "notice.message", "ignore_missing": true } }, { "rename": { "field": "message2.msg", "target_field": "notice.message", "ignore_missing": true } },
{ "rename": { "field": "message2.sub", "target_field": "notice.sub_message", "ignore_missing": true } }, { "rename": { "field": "message2.sub", "target_field": "notice.sub_message", "ignore_missing": true } },


@@ -5,7 +5,7 @@
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } }, { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.username", "target_field": "user.name", "ignore_missing": true } }, { "rename": { "field": "message2.username", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message2.mac", "target_field": "host.mac", "ignore_missing": true } }, { "rename": { "field": "message2.mac", "target_field": "host.mac", "ignore_missing": true } },
{ "rename": { "field": "message2.framed_addr", "target_field": "framed_addr", "ignore_missing": true } }, { "rename": { "field": "message2.framed_addr", "target_field": "radius.framed_address", "ignore_missing": true } },
{ "rename": { "field": "message2.remote_ip", "target_field": "destination.ip", "ignore_missing": true } }, { "rename": { "field": "message2.remote_ip", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "message2.connect_info", "target_field": "radius.connect_info", "ignore_missing": true } }, { "rename": { "field": "message2.connect_info", "target_field": "radius.connect_info", "ignore_missing": true } },
{ "rename": { "field": "message2.reply_msg", "target_field": "radius.reply_message", "ignore_missing": true } }, { "rename": { "field": "message2.reply_msg", "target_field": "radius.reply_message", "ignore_missing": true } },


@@ -25,6 +25,7 @@
{ "rename": { "field": "message2.tls", "target_field": "smtp.tls", "ignore_missing": true } }, { "rename": { "field": "message2.tls", "target_field": "smtp.tls", "ignore_missing": true } },
{ "rename": { "field": "message2.fuids", "target_field": "log.id.fuids", "ignore_missing": true } }, { "rename": { "field": "message2.fuids", "target_field": "log.id.fuids", "ignore_missing": true } },
{ "rename": { "field": "message2.is_webmail", "target_field": "smtp.is_webmail", "ignore_missing": true } }, { "rename": { "field": "message2.is_webmail", "target_field": "smtp.is_webmail", "ignore_missing": true } },
{ "set": { "field": "network.transport", "value": "tcp" } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }


@@ -3,7 +3,7 @@
"processors" : [ "processors" : [
{ "remove": { "field": ["host"], "ignore_failure": true } }, { "remove": { "field": ["host"], "ignore_failure": true } },
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } }, { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.uid", "target_field": "uid", "ignore_missing": true } }, { "rename": { "field": "message2.uid", "target_field": "log.id.uid", "ignore_missing": true } },
{ "dot_expander": { "field": "id.orig_h", "path": "message2", "ignore_failure": true } }, { "dot_expander": { "field": "id.orig_h", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.id.orig_h", "target_field": "source.ip", "ignore_missing": true } }, { "rename": { "field": "message2.id.orig_h", "target_field": "source.ip", "ignore_missing": true } },
{ "dot_expander": { "field": "id.orig_p", "path": "message2", "ignore_failure": true } }, { "dot_expander": { "field": "id.orig_p", "path": "message2", "ignore_failure": true } },


@@ -15,6 +15,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
RETURN_CODE=0
ELASTICSEARCH_HOST=$1 ELASTICSEARCH_HOST=$1
ELASTICSEARCH_PORT=9200 ELASTICSEARCH_PORT=9200
@@ -46,7 +47,9 @@ fi
cd ${ELASTICSEARCH_INGEST_PIPELINES} cd ${ELASTICSEARCH_INGEST_PIPELINES}
echo "Loading pipelines..." echo "Loading pipelines..."
for i in *; do echo $i; curl ${ELASTICSEARCH_AUTH} -XPUT http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -XPUT http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
echo echo
cd - >/dev/null cd - >/dev/null
exit $RETURN_CODE
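The updated shell loop PUTs each pipeline file and flips the script's exit code if any Elasticsearch response body mentions an error. A minimal Python sketch of that error-accumulation pattern (the response strings here are stand-ins, not real Elasticsearch output):

```python
def load_pipelines(responses: dict) -> int:
    # Mirror the shell loop: print each pipeline name and response, and
    # return 1 if any response body contains "error", else 0.
    return_code = 0
    for name, body in responses.items():
        print(name, body)
        if "error" in body:
            return_code = 1
    return return_code
```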


@@ -15,27 +15,19 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set MASTER = salt['grains.get']('master') %} {% set MASTER = salt['grains.get']('master') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %} {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
{% if FEATURES %} {% if FEATURES %}
{% set FEATURES = "-features" %} {% set FEATURES = "-features" %}
{% else %} {% else %}
{% set FEATURES = '' %} {% set FEATURES = '' %}
{% endif %} {% endif %}
{% if grains['role'] == 'so-master' %} {% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
{% set esclustername = salt['pillar.get']('master:esclustername', '') %} {% set esheap = salt['pillar.get']('master:esheap', '') %}
{% set esheap = salt['pillar.get']('master:esheap', '') %} {% elif grains['role'] in ['so-node','so-heavynode'] %}
{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
{% elif grains['role'] in ['so-eval','so-mastersearch'] %} {% set esheap = salt['pillar.get']('node:esheap', '') %}
{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
{% set esheap = salt['pillar.get']('master:esheap', '') %}
{% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
{% set esheap = salt['pillar.get']('node:esheap', '') %}
{% endif %} {% endif %}
vm.max_map_count: vm.max_map_count:
@@ -144,8 +136,12 @@ so-elasticsearch-pipelines-file:
so-elasticsearch-pipelines: so-elasticsearch-pipelines:
cmd.run: cmd.run:
- name: /opt/so/conf/elasticsearch/so-elasticsearch-pipelines {{ esclustername }} - name: /opt/so/conf/elasticsearch/so-elasticsearch-pipelines {{ esclustername }}
- onchanges:
- file: esingestconf
- file: esyml
- file: so-elasticsearch-pipelines-file
{% if grains['role'] == 'so-master' or grains['role'] == "so-eval" or grains['role'] == "so-mastersearch" %} {% if grains['role'] in ['so-master', 'so-eval', 'so-mastersearch', 'so-standalone'] %}
so-elasticsearch-templates: so-elasticsearch-templates:
cmd.run: cmd.run:
- name: /usr/sbin/so-elasticsearch-templates - name: /usr/sbin/so-elasticsearch-templates


@@ -74,7 +74,33 @@ filebeat.modules:
# List of prospectors to fetch data. # List of prospectors to fetch data.
filebeat.inputs: filebeat.inputs:
#------------------------------ Log prospector -------------------------------- #------------------------------ Log prospector --------------------------------
{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" %} {%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" or grains['role'] == "so-standalone" %}
- type: udp
enabled: true
host: "0.0.0.0:514"
fields:
module: syslog
dataset: syslog
pipeline: "syslog"
index: "so-syslog-%{+yyyy.MM.dd}"
processors:
- drop_fields:
fields: ["source", "prospector", "input", "offset", "beat"]
fields_under_root: true
- type: tcp
enabled: true
host: "0.0.0.0:514"
fields:
module: syslog
dataset: syslog
pipeline: "syslog"
index: "so-syslog-%{+yyyy.MM.dd}"
processors:
- drop_fields:
fields: ["source", "prospector", "input", "offset", "beat"]
fields_under_root: true
{%- if BROVER != 'SURICATA' %} {%- if BROVER != 'SURICATA' %}
{%- for LOGNAME in salt['pillar.get']('brologs:enabled', '') %} {%- for LOGNAME in salt['pillar.get']('brologs:enabled', '') %}
- type: log - type: log
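The new inputs above make Filebeat listen for syslog on UDP and TCP port 514. A quick Python snippet to emit a test message at that listener (host and port match the config; the payload is a plain BSD-style example line, not a strict RFC 5424 frame):

```python
import socket

def build_syslog_line(tag: str, msg: str, pri: int = 13) -> bytes:
    # Minimal BSD-style syslog payload: <PRI>TAG: message
    # PRI 13 = facility "user", severity "notice".
    return f"<{pri}>{tag}: {msg}".encode()

def send_test_syslog(host: str = "127.0.0.1", port: int = 514) -> bytes:
    payload = build_syslog_line("so-test", "filebeat syslog input check")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))  # UDP: fire-and-forget
    return payload
```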


@@ -57,12 +57,14 @@ so-filebeat:
- /opt/so/conf/filebeat/etc/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro - /opt/so/conf/filebeat/etc/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- /nsm/zeek:/nsm/zeek:ro - /nsm/zeek:/nsm/zeek:ro
- /nsm/strelka/log:/nsm/strelka/log:ro - /nsm/strelka/log:/nsm/strelka/log:ro
- /opt/so/log/suricata:/suricata:ro - /nsm/suricata:/suricata:ro
- /opt/so/wazuh/logs/alerts:/wazuh/alerts:ro - /opt/so/wazuh/logs/alerts:/wazuh/alerts:ro
- /opt/so/wazuh/logs/archives:/wazuh/archives:ro - /opt/so/wazuh/logs/archives:/wazuh/archives:ro
- /nsm/osquery/fleet/:/nsm/osquery/fleet:ro - /nsm/osquery/fleet/:/nsm/osquery/fleet:ro
- /opt/so/conf/filebeat/etc/pki/filebeat.crt:/usr/share/filebeat/filebeat.crt:ro - /opt/so/conf/filebeat/etc/pki/filebeat.crt:/usr/share/filebeat/filebeat.crt:ro
- /opt/so/conf/filebeat/etc/pki/filebeat.key:/usr/share/filebeat/filebeat.key:ro - /opt/so/conf/filebeat/etc/pki/filebeat.key:/usr/share/filebeat/filebeat.key:ro
- /etc/ssl/certs/intca.crt:/usr/share/filebeat/intraca.crt:ro - /etc/ssl/certs/intca.crt:/usr/share/filebeat/intraca.crt:ro
- port_bindings:
- 0.0.0.0:514:514/udp
- watch: - watch:
- file: /opt/so/conf/filebeat/etc/filebeat.yml - file: /opt/so/conf/filebeat/etc/filebeat.yml


@@ -1,15 +1,16 @@
# Firewall Magic for the grid # Firewall Magic for the grid
{%- if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch'] %} {% if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch', 'so-standalone'] %}
{%- set ip = salt['pillar.get']('static:masterip', '') %} {% set ip = salt['pillar.get']('static:masterip', '') %}
{%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %} {% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
{%- set ip = salt['pillar.get']('node:mainip', '') %} {% set ip = salt['pillar.get']('node:mainip', '') %}
{%- elif grains['role'] == 'so-sensor' %} {% elif grains['role'] == 'so-sensor' %}
{%- set ip = salt['pillar.get']('sensor:mainip', '') %} {% set ip = salt['pillar.get']('sensor:mainip', '') %}
{%- elif grains['role'] == 'so-fleet' %} {% elif grains['role'] == 'so-fleet' %}
{%- set ip = salt['pillar.get']('node:mainip', '') %} {% set ip = salt['pillar.get']('node:mainip', '') %}
{%- endif %} {% endif %}
{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
{%- set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %} {% set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
{% set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %}
{% import_yaml 'firewall/ports.yml' as firewall_ports %} {% import_yaml 'firewall/ports.yml' as firewall_ports %}
{% set firewall_aliases = salt['pillar.get']('firewall:aliases', firewall_ports.firewall.aliases, merge=True) %} {% set firewall_aliases = salt['pillar.get']('firewall:aliases', firewall_ports.firewall.aliases, merge=True) %}
@@ -116,7 +117,7 @@ enable_docker_user_established:
- ctstate: 'RELATED,ESTABLISHED' - ctstate: 'RELATED,ESTABLISHED'
# Rules if you are a Master # Rules if you are a Master
{% if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-helix' or grains['role'] == 'so-mastersearch' %} {% if grains['role'] in ['so-master', 'so-eval', 'so-helix', 'so-mastersearch', 'so-standalone'] %}
#This should be more granular #This should be more granular
iptables_allow_master_docker: iptables_allow_master_docker:
iptables.insert: iptables.insert:
@@ -219,7 +220,14 @@ enable_cluster_ES_9300_{{ip}}:
# All Sensors get the below rules: # All Sensors get the below rules:
{% if grains['role'] == 'so-sensor' %} {% if grains['role'] == 'so-sensor' %}
iptables_allow_sensor_docker:
iptables.insert:
- table: filter
- chain: INPUT
- jump: ACCEPT
- source: 172.17.0.0/24
- position: 1
- save: True
{% endif %} {% endif %}
# Rules if you are a Hot Node # Rules if you are a Hot Node


@@ -2,6 +2,7 @@
{% set MAIN_HOSTNAME = salt['grains.get']('host') %} {% set MAIN_HOSTNAME = salt['grains.get']('host') %}
{% set MAIN_IP = salt['pillar.get']('node:mainip') %} {% set MAIN_IP = salt['pillar.get']('node:mainip') %}
local_salt_dir=/opt/so/saltstack/local
#so-fleet-packages $FleetHostname/IP #so-fleet-packages $FleetHostname/IP
@@ -26,8 +27,8 @@ docker run \
--mount type=bind,source=/etc/ssl/certs/intca.crt,target=/var/launcher/launcher.crt \ --mount type=bind,source=/etc/ssl/certs/intca.crt,target=/var/launcher/launcher.crt \
docker.io/soshybridhunter/so-fleet-launcher:HH1.1.0 "$esecret" "$1":8090 docker.io/soshybridhunter/so-fleet-launcher:HH1.1.0 "$esecret" "$1":8090
cp /opt/so/conf/fleet/packages/launcher.* /opt/so/saltstack/salt/launcher/packages/ cp /opt/so/conf/fleet/packages/launcher.* $local_salt_dir/salt/launcher/packages/
#Update timestamp on packages webpage #Update timestamp on packages webpage
sed -i "s@.*Generated.*@Generated: $(date '+%m%d%Y')@g" /opt/so/conf/fleet/packages/index.html sed -i "s@.*Generated.*@Generated: $(date '+%m%d%Y')@g" /opt/so/conf/fleet/packages/index.html
sed -i "s@.*Generated.*@Generated: $(date '+%m%d%Y')@g" /opt/so/saltstack/salt/fleet/files/dedicated-index.html sed -i "s@.*Generated.*@Generated: $(date '+%m%d%Y')@g" $local_salt_dir/salt/fleet/files/dedicated-index.html


@@ -31,7 +31,7 @@ docker exec so-fleet fleetctl apply -f /packs/hh/osquery.conf
# Enable Fleet # Enable Fleet
echo "Enabling Fleet..." echo "Enabling Fleet..."
salt-call state.apply fleet.event_enable-fleet queue=True >> /root/fleet-setup.log salt-call state.apply fleet.event_enable-fleet queue=True >> /root/fleet-setup.log
salt-call state.apply common queue=True >> /root/fleet-setup.log salt-call state.apply nginx queue=True >> /root/fleet-setup.log
# Generate osquery install packages # Generate osquery install packages
echo "Generating osquery install packages - this will take some time..." echo "Generating osquery install packages - this will take some time..."
@@ -42,7 +42,7 @@ echo "Installing launcher via salt..."
salt-call state.apply fleet.install_package queue=True >> /root/fleet-setup.log salt-call state.apply fleet.install_package queue=True >> /root/fleet-setup.log
salt-call state.apply filebeat queue=True >> /root/fleet-setup.log salt-call state.apply filebeat queue=True >> /root/fleet-setup.log
docker stop so-nginx docker stop so-nginx
salt-call state.apply common queue=True >> /root/fleet-setup.log salt-call state.apply nginx queue=True >> /root/fleet-setup.log
echo "Fleet Setup Complete - Login here: https://{{ MAIN_HOSTNAME }}" echo "Fleet Setup Complete - Login here: https://{{ MAIN_HOSTNAME }}"
echo "Your username is $2 and your password is $initpw" echo "Your username is $2 and your password is $initpw"


@@ -1226,7 +1226,7 @@
}, },
{ {
"params": [ "params": [
" / 5" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1365,7 +1365,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1504,7 +1504,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1643,7 +1643,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
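The dashboard changes above replace hard-coded divisors (`/ 5`, `/ 8`, `/ 16`) with the templated `{{ CPUS }}` value, so the CPU panels normalize correctly for each sensor's actual core count. The underlying arithmetic, as a sketch:

```python
def normalize_cpu_percent(total_percent: float, cpus: int) -> float:
    # Summed per-core usage divided by the node's real core count:
    # a fully loaded 8-core box reads 100%, not 800% (and a hard-coded
    # "/ 8" on a 16-core box would have capped out at 200%).
    return total_percent / cpus
```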


@@ -290,7 +290,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -430,7 +430,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1046,7 +1046,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1186,7 +1186,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1326,7 +1326,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }

File diff suppressed because it is too large.


@@ -298,7 +298,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -438,7 +438,7 @@
}, },
{ {
"params": [ "params": [
" / 16" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }


@@ -1326,7 +1326,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1465,7 +1465,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }
@@ -1604,7 +1604,7 @@
}, },
{ {
"params": [ "params": [
" / 8" " / {{ CPUS }}"
], ],
"type": "math" "type": "math"
} }


@@ -10,6 +10,13 @@ providers:
editable: true editable: true
options: options:
path: /etc/grafana/grafana_dashboards/master path: /etc/grafana/grafana_dashboards/master
- name: 'Master Search'
folder: 'Master Search'
type: file
disableDeletion: false
editable: true
options:
path: /etc/grafana/grafana_dashboards/mastersearch
- name: 'Sensor Nodes' - name: 'Sensor Nodes'
folder: 'Sensor Nodes' folder: 'Sensor Nodes'
type: file type: file

View File

@@ -2,7 +2,7 @@
{% set MASTER = salt['grains.get']('master') %} {% set MASTER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval'] and GRAFANA == 1 %} {% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
# Grafana all the things # Grafana all the things
grafanadir: grafanadir:
@@ -33,6 +33,13 @@ grafanadashmdir:
- group: 939 - group: 939
- makedirs: True - makedirs: True
grafanadashmsdir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/mastersearch
- user: 939
- group: 939
- makedirs: True
grafanadashevaldir: grafanadashevaldir:
file.directory: file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/eval - name: /opt/so/conf/grafana/grafana_dashboards/eval
@@ -85,6 +92,29 @@ dashboard-master:
{% endfor %} {% endfor %}
{% endif %} {% endif %}
{% if salt['pillar.get']('mastersearchtab', False) %}
{% for SN, SNDATA in salt['pillar.get']('mastersearchtab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-master:
file.managed:
- name: /opt/so/conf/grafana/grafana_dashboards/mastersearch/{{ SN }}-MasterSearch.json
- user: 939
- group: 939
- template: jinja
- source: salt://grafana/dashboards/mastersearch/mastersearch.json
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}
{% if salt['pillar.get']('sensorstab', False) %} {% if salt['pillar.get']('sensorstab', False) %}
{% for SN, SNDATA in salt['pillar.get']('sensorstab', {}).items() %} {% for SN, SNDATA in salt['pillar.get']('sensorstab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %} {% set NODETYPE = SN.split('_')|last %}


@@ -1,64 +0,0 @@
#!/bin/bash
{% set MASTERIP = salt['pillar.get']('static:masterip', '') %}
{%- set HIVEUSER = salt['pillar.get']('static:hiveuser', '') %}
{%- set HIVEPASSWORD = salt['pillar.get']('static:hivepassword', '') %}
{%- set HIVEKEY = salt['pillar.get']('static:hivekey', '') %}
hive_init(){
sleep 60
HIVE_IP="{{MASTERIP}}"
HIVE_USER="{{HIVEUSER}}"
HIVE_PASSWORD="{{HIVEPASSWORD}}"
HIVE_KEY="{{HIVEKEY}}"
SOCTOPUS_CONFIG="/opt/so/saltstack/salt/soctopus/files/SOCtopus.conf"
echo -n "Waiting for TheHive..."
COUNT=0
HIVE_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
curl --output /dev/null --silent --head --fail -k "https://$HIVE_IP:/thehive"
if [ $? -eq 0 ]; then
HIVE_CONNECTED="yes"
echo "connected!"
break
else
((COUNT+=1))
sleep 1
echo -n "."
fi
done
if [ "$HIVE_CONNECTED" == "yes" ]; then
# Migrate DB
curl -v -k -XPOST "https://$HIVE_IP:/thehive/api/maintenance/migrate"
# Create intial TheHive user
curl -v -k "https://$HIVE_IP/thehive/api/user" -H "Content-Type: application/json" -d "{\"login\" : \"$HIVE_USER\",\"name\" : \"$HIVE_USER\",\"roles\" : [\"read\",\"alert\",\"write\",\"admin\"],\"preferences\" : \"{}\",\"password\" : \"$HIVE_PASSWORD\", \"key\": \"$HIVE_KEY\"}"
# Pre-load custom fields
#
# reputation
curl -v -k "https://$HIVE_IP/thehive/api/list/custom_fields" -H "Authorization: Bearer $HIVE_KEY" -H "Content-Type: application/json" -d "{\"value\":{\"name\": \"reputation\", \"reference\": \"reputation\", \"description\": \"This field provides an overall reputation status for an address/domain.\", \"type\": \"string\", \"options\": []}}"
touch /opt/so/state/thehive.txt
else
echo "We experienced an issue connecting to TheHive!"
fi
}
if [ -f /opt/so/state/thehive.txt ]; then
exit 0
else
rm -f garbage_file
while ! wget -O garbage_file {{MASTERIP}}:9500 2>/dev/null
do
echo "Waiting for Elasticsearch..."
rm -f garbage_file
sleep 1
done
rm -f garbage_file
sleep 5
hive_init
fi


@@ -39,7 +39,7 @@ idstoolsetcsync:
so-ruleupdatecron: so-ruleupdatecron:
cron.present: cron.present:
- name: /usr/sbin/so-rule-update.sh > /opt/so/log/idstools/download.log - name: /usr/sbin/so-rule-update > /opt/so/log/idstools/download.log 2>&1
- user: root - user: root
- minute: '1' - minute: '1'
- hour: '7' - hour: '7'
@@ -58,11 +58,6 @@ synclocalnidsrules:
- user: 939 - user: 939
- group: 939 - group: 939
ruleslink:
file.symlink:
- name: /opt/so/saltstack/salt/suricata/rules
- target: /opt/so/rules/nids
so-idstools: so-idstools:
docker_container.running: docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-idstools:{{ VERSION }} - image: {{ MASTER }}:5000/soshybridhunter/so-idstools:{{ VERSION }}


@@ -3,7 +3,7 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval'] and GRAFANA == 1 %} {% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
# Influx DB # Influx DB
influxconfdir: influxconfdir:


@@ -1,25 +1,21 @@
 #!/bin/bash
-{%- set MASTER = salt['pillar.get']('static:masterip', '') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
-{%- set FLEET = salt['pillar.get']('static:fleet_ip', '') %}
-{%- set KRATOS = salt['pillar.get']('kratos:redirect', '') %}
+# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
 KIBANA_VERSION="7.6.1"
 # Copy template file
 cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_objects.ndjson
+# {% if FLEET_NODE or FLEET_MASTER %}
+# Fleet IP
+sed -i "s/FLEETPLACEHOLDER/{{ FLEET_IP }}/g" /opt/so/conf/kibana/saved_objects.ndjson
+# {% endif %}
 # SOCtopus and Master
 sed -i "s/PLACEHOLDER/{{ MASTER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-{% if FLEET_NODE %}
-# Fleet IP
-sed -i "s/FLEETPLACEHOLDER/{{ FLEET }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-{% endif %}
-# Kratos redirect
-sed -i "s/PCAPPLACEHOLDER/{{ KRATOS }}/g" /opt/so/conf/kibana/saved_objects.ndjson
 # Load saved objects
 curl -X POST "localhost:5601/api/saved_objects/_import" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson > /dev/null 2>&1

File diff suppressed because one or more lines are too long


@@ -15,6 +15,7 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
+
 {% if FEATURES %}
 {% set FEATURES = "-features" %}
 {% else %}
@@ -23,35 +24,21 @@
 # Logstash Section - Decide which pillar to use
 {% if grains['role'] == 'so-sensor' %}
-
 {% set lsheap = salt['pillar.get']('sensor:lsheap', '') %}
 {% set lsaccessip = salt['pillar.get']('sensor:lsaccessip', '') %}
 {% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
 {% set lsheap = salt['pillar.get']('node:lsheap', '') %}
 {% set nodetype = salt['pillar.get']('node:node_type', 'storage') %}
-
-{% elif grains['role'] == 'so-master' %}
-
+{% elif grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
 {% set lsheap = salt['pillar.get']('master:lsheap', '') %}
 {% set freq = salt['pillar.get']('master:freq', '0') %}
 {% set dstats = salt['pillar.get']('master:domainstats', '0') %}
 {% set nodetype = salt['grains.get']('role', '') %}
 {% elif grains['role'] == 'so-helix' %}
-
 {% set lsheap = salt['pillar.get']('master:lsheap', '') %}
 {% set freq = salt['pillar.get']('master:freq', '0') %}
 {% set dstats = salt['pillar.get']('master:domainstats', '0') %}
 {% set nodetype = salt['grains.get']('role', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set lsheap = salt['pillar.get']('master:lsheap', '') %}
-{% set freq = salt['pillar.get']('master:freq', '0') %}
-{% set dstats = salt['pillar.get']('master:domainstats', '0') %}
-{% set nodetype = salt['grains.get']('role', '') %}
 {% endif %}
 {% set PIPELINES = salt['pillar.get']('logstash:pipelines', {}) %}
@@ -211,7 +198,7 @@ so-logstash:
     - /etc/pki/ca.crt:/usr/share/filebeat/ca.crt:ro
 {%- if grains['role'] == 'so-eval' %}
     - /nsm/zeek:/nsm/zeek:ro
-    - /opt/so/log/suricata:/suricata:ro
+    - /nsm/suricata:/suricata:ro
     - /opt/so/wazuh/logs/alerts:/wazuh/alerts:ro
     - /opt/so/wazuh/logs/archives:/wazuh/archives:ro
     - /opt/so/log/fleet/:/osquery/logs:ro


@@ -0,0 +1 @@
# For custom logstash configs, they should be placed in /opt/so/saltstack/local/salt/logstash/pipelines/config/custom/


@@ -5,7 +5,7 @@ input {
     ssl_certificate_authorities => ["/usr/share/filebeat/ca.crt"]
     ssl_certificate => "/usr/share/logstash/filebeat.crt"
     ssl_key => "/usr/share/logstash/filebeat.key"
-    tags => [ "beat" ]
+    #tags => [ "beat" ]
   }
 }
 filter {


@@ -3,24 +3,21 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('node:mainip', '') -%}
 {%- endif %}
-# Author: Justin Henderson
-# SANS Instructor and author of SANS SEC555: SIEM and Tactical Analytics
-# Updated by: Doug Burks
-# Last Update: 5/15/2017
 filter {
-  if "syslog" in [tags] and "test_data" not in [tags] {
+  if [module] =~ "syslog" {
     mutate {
-      ##add_tag => [ "conf_file_9034"]
+      ##add_tag => [ "conf_file_9000"]
     }
   }
 }
 output {
-  if "syslog" in [tags] and "test_data" not in [tags] {
+  if [module] =~ "syslog" {
     elasticsearch {
+      pipeline => "%{module}"
       hosts => "{{ ES }}"
       index => "so-syslog-%{+YYYY.MM.dd}"
-      template_name => "logstash"
+      template_name => "so-common"
       template => "/so-common-template.json"
       template_overwrite => true
     }


@@ -1,2 +0,0 @@
# Reference /usr/share/logstash/pipeline.custom/templates/YOURTEMPLATE.json
#


@@ -0,0 +1,2 @@
# Reference /usr/share/logstash/pipeline.custom/templates/YOURTEMPLATE.json
# For custom logstash templates, they should be placed in /opt/so/saltstack/local/salt/logstash/pipelines/templates/custom/


@@ -1,10 +1,10 @@
 #!/usr/bin/env bash
 # This script adds pillar and schedule files securely
+local_salt_dir=/opt/so/saltstack/local
 MINION=$1
 echo "Adding $1"
-cp /tmp/$MINION/pillar/$MINION.sls /opt/so/saltstack/pillar/minions/
-cp /tmp/$MINION/schedules/* /opt/so/saltstack/salt/patch/os/schedules/
+cp /tmp/$MINION/pillar/$MINION.sls $local_salt_dir/pillar/minions/
+cp --parents /tmp/$MINION/schedules/* $local_salt_dir/salt/patch/os/schedules/
 rm -rf /tmp/$MINION


@@ -61,6 +61,7 @@ so-aptcacherng:
   docker_container.running:
     - image: {{ MASTER }}:5000/soshybridhunter/so-acng:{{ VERSION }}
     - hostname: so-acng
+    - restart_policy: always
     - port_bindings:
       - 0.0.0.0:3142:3142
     - binds:

salt/navigator/init.sls (new file)

@@ -0,0 +1,22 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set MASTER = salt['grains.get']('master') %}
navigatorconfig:
file.managed:
- name: /opt/so/conf/navigator/navigator_config.json
- source: salt://navigator/files/navigator_config.json
- user: 939
- group: 939
- makedirs: True
- template: jinja
so-navigator:
docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-navigator:{{ VERSION }}
- hostname: navigator
- name: so-navigator
- binds:
- /opt/so/conf/navigator/navigator_config.json:/nav-app/src/assets/config.json:ro
- /opt/so/conf/navigator/nav_layer_playbook.json:/nav-app/src/assets/playbook.json:ro
- port_bindings:
- 0.0.0.0:4200:4200


@@ -134,7 +134,7 @@ http {
         proxy_set_header Connection "Upgrade";
     }
-    location ~ ^/auth/.*?(whoami|login|logout) {
+    location ~ ^/auth/.*?(whoami|login|logout|settings) {
         rewrite /auth/(.*) /$1 break;
         proxy_pass http://{{ masterip }}:4433;
         proxy_read_timeout 90;


@@ -134,7 +134,7 @@ http {
         proxy_set_header Connection "Upgrade";
     }
-    location ~ ^/auth/.*?(whoami|login|logout) {
+    location ~ ^/auth/.*?(whoami|login|logout|settings) {
         rewrite /auth/(.*) /$1 break;
         proxy_pass http://{{ masterip }}:4433;
         proxy_read_timeout 90;


@@ -134,7 +134,7 @@ http {
         proxy_set_header Connection "Upgrade";
     }
-    location ~ ^/auth/.*?(whoami|login|logout) {
+    location ~ ^/auth/.*?(whoami|login|logout|settings) {
         rewrite /auth/(.*) /$1 break;
         proxy_pass http://{{ masterip }}:4433;
         proxy_read_timeout 90;


@@ -0,0 +1,325 @@
{%- set masterip = salt['pillar.get']('master:mainip', '') %}
{%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master') %}
{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 1024M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
#server {
# listen 80 default_server;
# listen [::]:80 default_server;
# server_name _;
# root /opt/socore/html;
# index index.html;
# Load configuration files for the default server block.
#include /etc/nginx/default.d/*.conf;
# location / {
# }
# error_page 404 /404.html;
# location = /40x.html {
# }
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
#}
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
{% if FLEET_MASTER %}
server {
listen 8090 ssl http2 default_server;
server_name _;
root /opt/socore/html;
index blank.html;
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location ~ ^/kolide.agent.Api/(RequestEnrollment|RequestConfig|RequestQueries|PublishLogs|PublishResults|CheckHealth)$ {
grpc_pass grpcs://{{ masterip }}:8080;
grpc_set_header Host $host;
grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering off;
}
}
{% endif %}
# Settings for a TLS enabled server.
server {
listen 443 ssl http2 default_server;
#listen [::]:443 ssl http2 default_server;
server_name _;
root /opt/socore/html;
index index.html;
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Load configuration files for the default server block.
#include /etc/nginx/default.d/*.conf;
location ~* (^/login/|^/js/.*|^/css/.*|^/images/.*) {
proxy_pass http://{{ masterip }}:9822;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {
auth_request /auth/sessions/whoami;
proxy_pass http://{{ masterip }}:9822/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location ~ ^/auth/.*?(whoami|login|logout) {
rewrite /auth/(.*) /$1 break;
proxy_pass http://{{ masterip }}:4433;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /cyberchef/ {
auth_request /auth/sessions/whoami;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /cyberchef {
rewrite ^ /cyberchef/ permanent;
}
location /packages/ {
try_files $uri =206;
auth_request /auth/sessions/whoami;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /grafana/ {
rewrite /grafana/(.*) /$1 break;
proxy_pass http://{{ masterip }}:3000/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /kibana/ {
auth_request /auth/sessions/whoami;
rewrite /kibana/(.*) /$1 break;
proxy_pass http://{{ masterip }}:5601/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /nodered/ {
proxy_pass http://{{ masterip }}:1880/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Proxy "";
}
location /playbook/ {
proxy_pass http://{{ masterip }}:3200/playbook/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /navigator/ {
auth_request /auth/sessions/whoami;
proxy_pass http://{{ masterip }}:4200/navigator/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
{%- if FLEET_NODE %}
location /fleet/ {
return 301 https://{{ FLEET_IP }}/fleet;
}
{%- else %}
location /fleet/ {
proxy_pass https://{{ masterip }}:8080;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
{%- endif %}
location /thehive/ {
proxy_pass http://{{ masterip }}:9000/thehive/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_http_version 1.1; # this is essential for chunked responses to work
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /cortex/ {
proxy_pass http://{{ masterip }}:9001/cortex/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_http_version 1.1; # this is essential for chunked responses to work
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /soctopus/ {
proxy_pass http://{{ masterip }}:7000/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
location /kibana/app/soc/ {
rewrite ^/kibana/app/soc/(.*) /soc/$1 permanent;
}
location /kibana/app/fleet/ {
rewrite ^/kibana/app/fleet/(.*) /fleet/$1 permanent;
}
location /kibana/app/soctopus/ {
rewrite ^/kibana/app/soctopus/(.*) /soctopus/$1 permanent;
}
location /sensoroniagents/ {
proxy_pass http://{{ masterip }}:9822/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
error_page 401 = @error401;
location @error401 {
add_header Set-Cookie "AUTH_REDIRECT=$request_uri;Path=/;Max-Age=14400";
return 302 /auth/self-service/browser/flows/login;
}
#error_page 404 /404.html;
# location = /40x.html {
#}
error_page 500 502 503 504 /50x.html;
location = /usr/share/nginx/html/50x.html {
}
}
}
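Most protected locations in the config above rely on nginx's `auth_request` pattern: each request triggers an internal subrequest to Kratos' `whoami` endpoint, the proxied app is only reached if that subrequest returns 2xx, and a 401 falls through to the `@error401` handler, which stores the original URI in a cookie before redirecting to the login flow. A stripped-down sketch of the pattern (hostnames illustrative; the real config routes `/auth/` through a regex location instead of an `internal` one):

```
server {
    listen 443 ssl;

    # Gate: ask Kratos whether the session cookie is valid.
    location = /auth/sessions/whoami {
        internal;                               # reachable only via subrequests
        proxy_pass http://kratos:4433/sessions/whoami;
    }

    # Protected app: only proxied when the whoami subrequest returns 2xx.
    location / {
        auth_request /auth/sessions/whoami;
        proxy_pass http://app:9822/;
    }

    # 401 from the gate: remember where the user was going, then send to login.
    error_page 401 = @error401;
    location @error401 {
        add_header Set-Cookie "AUTH_REDIRECT=$request_uri;Path=/;Max-Age=14400";
        return 302 /auth/self-service/browser/flows/login;
    }
}
```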


@@ -1,5 +1,6 @@
 {%- set ip = salt['pillar.get']('static:masterip', '') -%}
 #!/bin/bash
+default_salt_dir=/opt/so/saltstack/default
 echo "Waiting for connection"
 until $(curl --output /dev/null --silent --head http://{{ ip }}:1880); do
@@ -7,5 +8,5 @@ until $(curl --output /dev/null --silent --head http://{{ ip }}:1880); do
   sleep 1
 done
 echo "Loading flows..."
-curl -XPOST -v -H "Content-Type: application/json" -d @/opt/so/saltstack/salt/nodered/so_flows.json {{ ip }}:1880/flows
+curl -XPOST -v -H "Content-Type: application/json" -d @$default_salt_dir/salt/nodered/so_flows.json {{ ip }}:1880/flows
 echo "Done loading..."

Some files were not shown because too many files have changed in this diff.