Merge pull request #1230 from Security-Onion-Solutions/dev

2.1.0
This commit is contained in:
Mike Reeves
2020-08-24 10:25:28 -04:00
committed by GitHub
156 changed files with 11494 additions and 1071 deletions

View File

@@ -1,6 +1,6 @@
-## Security Onion 2.0.3.rc1
+## Security Onion 2.1.0.rc2
-Security Onion 2.0.3 RC1 is here! This version requires a fresh install, but there is good news - we have brought back soup! From now on, you should be able to run soup on the manager to upgrade your environment to RC2 and beyond!
+Security Onion 2.1.0 RC2 is here!
### Warnings and Disclaimers
@@ -14,24 +14,24 @@ Security Onion 2.0.3 RC1 is here! This version requires a fresh install, but the
### Release Notes
-https://docs.securityonion.net/en/2.0/release-notes.html
+https://docs.securityonion.net/en/2.1/release-notes.html
### Requirements
-https://docs.securityonion.net/en/2.0/hardware.html
+https://docs.securityonion.net/en/2.1/hardware.html
### Download
-https://docs.securityonion.net/en/2.0/download.html
+https://docs.securityonion.net/en/2.1/download.html
### Installation
-https://docs.securityonion.net/en/2.0/installation.html
+https://docs.securityonion.net/en/2.1/installation.html
### FAQ
-https://docs.securityonion.net/en/2.0/faq.html
+https://docs.securityonion.net/en/2.1/faq.html
### Feedback
-https://docs.securityonion.net/en/2.0/community-support.html
+https://docs.securityonion.net/en/2.1/community-support.html

View File

@@ -1,16 +1,16 @@
-### 2.0.3-rc1 ISO image built on 2020/07/28
+### 2.1.0-rc2 ISO image built on 2020/08/23
### Download and Verify
-2.0.3-rc1 ISO image:
+2.1.0-rc2 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.0.3-rc1.iso
+https://download.securityonion.net/file/securityonion/securityonion-2.1.0-rc2.iso
-MD5: 126EDE15589BCB44A64F51637E6BF720
+MD5: 9EAE772B64F5B3934C0DB7913E38D6D4
-SHA1: 5804EB797C655177533C55BB34569E1E2E0B2685
+SHA1: D0D347AE30564871DE81203C0CE53B950F8732CE
-SHA256: CDB9EEFEA965BD70ACC2FC64981A52BD83B85B47812261F79EC3930BB1924463
+SHA256: 888AC7758C975FAA0A7267E5EFCB082164AC7AC8DCB3B370C06BA0B8493DAC44
Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.0.3-rc1.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.1.0-rc2.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
@@ -24,22 +24,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
Download the signature file for the ISO:
```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.0.3-rc1.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.1.0-rc2.iso.sig
```
Download the ISO image:
```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.0.3-rc1.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.1.0-rc2.iso
```
Verify the downloaded ISO image using the signature file:
```
-gpg --verify securityonion-2.0.3-rc1.iso.sig securityonion-2.0.3-rc1.iso
+gpg --verify securityonion-2.1.0-rc2.iso.sig securityonion-2.1.0-rc2.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
-gpg: Signature made Tue 28 Jul 2020 10:36:55 PM EDT using RSA key ID FE507013
+gpg: Signature made Sun 23 Aug 2020 04:37:00 PM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
@@ -47,4 +47,4 @@ Primary key fingerprint: C804 A93D 36BE 0C73 3EA1 9644 7C10 60B7 FE50 7013
```
Once you've verified the ISO image, you're ready to proceed to our Installation guide:
-https://docs.securityonion.net/en/2.0/installation.html
+https://docs.securityonion.net/en/2.1/installation.html
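The checksum half of the verification above can also be scripted. A minimal sketch (the helper name `verify_iso` is mine, not part of Security Onion; the demo runs against a throwaway file rather than the real ISO, whose expected SHA256 is listed above):

```shell
#!/bin/bash
# Compare a file's SHA256 against an expected value, case-insensitively.
verify_iso() {
    local iso="$1" expected="$2" actual
    actual=$(sha256sum "$iso" | awk '{print toupper($1)}')
    if [ "$actual" = "$(echo "$expected" | tr '[:lower:]' '[:upper:]')" ]; then
        echo "OK: checksum matches"
    else
        echo "FAIL: expected $expected got $actual" >&2
        return 1
    fi
}

# Demo against a scratch file (sha256 of the string "hello"):
tmp=$(mktemp)
printf 'hello' > "$tmp"
verify_iso "$tmp" 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
rm -f "$tmp"
```

For the real ISO you would still run `gpg --verify` as shown above, since a checksum alone does not prove authenticity.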

View File

@@ -1 +1 @@
-2.0.3-rc.1
+2.1.0-rc.2

View File

@@ -13,6 +13,7 @@ role:
fleet:
heavynode:
helixsensor:
+import:
manager:
managersearch:
standalone:

View File

@@ -44,11 +44,11 @@ echo " guid: $GUID" >> $local_salt_dir/pillar/data/$TYPE.sls
echo " rootfs: $ROOTFS" >> $local_salt_dir/pillar/data/$TYPE.sls
echo " nsmfs: $NSM" >> $local_salt_dir/pillar/data/$TYPE.sls
if [ $TYPE == 'sensorstab' ]; then
-echo " monint: $MONINT" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " monint: bond0" >> $local_salt_dir/pillar/data/$TYPE.sls
salt-call state.apply grafana queue=True
fi
if [ $TYPE == 'evaltab' ] || [ $TYPE == 'standalonetab' ]; then
-echo " monint: $MONINT" >> $local_salt_dir/pillar/data/$TYPE.sls
+echo " monint: bond0" >> $local_salt_dir/pillar/data/$TYPE.sls
if [ ! $10 ]; then
salt-call state.apply grafana queue=True
salt-call state.apply utility queue=True

View File

@@ -1,11 +1,11 @@
-{%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
+{%- set FLEETMANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
-{%- set FLEETNODE = salt['pillar.get']('static:fleet_node', False) -%}
+{%- set FLEETNODE = salt['pillar.get']('global:fleet_node', False) -%}
{% set WAZUH = salt['pillar.get']('manager:wazuh', '0') %}
{% set THEHIVE = salt['pillar.get']('manager:thehive', '0') %}
{% set PLAYBOOK = salt['pillar.get']('manager:playbook', '0') %}
{% set FREQSERVER = salt['pillar.get']('manager:freq', '0') %}
{% set DOMAINSTATS = salt['pillar.get']('manager:domainstats', '0') %}
-{% set ZEEKVER = salt['pillar.get']('static:zeekversion', 'COMMUNITY') %}
+{% set ZEEKVER = salt['pillar.get']('global:zeekversion', 'COMMUNITY') %}
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
eval:

View File

@@ -33,6 +33,8 @@ firewall:
- 9300
- 9400
- 9500
+- 9595
+- 9696
udp:
- 1514
minions:

View File

@@ -1,7 +1,6 @@
logstash:
docker_options:
port_bindings:
-- 0.0.0.0:514:514
- 0.0.0.0:5044:5044
- 0.0.0.0:5644:5644
- 0.0.0.0:6050:6050

View File

@@ -1,3 +1,4 @@
+{%- set PIPELINE = salt['pillar.get']('global:pipeline', 'redis') %}
logstash:
pipelines:
manager:
@@ -5,3 +6,4 @@ logstash:
- so/0009_input_beats.conf
- so/0010_input_hhbeats.conf
- so/9999_output_redis.conf.jinja

View File

@@ -1,3 +1,4 @@
+{%- set PIPELINE = salt['pillar.get']('global:pipeline', 'minio') %}
logstash:
pipelines:
search:

View File

@@ -2,7 +2,7 @@ base:
'*':
- patch.needs_restarting
-'*_eval or *_helix or *_heavynode or *_sensor or *_standalone':
+'*_eval or *_helix or *_heavynode or *_sensor or *_standalone or *_import':
- match: compound
- zeek
@@ -14,14 +14,14 @@ base:
- elasticsearch.search
'*_sensor':
-- static
+- global
- zeeklogs
- healthcheck.sensor
- minions.{{ grains.id }}
'*_manager or *_managersearch':
- match: compound
-- static
+- global
- data.*
- secrets
- minions.{{ grains.id }}
@@ -36,7 +36,7 @@ base:
- secrets
- healthcheck.eval
- elasticsearch.eval
-- static
+- global
- minions.{{ grains.id }}
'*_standalone':
@@ -48,20 +48,20 @@ base:
- zeeklogs
- secrets
- healthcheck.standalone
-- static
+- global
- minions.{{ grains.id }}
'*_node':
-- static
+- global
- minions.{{ grains.id }}
'*_heavynode':
-- static
+- global
- zeeklogs
- minions.{{ grains.id }}
'*_helix':
-- static
+- global
- fireeye
- zeeklogs
- logstash
@@ -69,14 +69,21 @@ base:
- minions.{{ grains.id }}
'*_fleet':
-- static
+- global
- data.*
- secrets
- minions.{{ grains.id }}
'*_searchnode':
-- static
+- global
- logstash
- logstash.search
- elasticsearch.search
- minions.{{ grains.id }}
+'*_import':
+- zeeklogs
+- secrets
+- elasticsearch.eval
+- global
+- minions.{{ grains.id }}

View File

@@ -16,6 +16,10 @@ pki_private_key:
- passphrase:
- cipher: aes_256_cbc
- backup: True
+{% if salt['file.file_exists']('/etc/pki/ca.key') -%}
+- prereq:
+- x509: /etc/pki/ca.crt
+{%- endif %}
/etc/pki/ca.crt:
x509.certificate_managed:
@@ -32,17 +36,14 @@ pki_private_key:
- days_valid: 3650
- days_remaining: 0
- backup: True
-- managed_private_key:
-name: /etc/pki/ca.key
-bits: 4096
-backup: True
+- replace: False
- require:
- file: /etc/pki
-send_x509_pem_entries_to_mine:
+x509_pem_entries:
module.run:
- mine.send:
-- func: x509.get_pem_entries
+- name: x509.get_pem_entries
- glob_path: /etc/pki/ca.crt
cakeyperms:

View File

@@ -0,0 +1,10 @@
+{% set docker = {
+'containers': [
+'so-filebeat',
+'so-nginx',
+'so-soc',
+'so-kratos',
+'so-elasticsearch',
+'so-kibana'
+]
+} %}

View File

@@ -20,7 +20,7 @@
{% if role in ['eval', 'managersearch', 'manager', 'standalone'] %}
{{ append_containers('manager', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_manager', 0) }}
+{{ append_containers('global', 'fleet_manager', 0) }}
{{ append_containers('manager', 'wazuh', 0) }}
{{ append_containers('manager', 'thehive', 0) }}
{{ append_containers('manager', 'playbook', 0) }}
@@ -29,11 +29,11 @@
{% endif %}
{% if role in ['eval', 'heavynode', 'sensor', 'standalone'] %}
-{{ append_containers('static', 'strelka', 0) }}
+{{ append_containers('global', 'strelka', 0) }}
{% endif %}
{% if role in ['heavynode', 'standalone'] %}
-{{ append_containers('static', 'zeekversion', 'SURICATA') }}
+{{ append_containers('global', 'zeekversion', 'SURICATA') }}
{% endif %}
{% if role == 'searchnode' %}
@@ -41,5 +41,5 @@
{% endif %}
{% if role == 'sensor' %}
-{{ append_containers('static', 'zeekversion', 'SURICATA') }}
+{{ append_containers('global', 'zeekversion', 'SURICATA') }}
{% endif %}

View File

@@ -21,6 +21,30 @@ local_salt_dir=/opt/so/saltstack/local
SKIP=0
+function usage {
+cat << EOF
+Usage: $0 [-abefhoprsw] [ -i IP ]
+This program allows you to add a firewall rule to allow connections from a new IP address or CIDR range.
+If you run this program with no arguments, it will present a menu for you to choose your options.
+If you want to automate and skip the menu, you can pass the desired options as command line arguments.
+EXAMPLES
+To add 10.1.2.3 to the analyst role:
+so-allow -a -i 10.1.2.3
+To add 10.1.2.0/24 to the osquery role:
+so-allow -o -i 10.1.2.0/24
+EOF
+}
while getopts "ahfesprbowi:" OPTION
do
case $OPTION in
@@ -127,7 +151,7 @@ salt-call state.apply firewall queue=True
if grep -q -R "wazuh: 1" $local_salt_dir/pillar/*; then
# If analyst, add to Wazuh AR whitelist
if [ "$FULLROLE" == "analyst" ]; then
-WAZUH_MGR_CFG="/opt/so/wazuh/etc/ossec.conf"
+WAZUH_MGR_CFG="/nsm/wazuh/etc/ossec.conf"
if ! grep -q "<white_list>$IP</white_list>" $WAZUH_MGR_CFG ; then
DATE=$(date)
sed -i 's/<\/ossec_config>//' $WAZUH_MGR_CFG
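The so-allow changes above add a `getopts`-driven non-interactive mode. A standalone sketch of that parsing pattern (the `parse_args` helper, its reduced option set `-a`/`-o`/`-i`, and the loose CIDR check are illustrative, not so-allow's actual code):

```shell
#!/bin/bash
# Parse a role flag plus an -i IP/CIDR argument, getopts-style.
parse_args() {
    local OPTIND opt
    ROLE="" IP=""
    while getopts "aoi:" opt "$@"; do
        case $opt in
            a) ROLE="analyst" ;;
            o) ROLE="osquery" ;;
            i) IP="$OPTARG" ;;
            *) echo "Usage: parse_args [-ao] [-i IP]" >&2; return 1 ;;
        esac
    done
    # Very loose IPv4 / CIDR sanity check
    if ! [[ "$IP" =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}(/[0-9]{1,2})?$ ]]; then
        echo "Invalid IP or CIDR: $IP" >&2
        return 1
    fi
    echo "role=$ROLE ip=$IP"
}

parse_args -a -i 10.1.2.3      # → role=analyst ip=10.1.2.3
parse_args -o -i 10.1.2.0/24   # → role=osquery ip=10.1.2.0/24
```

Declaring `OPTIND` local lets the function be called repeatedly, which the real script (invoked once per run) does not need.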

View File

@@ -76,6 +76,7 @@ if [ $MANAGERCHECK != 'so-helix' ]; then
"so-kibana:$VERSION" \
"so-kratos:$VERSION" \
"so-logstash:$VERSION" \
+"so-minio:$VERSION" \
"so-mysql:$VERSION" \
"so-nginx:$VERSION" \
"so-pcaptools:$VERSION" \

View File

@@ -14,7 +14,7 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-{%- set MANAGERIP = salt['pillar.get']('static:managerip', '') -%}
+{%- set MANAGERIP = salt['pillar.get']('global:managerip', '') -%}
. /usr/sbin/so-common
SKIP=0

View File

@@ -29,9 +29,9 @@ manager_check() {
}
manager_check
-VERSION=$(grep soversion $local_salt_dir/pillar/static.sls | cut -d':' -f2|sed 's/ //g')
+VERSION=$(grep soversion $local_salt_dir/pillar/global.sls | cut -d':' -f2|sed 's/ //g')
-# Modify static.sls to enable Features
+# Modify global.sls to enable Features
-sed -i 's/features: False/features: True/' $local_salt_dir/pillar/static.sls
+sed -i 's/features: False/features: True/' $local_salt_dir/pillar/global.sls
SUFFIX="-features"
TRUSTED_CONTAINERS=( \
"so-elasticsearch:$VERSION$SUFFIX" \

View File

@@ -15,10 +15,13 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set MANAGER = salt['grains.get']('master') %}
+{%- set MANAGER = salt['grains.get']('master') %}
-{% set VERSION = salt['pillar.get']('static:soversion') %}
+{%- set VERSION = salt['pillar.get']('global:soversion') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{%- set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
-{%- set MANAGERIP = salt['pillar.get']('static:managerip') -%}
+{%- set MANAGERIP = salt['pillar.get']('global:managerip') -%}
+{%- set URLBASE = salt['pillar.get']('global:url_base') %}
+. /usr/sbin/so-common
function usage {
cat << EOF
@@ -32,13 +35,13 @@ EOF
function pcapinfo() {
PCAP=$1
ARGS=$2
-docker run --rm -v $PCAP:/input.pcap --entrypoint capinfos {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap $ARGS
+docker run --rm -v "$PCAP:/input.pcap" --entrypoint capinfos {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap $ARGS
}
function pcapfix() {
PCAP=$1
PCAP_OUT=$2
-docker run --rm -v $PCAP:/input.pcap -v $PCAP_OUT:$PCAP_OUT --entrypoint pcapfix {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap -o $PCAP_OUT > /dev/null 2>&1
+docker run --rm -v "$PCAP:/input.pcap" -v $PCAP_OUT:$PCAP_OUT --entrypoint pcapfix {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap -o $PCAP_OUT > /dev/null 2>&1
}
function suricata() {
@@ -57,7 +60,7 @@ function suricata() {
-v /opt/so/conf/suricata/rules:/etc/suricata/rules:ro \
-v ${LOG_PATH}:/var/log/suricata/:rw \
-v ${NSM_PATH}/:/nsm/:rw \
--v $PCAP:/input.pcap:ro \
+-v "$PCAP:/input.pcap:ro" \
-v /opt/so/conf/suricata/bpf:/etc/suricata/bpf:ro \
{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-suricata:{{ VERSION }} \
--runmode single -k none -r /input.pcap > $LOG_PATH/console.log 2>&1
@@ -76,7 +79,7 @@ function zeek() {
-v $NSM_PATH/logs:/nsm/zeek/logs:rw \
-v $NSM_PATH/spool:/nsm/zeek/spool:rw \
-v $NSM_PATH/extracted:/nsm/zeek/extracted:rw \
--v $PCAP:/input.pcap:ro \
+-v "$PCAP:/input.pcap:ro" \
-v /opt/so/conf/zeek/local.zeek:/opt/zeek/share/zeek/site/local.zeek:ro \
-v /opt/so/conf/zeek/node.cfg:/opt/zeek/etc/node.cfg:ro \
-v /opt/so/conf/zeek/zeekctl.cfg:/opt/zeek/etc/zeekctl.cfg:ro \
@@ -210,9 +213,9 @@ cat << EOF
Import complete!
You can use the following hyperlink to view data in the time range of your import. You can triple-click to quickly highlight the entire hyperlink and you can then copy it into your browser:
-https://{{ MANAGERIP }}/#/hunt?q=%2a%20%7C%20groupby%20event.module%20event.dataset&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM
+https://{{ URLBASE }}/#/hunt?q=import.id:${HASH}%20%7C%20groupby%20event.module%20event.dataset&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC
-or you can manually set your Time Range to be:
+or you can manually set your Time Range to be (in UTC):
From: $START_OLDEST To: $END_NEWEST
Please note that it may take 30 seconds or more for events to appear in Onion Hunt.

View File

@@ -1,9 +1,9 @@
#!/bin/bash
#
-# {%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
+# {%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
-# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('global:fleet_node', False) -%}
-# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', '') %}
-# {%- set MANAGER = salt['pillar.get']('manager:url_base', '') %}
+# {%- set MANAGER = salt['pillar.get']('global:url_base', '') %}
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#

View File

@@ -17,4 +17,4 @@
. /usr/sbin/so-common
-docker exec so-soctopus python3 playbook_play-sync.py >> /opt/so/log/soctopus/so-playbook-sync.log 2>&1
+docker exec so-soctopus python3 playbook_play-sync.py

View File

@@ -10,4 +10,4 @@ got_root() {
}
got_root
-docker exec -it so-idstools /bin/bash -c 'cd /opt/so/idstools/etc && idstools-rulecat'
+docker exec -d so-idstools /bin/bash -c 'cd /opt/so/idstools/etc && idstools-rulecat'

View File

@@ -18,13 +18,18 @@
. /usr/sbin/so-common
UPDATE_DIR=/tmp/sogh/securityonion
INSTALLEDVERSION=$(cat /etc/soversion)
-default_salt_dir=/opt/so/saltstack/default
+INSTALLEDSALTVERSION=$(salt --versions-report | grep Salt: | awk {'print $2'})
+DEFAULT_SALT_DIR=/opt/so/saltstack/default
+BATCHSIZE=5
+SOUP_LOG=/root/soup.log
+exec 3>&1 1>${SOUP_LOG} 2>&1
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
-if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
+if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch'|'so-import')$ ]]; then
-echo "This is a manager. We can proceed"
+echo "This is a manager. We can proceed."
+MINIONID=$(salt-call grains.get id --out=txt|awk -F: {'print $2'}|tr -d ' ')
else
echo "Please run soup on the manager. The manager controls all updates."
exit 0
@@ -58,28 +63,122 @@ clone_to_tmp() {
copy_new_files() {
# Copy new files over to the salt dir
cd /tmp/sogh/securityonion
-rsync -a salt $default_salt_dir/
+rsync -a salt $DEFAULT_SALT_DIR/
-rsync -a pillar $default_salt_dir/
+rsync -a pillar $DEFAULT_SALT_DIR/
-chown -R socore:socore $default_salt_dir/
+chown -R socore:socore $DEFAULT_SALT_DIR/
-chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh
+chmod 755 $DEFAULT_SALT_DIR/pillar/firewall/addfirewall.sh
cd /tmp
}
+detect_os() {
+# Detect Base OS
+echo "Determining Base OS." >> "$SOUP_LOG" 2>&1
+if [ -f /etc/redhat-release ]; then
+OS="centos"
+elif [ -f /etc/os-release ]; then
+OS="ubuntu"
+fi
+echo "Found OS: $OS" >> "$SOUP_LOG" 2>&1
+}
highstate() {
# Run a highstate but first cancel a running one.
salt-call saltutil.kill_all_jobs
-salt-call state.highstate
+salt-call state.highstate -l info
}
+masterlock() {
+echo "Locking Salt Master"
+if [[ "$INSTALLEDVERSION" =~ rc.1 ]]; then
+TOPFILE=/opt/so/saltstack/default/salt/top.sls
+BACKUPTOPFILE=/opt/so/saltstack/default/salt/top.sls.backup
+mv -v $TOPFILE $BACKUPTOPFILE
+echo "base:" > $TOPFILE
+echo " $MINIONID:" >> $TOPFILE
+echo " - ca" >> $TOPFILE
+echo " - ssl" >> $TOPFILE
+echo " - elasticsearch" >> $TOPFILE
+fi
+}
+masterunlock() {
+echo "Unlocking Salt Master"
+if [[ "$INSTALLEDVERSION" =~ rc.1 ]]; then
+mv -v $BACKUPTOPFILE $TOPFILE
+fi
+}
+playbook() {
+echo "Applying playbook settings"
+if [[ "$INSTALLEDVERSION" =~ rc.1 ]]; then
+salt-call state.apply playbook.db_init
+rm -f /opt/so/rules/elastalert/playbook/*.yaml
+so-playbook-ruleupdate >> /root/soup_playbook_rule_update.log 2>&1 &
+fi
+}
pillar_changes() {
# This function is to add any new pillar items if needed.
-echo "Checking to see if pillar changes are needed"
+echo "Checking to see if pillar changes are needed."
+# Move baseurl in global.sls
+if [[ "$INSTALLEDVERSION" =~ rc.1 ]]; then
+# Move the static file to global.sls
+echo "Migrating static.sls to global.sls"
+mv -v /opt/so/saltstack/local/pillar/static.sls /opt/so/saltstack/local/pillar/global.sls >> "$SOUP_LOG" 2>&1
+sed -i '1c\global:' /opt/so/saltstack/local/pillar/global.sls >> "$SOUP_LOG" 2>&1
+# Moving baseurl from minion sls file to inside global.sls
+local line=$(grep '^ url_base:' /opt/so/saltstack/local/pillar/minions/$MINIONID.sls)
+sed -i '/^ url_base:/d' /opt/so/saltstack/local/pillar/minions/$MINIONID.sls;
+sed -i "/^global:/a \\$line" /opt/so/saltstack/local/pillar/global.sls;
+# Adding play values to the global.sls
+local HIVEPLAYSECRET=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 20 | head -n 1)
+local CORTEXPLAYSECRET=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 20 | head -n 1)
+sed -i "/^global:/a \\ hiveplaysecret: $HIVEPLAYSECRET" /opt/so/saltstack/local/pillar/global.sls;
+sed -i "/^global:/a \\ cortexplaysecret: $CORTEXPLAYSECRET" /opt/so/saltstack/local/pillar/global.sls;
+# Move storage nodes to hostname for SSL
+# Get a list we can use:
+grep -A1 searchnode /opt/so/saltstack/local/pillar/data/nodestab.sls | grep -v '\-\-' | sed '$!N;s/\n/ /' | awk '{print $1,$3}' | awk '/_searchnode:/{gsub(/\_searchnode:/, "_searchnode"); print}' >/tmp/nodes.txt
+# Remove the nodes from cluster settings
+while read p; do
+local NAME=$(echo $p | awk '{print $1}')
+local IP=$(echo $p | awk '{print $2}')
+echo "Removing the old cross cluster config for $NAME"
+curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_cluster/settings -d '{"persistent":{"cluster":{"remote":{"'$NAME'":{"skip_unavailable":null,"seeds":null}}}}}'
+done </tmp/nodes.txt
+# Add the nodes back using hostname
+while read p; do
+local NAME=$(echo $p | awk '{print $1}')
+local EHOSTNAME=$(echo $p | awk -F"_" '{print $1}')
+local IP=$(echo $p | awk '{print $2}')
+echo "Adding the new cross cluster config for $NAME"
+curl -XPUT http://localhost:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"'$NAME'": {"skip_unavailable": "true", "seeds": ["'$EHOSTNAME':9300"]}}}}}'
+done </tmp/nodes.txt
+fi
} }
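The pillar migration in pillar_changes above leans on GNU sed's append command (`/^global:/a`) to re-home keys under the new `global:` header. A standalone sketch of that pattern, using scratch files rather than the real pillar paths (the `/tmp/pillar-demo` files and the two-space indent are my own; soup greps for a single-space indent):

```shell
#!/bin/bash
# Scratch files standing in for a minion pillar and global.sls.
mkdir -p /tmp/pillar-demo
printf 'minion:\n  url_base: onion.example.com\n' > /tmp/pillar-demo/minion.sls
printf 'global:\n  soversion: 2.1.0\n' > /tmp/pillar-demo/global.sls

# Capture the line, delete it from the source file, then append it
# directly under the "global:" header, exactly as soup does.
line=$(grep '^  url_base:' /tmp/pillar-demo/minion.sls)
sed -i '/^  url_base:/d' /tmp/pillar-demo/minion.sls
sed -i "/^global:/a \\$line" /tmp/pillar-demo/global.sls

cat /tmp/pillar-demo/global.sls
```

The backslash before `$line` tells sed to take the appended text literally, preserving its leading indentation.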
update_dockers() {
# List all the containers
-if [ $MANAGERCHECK != 'so-helix' ]; then
+if [ $MANAGERCHECK == 'so-import' ]; then
+TRUSTED_CONTAINERS=( \
+"so-idstools" \
+"so-nginx" \
+"so-filebeat" \
+"so-suricata" \
+"so-soc" \
+"so-elasticsearch" \
+"so-kibana" \
+"so-kratos" \
+"so-suricata" \
+"so-registry" \
+"so-pcaptools" \
+"so-zeek" )
+elif [ $MANAGERCHECK != 'so-helix' ]; then
TRUSTED_CONTAINERS=( \
"so-acng" \
"so-thehive-cortex" \
@@ -97,6 +196,7 @@ update_dockers() {
"so-kibana" \
"so-kratos" \
"so-logstash" \
+"so-minio" \
"so-mysql" \
"so-nginx" \
"so-pcaptools" \
@@ -143,9 +243,9 @@ update_dockers() {
update_version() {
# Update the version to the latest
-echo "Updating the version file."
+echo "Updating the Security Onion version file."
echo $NEWVERSION > /etc/soversion
-sed -i "s/$INSTALLEDVERSION/$NEWVERSION/g" /opt/so/saltstack/local/pillar/static.sls
+sed -i "s/$INSTALLEDVERSION/$NEWVERSION/g" /opt/so/saltstack/local/pillar/global.sls
}
upgrade_check() {
@@ -154,8 +254,44 @@ upgrade_check() {
if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
echo "You are already running the latest version of Security Onion."
exit 0
+fi
+}
+upgrade_check_salt() {
+NEWSALTVERSION=$(grep version: $UPDATE_DIR/salt/salt/master.defaults.yaml | awk {'print $2'})
+if [ "$INSTALLEDSALTVERSION" == "$NEWSALTVERSION" ]; then
+echo "You are already running the correct version of Salt for Security Onion."
else
-echo "Performing Upgrade from $INSTALLEDVERSION to $NEWVERSION"
+SALTUPGRADED=True
+echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
+echo ""
+# If CentOS
+if [ "$OS" == "centos" ]; then
+echo "Removing yum versionlock for Salt."
+echo ""
+yum versionlock delete "salt-*"
+echo "Updating Salt packages and restarting services."
+echo ""
+sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
+echo "Applying yum versionlock for Salt."
+echo ""
+yum versionlock add "salt-*"
+# Else do Ubuntu things
+elif [ "$OS" == "ubuntu" ]; then
+echo "Removing apt hold for Salt."
+echo ""
+apt-mark unhold "salt-common"
+apt-mark unhold "salt-master"
+apt-mark unhold "salt-minion"
+echo "Updating Salt packages and restarting services."
+echo ""
+sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
+echo "Applying apt hold for Salt."
+echo ""
+apt-mark hold "salt-common"
+apt-mark hold "salt-master"
+apt-mark hold "salt-minion"
+fi
fi
}
@@ -167,41 +303,111 @@ verify_latest_update_script() {
echo "This version of the soup script is up to date. Proceeding."
else
echo "You are not running the latest soup version. Updating soup."
-cp $UPDATE_DIR/salt/common/tools/sbin/soup $default_salt_dir/salt/common/tools/sbin/
+cp $UPDATE_DIR/salt/common/tools/sbin/soup $DEFAULT_SALT_DIR/salt/common/tools/sbin/
salt-call state.apply common queue=True
echo ""
-echo "soup has been updated. Please run soup again"
+echo "soup has been updated. Please run soup again."
exit 0
fi
}
-echo "Checking to see if this is a manager"
+main () {
+while getopts ":b" opt; do
+case "$opt" in
+b ) # process option b
+shift
+BATCHSIZE=$1
+if ! [[ "$BATCHSIZE" =~ ^[0-9]+$ ]]; then
+echo "Batch size must be a number greater than 0."
+exit 1
+fi
+;;
+\? ) echo "Usage: cmd [-b]"
+;;
+esac
+done
+echo "Checking to see if this is a manager."
+echo ""
manager_check
-echo "Cloning latest code to a temporary location"
+echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
+echo ""
+detect_os
+echo ""
+echo "Cloning Security Onion github repo into $UPDATE_DIR."
clone_to_tmp
echo ""
-echo "Verifying we have the latest script"
+echo "Verifying we have the latest soup script."
verify_latest_update_script
echo ""
-echo "Let's see if we need to update"
+echo "Let's see if we need to update Security Onion."
upgrade_check
echo ""
-echo "Making pillar changes"
+echo "Performing upgrade from Security Onion $INSTALLEDVERSION to Security Onion $NEWVERSION."
+echo ""
+echo "Stopping Salt Minion service."
+systemctl stop salt-minion
+echo ""
+echo "Stopping Salt Master service."
+systemctl stop salt-master
+echo ""
+echo "Checking for Salt Master and Minion updates."
+upgrade_check_salt
+echo "Making pillar changes."
pillar_changes
echo ""
-echo "Cleaning up old dockers"
+echo "Cleaning up old dockers."
clean_dockers
echo ""
-echo "Updating docker to $NEWVERSION"
+echo "Updating dockers to $NEWVERSION."
update_dockers
echo ""
-echo "Copying new code"
+echo "Copying new Security Onion code from $UPDATE_DIR to $DEFAULT_SALT_DIR."
copy_new_files
echo ""
-echo "Updating version"
update_version
echo ""
-echo "Running a highstate to complete upgrade"
+echo "Locking down Salt Master for upgrade"
masterlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
echo ""
echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
highstate
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
echo ""
echo "Stopping Salt Master to remove ACL"
systemctl stop salt-master
masterunlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
highstate
playbook
SALTUPGRADED="True"
if [[ "$SALTUPGRADED" == "True" ]]; then
echo ""
echo "Upgrading Salt on the remaining Security Onion nodes from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
salt -C 'not *_eval and not *_helix and not *_manager and not *_managersearch and not *_standalone' -b $BATCHSIZE state.apply salt.minion
echo ""
fi
}
main "$@" | tee /dev/fd/3
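The `main "$@" | tee /dev/fd/3` call duplicates main's output onto file descriptor 3, which soup opens earlier in the script. A sketch of the mechanics only, opening fd 3 on a scratch log file here since the real fd setup lives outside this excerpt:

```shell
#!/usr/bin/env bash
# Illustration only: open fd 3 on a temp log, then run a stand-in main()
# through "tee /dev/fd/3" so its output lands on stdout AND in the log.
LOG=$(mktemp)
exec 3>"$LOG"

main() {
  echo "Checking to see if this is a manager."
}

main "$@" | tee /dev/fd/3

exec 3>&-                 # close the log fd
grep -c "manager" "$LOG"  # → 1
```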


@@ -1,5 +1,5 @@
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-eval', 'so-node', 'so-managersearch', 'so-heavynode', 'so-standalone'] %}
# Curator


@@ -1,4 +1,4 @@
{%- set FLEETSETUP = salt['pillar.get']('global:fleetsetup', '0') -%}
{%- if FLEETSETUP != 0 %}
launcherpkg:


@@ -13,7 +13,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
# Create the group
dstatsgroup:


@@ -16,12 +16,12 @@ disable_rules_on_error: false
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 3
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 10
# The maximum time between queries for ElastAlert to start at the most recently
# run query. When ElastAlert starts, for each rule, it will search elastalert_metadata
@@ -38,7 +38,7 @@ es_host: {{ esip }}
es_port: {{ esport }}
# Sets timeout for connecting to and reading from es_host
es_conn_timeout: 55
# The maximum number of documents that will be downloaded from Elasticsearch in
# a single query. The default is 10,000, and if you expect to get near this number,


@@ -16,7 +16,7 @@ class PlaybookESAlerter(Alerter):
today = strftime("%Y.%m.%d", gmtime())
timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S", gmtime())
headers = {"Content-Type": "application/json"}
payload = {"rule.name": self.rule['play_title'], "event.severity": self.rule['event.severity'], "kibana_pivot": self.rule['kibana_pivot'], "soc_pivot": self.rule['soc_pivot'], "event.module": self.rule['event.module'], "event.dataset": self.rule['event.dataset'], "play_url": self.rule['play_url'], "sigma_level": self.rule['sigma_level'], "rule.category": self.rule['rule.category'], "event_data": match, "@timestamp": timestamp}
url = f"http://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
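The alerter indexes one JSON document per match into a dated `so-playbook-alerts-*` index over plain HTTP. A shell sketch of the same POST (the payload values and `ES_HOST` are placeholders; the curl line stays commented because it needs a live Elasticsearch):

```shell
#!/usr/bin/env bash
# Build a payload shaped like the alerter's and sanity-check it locally.
TODAY=$(date -u +%Y.%m.%d)   # same gmtime()-based date format as the alerter
PAYLOAD='{"rule.name": "Example Play", "event.severity": 3, "event_data": {"source.ip": "10.0.0.5"}}'

# Validate the JSON before (hypothetically) posting it.
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload ok"

# With a reachable manager (ES_HOST is a placeholder):
# curl -s -X POST "http://${ES_HOST}:9200/so-playbook-alerts-${TODAY}/_doc/" \
#      -H "Content-Type: application/json" -d "$PAYLOAD"
```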


@@ -1,21 +1,17 @@
{% set es = salt['pillar.get']('global:managerip', '') %}
{% set hivehost = salt['pillar.get']('global:managerip', '') %}
{% set hivekey = salt['pillar.get']('global:hivekey', '') %}
{% set MANAGER = salt['pillar.get']('global:url_base', '') %}
# Elastalert rule to forward Suricata alerts from Security Onion to a specified TheHive instance.
#
es_host: {{es}}
es_port: 9200
name: Suricata-Alert
type: any
index: "*:so-ids-*"
buffer_time:
  minutes: 5
query_key: ["rule.uuid","source.ip","destination.ip"]
realert:
  days: 1


@@ -1,21 +1,17 @@
{% set es = salt['pillar.get']('global:managerip', '') %}
{% set hivehost = salt['pillar.get']('global:managerip', '') %}
{% set hivekey = salt['pillar.get']('global:hivekey', '') %}
{% set MANAGER = salt['pillar.get']('global:url_base', '') %}
# Elastalert rule to forward high level Wazuh alerts from Security Onion to a specified TheHive instance.
#
es_host: {{es}}
es_port: 9200
name: Wazuh-Alert
type: any
index: "*:so-ossec-*"
buffer_time:
  minutes: 5
realert:
  days: 1
filter:


@@ -12,8 +12,8 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone'] %}


@@ -5,6 +5,7 @@
{%- set ESCLUSTERNAME = salt['pillar.get']('elasticsearch:esclustername', '') %}
{%- endif %}
{%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
cluster.name: "{{ ESCLUSTERNAME }}"
network.host: 0.0.0.0
@@ -16,12 +17,30 @@ discovery.zen.minimum_master_nodes: 1
path.logs: /var/log/elasticsearch
action.destructive_requires_name: true
transport.bind_host: 0.0.0.0
transport.publish_host: {{ grains.host }}
transport.publish_port: 9300
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 95%
cluster.routing.allocation.disk.watermark.high: 98%
cluster.routing.allocation.disk.watermark.flood_stage: 98%
{%- if FEATURES is sameas true %}
#xpack.security.enabled: false
#xpack.security.http.ssl.enabled: false
#xpack.security.transport.ssl.enabled: false
#xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
#xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
#xpack.security.http.ssl.certificate_authorities: /usr/share/elasticsearch/config/ca.crt
#xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
#xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
#xpack.security.transport.ssl.certificate_authorities: /usr/share/elasticsearch/config/ca.crt
#xpack.security.transport.ssl.verification_mode: none
#xpack.security.http.ssl.client_authentication: none
#xpack.security.authc:
# anonymous:
# username: anonymous_user
# roles: superuser
# authz_exception: true
{%- endif %}
node.attr.box_type: {{ NODE_ROUTE_TYPE }}
node.name: {{ ESCLUSTERNAME }}
script.max_compilations_rate: 1000/1m


@@ -1,53 +1,8 @@
{
"description" : "beats.common",
"processors" : [
{ "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },
{ "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "name":"win.eventlogs" } },
{ "pipeline": { "name": "common" } }
]
}
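The reworked beats.common pipeline is now just a router: Sysmon-channel events go to a dedicated `sysmon` pipeline, all other Windows event logs to `win.eventlogs`, and everything then falls through to `common`. The branch logic, mirrored in plain shell purely for illustration:

```shell
#!/usr/bin/env bash
# Shell mirror of the pipeline's "if" conditions; illustration only.
route_event() {
  local channel="$1"
  if [ "$channel" = "Microsoft-Windows-Sysmon/Operational" ]; then
    echo "sysmon"
  else
    echo "win.eventlogs"
  fi
}

route_event "Microsoft-Windows-Sysmon/Operational"  # → sysmon
route_event "Security"                              # → win.eventlogs
```

On a live manager, `POST /_ingest/pipeline/beats.common/_simulate` with a sample document is the authoritative way to check the routing.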


@@ -43,12 +43,13 @@
{ "set": { "if": "ctx.event?.severity == 4", "field": "event.severity_label", "value": "critical", "override": true } },
{ "rename": { "field": "module", "target_field": "event.module", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "dataset", "target_field": "event.dataset", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "category", "target_field": "event.category", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } },
{ "lowercase": { "field": "event.dataset", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "destination.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "source.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "log.id.uid", "type": "string", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "agent.id", "type": "string", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "event.severity", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{
"remove": {


@@ -2,78 +2,24 @@
"description" : "osquery", "description" : "osquery",
"processors" : [ "processors" : [
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } }, { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "gsub": { "field": "message2.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } }, { "gsub": { "field": "message2.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } },
{ "json": { "field": "message2.columns.data", "target_field": "message2.columns.winlog", "ignore_failure": true } }, { "rename": { "if": "ctx.message2.columns?.eventid != null", "field": "message2.columns", "target_field": "winlog", "ignore_missing": true } },
{ "json": { "field": "winlog.data", "target_field": "temp", "ignore_failure": true } },
{ "rename": { "field": "temp.Data", "target_field": "winlog.event_data", "ignore_missing": true } },
{ "rename": { "field": "winlog.source", "target_field": "winlog.channel", "ignore_missing": true } },
{ "rename": { "field": "winlog.eventid", "target_field": "winlog.event_id", "ignore_missing": true } },
{ "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },
{ "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "name":"win.eventlogs" } },
{ {
"script": { "script": {
"lang": "painless", "lang": "painless",
"source": "def dict = ['result': new HashMap()]; for (entry in ctx['message2'].entrySet()) { dict['result'][entry.getKey()] = entry.getValue(); } ctx['osquery'] = dict; " "source": "def dict = ['result': new HashMap()]; for (entry in ctx['message2'].entrySet()) { dict['result'][entry.getKey()] = entry.getValue(); } ctx['osquery'] = dict; "
} }
}, },
{ "rename": { "field": "osquery.result.hostIdentifier", "target_field": "osquery.result.host_identifier", "ignore_missing": true } }, { "set": { "field": "event.module", "value": "osquery", "override": false } },
{ "rename": { "field": "osquery.result.calendarTime", "target_field": "osquery.result.calendar_time", "ignore_missing": true } }, { "set": { "field": "event.dataset", "value": "{{osquery.result.name}}", "override": false} },
{ "rename": { "field": "osquery.result.unixTime", "target_field": "osquery.result.unix_time", "ignore_missing": true } },
{ "json": { "field": "message", "target_field": "message3", "ignore_failure": true } },
{ "gsub": { "field": "message3.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } },
{ "json": { "field": "message3.columns.data", "target_field": "message3.columns.winlog", "ignore_failure": true } },
{ "rename": { "field": "message3.columns.username", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.uid", "target_field": "user.uid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.gid", "target_field": "user.gid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.shell", "target_field": "user.shell", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.cmdline", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.pid", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.parent", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.local_address", "target_field": "local.ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.local_port", "target_field": "local.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.remote_address", "target_field": "remote.ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational'", "field": "event.module", "value": "sysmon", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source != null", "field": "event.module", "value": "windows_eventlog", "override": false, "ignore_failure": true } },
{ "set": { "if": "ctx.message3.columns?.source != 'Microsoft-Windows-Sysmon/Operational'", "field": "event.dataset", "value": "{{message3.columns.source}}", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.dataset", "value": "process_creation", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 2", "field": "event.dataset", "value": "process_changed_file", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.dataset", "value": "network_connection", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 5", "field": "event.dataset", "value": "process_terminated", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 6", "field": "event.dataset", "value": "driver_loaded", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 7", "field": "event.dataset", "value": "image_loaded", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 8", "field": "event.dataset", "value": "create_remote_thread", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 9", "field": "event.dataset", "value": "raw_file_access_read", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 10", "field": "event.dataset", "value": "process_access", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 11", "field": "event.dataset", "value": "file_create", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 12", "field": "event.dataset", "value": "registry_create_delete", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 13", "field": "event.dataset", "value": "registry_value_set", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 14", "field": "event.dataset", "value": "registry_key_value_rename", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 15", "field": "event.dataset", "value": "file_create_stream_hash", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 16", "field": "event.dataset", "value": "config_change", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 22", "field": "event.dataset", "value": "dns_query", "override": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationPort", "target_field": "destination.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.Image", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ProcessID", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ProcessGuid", "target_field": "process.entity_id", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.CommandLine", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.CurrentDirectory", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.Description", "target_field": "process.pe.description", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.Product", "target_field": "process.pe.product", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.OriginalFileName", "target_field": "process.pe.original_file_name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.FileVersion", "target_field": "process.pe.file_version", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ParentCommandLine", "target_field": "process.parent.command_line", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ParentImage", "target_field": "process.parent.executable", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ParentProcessGuid", "target_field": "process.parent.entity_id", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.ParentProcessId", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.User", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.parentImage", "target_field": "parent_image_path", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.sourceHostname", "target_field": "source.hostname", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.sourceIp", "target_field": "source_ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.sourcePort", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.targetFilename", "target_field": "file.target", "ignore_missing": true } },
{ "remove": { "field": [ "message3"], "ignore_failure": false } },
{ "pipeline": { "name": "common" } }
]
}

View File

@@ -0,0 +1,54 @@
{
"description" : "sysmon",
"processors" : [
{"community_id": {"if": "ctx.winlog.event_data?.Protocol != null", "field":["winlog.event_data.SourceIp","winlog.event_data.SourcePort","winlog.event_data.DestinationIp","winlog.event_data.DestinationPort","winlog.event_data.Protocol"],"target_field":"network.community_id"}},
{ "set": { "field": "event.code", "value": "{{winlog.event_id}}", "override": true } },
{ "set": { "field": "event.module", "value": "sysmon", "override": true } },
{ "set": { "if": "ctx.winlog?.computer_name != null", "field": "observer.name", "value": "{{winlog.computer_name}}", "override": true } },
{ "set": { "if": "ctx.event?.code == '3'", "field": "event.category", "value": "host,process,network", "override": true } },
{ "set": { "if": "ctx.event?.code == '1'", "field": "event.category", "value": "host,process", "override": true } },
{ "set": { "if": "ctx.event?.code == '5'", "field": "event.category", "value": "host,process", "override": true } },
{ "set": { "if": "ctx.event?.code == '6'", "field": "event.category", "value": "host,driver", "override": true } },
{ "set": { "if": "ctx.event?.code == '22'", "field": "event.category", "value": "network", "override": true } },
{ "set": { "if": "ctx.event?.code == '1'", "field": "event.dataset", "value": "process_creation", "override": true } },
{ "set": { "if": "ctx.event?.code == '2'", "field": "event.dataset", "value": "process_changed_file", "override": true } },
{ "set": { "if": "ctx.event?.code == '3'", "field": "event.dataset", "value": "network_connection", "override": true } },
{ "set": { "if": "ctx.event?.code == '5'", "field": "event.dataset", "value": "process_terminated", "override": true } },
{ "set": { "if": "ctx.event?.code == '6'", "field": "event.dataset", "value": "driver_loaded", "override": true } },
{ "set": { "if": "ctx.event?.code == '7'", "field": "event.dataset", "value": "image_loaded", "override": true } },
{ "set": { "if": "ctx.event?.code == '8'", "field": "event.dataset", "value": "create_remote_thread", "override": true } },
{ "set": { "if": "ctx.event?.code == '9'", "field": "event.dataset", "value": "raw_file_access_read", "override": true } },
{ "set": { "if": "ctx.event?.code == '10'", "field": "event.dataset", "value": "process_access", "override": true } },
{ "set": { "if": "ctx.event?.code == '11'", "field": "event.dataset", "value": "file_create", "override": true } },
{ "set": { "if": "ctx.event?.code == '12'", "field": "event.dataset", "value": "registry_create_delete", "override": true } },
{ "set": { "if": "ctx.event?.code == '13'", "field": "event.dataset", "value": "registry_value_set", "override": true } },
{ "set": { "if": "ctx.event?.code == '14'", "field": "event.dataset", "value": "registry_key_value_rename", "override": true } },
{ "set": { "if": "ctx.event?.code == '15'", "field": "event.dataset", "value": "file_create_stream_hash", "override": true } },
{ "set": { "if": "ctx.event?.code == '16'", "field": "event.dataset", "value": "config_change", "override": true } },
{ "set": { "if": "ctx.event?.code == '22'", "field": "event.dataset", "value": "dns_query", "override": true } },
{ "rename": { "field": "winlog.event_data.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationIp", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationPort", "target_field": "destination.port", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Image", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ProcessID", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ProcessGuid", "target_field": "process.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.CommandLine", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.CurrentDirectory", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Description", "target_field": "process.pe.description", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Product", "target_field": "process.pe.product", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Company", "target_field": "process.pe.company", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.OriginalFileName", "target_field": "process.pe.original_file_name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.FileVersion", "target_field": "process.pe.file_version", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentCommandLine", "target_field": "process.parent.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentImage", "target_field": "process.parent.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentProcessGuid", "target_field": "process.parent.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentProcessId", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Protocol", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.User", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourceHostname", "target_field": "source.hostname", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourceIp", "target_field": "source.ip", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourcePort", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.TargetFilename", "target_field": "file.target", "ignore_missing": true } }
]
}
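The `community_id` processor at the top of this pipeline hashes the normalized 5-tuple so the same flow gets the same ID across Zeek, Suricata, and Sysmon network events. A minimal Python sketch of Community ID v1 for IPv4 TCP/UDP flows, following the published spec (seed, endpoint ordering, base64-encoded SHA-1) — illustrative only, not the processor's actual implementation:

```python
import base64
import hashlib
import socket
import struct

PROTO = {"tcp": 6, "udp": 17}

def community_id_v1(src_ip, src_port, dst_ip, dst_port, proto, seed=0):
    """Compute a Community ID v1 string for an IPv4 TCP/UDP flow."""
    saddr = socket.inet_aton(src_ip)
    daddr = socket.inet_aton(dst_ip)
    # Normalize direction so both sides of a flow hash identically.
    if (saddr, src_port) > (daddr, dst_port):
        saddr, daddr = daddr, saddr
        src_port, dst_port = dst_port, src_port
    # seed (2 bytes) | src addr | dst addr | proto | pad | src port | dst port
    data = struct.pack("!H", seed) + saddr + daddr
    data += struct.pack("!BBHH", PROTO[proto], 0, src_port, dst_port)
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()
```

Because the endpoints are sorted before hashing, the request and the reply of the same flow produce an identical ID, which is what makes the field pivotable across data sources.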

View File

@@ -0,0 +1,10 @@
{
"description" : "win.eventlogs",
"processors" : [
{ "set": { "if": "ctx.winlog?.channel != null", "field": "event.module", "value": "windows_eventlog", "override": false, "ignore_failure": true } },
{ "set": { "if": "ctx.winlog?.channel != null", "field": "event.dataset", "value": "{{winlog.channel}}", "override": true } },
{ "set": { "if": "ctx.winlog?.computer_name != null", "field": "observer.name", "value": "{{winlog.computer_name}}", "override": true } },
{ "rename": { "field": "winlog.event_data.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.User", "target_field": "user.name", "ignore_missing": true } }
]
}
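Each `rename` processor in these pipelines moves one `winlog.event_data` field to its ECS name, and `ignore_missing: true` turns the step into a no-op when the source field is absent. A rough Python model of that behavior (illustrative only, not Elasticsearch's implementation):

```python
def rename(doc, field, target_field, ignore_missing=False):
    """Move a dotted-path field to a new dotted path, like the ingest rename processor."""
    src = doc
    *parents, leaf = field.split(".")
    for part in parents:
        src = src.get(part, {})
    if leaf not in src:
        if ignore_missing:
            return doc  # silently skip, as ignore_missing does
        raise KeyError(f"field [{field}] not present")
    value = src.pop(leaf)
    dst = doc
    *parents, leaf = target_field.split(".")
    for part in parents:
        dst = dst.setdefault(part, {})
    dst[leaf] = value
    return doc
```

This is why the pipelines can list every possible Sysmon field unconditionally: renames for fields a given event type doesn't carry simply do nothing.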

View File

@@ -0,0 +1,32 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set VERSION = salt['pillar.get']('global:soversion', '') %}
{%- set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{%- set MANAGER = salt['grains.get']('master') %}
. /usr/sbin/so-common
# Check to see if we have extracted the ca cert.
if [ ! -f /opt/so/saltstack/local/salt/common/cacerts ]; then
docker run -v /etc/pki/ca.crt:/etc/pki/ca.crt --name so-elasticsearchca --user root --entrypoint jdk/bin/keytool {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }} -keystore /etc/pki/ca-trust/extracted/java/cacerts -alias SOSCA -import -file /etc/pki/ca.crt -storepass changeit -noprompt
docker cp so-elasticsearchca:/etc/pki/ca-trust/extracted/java/cacerts /opt/so/saltstack/local/salt/common/cacerts
docker cp so-elasticsearchca:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /opt/so/saltstack/local/salt/common/tls-ca-bundle.pem
docker rm so-elasticsearchca
echo "" >> /opt/so/saltstack/local/salt/common/tls-ca-bundle.pem
echo "sosca" >> /opt/so/saltstack/local/salt/common/tls-ca-bundle.pem
cat /etc/pki/ca.crt >> /opt/so/saltstack/local/salt/common/tls-ca-bundle.pem
else
exit 0
fi

View File

@@ -0,0 +1,12 @@
keystore.path: /usr/share/elasticsearch/config/sokeys
keystore.password: changeit
keystore.algorithm: SunX509
truststore.path: /etc/pki/java/cacerts
truststore.password: changeit
truststore.algorithm: PKIX
protocols:
- TLSv1.2
ciphers:
- TLS_RSA_WITH_AES_128_CBC_SHA256
transport.encrypted: true
http.encrypted: false
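The settings above appear to pin node-to-node transport TLS to a single protocol version (TLSv1.2) and one cipher, while leaving HTTP plaintext. The same version-pinning idea, sketched with Python's stdlib `ssl` module (a client-side analogue, not part of this config):

```python
import ssl

def make_pinned_context():
    """Build a client TLS context restricted to TLSv1.2 only."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Pin both ends of the allowed range to TLSv1.2, mirroring the
    # single entry under `protocols:` above.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    # The config's TLS_RSA_WITH_AES_128_CBC_SHA256 is OpenSSL's
    # "AES128-SHA256"; cipher selection is left at defaults here.
    return ctx
```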

View File

@@ -12,23 +12,27 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
+{%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
-{% if FEATURES %}
+{%- if FEATURES is sameas true %}
-{% set FEATURES = "-features" %}
+{% set FEATUREZ = "-features" %}
{% else %}
-{% set FEATURES = '' %}
+{% set FEATUREZ = '' %}
{% endif %}
-{% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone'] %}
+{% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone', 'so-import'] %}
{% set esclustername = salt['pillar.get']('manager:esclustername', '') %}
{% set esheap = salt['pillar.get']('manager:esheap', '') %}
+{% set ismanager = True %}
{% elif grains['role'] in ['so-node','so-heavynode'] %}
{% set esclustername = salt['pillar.get']('elasticsearch:esclustername', '') %}
{% set esheap = salt['pillar.get']('elasticsearch:esheap', '') %}
+{% set ismanager = False %}
{% endif %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
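The switch from `{% if FEATURES %}` to `{%- if FEATURES is sameas true %}` in the hunk above tightens a truthiness test into an identity test, which matters when a pillar value arrives as a non-empty string rather than a boolean. The distinction, sketched in Python (`is True` playing the role of Jinja's `sameas true`; the function name is illustrative, not from the source):

```python
def features_suffix(features):
    """Mirror `{% if FEATURES is sameas true %}`: identity, not truthiness."""
    # Only the boolean True selects the suffix; a truthy string such as
    # "False" (a common YAML/pillar mishap) does not.
    return "-features" if features is True else ""
```

Under plain truthiness, the string `"False"` would have wrongly selected the `-features` image tag.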
@@ -37,6 +41,46 @@ vm.max_map_count:
sysctl.present:
- value: 262144
{% if ismanager %}
# We have to add the Manager CA to the CA list
cascriptsync:
file.managed:
- name: /usr/sbin/so-catrust
- source: salt://elasticsearch/files/scripts/so-catrust
- user: 939
- group: 939
- mode: 750
- template: jinja
# Run the CA magic
cascriptfun:
cmd.run:
- name: /usr/sbin/so-catrust
{% endif %}
# Move our new CA over so Elastic and Logstash can use SSL with the internal CA
catrustdir:
file.directory:
- name: /opt/so/conf/ca
- user: 939
- group: 939
- makedirs: True
cacertz:
file.managed:
- name: /opt/so/conf/ca/cacerts
- source: salt://common/cacerts
- user: 939
- group: 939
capemz:
file.managed:
- name: /opt/so/conf/ca/tls-ca-bundle.pem
- source: salt://common/tls-ca-bundle.pem
- user: 939
- group: 939
# Add ES Group
elasticsearchgroup:
group.present:
@@ -95,6 +139,13 @@ esyml:
- group: 939
- template: jinja
sotls:
file.managed:
- name: /opt/so/conf/elasticsearch/sotls.yml
- source: salt://elasticsearch/files/sotls.yml
- user: 930
- group: 939
#sync templates to /opt/so/conf/elasticsearch/templates
{% for TEMPLATE in TEMPLATES %}
es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
@@ -126,18 +177,23 @@ eslogdir:
so-elasticsearch:
docker_container.running:
-- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}{{ FEATURES }}
+- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}{{ FEATUREZ }}
- hostname: elasticsearch
- name: so-elasticsearch
- user: elasticsearch
- extra_hosts:
- {{ grains.host }}:{{ NODEIP }}
{%- if ismanager %}
{%- if salt['pillar.get']('nodestab', {}) %}
{%- for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
- {{ SN.split('_')|first }}:{{ SNDATA.ip }}
{%- endfor %}
{%- endif %}
{%- endif %}
- environment:
- discovery.type=single-node
-#- bootstrap.memory_lock=true
-#- cluster.name={{ esclustername }}
- ES_JAVA_OPTS=-Xms{{ esheap }} -Xmx{{ esheap }}
-#- http.host=0.0.0.0
-#- transport.host=127.0.0.1
- ulimits:
- memlock=-1:-1
- nofile=65536:65536
- nproc=4096
@@ -149,6 +205,16 @@ so-elasticsearch:
- /opt/so/conf/elasticsearch/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:ro
- /nsm/elasticsearch:/usr/share/elasticsearch/data:rw
- /opt/so/log/elasticsearch:/var/log/elasticsearch:rw
- /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
- /etc/pki/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
- /opt/so/conf/elasticsearch/sotls.yml:/usr/share/elasticsearch/config/sotls.yml:ro
- watch:
- file: cacertz
- file: esyml
- file: esingestconf
- file: so-elasticsearch-pipelines-file
so-elasticsearch-pipelines-file:
file.managed:

View File

@@ -177,6 +177,10 @@
"type":"object",
"dynamic": true
},
"import":{
"type":"object",
"dynamic": true
},
"ingest":{
"type":"object",
"dynamic": true
@@ -383,7 +387,15 @@
},
"winlog":{
"type":"object",
-"dynamic": true
+"dynamic": true,
"properties":{
"event_id":{
"type":"long"
},
"event_data":{
"type":"object"
}
}
},
"x509":{
"type":"object",

View File

@@ -1,16 +1,16 @@
{%- if grains.role == 'so-heavynode' %}
-{%- set MANAGER = salt['pillar.get']('sensor:mainip', '') %}
+{%- set MANAGER = salt['grains.get']('host', '') %}
{%- else %}
{%- set MANAGER = salt['grains.get']('master') %}
{%- endif %}
{%- set HOSTNAME = salt['grains.get']('host', '') %}
-{%- set ZEEKVER = salt['pillar.get']('static:zeekversion', 'COMMUNITY') %}
+{%- set ZEEKVER = salt['pillar.get']('global:zeekversion', 'COMMUNITY') %}
-{%- set WAZUHENABLED = salt['pillar.get']('static:wazuh', '0') %}
+{%- set WAZUHENABLED = salt['pillar.get']('global:wazuh', '0') %}
{%- set STRELKAENABLED = salt['pillar.get']('strelka:enabled', '0') %}
-{%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
+{%- set FLEETMANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
-{%- set FLEETNODE = salt['pillar.get']('static:fleet_node', False) -%}
+{%- set FLEETNODE = salt['pillar.get']('global:fleet_node', False) -%}
name: {{ HOSTNAME }}
@@ -74,7 +74,7 @@ filebeat.modules:
# List of prospectors to fetch data.
filebeat.inputs:
#------------------------------ Log prospector --------------------------------
-{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" or grains['role'] == "so-standalone" %}
+{%- if grains['role'] in ['so-sensor', "so-eval", "so-helix", "so-heavynode", "so-standalone", "so-import"] %}
- type: udp
enabled: true
host: "0.0.0.0:514"
@@ -253,7 +253,7 @@ output.{{ type }}:
{%- endfor %}
{%- else %}
#----------------------------- Elasticsearch/Logstash output ---------------------------------
-{%- if grains['role'] == "so-eval" %}
+{%- if grains['role'] in ["so-eval", "so-import"] %}
output.elasticsearch:
enabled: true
hosts: ["{{ MANAGER }}:9200"]

View File

@@ -11,12 +11,12 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
-{% set MANAGERIP = salt['pillar.get']('static:managerip', '') %}
+{% set MANAGERIP = salt['pillar.get']('global:managerip', '') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{% if FEATURES %}
+{%- if FEATURES is sameas true %}
{% set FEATURES = "-features" %}
{% else %}
{% set FEATURES = '' %}
@@ -60,8 +60,8 @@ so-filebeat:
- /nsm:/nsm:ro
- /opt/so/log/filebeat:/usr/share/filebeat/logs:rw
- /opt/so/conf/filebeat/etc/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
-- /opt/so/wazuh/logs/alerts:/wazuh/alerts:ro
+- /nsm/wazuh/logs/alerts:/wazuh/alerts:ro
-- /opt/so/wazuh/logs/archives:/wazuh/archives:ro
+- /nsm/wazuh/logs/archives:/wazuh/archives:ro
- /opt/so/conf/filebeat/etc/pki/filebeat.crt:/usr/share/filebeat/filebeat.crt:ro
- /opt/so/conf/filebeat/etc/pki/filebeat.key:/usr/share/filebeat/filebeat.key:ro
- /etc/ssl/certs/intca.crt:/usr/share/filebeat/intraca.crt:ro

View File

@@ -15,6 +15,7 @@ role:
- {{ portgroups.mysql }}
- {{ portgroups.kibana }}
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.influxdb }}
- {{ portgroups.fleet_api }}
- {{ portgroups.cortex }}
@@ -38,6 +39,7 @@ role:
search_node:
portgroups:
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.elasticsearch_node }}
self:
portgroups:
@@ -99,6 +101,7 @@ role:
- {{ portgroups.mysql }}
- {{ portgroups.kibana }}
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.influxdb }}
- {{ portgroups.fleet_api }}
- {{ portgroups.cortex }}
@@ -122,6 +125,7 @@ role:
search_node:
portgroups:
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.elasticsearch_node }}
self:
portgroups:
@@ -180,6 +184,7 @@ role:
- {{ portgroups.mysql }}
- {{ portgroups.kibana }}
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.influxdb }}
- {{ portgroups.fleet_api }}
- {{ portgroups.cortex }}
@@ -203,6 +208,7 @@ role:
search_node:
portgroups:
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.elasticsearch_node }}
self:
portgroups:
@@ -261,6 +267,7 @@ role:
- {{ portgroups.mysql }}
- {{ portgroups.kibana }}
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.influxdb }}
- {{ portgroups.fleet_api }}
- {{ portgroups.cortex }}
@@ -284,6 +291,7 @@ role:
search_node:
portgroups:
- {{ portgroups.redis }}
- {{ portgroups.minio }}
- {{ portgroups.elasticsearch_node }}
self:
portgroups:
@@ -434,16 +442,24 @@ role:
chain:
DOCKER-USER:
hostgroups:
-self:
+manager:
portgroups:
-- {{ portgroups.redis }}
+- {{ portgroups.elasticsearch_node }}
-- {{ portgroups.beats_5044 }}
+dockernet:
-- {{ portgroups.beats_5644 }}
+portgroups:
-- {{ portgroups.elasticsearch_node }}
-- {{ portgroups.elasticsearch_rest }}
+elasticsearch_rest:
+portgroups:
+- {{ portgroups.elasticsearch_rest }}
INPUT:
hostgroups:
anywhere:
portgroups:
- {{ portgroups.ssh }}
dockernet:
portgroups:
- {{ portgroups.all }}
localhost:
portgroups:
- {{ portgroups.all }}
@@ -480,3 +496,55 @@ role:
localhost:
portgroups:
- {{ portgroups.all }}
import:
chain:
DOCKER-USER:
hostgroups:
manager:
portgroups:
- {{ portgroups.kibana }}
- {{ portgroups.redis }}
- {{ portgroups.influxdb }}
- {{ portgroups.elasticsearch_rest }}
- {{ portgroups.elasticsearch_node }}
minion:
portgroups:
- {{ portgroups.docker_registry }}
sensor:
portgroups:
- {{ portgroups.beats_5044 }}
- {{ portgroups.beats_5644 }}
- {{ portgroups.sensoroni }}
search_node:
portgroups:
- {{ portgroups.redis }}
- {{ portgroups.elasticsearch_node }}
self:
portgroups:
- {{ portgroups.syslog}}
beats_endpoint:
portgroups:
- {{ portgroups.beats_5044 }}
beats_endpoint_ssl:
portgroups:
- {{ portgroups.beats_5644 }}
elasticsearch_rest:
portgroups:
- {{ portgroups.elasticsearch_rest }}
analyst:
portgroups:
- {{ portgroups.nginx }}
INPUT:
hostgroups:
anywhere:
portgroups:
- {{ portgroups.ssh }}
dockernet:
portgroups:
- {{ portgroups.all }}
localhost:
portgroups:
- {{ portgroups.all }}
minion:
portgroups:
- {{ portgroups.salt_manager }}

View File

@@ -45,6 +45,9 @@ firewall:
kibana:
tcp:
- 5601
minio:
tcp:
- 9595
mysql:
tcp:
- 3306
@@ -61,6 +64,7 @@ firewall:
redis:
tcp:
- 6379
- 9696
salt_manager:
tcp:
- 4505

View File

@@ -1,17 +1,17 @@
{% set MANAGER = salt['grains.get']('master') %}
{% set ENROLLSECRET = salt['pillar.get']('secrets:fleet_enroll-secret') %}
-{% set CURRENTPACKAGEVERSION = salt['pillar.get']('static:fleet_packages-version') %}
+{% set CURRENTPACKAGEVERSION = salt['pillar.get']('global:fleet_packages-version') %}
-{% set VERSION = salt['pillar.get']('static:soversion') %}
+{% set VERSION = salt['pillar.get']('global:soversion') %}
-{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('static:fleet_custom_hostname', None) %}
+{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('global:fleet_custom_hostname', None) %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
-{%- set FLEETNODE = salt['pillar.get']('static:fleet_node') -%}
+{%- set FLEETNODE = salt['pillar.get']('global:fleet_node') -%}
{% if CUSTOM_FLEET_HOSTNAME != None and CUSTOM_FLEET_HOSTNAME != '' %}
{% set HOSTNAME = CUSTOM_FLEET_HOSTNAME %}
{% elif FLEETNODE %}
{% set HOSTNAME = grains.host %}
{% else %}
-{% set HOSTNAME = salt['pillar.get']('manager:url_base') %}
+{% set HOSTNAME = salt['pillar.get']('global:url_base') %}
{% endif %}
so/fleet:

View File

@@ -1,4 +1,4 @@
-{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('static:fleet_custom_hostname', None) %}
+{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('global:fleet_custom_hostname', None) %}
so/fleet:
event.send:

View File

@@ -22,6 +22,8 @@ spec:
distributed_tls_max_attempts: 3
distributed_tls_read_endpoint: /api/v1/osquery/distributed/read
distributed_tls_write_endpoint: /api/v1/osquery/distributed/write
enable_windows_events_publisher: true
enable_windows_events_subscriber: true
logger_plugin: tls
logger_tls_endpoint: /api/v1/osquery/log
logger_tls_period: 10

View File

@@ -1,8 +1,8 @@
{%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) -%}
{%- set FLEETPASS = salt['pillar.get']('secrets:fleet', None) -%}
{%- set FLEETJWT = salt['pillar.get']('secrets:fleet_jwt', None) -%}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% set FLEETARCH = salt['grains.get']('role') %}
@@ -10,7 +10,7 @@
{% set MAININT = salt['pillar.get']('host:mainint') %}
{% set MAINIP = salt['grains.get']('ip_interfaces').get(MAININT)[0] %}
{% else %}
-{% set MAINIP = salt['pillar.get']('static:managerip') %}
+{% set MAINIP = salt['pillar.get']('global:managerip') %}
{% endif %}
include:

View File

@@ -1,8 +1,8 @@
-{%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
+{%- set FLEETMANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
-{%- set FLEETNODE = salt['pillar.get']('static:fleet_node', False) -%}
+{%- set FLEETNODE = salt['pillar.get']('global:fleet_node', False) -%}
-{%- set FLEETHOSTNAME = salt['pillar.get']('static:fleet_hostname', False) -%}
+{%- set FLEETHOSTNAME = salt['pillar.get']('global:fleet_hostname', False) -%}
-{%- set FLEETIP = salt['pillar.get']('static:fleet_ip', False) -%}
+{%- set FLEETIP = salt['pillar.get']('global:fleet_ip', False) -%}
-{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('static:fleet_custom_hostname', None) %}
+{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('global:fleet_custom_hostname', None) %}
{% if CUSTOM_FLEET_HOSTNAME != (None and '') %}

View File

@@ -13,7 +13,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
# Create the user
fservergroup:

View File

@@ -1,4 +1,4 @@
-{%- set MANAGER = salt['pillar.get']('static:managerip', '') %}
+{%- set MANAGER = salt['pillar.get']('global:managerip', '') %}
apiVersion: 1
deleteDatasources:

View File

@@ -1,7 +1,7 @@
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
{% set MANAGER = salt['grains.get']('master') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
@@ -91,7 +91,6 @@ dashboard-manager:
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
-MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: so_overview
ROOTFS: {{ SNDATA.rootfs }}
@@ -114,7 +113,6 @@ dashboard-managersearch:
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
-MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: so_overview
ROOTFS: {{ SNDATA.rootfs }}
@@ -137,7 +135,7 @@ dashboard-standalone:
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
-MONINT: {{ SNDATA.manint }}
+MONINT: {{ SNDATA.monint }}
CPUS: {{ SNDATA.totalcpus }}
UID: so_overview
ROOTFS: {{ SNDATA.rootfs }}
@@ -159,8 +157,8 @@ dashboard-{{ SN }}:
- source: salt://grafana/dashboards/sensor_nodes/sensor.json
- defaults:
SERVERNAME: {{ SN }}
-MONINT: {{ SNDATA.monint }}
MANINT: {{ SNDATA.manint }}
+MONINT: {{ SNDATA.monint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
@@ -183,7 +181,6 @@ dashboardsearch-{{ SN }}:
- defaults: - defaults:
SERVERNAME: {{ SN }} SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }} MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }} CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }} UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }} ROOTFS: {{ SNDATA.rootfs }}

View File

@@ -12,8 +12,8 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 # IDSTools Setup
 idstoolsdir:
@@ -1,7 +1,7 @@
 {% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
@@ -1,7 +1,7 @@
 #!/bin/bash
-# {%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
+# {%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
-# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('global:fleet_node', False) -%}
-# {%- set MANAGER = salt['pillar.get']('manager:url_base', '') %}
+# {%- set MANAGER = salt['pillar.get']('global:url_base', '') %}
 KIBANA_VERSION="7.6.1"
@@ -1,6 +1,7 @@
 ---
 # Default Kibana configuration from kibana-docker.
 {%- set ES = salt['pillar.get']('manager:mainip', '') -%}
+{%- set FEATURES = salt['pillar.get']('elastic:features', False) %}
 server.name: kibana
 server.host: "0"
 server.basePath: /kibana
@@ -1,8 +1,8 @@
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{% if FEATURES %}
+{%- if FEATURES is sameas true %}
 {% set FEATURES = "-features" %}
 {% else %}
 {% set FEATURES = '' %}
@@ -12,12 +12,13 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
+{% set MANAGERIP = salt['pillar.get']('global:managerip') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{% if FEATURES %}
+{%- if FEATURES is sameas true %}
 {% set FEATURES = "-features" %}
 {% else %}
 {% set FEATURES = '' %}
@@ -127,7 +128,7 @@ importdir:
 # Create the logstash data directory
 nsmlsdir:
 file.directory:
- - name: /nsm/logstash
+ - name: /nsm/logstash/tmp
 - user: 931
 - group: 939
 - makedirs: True
@@ -146,6 +147,8 @@ so-logstash:
 - hostname: so-logstash
 - name: so-logstash
 - user: logstash
+ - extra_hosts:
+ - {{ MANAGER }}:{{ MANAGERIP }}
 - environment:
 - LS_JAVA_OPTS=-Xms{{ lsheap }} -Xmx{{ lsheap }}
 - port_bindings:
@@ -165,12 +168,19 @@ so-logstash:
 - /sys/fs/cgroup:/sys/fs/cgroup:ro
 - /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
 - /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
+{% if grains['role'] == 'so-heavynode' %}
+ - /etc/ssl/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
+{% else %}
 - /etc/pki/ca.crt:/usr/share/filebeat/ca.crt:ro
+{% endif %}
+ - /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
+ - /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
+ - /etc/pki/ca.cer:/ca/ca.crt:ro
 {%- if grains['role'] == 'so-eval' %}
 - /nsm/zeek:/nsm/zeek:ro
 - /nsm/suricata:/suricata:ro
- - /opt/so/wazuh/logs/alerts:/wazuh/alerts:ro
+ - /nsm/wazuh/logs/alerts:/wazuh/alerts:ro
- - /opt/so/wazuh/logs/archives:/wazuh/archives:ro
+ - /nsm/wazuh/logs/archives:/wazuh/archives:ro
 - /opt/so/log/fleet/:/osquery/logs:ro
 - /opt/so/log/strelka:/strelka:ro
 {%- endif %}
@@ -0,0 +1,23 @@
{%- if grains.role == 'so-heavynode' %}
{%- set MANAGER = salt['grains.get']('host') %}
{%- else %}
{%- set MANAGER = salt['grains.get']('master') %}
{% endif -%}
{%- set THREADS = salt['pillar.get']('logstash_settings:ls_input_threads', '') %}
{%- set access_key = salt['pillar.get']('minio:access_key', '') %}
{%- set access_secret = salt['pillar.get']('minio:access_secret', '') %}
{%- set INTERVAL = salt['pillar.get']('s3_settings:interval', 5) %}
input {
s3 {
access_key_id => "{{ access_key }}"
secret_access_key => "{{ access_secret }}"
endpoint => "https://{{ MANAGER }}:9595"
bucket => "logstash"
delete => true
interval => {{ INTERVAL }}
codec => json
additional_settings => {
"force_path_style" => true
}
}
}
@@ -1,13 +1,11 @@
-{%- if grains.role == 'so-heavynode' %}
-{%- set MANAGER = salt['pillar.get']('elasticsearch:mainip', '') %}
-{%- else %}
-{%- set MANAGER = salt['pillar.get']('static:managerip', '') %}
-{% endif -%}
+{%- set MANAGER = salt['grains.get']('master') %}
 {%- set THREADS = salt['pillar.get']('logstash_settings:ls_input_threads', '') %}
 input {
 redis {
 host => '{{ MANAGER }}'
+ port => 9696
+ ssl => true
 data_type => 'list'
 key => 'logstash:unparsed'
 type => 'redis-input'
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "zeek" and "import" not in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if "import" in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [event_type] == "sflow" {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [event_type] == "ids" and "import" not in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "syslog" {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "osquery" {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if "firewall" in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "suricata" and "import" not in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if "beat-ext" in [tags] and "import" not in [tags] {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "ossec" {
 elasticsearch {
@@ -3,6 +3,7 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
+{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
 if [module] =~ "strelka" {
 elasticsearch {
@@ -0,0 +1,25 @@
{%- set MANAGER = salt['grains.get']('master') %}
{%- set access_key = salt['pillar.get']('minio:access_key', '') %}
{%- set access_secret = salt['pillar.get']('minio:access_secret', '') %}
{%- set SIZE_FILE = salt['pillar.get']('s3_settings:size_file', 2048) %}
{%- set TIME_FILE = salt['pillar.get']('s3_settings:time_file', 1) %}
{%- set UPLOAD_QUEUE_SIZE = salt['pillar.get']('s3_settings:upload_queue_size', 4) %}
{%- set ENCODING = salt['pillar.get']('s3_settings:encoding', 'gzip') %}
output {
s3 {
access_key_id => "{{ access_key }}"
secret_access_key => "{{ access_secret}}"
endpoint => "https://{{ MANAGER }}:9595"
bucket => "logstash"
size_file => {{ SIZE_FILE }}
time_file => {{ TIME_FILE }}
codec => json
encoding => {{ ENCODING }}
upload_queue_size => {{ UPLOAD_QUEUE_SIZE }}
temporary_directory => "/usr/share/logstash/data/tmp"
validate_credentials_on_root_bucket => false
additional_settings => {
"force_path_style" => true
}
}
}
@@ -1,8 +1,9 @@
-{% set MANAGER = salt['pillar.get']('static:managerip', '') %}
+{%- set MANAGER = salt['grains.get']('master') %}
 {% set BATCH = salt['pillar.get']('logstash_settings:ls_pipeline_batch_size', 125) %}
 output {
 redis {
 host => '{{ MANAGER }}'
+ port => 6379
 data_type => 'list'
 key => 'logstash:unparsed'
 congestion_interval => 1
@@ -12,10 +12,10 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% set managerproxy = salt['pillar.get']('static:managerupdate', '0') %}
+{% set managerproxy = salt['pillar.get']('global:managerupdate', '0') %}
 socore_own_saltstack:
 file.directory:
@@ -13,47 +13,47 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set access_key = salt['pillar.get']('manager:access_key', '') %}
+{% set access_key = salt['pillar.get']('minio:access_key', '') %}
-{% set access_secret = salt['pillar.get']('manager:access_secret', '') %}
+{% set access_secret = salt['pillar.get']('minio:access_secret', '') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
+{% set MANAGER = salt['grains.get']('master') %}
 # Minio Setup
 minioconfdir:
 file.directory:
- - name: /opt/so/conf/minio/etc
+ - name: /opt/so/conf/minio/etc/certs
 - user: 939
 - group: 939
 - makedirs: True
 miniodatadir:
 file.directory:
- - name: /nsm/minio/data
+ - name: /nsm/minio/data/
 - user: 939
 - group: 939
 - makedirs: True
-#redisconfsync:
+logstashbucket:
-# file.recurse:
+ file.directory:
-# - name: /opt/so/conf/redis/etc
+ - name: /nsm/minio/data/logstash
-# - source: salt://redis/etc
+ - user: 939
-# - user: 939
+ - group: 939
-# - group: 939
+ - makedirs: True
-# - template: jinja
-minio/minio:
+so-minio:
-docker_image.present
-minio:
 docker_container.running:
- - image: minio/minio
+ - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-minio:{{ VERSION }}
 - hostname: so-minio
 - user: socore
 - port_bindings:
- - 0.0.0.0:9000:9000
+ - 0.0.0.0:9595:9595
 - environment:
 - MINIO_ACCESS_KEY: {{ access_key }}
 - MINIO_SECRET_KEY: {{ access_secret }}
 - binds:
 - /nsm/minio/data:/data:rw
- - /opt/so/conf/minio/etc:/root/.minio:rw
+ - /opt/so/conf/minio/etc:/.minio:rw
+ - /etc/pki/minio.key:/.minio/certs/private.key:ro
+ - /etc/pki/minio.crt:/.minio/certs/public.crt:ro
- - entrypoint: "/usr/bin/docker-entrypoint.sh server /data"
- - network_mode: so-elastic-net
+ - entrypoint: "/usr/bin/docker-entrypoint.sh server --certs-dir /.minio/certs --address :9595 /data"
@@ -1,6 +1,6 @@
 {% set needs_restarting_check = salt['mine.get']('*', 'needs_restarting.check', tgt_type='glob') -%}
 {% set role = grains.id.split('_') | last -%}
-{% set url = salt['pillar.get']('manager:url_base') -%}
+{% set url = salt['pillar.get']('global:url_base') -%}
 {% if role in ['eval', 'managersearch', 'manager', 'standalone'] %}
 Access the Security Onion web interface at https://{{ url }}
@@ -1,7 +1,7 @@
 {%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) %}
-{%- set MANAGERIP = salt['pillar.get']('static:managerip', '') %}
+{%- set MANAGERIP = salt['pillar.get']('global:managerip', '') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set MAINIP = salt['pillar.get']('elasticsearch:mainip') %}
 {% set FLEETARCH = salt['grains.get']('role') %}
@@ -10,7 +10,7 @@
 {% set MAININT = salt['pillar.get']('host:mainint') %}
 {% set MAINIP = salt['grains.get']('ip_interfaces').get(MAININT)[0] %}
 {% else %}
-{% set MAINIP = salt['pillar.get']('static:managerip') %}
+{% set MAINIP = salt['pillar.get']('global:managerip') %}
 {% endif %}
 # MySQL Setup
@@ -89,7 +89,7 @@ so-mysql:
 - /opt/so/conf/mysql/etc
 cmd.run:
 - name: until nc -z {{ MAINIP }} 3306; do sleep 1; done
- - timeout: 120
+ - timeout: 900
 - onchanges:
 - docker_container: so-mysql
 {% endif %}
@@ -1,7 +1,7 @@
 {%- set managerip = salt['pillar.get']('manager:mainip', '') %}
-{%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager') %}
+{%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
+{%- set FLEET_NODE = salt['pillar.get']('global:fleet_node') %}
-{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
+{%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', None) %}
 # For more information on configuration, see:
 # * Official English Documentation: http://nginx.org/en/docs/
 # * Official Russian Documentation: http://nginx.org/ru/docs/
@@ -297,6 +297,9 @@ http {
 }
 location /sensoroniagents/ {
+ if ($http_authorization = "") {
+ return 403;
+ }
 proxy_pass http://{{ managerip }}:9822/;
 proxy_read_timeout 90;
 proxy_connect_timeout 90;
@@ -0,0 +1,326 @@
{%- set managerip = salt['pillar.get']('manager:mainip', '') %}
{%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager') %}
{%- set FLEET_NODE = salt['pillar.get']('global:fleet_node') %}
{%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', None) %}
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 1024M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
#server {
# listen 80 default_server;
# listen [::]:80 default_server;
# server_name _;
# root /opt/socore/html;
# index index.html;
# Load configuration files for the default server block.
#include /etc/nginx/default.d/*.conf;
# location / {
# }
# error_page 404 /404.html;
# location = /40x.html {
# }
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
#}
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
{% if FLEET_MANAGER %}
server {
listen 8090 ssl http2 default_server;
server_name _;
root /opt/socore/html;
index blank.html;
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location ~ ^/kolide.agent.Api/(RequestEnrollment|RequestConfig|RequestQueries|PublishLogs|PublishResults|CheckHealth)$ {
grpc_pass grpcs://{{ managerip }}:8080;
grpc_set_header Host $host;
grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering off;
}
}
{% endif %}
# Settings for a TLS enabled server.
server {
listen 443 ssl http2 default_server;
#listen [::]:443 ssl http2 default_server;
server_name _;
root /opt/socore/html;
index index.html;
ssl_certificate "/etc/pki/nginx/server.crt";
ssl_certificate_key "/etc/pki/nginx/server.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Load configuration files for the default server block.
#include /etc/nginx/default.d/*.conf;
location ~* (^/login/|^/js/.*|^/css/.*|^/images/.*) {
proxy_pass http://{{ managerip }}:9822;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
auth_request /auth/sessions/whoami;
proxy_pass http://{{ managerip }}:9822/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/auth/.*?(whoami|login|logout|settings) {
rewrite /auth/(.*) /$1 break;
proxy_pass http://{{ managerip }}:4433;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /cyberchef/ {
auth_request /auth/sessions/whoami;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /navigator/ {
auth_request /auth/sessions/whoami;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /packages/ {
try_files $uri =206;
auth_request /auth/sessions/whoami;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /grafana/ {
auth_request /auth/sessions/whoami;
rewrite /grafana/(.*) /$1 break;
proxy_pass http://{{ managerip }}:3000/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /kibana/ {
auth_request /auth/sessions/whoami;
rewrite /kibana/(.*) /$1 break;
proxy_pass http://{{ managerip }}:5601/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /nodered/ {
proxy_pass http://{{ managerip }}:1880/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /playbook/ {
proxy_pass http://{{ managerip }}:3200/playbook/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
{%- if FLEET_NODE %}
location /fleet/ {
return 301 https://{{ FLEET_IP }}/fleet;
}
{%- else %}
location /fleet/ {
proxy_pass https://{{ managerip }}:8080;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
{%- endif %}
location /thehive/ {
proxy_pass http://{{ managerip }}:9000/thehive/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_http_version 1.1; # this is essential for chunked responses to work
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /cortex/ {
proxy_pass http://{{ managerip }}:9001/cortex/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_http_version 1.1; # this is essential for chunked responses to work
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /soctopus/ {
proxy_pass http://{{ managerip }}:7000/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /kibana/app/soc/ {
rewrite ^/kibana/app/soc/(.*) /soc/$1 permanent;
}
location /kibana/app/fleet/ {
rewrite ^/kibana/app/fleet/(.*) /fleet/$1 permanent;
}
location /kibana/app/soctopus/ {
rewrite ^/kibana/app/soctopus/(.*) /soctopus/$1 permanent;
}
location /sensoroniagents/ {
proxy_pass http://{{ managerip }}:9822/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
error_page 401 = @error401;
location @error401 {
add_header Set-Cookie "AUTH_REDIRECT=$request_uri;Path=/;Max-Age=14400";
return 302 /auth/self-service/browser/flows/login;
}
#error_page 404 /404.html;
# location = /usr/share/nginx/html/40x.html {
#}
error_page 500 502 503 504 /50x.html;
location = /usr/share/nginx/html/50x.html {
}
}
}
@@ -1,7 +1,7 @@
 {%- set managerip = salt['pillar.get']('manager:mainip', '') %}
-{%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager') %}
+{%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
+{%- set FLEET_NODE = salt['pillar.get']('global:fleet_node') %}
-{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
+{%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', None) %}
 # For more information on configuration, see:
 # * Official English Documentation: http://nginx.org/en/docs/
 # * Official Russian Documentation: http://nginx.org/ru/docs/
@@ -297,6 +297,9 @@ http {
 }
 location /sensoroniagents/ {
+ if ($http_authorization = "") {
+ return 403;
+ }
 proxy_pass http://{{ managerip }}:9822/;
 proxy_read_timeout 90;
 proxy_connect_timeout 90;
@@ -1,7 +1,7 @@
 {%- set managerip = salt['pillar.get']('manager:mainip', '') %}
-{%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager') %}
+{%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
+{%- set FLEET_NODE = salt['pillar.get']('global:fleet_node') %}
-{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
+{%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', None) %}
 # For more information on configuration, see:
 # * Official English Documentation: http://nginx.org/en/docs/
 # * Official Russian Documentation: http://nginx.org/ru/docs/
@@ -296,6 +296,9 @@ http {
 }
 location /sensoroniagents/ {
+ if ($http_authorization = "") {
+ return 403;
+ }
 proxy_pass http://{{ managerip }}:9822/;
 proxy_read_timeout 90;
 proxy_connect_timeout 90;
@@ -1,7 +1,7 @@
 {%- set managerip = salt['pillar.get']('manager:mainip', '') %}
-{%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
-{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
+{%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager') %}
+{%- set FLEET_NODE = salt['pillar.get']('global:fleet_node') %}
+{%- set FLEET_IP = salt['pillar.get']('global:fleet_ip', None) %}
 # For more information on configuration, see:
 # * Official English Documentation: http://nginx.org/en/docs/
 # * Official Russian Documentation: http://nginx.org/ru/docs/
@@ -297,6 +297,9 @@ http {
     }
     location /sensoroniagents/ {
+      if ($http_authorization = "") {
+        return 403;
+      }
       proxy_pass http://{{ managerip }}:9822/;
       proxy_read_timeout 90;
       proxy_connect_timeout 90;

View File

@@ -1,4 +1,4 @@
-{%- set ip = salt['pillar.get']('static:managerip', '') %}
+{%- set URL_BASE = salt['pillar.get']('global:url_base', '') %}
 {
   "enterprise_attack_url": "https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json",
@@ -16,7 +16,7 @@
   "domain": "mitre-enterprise",
-  "custom_context_menu_items": [ {"label": "view related plays","url": " https://{{ip}}/playbook/projects/detection-playbooks/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=cf_15&op%5Bcf_15%5D=%3D&f%5B%5D=&c%5B%5D=status&c%5B%5D=cf_10&c%5B%5D=cf_13&c%5B%5D=cf_18&c%5B%5D=cf_19&c%5B%5D=cf_1&c%5B%5D=updated_on&v%5Bcf_15%5D%5B%5D=~Technique_ID~"}],
+  "custom_context_menu_items": [ {"label": "view related plays","url": " https://{{URL_BASE}}/playbook/projects/detection-playbooks/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=cf_15&op%5Bcf_15%5D=%3D&f%5B%5D=&c%5B%5D=status&c%5B%5D=cf_10&c%5B%5D=cf_13&c%5B%5D=cf_18&c%5B%5D=cf_19&c%5B%5D=cf_1&c%5B%5D=updated_on&v%5Bcf_15%5D%5B%5D=~Technique_ID~"}],
   "default_layers": {
     "enabled": true,

View File

@@ -1,8 +1,8 @@
-{% set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) %}
-{% set FLEETNODE = salt['pillar.get']('static:fleet_node', False) %}
+{% set FLEETMANAGER = salt['pillar.get']('global:fleet_manager', False) %}
+{% set FLEETNODE = salt['pillar.get']('global:fleet_node', False) %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 # Drop the correct nginx config based on role
 nginxconfdir:

View File

@@ -1,4 +1,4 @@
-{%- set ip = salt['pillar.get']('static:managerip', '') -%}
+{%- set ip = salt['pillar.get']('global:managerip', '') -%}
 #!/bin/bash
 default_salt_dir=/opt/so/saltstack/default

File diff suppressed because one or more lines are too long

View File

@@ -13,7 +13,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 # Create the nodered group
 noderedgroup:

View File

@@ -1,5 +1,5 @@
 needs_restarting:
   module.run:
     - mine.send:
-      - func: needs_restarting.check
+      - name: needs_restarting.check
     - order: last

View File

@@ -1,5 +1,5 @@
 {%- set MANAGER = salt['grains.get']('master') -%}
-{%- set SENSORONIKEY = salt['pillar.get']('static:sensoronikey', '') -%}
+{%- set SENSORONIKEY = salt['pillar.get']('global:sensoronikey', '') -%}
 {%- set CHECKININTERVALMS = salt['pillar.get']('pcap:sensor_checkin_interval_ms', 10000) -%}
 {
   "logFilename": "/opt/sensoroni/logs/sensoroni.log",

View File

@@ -12,12 +12,13 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set INTERFACE = salt['pillar.get']('sensor:interface', 'bond0') %}
 {% set BPF_STENO = salt['pillar.get']('steno:bpf', None) %}
 {% set BPF_COMPILED = "" %}
+{% from "pcap/map.jinja" import START with context %}
 # PCAP Section
@@ -131,6 +132,7 @@ sensoronilog:
 so-steno:
   docker_container.running:
     - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-steno:{{ VERSION }}
+    - start: {{ START }}
     - network_mode: host
     - privileged: True
     - port_bindings:

salt/pcap/map.jinja (new file)

@@ -0,0 +1,6 @@
+# don't start the docker container if it is an import node
+{% if grains.id.split('_')|last == 'import' %}
+{% set START = False %}
+{% else %}
+{% set START = True %}
+{% endif %}
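The new `salt/pcap/map.jinja` keys off the minion ID suffix: IDs ending in `_import` get `START = False`, so the so-steno container is not started on import nodes. The same test, sketched in Python (illustrative only, not part of the diff):

```python
def should_start_steno(minion_id: str) -> bool:
    """Mirror the map.jinja decision: minion IDs follow the
    "<hostname>_<role>" pattern, and steno is skipped when the
    role suffix is "import"."""
    return minion_id.split('_')[-1] != 'import'
```

Import nodes only replay previously captured traffic, so there is no live interface for steno to record from.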

File diff suppressed because one or more lines are too long

View File

@@ -1,6 +1,6 @@
 {% set MANAGERIP = salt['pillar.get']('manager:mainip', '') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set MAINIP = salt['grains.get']('ip_interfaces').get(salt['pillar.get']('sensor:mainint', salt['pillar.get']('manager:mainint', salt['pillar.get']('elasticsearch:mainint', salt['pillar.get']('host:mainint')))))[0] %}
 {%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) -%}

View File

@@ -10,7 +10,7 @@ def run():
     MINIONID = data['id']
     ACTION = data['data']['action']
     LOCAL_SALT_DIR = "/opt/so/saltstack/local"
-    STATICFILE = f"{LOCAL_SALT_DIR}/pillar/static.sls"
+    STATICFILE = f"{LOCAL_SALT_DIR}/pillar/global.sls"
     SECRETSFILE = f"{LOCAL_SALT_DIR}/pillar/secrets.sls"
     if MINIONID.split('_')[-1] in ['manager','eval','fleet','managersearch','standalone']:

File diff suppressed because it is too large

View File

@@ -12,8 +12,8 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
+{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 # Redis Setup
@@ -53,10 +53,14 @@ so-redis:
     - user: socore
     - port_bindings:
       - 0.0.0.0:6379:6379
+      - 0.0.0.0:9696:9696
     - binds:
       - /opt/so/log/redis:/var/log/redis:rw
       - /opt/so/conf/redis/etc/redis.conf:/usr/local/etc/redis/redis.conf:ro
       - /opt/so/conf/redis/working:/redis:rw
+      - /etc/pki/redis.crt:/certs/redis.crt:ro
+      - /etc/pki/redis.key:/certs/redis.key:ro
+      - /etc/pki/ca.crt:/certs/ca.crt:ro
     - entrypoint: "redis-server /usr/local/etc/redis/redis.conf"
     - watch:
       - file: /opt/so/conf/redis/etc

View File

@@ -40,7 +40,7 @@ dockerregistryconf:
 # Install the registry container
 so-dockerregistry:
   docker_container.running:
-    - image: registry:2
+    - image: registry:latest
     - hostname: so-registry
     - restart_policy: always
     - port_bindings:

View File

@@ -1,5 +1,3 @@
 {% if grains['os'] != 'CentOS' %}
 saltpymodules:
   pkg.installed:
@@ -8,8 +6,8 @@ saltpymodules:
     - python-m2crypto
 {% endif %}
-salt_minion_service:
-  service.running:
-    - name: salt-minion
-    - enable: True
+salt_bootstrap:
+  file.managed:
+    - name: /usr/sbin/bootstrap-salt.sh
+    - source: salt://salt/scripts/bootstrap-salt.sh
+    - mode: 755

salt/salt/map.jinja (new file)

@@ -0,0 +1,18 @@
+{% import_yaml 'salt/minion.defaults.yaml' as salt %}
+{% set SALTVERSION = salt.salt.minion.version %}
+{% if grains.os|lower == 'ubuntu' %}
+{% set COMMON = 'salt-common' %}
+{% elif grains.os|lower == 'centos' %}
+{% set COMMON = 'salt' %}
+{% endif %}
+{% if grains.saltversion|string != SALTVERSION|string %}
+{% if grains.os|lower == 'centos' %}
+{% set UPGRADECOMMAND = 'yum versionlock delete "salt-*" && sh /usr/sbin/bootstrap-salt.sh -F -x python3 stable ' ~ SALTVERSION %}
+{% elif grains.os|lower == 'ubuntu' %}
+{% set UPGRADECOMMAND = 'apt-mark unhold salt-common && apt-mark unhold salt-minion && sh /usr/sbin/bootstrap-salt.sh -F -x python3 stable ' ~ SALTVERSION %}
+{% endif %}
+{% else %}
+{% set UPGRADECOMMAND = 'echo Already running Salt Minon version ' ~ SALTVERSION %}
+{% endif %}

Some files were not shown because too many files have changed in this diff.