Merge remote-tracking branch 'remotes/origin/dev' into feature/users

This commit is contained in:
m0duspwnens
2021-11-19 09:58:01 -05:00
480 changed files with 31678 additions and 44463 deletions

View File

@@ -15,7 +15,7 @@
### Contributing code ### Contributing code
* **All commits must be signed** with a valid key that has been added to your GitHub account. The commits should have all the "**Verified**" tag when viewed on GitHub as shown below: * **All commits must be signed** with a valid key that has been added to your GitHub account. Each commit should have the "**Verified**" tag when viewed on GitHub as shown below:
<img src="./assets/images/verified-commit-1.png" width="450"> <img src="./assets/images/verified-commit-1.png" width="450">

1
HOTFIX
View File

@@ -1 +0,0 @@

View File

@@ -1,6 +1,6 @@
## Security Onion 2.3.60 ## Security Onion 2.3.80
Security Onion 2.3.60 is here! Security Onion 2.3.80 is here!
## Screenshots ## Screenshots

View File

@@ -1,17 +1,18 @@
### 2.3.60 ISO image built on 2021/04/27 ### 2.3.80 ISO image built on 2021/09/27
### Download and Verify ### Download and Verify
2.3.60 ISO image: 2.3.80 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.3.60.iso https://download.securityonion.net/file/securityonion/securityonion-2.3.80.iso
MD5: 0470325615C42C206B028EE37A1AD897 MD5: 24F38563860416F4A8ABE18746913E14
SHA1: 496E70BD529D3B8A02D0B32F68B8F7527C953612 SHA1: F923C005F54EA2A17AB225ADA0DA46042707AAD9
SHA256: 417E34DFCD63D84A16FF2041DC712F02D9E0515C8B78BDF0EE1037DD13C32030 SHA256: 8E95D10AF664D9A406C168EC421D943CB23F0D0C1813C6C2DBA9B4E131984018
Signature for ISO image: Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.60.iso.sig https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.80.iso.sig
Signing key: Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
@@ -25,22 +26,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
Download the signature file for the ISO: Download the signature file for the ISO:
``` ```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.60.iso.sig wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.80.iso.sig
``` ```
Download the ISO image: Download the ISO image:
``` ```
wget https://download.securityonion.net/file/securityonion/securityonion-2.3.60.iso wget https://download.securityonion.net/file/securityonion/securityonion-2.3.80.iso
``` ```
Verify the downloaded ISO image using the signature file: Verify the downloaded ISO image using the signature file:
``` ```
gpg --verify securityonion-2.3.60.iso.sig securityonion-2.3.60.iso gpg --verify securityonion-2.3.80.iso.sig securityonion-2.3.80.iso
``` ```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below: The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
``` ```
gpg: Signature made Thu 01 Jul 2021 10:59:24 AM EDT using RSA key ID FE507013 gpg: Signature made Mon 27 Sep 2021 08:55:01 AM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>" gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature! gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner. gpg: There is no indication that the signature belongs to the owner.

View File

@@ -1 +1 @@
2.3.60 2.3.90

View File

@@ -16,6 +16,10 @@ firewall:
ips: ips:
delete: delete:
insert: insert:
endgame:
ips:
delete:
insert:
fleet: fleet:
ips: ips:
delete: delete:

View File

@@ -1,7 +1,7 @@
elasticsearch: elasticsearch:
templates: templates:
- so/so-beats-template.json.jinja - so/so-beats-template.json.jinja
- so/so-common-template.json - so/so-common-template.json.jinja
- so/so-firewall-template.json.jinja - so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja - so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja - so/so-ids-template.json.jinja

View File

@@ -1,7 +1,8 @@
elasticsearch: elasticsearch:
templates: templates:
- so/so-beats-template.json.jinja - so/so-beats-template.json.jinja
- so/so-common-template.json - so/so-common-template.json.jinja
- so/so-endgame-template.json.jinja
- so/so-firewall-template.json.jinja - so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja - so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja - so/so-ids-template.json.jinja

View File

@@ -1,7 +1,8 @@
elasticsearch: elasticsearch:
templates: templates:
- so/so-beats-template.json.jinja - so/so-beats-template.json.jinja
- so/so-common-template.json - so/so-common-template.json.jinja
- so/so-endgame-template.json.jinja
- so/so-firewall-template.json.jinja - so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja - so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja - so/so-ids-template.json.jinja

View File

@@ -1,6 +1,7 @@
logstash: logstash:
docker_options: docker_options:
port_bindings: port_bindings:
- 0.0.0.0:3765:3765
- 0.0.0.0:5044:5044 - 0.0.0.0:5044:5044
- 0.0.0.0:5644:5644 - 0.0.0.0:5644:5644
- 0.0.0.0:6050:6050 - 0.0.0.0:6050:6050

View File

@@ -5,5 +5,6 @@ logstash:
config: config:
- so/0009_input_beats.conf - so/0009_input_beats.conf
- so/0010_input_hhbeats.conf - so/0010_input_hhbeats.conf
- so/0011_input_endgame.conf
- so/9999_output_redis.conf.jinja - so/9999_output_redis.conf.jinja

View File

@@ -13,3 +13,5 @@ logstash:
- so/9500_output_beats.conf.jinja - so/9500_output_beats.conf.jinja
- so/9600_output_ossec.conf.jinja - so/9600_output_ossec.conf.jinja
- so/9700_output_strelka.conf.jinja - so/9700_output_strelka.conf.jinja
- so/9800_output_logscan.conf.jinja
- so/9900_output_endgame.conf.jinja

View File

@@ -25,6 +25,9 @@ base:
- data.* - data.*
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %} {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth - elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %} {% endif %}
- secrets - secrets
- global - global
@@ -44,6 +47,9 @@ base:
- elasticsearch.eval - elasticsearch.eval
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %} {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth - elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %} {% endif %}
- global - global
- minions.{{ grains.id }} - minions.{{ grains.id }}
@@ -55,6 +61,9 @@ base:
- elasticsearch.search - elasticsearch.search
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %} {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth - elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %} {% endif %}
- data.* - data.*
- zeeklogs - zeeklogs
@@ -102,6 +111,9 @@ base:
- elasticsearch.eval - elasticsearch.eval
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %} {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth - elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %} {% endif %}
- global - global
- minions.{{ grains.id }} - minions.{{ grains.id }}

View File

@@ -35,6 +35,7 @@
'influxdb', 'influxdb',
'grafana', 'grafana',
'soc', 'soc',
'kratos',
'firewall', 'firewall',
'idstools', 'idstools',
'suricata.manager', 'suricata.manager',
@@ -45,7 +46,8 @@
'schedule', 'schedule',
'soctopus', 'soctopus',
'tcpreplay', 'tcpreplay',
'docker_clean' 'docker_clean',
'learn'
], ],
'so-heavynode': [ 'so-heavynode': [
'ca', 'ca',
@@ -99,6 +101,7 @@
'manager', 'manager',
'nginx', 'nginx',
'soc', 'soc',
'kratos',
'firewall', 'firewall',
'idstools', 'idstools',
'suricata.manager', 'suricata.manager',
@@ -108,7 +111,8 @@
'zeek', 'zeek',
'schedule', 'schedule',
'tcpreplay', 'tcpreplay',
'docker_clean' 'docker_clean',
'learn'
], ],
'so-manager': [ 'so-manager': [
'salt.master', 'salt.master',
@@ -121,13 +125,15 @@
'influxdb', 'influxdb',
'grafana', 'grafana',
'soc', 'soc',
'kratos',
'firewall', 'firewall',
'idstools', 'idstools',
'suricata.manager', 'suricata.manager',
'utility', 'utility',
'schedule', 'schedule',
'soctopus', 'soctopus',
'docker_clean' 'docker_clean',
'learn'
], ],
'so-managersearch': [ 'so-managersearch': [
'salt.master', 'salt.master',
@@ -139,6 +145,7 @@
'influxdb', 'influxdb',
'grafana', 'grafana',
'soc', 'soc',
'kratos',
'firewall', 'firewall',
'manager', 'manager',
'idstools', 'idstools',
@@ -146,7 +153,8 @@
'utility', 'utility',
'schedule', 'schedule',
'soctopus', 'soctopus',
'docker_clean' 'docker_clean',
'learn'
], ],
'so-node': [ 'so-node': [
'ca', 'ca',
@@ -168,6 +176,7 @@
'influxdb', 'influxdb',
'grafana', 'grafana',
'soc', 'soc',
'kratos',
'firewall', 'firewall',
'idstools', 'idstools',
'suricata.manager', 'suricata.manager',
@@ -178,7 +187,8 @@
'schedule', 'schedule',
'soctopus', 'soctopus',
'tcpreplay', 'tcpreplay',
'docker_clean' 'docker_clean',
'learn'
], ],
'so-sensor': [ 'so-sensor': [
'ca', 'ca',
@@ -233,11 +243,16 @@
{% do allowed_states.append('elasticsearch') %} {% do allowed_states.append('elasticsearch') %}
{% endif %} {% endif %}
{% if KIBANA and grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch', 'so-import'] %} {% if ELASTICSEARCH and grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch', 'so-import'] %}
{% do allowed_states.append('kibana') %} {% do allowed_states.append('elasticsearch.auth') %}
{% endif %} {% endif %}
{% if CURATOR and grains.role in ['so-eval', 'so-standalone', 'so-node', 'so-managersearch', 'so-heavynode'] %} {% if KIBANA and grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch', 'so-import'] %}
{% do allowed_states.append('kibana') %}
{% do allowed_states.append('kibana.secrets') %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-node', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
{% do allowed_states.append('curator') %} {% do allowed_states.append('curator') %}
{% endif %} {% endif %}

View File

@@ -24,8 +24,9 @@ pki_private_key:
- x509: /etc/pki/ca.crt - x509: /etc/pki/ca.crt
{%- endif %} {%- endif %}
/etc/pki/ca.crt: pki_public_ca_crt:
x509.certificate_managed: x509.certificate_managed:
- name: /etc/pki/ca.crt
- signing_private_key: /etc/pki/ca.key - signing_private_key: /etc/pki/ca.key
- CN: {{ manager }} - CN: {{ manager }}
- C: US - C: US

View File

@@ -22,6 +22,7 @@
/opt/so/log/salt/so-salt-minion-check /opt/so/log/salt/so-salt-minion-check
/opt/so/log/salt/minion /opt/so/log/salt/minion
/opt/so/log/salt/master /opt/so/log/salt/master
/opt/so/log/logscan/*.log
{ {
{{ logrotate_conf | indent(width=4) }} {{ logrotate_conf | indent(width=4) }}
} }

View File

@@ -9,6 +9,11 @@ rmvariablesfile:
file.absent: file.absent:
- name: /tmp/variables.txt - name: /tmp/variables.txt
dockergroup:
group.present:
- name: docker
- gid: 920
# Add socore Group # Add socore Group
socoregroup: socoregroup:
group.present: group.present:
@@ -101,16 +106,24 @@ commonpkgs:
- python3-m2crypto - python3-m2crypto
- python3-mysqldb - python3-mysqldb
- python3-packaging - python3-packaging
- python3-lxml
- git - git
- vim - vim
heldpackages: heldpackages:
pkg.installed: pkg.installed:
- pkgs: - pkgs:
{% if grains['oscodename'] == 'bionic' %}
- containerd.io: 1.4.4-1 - containerd.io: 1.4.4-1
- docker-ce: 5:20.10.5~3-0~ubuntu-bionic - docker-ce: 5:20.10.5~3-0~ubuntu-bionic
- docker-ce-cli: 5:20.10.5~3-0~ubuntu-bionic - docker-ce-cli: 5:20.10.5~3-0~ubuntu-bionic
- docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-bionic - docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-bionic
{% elif grains['oscodename'] == 'focal' %}
- containerd.io: 1.4.9-1
- docker-ce: 5:20.10.8~3-0~ubuntu-focal
- docker-ce-cli: 5:20.10.5~3-0~ubuntu-focal
- docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-focal
{% endif %}
- hold: True - hold: True
- update_holds: True - update_holds: True
@@ -136,6 +149,7 @@ commonpkgs:
- python36-m2crypto - python36-m2crypto
- python36-mysql - python36-mysql
- python36-packaging - python36-packaging
- python36-lxml
- yum-utils - yum-utils
- device-mapper-persistent-data - device-mapper-persistent-data
- lvm2 - lvm2
@@ -326,6 +340,16 @@ dockerreserveports:
- name: /etc/sysctl.d/99-reserved-ports.conf - name: /etc/sysctl.d/99-reserved-ports.conf
{% if salt['grains.get']('sosmodel', '') %} {% if salt['grains.get']('sosmodel', '') %}
{% if grains['os'] == 'CentOS' %}
# Install Raid tools
raidpkgs:
pkg.installed:
- skip_suggestions: True
- pkgs:
- securityonion-raidtools
- securityonion-megactl
{% endif %}
# Install raid check cron # Install raid check cron
/usr/sbin/so-raid-status > /dev/null 2>&1: /usr/sbin/so-raid-status > /dev/null 2>&1:
cron.present: cron.present:

View File

@@ -1,64 +0,0 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
UPDATE_DIR=/tmp/sohotfixapply
if [ -z "$1" ]; then
echo "No tarball given. Please provide the filename so I can run the hotfix"
echo "so-airgap-hotfixapply /path/to/sohotfix.tar"
exit 1
else
if [ ! -f "$1" ]; then
echo "Unable to find $1. Make sure your path is correct and retry."
exit 1
else
echo "Determining if we need to apply this hotfix"
rm -rf $UPDATE_DIR
mkdir -p $UPDATE_DIR
tar xvf $1 -C $UPDATE_DIR
# Compare some versions
NEWVERSION=$(cat $UPDATE_DIR/VERSION)
HOTFIXVERSION=$(cat $UPDATE_DIR/HOTFIX)
CURRENTHOTFIX=$(cat /etc/sohotfix)
INSTALLEDVERSION=$(cat /etc/soversion)
if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
echo "Checking to see if there are hotfixes needed"
if [ "$HOTFIXVERSION" == "$CURRENTHOTFIX" ]; then
echo "You are already running the latest version of Security Onion."
rm -rf $UPDATE_DIR
exit 1
else
echo "We need to apply a hotfix"
copy_new_files
echo $HOTFIXVERSION > /etc/sohotfix
salt-call state.highstate -l info queue=True
echo "The Hotfix $HOTFIXVERSION has been applied"
# Clean up
rm -rf $UPDATE_DIR
exit 0
fi
else
echo "This hotfix is not compatible with your current version. Download the latest ISO and run soup"
rm -rf $UPDATE_DIR
fi
fi
fi

View File

@@ -1,4 +1,4 @@
#!/bin/bash #!/usr/bin/env python3
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC # Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
# #
@@ -15,152 +15,199 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common import ipaddress
import textwrap
import os
import subprocess
import sys
import argparse
import re
from lxml import etree as ET
from xml.dom import minidom
from datetime import datetime as dt
from datetime import timezone as tz
local_salt_dir=/opt/so/saltstack/local
SKIP=0
function usage {
cat << EOF
Usage: $0 [-abefhoprsw] [ -i IP ]
This program allows you to add a firewall rule to allow connections from a new IP address or CIDR range.
If you run this program with no arguments, it will present a menu for you to choose your options.
If you want to automate and skip the menu, you can pass the desired options as command line arguments.
EXAMPLES
To add 10.1.2.3 to the analyst role:
so-allow -a -i 10.1.2.3
To add 10.1.2.0/24 to the osquery role:
so-allow -o -i 10.1.2.0/24
EOF
LOCAL_SALT_DIR='/opt/so/saltstack/local'
WAZUH_CONF='/nsm/wazuh/etc/ossec.conf'
VALID_ROLES = {
'a': { 'role': 'analyst','desc': 'Analyst - 80/tcp, 443/tcp' },
'b': { 'role': 'beats_endpoint', 'desc': 'Logstash Beat - 5044/tcp' },
'e': { 'role': 'elasticsearch_rest', 'desc': 'Elasticsearch REST API - 9200/tcp' },
'f': { 'role': 'strelka_frontend', 'desc': 'Strelka frontend - 57314/tcp' },
'o': { 'role': 'osquery_endpoint', 'desc': 'Osquery endpoint - 8090/tcp' },
's': { 'role': 'syslog', 'desc': 'Syslog device - 514/tcp/udp' },
'w': { 'role': 'wazuh_agent', 'desc': 'Wazuh agent - 1514/tcp/udp' },
'p': { 'role': 'wazuh_api', 'desc': 'Wazuh API - 55000/tcp' },
'r': { 'role': 'wazuh_authd', 'desc': 'Wazuh registration service - 1515/tcp' }
} }
while getopts "ahfesprbowi:" OPTION
do
case $OPTION in
h)
usage
exit 0
;;
a)
FULLROLE="analyst"
SKIP=1
;;
b)
FULLROLE="beats_endpoint"
SKIP=1
;;
e)
FULLROLE="elasticsearch_rest"
SKIP=1
;;
f)
FULLROLE="strelka_frontend"
SKIP=1
;;
i) IP=$OPTARG
;;
o)
FULLROLE="osquery_endpoint"
SKIP=1
;;
w)
FULLROLE="wazuh_agent"
SKIP=1
;;
s)
FULLROLE="syslog"
SKIP=1
;;
p)
FULLROLE="wazuh_api"
SKIP=1
;;
r)
FULLROLE="wazuh_authd"
SKIP=1
;;
*)
usage
exit 0
;;
esac
done
if [ "$SKIP" -eq 0 ]; then def validate_ip_cidr(ip_cidr: str) -> bool:
try:
ipaddress.ip_address(ip_cidr)
except ValueError:
try:
ipaddress.ip_network(ip_cidr)
except ValueError:
return False
return True
echo "This program allows you to add a firewall rule to allow connections from a new IP address."
echo ""
echo "Choose the role for the IP or Range you would like to add"
echo ""
echo "[a] - Analyst - ports 80/tcp and 443/tcp"
echo "[b] - Logstash Beat - port 5044/tcp"
echo "[e] - Elasticsearch REST API - port 9200/tcp"
echo "[f] - Strelka frontend - port 57314/tcp"
echo "[o] - Osquery endpoint - port 8090/tcp"
echo "[s] - Syslog device - 514/tcp/udp"
echo "[w] - Wazuh agent - port 1514/tcp/udp"
echo "[p] - Wazuh API - port 55000/tcp"
echo "[r] - Wazuh registration service - 1515/tcp"
echo ""
echo "Please enter your selection:"
read -r ROLE
echo "Enter a single ip address or range to allow (example: 10.10.10.10 or 10.10.0.0/16):"
read -r IP
if [ "$ROLE" == "a" ]; then def role_prompt() -> str:
FULLROLE=analyst print()
elif [ "$ROLE" == "b" ]; then print('Choose the role for the IP or Range you would like to allow')
FULLROLE=beats_endpoint print()
elif [ "$ROLE" == "e" ]; then for role in VALID_ROLES:
FULLROLE=elasticsearch_rest print(f'[{role}] - {VALID_ROLES[role]["desc"]}')
elif [ "$ROLE" == "f" ]; then print()
FULLROLE=strelka_frontend role = input('Please enter your selection: ')
elif [ "$ROLE" == "o" ]; then if role in VALID_ROLES.keys():
FULLROLE=osquery_endpoint return VALID_ROLES[role]['role']
elif [ "$ROLE" == "w" ]; then else:
FULLROLE=wazuh_agent print(f'Invalid role \'{role}\', please try again.', file=sys.stderr)
elif [ "$ROLE" == "s" ]; then sys.exit(1)
FULLROLE=syslog
elif [ "$ROLE" == "p" ]; then
FULLROLE=wazuh_api
elif [ "$ROLE" == "r" ]; then
FULLROLE=wazuh_authd
else
echo "I don't recognize that role"
exit 1
fi
fi
echo "Adding $IP to the $FULLROLE role. This can take a few seconds" def ip_prompt() -> str:
/usr/sbin/so-firewall includehost $FULLROLE $IP ip = input('Enter a single ip address or range to allow (ex: 10.10.10.10 or 10.10.0.0/16): ')
salt-call state.apply firewall queue=True if validate_ip_cidr(ip):
return ip
else:
print(f'Invalid IP address or CIDR block \'{ip}\', please try again.', file=sys.stderr)
sys.exit(1)
def wazuh_enabled() -> bool:
for file in os.listdir(f'{LOCAL_SALT_DIR}/pillar'):
with open(file, 'r') as pillar:
if 'wazuh: 1' in pillar.read():
return True
return False
def root_to_str(root: ET.ElementTree) -> str:
xml_str = ET.tostring(root, encoding='unicode', method='xml').replace('\n', '')
xml_str = re.sub(r'(?:(?<=>) *)', '', xml_str)
xml_str = re.sub(r' -', '', xml_str)
xml_str = re.sub(r' -->', ' -->', xml_str)
dom = minidom.parseString(xml_str)
return dom.toprettyxml(indent=" ")
def add_wl(ip):
parser = ET.XMLParser(remove_blank_text=True)
with open(WAZUH_CONF, 'rb') as wazuh_conf:
tree = ET.parse(wazuh_conf, parser)
root = tree.getroot()
source_comment = ET.Comment(f'Address {ip} added by /usr/sbin/so-allow on {dt.utcnow().replace(tzinfo=tz.utc).strftime("%a %b %e %H:%M:%S %Z %Y")}')
new_global = ET.Element("global")
new_wl = ET.SubElement(new_global, 'white_list')
new_wl.text = ip
root.append(source_comment)
root.append(new_global)
with open(WAZUH_CONF, 'w') as add_out:
add_out.write(root_to_str(root))
def apply(role: str, ip: str) -> int:
firewall_cmd = ['so-firewall', 'includehost', role, ip]
salt_cmd = ['salt-call', 'state.apply', '-l', 'quiet', 'firewall', 'queue=True']
restart_wazuh_cmd = ['so-wazuh-restart']
print(f'Adding {ip} to the {role} role. This can take a few seconds...')
cmd = subprocess.run(firewall_cmd)
if cmd.returncode == 0:
cmd = subprocess.run(salt_cmd, stdout=subprocess.DEVNULL)
else:
return cmd.returncode
if cmd.returncode == 0:
if wazuh_enabled and role=='analyst':
try:
add_wl(ip)
print(f'Added whitelist entry for {ip} from {WAZUH_CONF}', file=sys.stderr)
except Exception as e:
print(f'Failed to add whitelist entry for {ip} from {WAZUH_CONF}', file=sys.stderr)
print(e)
return 1
print('Restarting OSSEC Server...')
cmd = subprocess.run(restart_wazuh_cmd)
else:
return cmd.returncode
else:
print(f'Commmand \'{" ".join(salt_cmd)}\' failed.', file=sys.stderr)
return cmd.returncode
if cmd.returncode != 0:
print('Failed to restart OSSEC server.')
return cmd.returncode
def main():
if os.geteuid() != 0:
print('You must run this script as root', file=sys.stderr)
sys.exit(1)
main_parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=textwrap.dedent(f'''\
additional information:
To use this script in interactive mode call it with no arguments
'''
))
group = main_parser.add_argument_group(title='roles')
group.add_argument('-a', dest='roles', action='append_const', const=VALID_ROLES['a']['role'], help="Analyst - 80/tcp, 443/tcp")
group.add_argument('-b', dest='roles', action='append_const', const=VALID_ROLES['b']['role'], help="Logstash Beat - 5044/tcp")
group.add_argument('-e', dest='roles', action='append_const', const=VALID_ROLES['e']['role'], help="Elasticsearch REST API - 9200/tcp")
group.add_argument('-f', dest='roles', action='append_const', const=VALID_ROLES['f']['role'], help="Strelka frontend - 57314/tcp")
group.add_argument('-o', dest='roles', action='append_const', const=VALID_ROLES['o']['role'], help="Osquery endpoint - 8090/tcp")
group.add_argument('-s', dest='roles', action='append_const', const=VALID_ROLES['s']['role'], help="Syslog device - 514/tcp/udp")
group.add_argument('-w', dest='roles', action='append_const', const=VALID_ROLES['w']['role'], help="Wazuh agent - 1514/tcp/udp")
group.add_argument('-p', dest='roles', action='append_const', const=VALID_ROLES['p']['role'], help="Wazuh API - 55000/tcp")
group.add_argument('-r', dest='roles', action='append_const', const=VALID_ROLES['r']['role'], help="Wazuh registration service - 1515/tcp")
ip_g = main_parser.add_argument_group(title='allow')
ip_g.add_argument('-i', help="IP or CIDR block to disallow connections from, requires at least one role argument", metavar='', dest='ip')
args = main_parser.parse_args(sys.argv[1:])
if args.roles is None:
role = role_prompt()
ip = ip_prompt()
try:
return_code = apply(role, ip)
except Exception as e:
print(f'Unexpected exception occurred: {e}', file=sys.stderr)
return_code = e.errno
sys.exit(return_code)
elif args.roles is not None and args.ip is None:
if os.environ.get('IP') is None:
main_parser.print_help()
sys.exit(1)
else:
args.ip = os.environ['IP']
if validate_ip_cidr(args.ip):
try:
for role in args.roles:
return_code = apply(role, args.ip)
if return_code > 0:
break
except Exception as e:
print(f'Unexpected exception occurred: {e}', file=sys.stderr)
return_code = e.errno
else:
print(f'Invalid IP address or CIDR block \'{args.ip}\', please try again.', file=sys.stderr)
return_code = 1
sys.exit(return_code)
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
sys.exit(1)
# Check if Wazuh enabled
if grep -q -R "wazuh: 1" $local_salt_dir/pillar/*; then
# If analyst, add to Wazuh AR whitelist
if [ "$FULLROLE" == "analyst" ]; then
WAZUH_MGR_CFG="/nsm/wazuh/etc/ossec.conf"
if ! grep -q "<white_list>$IP</white_list>" $WAZUH_MGR_CFG ; then
DATE=$(date)
sed -i 's/<\/ossec_config>//' $WAZUH_MGR_CFG
sed -i '/^$/N;/^\n$/D' $WAZUH_MGR_CFG
echo -e "<!--Address $IP added by /usr/sbin/so-allow on \"$DATE\"-->\n <global>\n <white_list>$IP</white_list>\n </global>\n</ossec_config>" >> $WAZUH_MGR_CFG
echo "Added whitelist entry for $IP in $WAZUH_MGR_CFG."
echo
echo "Restarting OSSEC Server..."
/usr/sbin/so-wazuh-restart
fi
fi
fi

View File

@@ -17,4 +17,4 @@
. /usr/sbin/so-common . /usr/sbin/so-common
salt-call state.highstate -linfo salt-call state.highstate -l info

View File

@@ -99,6 +99,15 @@ check_password() {
return $? return $?
} }
check_password_and_exit() {
local password=$1
if ! check_password "$password"; then
echo "Password is invalid. Do not include single quotes, double quotes, dollar signs, and backslashes in the password."
exit 2
fi
return 0
}
check_elastic_license() { check_elastic_license() {
[ -n "$TESTING" ] && return [ -n "$TESTING" ] && return
@@ -372,18 +381,29 @@ set_version() {
fi fi
} }
has_uppercase() {
local string=$1
echo "$string" | grep -qP '[A-Z]' \
&& return 0 \
|| return 1
}
valid_cidr() { valid_cidr() {
# Verify there is a backslash in the string # Verify there is a backslash in the string
echo "$1" | grep -qP "^[^/]+/[^/]+$" || return 1 echo "$1" | grep -qP "^[^/]+/[^/]+$" || return 1
local cidr valid_ip4_cidr_mask "$1" && return 0 || return 1
local ip
cidr=$(echo "$1" | sed 's/.*\///') local cidr="$1"
ip=$(echo "$1" | sed 's/\/.*//' ) local ip
ip=$(echo "$cidr" | sed 's/\/.*//' )
if valid_ip4 "$ip"; then if valid_ip4 "$ip"; then
[[ $cidr =~ ([0-9]|[1-2][0-9]|3[0-2]) ]] && return 0 || return 1 local ip1 ip2 ip3 ip4 N
IFS="./" read -r ip1 ip2 ip3 ip4 N <<< "$cidr"
ip_total=$((ip1 * 256 ** 3 + ip2 * 256 ** 2 + ip3 * 256 + ip4))
[[ $((ip_total % 2**(32-N))) == 0 ]] && return 0 || return 1
else else
return 1 return 1
fi fi
@@ -433,6 +453,23 @@ valid_ip4() {
echo "$ip" | grep -qP '^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$' && return 0 || return 1 echo "$ip" | grep -qP '^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$' && return 0 || return 1
} }
valid_ip4_cidr_mask() {
# Verify there is a backslash in the string
echo "$1" | grep -qP "^[^/]+/[^/]+$" || return 1
local cidr
local ip
cidr=$(echo "$1" | sed 's/.*\///')
ip=$(echo "$1" | sed 's/\/.*//' )
if valid_ip4 "$ip"; then
[[ $cidr =~ ^([0-9]|[1-2][0-9]|3[0-2])$ ]] && return 0 || return 1
else
return 1
fi
}
valid_int() { valid_int() {
local num=$1 local num=$1
local min=${2:-1} local min=${2:-1}

213
salt/common/tools/sbin/so-deny Executable file
View File

@@ -0,0 +1,213 @@
#!/usr/bin/env python3
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import ipaddress
import textwrap
import os
import subprocess
import sys
import argparse
import re
from lxml import etree as ET
from xml.dom import minidom
LOCAL_SALT_DIR='/opt/so/saltstack/local'
WAZUH_CONF='/nsm/wazuh/etc/ossec.conf'
VALID_ROLES = {
'a': { 'role': 'analyst','desc': 'Analyst - 80/tcp, 443/tcp' },
'b': { 'role': 'beats_endpoint', 'desc': 'Logstash Beat - 5044/tcp' },
'e': { 'role': 'elasticsearch_rest', 'desc': 'Elasticsearch REST API - 9200/tcp' },
'f': { 'role': 'strelka_frontend', 'desc': 'Strelka frontend - 57314/tcp' },
'o': { 'role': 'osquery_endpoint', 'desc': 'Osquery endpoint - 8090/tcp' },
's': { 'role': 'syslog', 'desc': 'Syslog device - 514/tcp/udp' },
'w': { 'role': 'wazuh_agent', 'desc': 'Wazuh agent - 1514/tcp/udp' },
'p': { 'role': 'wazuh_api', 'desc': 'Wazuh API - 55000/tcp' },
'r': { 'role': 'wazuh_authd', 'desc': 'Wazuh registration service - 1515/tcp' }
}
def validate_ip_cidr(ip_cidr: str) -> bool:
try:
ipaddress.ip_address(ip_cidr)
except ValueError:
try:
ipaddress.ip_network(ip_cidr)
except ValueError:
return False
return True
def role_prompt() -> str:
print()
print('Choose the role for the IP or Range you would like to deny')
print()
for role in VALID_ROLES:
print(f'[{role}] - {VALID_ROLES[role]["desc"]}')
print()
role = input('Please enter your selection: ')
if role in VALID_ROLES.keys():
return VALID_ROLES[role]['role']
else:
print(f'Invalid role \'{role}\', please try again.', file=sys.stderr)
sys.exit(1)
def ip_prompt() -> str:
ip = input('Enter a single ip address or range to deny (ex: 10.10.10.10 or 10.10.0.0/16): ')
if validate_ip_cidr(ip):
return ip
else:
print(f'Invalid IP address or CIDR block \'{ip}\', please try again.', file=sys.stderr)
sys.exit(1)
def wazuh_enabled() -> bool:
for file in os.listdir(f'{LOCAL_SALT_DIR}/pillar'):
with open(file, 'r') as pillar:
if 'wazuh: 1' in pillar.read():
return True
return False
def root_to_str(root: ET.ElementTree) -> str:
xml_str = ET.tostring(root, encoding='unicode', method='xml').replace('\n', '')
xml_str = re.sub(r'(?:(?<=>) *)', '', xml_str)
# Remove specific substrings to better format comments on intial parse/write
xml_str = re.sub(r' -', '', xml_str)
xml_str = re.sub(r' -->', ' -->', xml_str)
dom = minidom.parseString(xml_str)
return dom.toprettyxml(indent=" ")
def rem_wl(ip):
parser = ET.XMLParser(remove_blank_text=True)
with open(WAZUH_CONF, 'rb') as wazuh_conf:
tree = ET.parse(wazuh_conf, parser)
root = tree.getroot()
global_elems = root.findall(f"global/white_list[. = '{ip}']/..")
if len(global_elems) > 0:
for g_elem in global_elems:
ge_index = list(root).index(g_elem)
if ge_index > 0 and root[list(root).index(g_elem) - 1].tag == ET.Comment:
root.remove(root[ge_index - 1])
root.remove(g_elem)
with open(WAZUH_CONF, 'w') as out:
out.write(root_to_str(root))
def apply(role: str, ip: str) -> int:
firewall_cmd = ['so-firewall', 'excludehost', role, ip]
salt_cmd = ['salt-call', 'state.apply', '-l', 'quiet', 'firewall', 'queue=True']
restart_wazuh_cmd = ['so-wazuh-restart']
print(f'Removing {ip} from the {role} role. This can take a few seconds...')
cmd = subprocess.run(firewall_cmd)
if cmd.returncode == 0:
cmd = subprocess.run(salt_cmd, stdout=subprocess.DEVNULL)
else:
return cmd.returncode
if cmd.returncode == 0:
if wazuh_enabled and role=='analyst':
try:
rem_wl(ip)
print(f'Removed whitelist entry for {ip} from {WAZUH_CONF}', file=sys.stderr)
except Exception as e:
print(f'Failed to remove whitelist entry for {ip} from {WAZUH_CONF}', file=sys.stderr)
print(e)
return 1
print('Restarting OSSEC Server...')
cmd = subprocess.run(restart_wazuh_cmd)
else:
return cmd.returncode
else:
print(f'Commmand \'{" ".join(salt_cmd)}\' failed.', file=sys.stderr)
return cmd.returncode
if cmd.returncode != 0:
print('Failed to restart OSSEC server.')
return cmd.returncode
def main():
if os.geteuid() != 0:
print('You must run this script as root', file=sys.stderr)
sys.exit(1)
main_parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=textwrap.dedent(f'''\
additional information:
To use this script in interactive mode call it with no arguments
'''
))
group = main_parser.add_argument_group(title='roles')
group.add_argument('-a', dest='roles', action='append_const', const=VALID_ROLES['a']['role'], help="Analyst - 80/tcp, 443/tcp")
group.add_argument('-b', dest='roles', action='append_const', const=VALID_ROLES['b']['role'], help="Logstash Beat - 5044/tcp")
group.add_argument('-e', dest='roles', action='append_const', const=VALID_ROLES['e']['role'], help="Elasticsearch REST API - 9200/tcp")
group.add_argument('-f', dest='roles', action='append_const', const=VALID_ROLES['f']['role'], help="Strelka frontend - 57314/tcp")
group.add_argument('-o', dest='roles', action='append_const', const=VALID_ROLES['o']['role'], help="Osquery endpoint - 8090/tcp")
group.add_argument('-s', dest='roles', action='append_const', const=VALID_ROLES['s']['role'], help="Syslog device - 514/tcp/udp")
group.add_argument('-w', dest='roles', action='append_const', const=VALID_ROLES['w']['role'], help="Wazuh agent - 1514/tcp/udp")
group.add_argument('-p', dest='roles', action='append_const', const=VALID_ROLES['p']['role'], help="Wazuh API - 55000/tcp")
group.add_argument('-r', dest='roles', action='append_const', const=VALID_ROLES['r']['role'], help="Wazuh registration service - 1515/tcp")
ip_g = main_parser.add_argument_group(title='allow')
ip_g.add_argument('-i', help="IP or CIDR block to disallow connections from, requires at least one role argument", metavar='', dest='ip')
args = main_parser.parse_args(sys.argv[1:])
if args.roles is None:
role = role_prompt()
ip = ip_prompt()
try:
return_code = apply(role, ip)
except Exception as e:
print(f'Unexpected exception occurred: {e}', file=sys.stderr)
return_code = e.errno
sys.exit(return_code)
elif args.roles is not None and args.ip is None:
if os.environ.get('IP') is None:
main_parser.print_help()
sys.exit(1)
else:
args.ip = os.environ['IP']
if validate_ip_cidr(args.ip):
try:
for role in args.roles:
return_code = apply(role, args.ip)
if return_code > 0:
break
except Exception as e:
print(f'Unexpected exception occurred: {e}', file=sys.stderr)
return_code = e.errno
else:
print(f'Invalid IP address or CIDR block \'{args.ip}\', please try again.', file=sys.stderr)
return_code = 1
sys.exit(return_code)
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
sys.exit(1)

View File

@@ -70,7 +70,7 @@ do
done done
docker_exec(){ docker_exec(){
CMD="docker exec -it so-elastalert elastalert-test-rule /opt/elastalert/rules/$RULE_NAME --config /opt/config/elastalert_config.yaml $OPTIONS" CMD="docker exec -it so-elastalert elastalert-test-rule /opt/elastalert/rules/$RULE_NAME --config /opt/elastalert/config.yaml $OPTIONS"
if [ "${RESULTS_TO_LOG,,}" = "y" ] ; then if [ "${RESULTS_TO_LOG,,}" = "y" ] ; then
$CMD > "$FILE_SAVE_LOCATION" $CMD > "$FILE_SAVE_LOCATION"
else else

View File

@@ -0,0 +1,155 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
source $(dirname $0)/so-common
require_manager
user=$1
elasticUsersFile=${ELASTIC_USERS_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users}
elasticAuthPillarFile=${ELASTIC_AUTH_PILLAR_FILE:-/opt/so/saltstack/local/pillar/elasticsearch/auth.sls}
if [[ $# -ne 1 ]]; then
echo "Usage: $0 <user>"
echo ""
echo " where <user> is one of the following:"
echo ""
echo " all: Reset the password for the so_elastic, so_kibana, so_logstash, so_beats, and so_monitor users"
echo " so_elastic: Reset the password for the so_elastic user"
echo " so_kibana: Reset the password for the so_kibana user"
echo " so_logstash: Reset the password for the so_logstash user"
echo " so_beats: Reset the password for the so_beats user"
echo " so_monitor: Reset the password for the so_monitor user"
echo ""
exit 1
fi
# function to create a lock so that the so-user sync cronjob can't run while this is running
function lock() {
# Obtain file descriptor lock
exec 99>/var/tmp/so-user.lock || fail "Unable to create lock descriptor; if the system was not shutdown gracefully you may need to remove /var/tmp/so-user.lock manually."
flock -w 10 99 || fail "Another process is using so-user; if the system was not shutdown gracefully you may need to remove /var/tmp/so-user.lock manually."
trap 'rm -f /var/tmp/so-user.lock' EXIT
}
function unlock() {
rm -f /var/tmp/so-user.lock
}
function fail() {
msg=$1
echo "$1"
exit 1
}
function removeSingleUserPass() {
local user=$1
sed -i '/user: '"${user}"'/{N;/pass: /d}' "${elasticAuthPillarFile}"
}
function removeAllUserPass() {
local userList=("so_elastic" "so_kibana" "so_logstash" "so_beats" "so_monitor")
for u in ${userList[@]}; do
removeSingleUserPass "$u"
done
}
function removeElasticUsersFile() {
rm -f "$elasticUsersFile"
}
function createElasticAuthPillar() {
salt-call state.apply elasticsearch.auth queue=True
}
# this will disable highstate to prevent a highstate from starting while the script is running
# will also disable salt.minion-state-apply-test allow so-salt-minion-check cronjob to restart salt-minion service incase
function disableSaltStates() {
printf "\nDisabling salt.minion-state-apply-test and highstate from running.\n\n"
salt-call state.disable salt.minion-state-apply-test
salt-call state.disable highstate
}
function enableSaltStates() {
printf "\nEnabling salt.minion-state-apply-test and highstate.\n\n"
salt-call state.enable salt.minion-state-apply-test
salt-call state.enable highstate
}
function killAllSaltJobs() {
printf "\nKilling all running salt jobs.\n\n"
salt-call saltutil.kill_all_jobs
}
function soUserSync() {
# apply this state to update /opt/so/saltstack/local/salt/elasticsearch/curl.config on the manager
salt-call state.sls_id elastic_curl_config_distributed manager queue=True
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' saltutil.kill_all_jobs
# apply this state to get the curl.config
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' state.sls_id elastic_curl_config common queue=True
$(dirname $0)/so-user sync
printf "\nApplying logstash state to the appropriate nodes.\n\n"
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' state.apply logstash queue=True
printf "\nApplying filebeat state to the appropriate nodes.\n\n"
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode or G@role:so-sensor or G@role:so-fleet' state.apply filebeat queue=True
printf "\nApplying kibana state to the appropriate nodes.\n\n"
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch' state.apply kibana queue=True
printf "\nApplying curator state to the appropriate nodes.\n\n"
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' state.apply curator queue=True
}
function highstateManager() {
killAllSaltJobs
printf "\nRunning highstate on the manager to finalize password reset.\n\n"
salt-call state.highstate -linfo queue=True
}
case "${user}" in
so_elastic | so_kibana | so_logstash | so_beats | so_monitor)
lock
killAllSaltJobs
disableSaltStates
removeSingleUserPass "$user"
createElasticAuthPillar
removeElasticUsersFile
unlock
soUserSync
enableSaltStates
highstateManager
;;
all)
lock
killAllSaltJobs
disableSaltStates
removeAllUserPass
createElasticAuthPillar
removeElasticUsersFile
unlock
soUserSync
enableSaltStates
highstateManager
;;
*)
fail "Unsupported user: $user"
;;
esac
exit 0

View File

@@ -0,0 +1,57 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set mainint = salt['pillar.get']('host:mainint') %}
{%- set MYIP = salt['grains.get']('ip_interfaces:' ~ mainint)[0] %}
default_conf_dir=/opt/so/conf
ELASTICSEARCH_HOST="{{ MYIP }}"
ELASTICSEARCH_PORT=9200
# Define a default directory to load roles from
ELASTICSEARCH_ROLES="$default_conf_dir/elasticsearch/roles/"
# Wait for ElasticSearch to initialize
echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
{{ ELASTICCURL }} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
break
else
((COUNT+=1))
sleep 1
echo -n "."
fi
done
if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
echo
echo -e "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
echo
fi
cd ${ELASTICSEARCH_ROLES}
echo "Loading templates..."
for role in *; do
name=$(echo "$role" | cut -d. -f1)
so-elasticsearch-query _security/role/$name -XPUT -d @"$role"
done
cd - >/dev/null

View File

@@ -54,7 +54,7 @@ PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_
if [[ "$PIPELINES" -lt 5 ]]; then if [[ "$PIPELINES" -lt 5 ]]; then
echo "Setting up ingest pipeline(s)" echo "Setting up ingest pipeline(s)"
for MODULE in activemq apache auditd aws azure barracuda bluecoat cef checkpoint cisco coredns crowdstrike cyberark cylance elasticsearch envoyproxy f5 fortinet gcp google_workspace googlecloud gsuite haproxy ibmmq icinga iis imperva infoblox iptables juniper kafka kibana logstash microsoft misp mongodb mssql mysql nats netscout nginx o365 okta osquery panw postgresql rabbitmq radware redis santa snort snyk sonicwall sophos squid suricata system tomcat traefik zeek zscaler for MODULE in activemq apache auditd aws azure barracuda bluecoat cef checkpoint cisco coredns crowdstrike cyberark cylance elasticsearch envoyproxy f5 fortinet gcp google_workspace googlecloud gsuite haproxy ibmmq icinga iis imperva infoblox iptables juniper kafka kibana logstash microsoft mongodb mssql mysql nats netscout nginx o365 okta osquery panw postgresql rabbitmq radware redis santa snort snyk sonicwall sophos squid suricata system threatintel tomcat traefik zeek zscaler
do do
echo "Loading $MODULE" echo "Loading $MODULE"
docker exec -i so-filebeat filebeat setup modules -pipelines -modules $MODULE -c $FB_MODULE_YML docker exec -i so-filebeat filebeat setup modules -pipelines -modules $MODULE -c $FB_MODULE_YML

View File

@@ -35,6 +35,7 @@ def showUsage(options, args):
print('') print('')
print(' General commands:') print(' General commands:')
print(' help - Prints this usage information.') print(' help - Prints this usage information.')
print(' apply - Apply the firewall state.')
print('') print('')
print(' Host commands:') print(' Host commands:')
print(' listhostgroups - Lists the known host groups.') print(' listhostgroups - Lists the known host groups.')
@@ -66,11 +67,11 @@ def checkDefaultPortsOption(options):
def checkApplyOption(options): def checkApplyOption(options):
if "--apply" in options: if "--apply" in options:
return apply() return apply(None, None)
def loadYaml(filename): def loadYaml(filename):
file = open(filename, "r") file = open(filename, "r")
return yaml.load(file.read()) return yaml.safe_load(file.read())
def writeYaml(filename, content): def writeYaml(filename, content):
file = open(filename, "w") file = open(filename, "w")
@@ -328,7 +329,7 @@ def removehost(options, args):
code = checkApplyOption(options) code = checkApplyOption(options)
return code return code
def apply(): def apply(options, args):
proc = subprocess.run(['salt-call', 'state.apply', 'firewall', 'queue=True']) proc = subprocess.run(['salt-call', 'state.apply', 'firewall', 'queue=True'])
return proc.returncode return proc.returncode
@@ -356,7 +357,8 @@ def main():
"addport": addport, "addport": addport,
"removeport": removeport, "removeport": removeport,
"addhostgroup": addhostgroup, "addhostgroup": addhostgroup,
"addportgroup": addportgroup "addportgroup": addportgroup,
"apply": apply
} }
code=1 code=1

View File

@@ -2,11 +2,16 @@
#so-fleet-setup $FleetEmail $FleetPassword #so-fleet-setup $FleetEmail $FleetPassword
. /usr/sbin/so-common
if [[ $# -ne 2 ]] ; then if [[ $# -ne 2 ]] ; then
echo "Username or Password was not set - exiting now." echo "Username or Password was not set - exiting now."
exit 1 exit 1
fi fi
USER_EMAIL=$1
USER_PW=$2
# Checking to see if required containers are started... # Checking to see if required containers are started...
if [ ! "$(docker ps -q -f name=so-fleet)" ]; then if [ ! "$(docker ps -q -f name=so-fleet)" ]; then
echo "Starting Docker Containers..." echo "Starting Docker Containers..."
@@ -17,8 +22,16 @@ fi
docker exec so-fleet fleetctl config set --address https://127.0.0.1:8080 --tls-skip-verify --url-prefix /fleet docker exec so-fleet fleetctl config set --address https://127.0.0.1:8080 --tls-skip-verify --url-prefix /fleet
docker exec so-fleet bash -c 'while [[ "$(curl -s -o /dev/null --insecure -w ''%{http_code}'' https://127.0.0.1:8080/fleet)" != "301" ]]; do sleep 5; done' docker exec so-fleet bash -c 'while [[ "$(curl -s -o /dev/null --insecure -w ''%{http_code}'' https://127.0.0.1:8080/fleet)" != "301" ]]; do sleep 5; done'
docker exec so-fleet fleetctl setup --email $1 --password $2
# Create Security Onion Fleet Service Account + Setup Fleet
FLEET_SA_EMAIL=$(lookup_pillar_secret fleet_sa_email)
FLEET_SA_PW=$(lookup_pillar_secret fleet_sa_password)
docker exec so-fleet fleetctl setup --email $FLEET_SA_EMAIL --password $FLEET_SA_PW --name SO_ServiceAccount --org-name SO
# Create User Account
echo "$USER_PW" | so-fleet-user-add "$USER_EMAIL"
# Import Packs & Configs
docker exec so-fleet fleetctl apply -f /packs/palantir/Fleet/Endpoints/MacOS/osquery.yaml docker exec so-fleet fleetctl apply -f /packs/palantir/Fleet/Endpoints/MacOS/osquery.yaml
docker exec so-fleet fleetctl apply -f /packs/palantir/Fleet/Endpoints/Windows/osquery.yaml docker exec so-fleet fleetctl apply -f /packs/palantir/Fleet/Endpoints/Windows/osquery.yaml
docker exec so-fleet fleetctl apply -f /packs/so/so-default.yml docker exec so-fleet fleetctl apply -f /packs/so/so-default.yml

View File

@@ -18,7 +18,7 @@
. /usr/sbin/so-common . /usr/sbin/so-common
usage() { usage() {
echo "Usage: $0 <new-user-name>" echo "Usage: $0 <new-user-email>"
echo "" echo ""
echo "Adds a new user to Fleet. The new password will be read from STDIN." echo "Adds a new user to Fleet. The new password will be read from STDIN."
exit 1 exit 1
@@ -28,37 +28,42 @@ if [ $# -ne 1 ]; then
usage usage
fi fi
USER=$1
MYSQL_PASS=$(lookup_pillar_secret mysql) USER_EMAIL=$1
FLEET_IP=$(lookup_pillar fleet_ip) FLEET_SA_EMAIL=$(lookup_pillar_secret fleet_sa_email)
FLEET_USER=$USER FLEET_SA_PW=$(lookup_pillar_secret fleet_sa_password)
MYSQL_PW=$(lookup_pillar_secret mysql)
# Read password for new user from stdin # Read password for new user from stdin
test -t 0 test -t 0
if [[ $? == 0 ]]; then if [[ $? == 0 ]]; then
echo "Enter new password:" echo "Enter new password:"
fi fi
read -rs FLEET_PASS read -rs USER_PASS
if ! check_password "$FLEET_PASS"; then check_password_and_exit "$USER_PASS"
echo "Password is invalid. Please exclude single quotes, double quotes and backslashes from the password."
exit 2 # Config fleetctl & login with the SO Service Account
fi CONFIG_OUTPUT=$(docker exec so-fleet fleetctl config set --address https://127.0.0.1:8080 --tls-skip-verify --url-prefix /fleet 2>&1 )
SALOGIN_OUTPUT=$(docker exec so-fleet fleetctl login --email $FLEET_SA_EMAIL --password $FLEET_SA_PW 2>&1)
FLEET_HASH=$(docker exec so-soctopus python -c "import bcrypt; print(bcrypt.hashpw('$FLEET_PASS'.encode('utf-8'), bcrypt.gensalt()).decode('utf-8'));" 2>&1)
if [[ $? -ne 0 ]]; then if [[ $? -ne 0 ]]; then
echo "Failed to generate Fleet password hash" echo "Unable to add user to Fleet; Fleet Service account login failed"
echo "$SALOGIN_OUTPUT"
exit 2 exit 2
fi fi
MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PASS fleet -e \ # Create New User
"INSERT INTO users (password,salt,username,email,admin,enabled) VALUES ('$FLEET_HASH','','$FLEET_USER','$FLEET_USER',1,1)" 2>&1) CREATE_OUTPUT=$(docker exec so-fleet fleetctl user create --email $USER_EMAIL --name $USER_EMAIL --password $USER_PASS --global-role admin 2>&1)
if [[ $? -eq 0 ]]; then if [[ $? -eq 0 ]]; then
echo "Successfully added user to Fleet" echo "Successfully added user to Fleet"
else else
echo "Unable to add user to Fleet; user might already exist" echo "Unable to add user to Fleet; user might already exist"
echo "$MYSQL_OUTPUT" echo "$CREATE_OUTPUT"
exit 2 exit 2
fi fi
# Disable forced password reset
MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PW fleet -e \
"UPDATE users SET admin_forced_password_reset = 0 WHERE email = '$USER_EMAIL'" 2>&1)

View File

@@ -0,0 +1,56 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
usage() {
echo "Usage: $0 <user-email>"
echo ""
echo "Deletes a user in Fleet"
exit 1
}
if [ $# -ne 1 ]; then
usage
fi
USER_EMAIL=$1
FLEET_SA_EMAIL=$(lookup_pillar_secret fleet_sa_email)
FLEET_SA_PW=$(lookup_pillar_secret fleet_sa_password)
# Config fleetctl & login with the SO Service Account
CONFIG_OUTPUT=$(docker exec so-fleet fleetctl config set --address https://127.0.0.1:8080 --tls-skip-verify --url-prefix /fleet 2>&1 )
SALOGIN_OUTPUT=$(docker exec so-fleet fleetctl login --email $FLEET_SA_EMAIL --password $FLEET_SA_PW 2>&1)
if [[ $? -ne 0 ]]; then
echo "Unable to delete user from Fleet; Fleet Service account login failed"
echo "$SALOGIN_OUTPUT"
exit 2
fi
# Delete User
DELETE_OUTPUT=$(docker exec so-fleet fleetctl user delete --email $USER_EMAIL 2>&1)
if [[ $? -eq 0 ]]; then
echo "Successfully deleted user from Fleet"
else
echo "Unable to delete user from Fleet"
echo "$DELETE_OUTPUT"
exit 2
fi

View File

@@ -0,0 +1,75 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
usage() {
echo "Usage: $0 <user-name>"
echo ""
echo "Update password for an existing Fleet user. The new password will be read from STDIN."
exit 1
}
if [ $# -ne 1 ]; then
usage
fi
USER=$1
MYSQL_PASS=$(lookup_pillar_secret mysql)
FLEET_IP=$(lookup_pillar fleet_ip)
FLEET_USER=$USER
# test existence of user
MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PASS fleet -e \
"SELECT count(1) FROM users WHERE email='$FLEET_USER'" 2>/dev/null | tail -1)
if [[ $? -ne 0 ]] || [[ $MYSQL_OUTPUT -ne 1 ]] ; then
echo "Test for email [${FLEET_USER}] failed"
echo " expect 1 hit in users database, return $MYSQL_OUTPUT hit(s)."
echo "Unable to update Fleet user password."
exit 2
fi
# Read password for new user from stdin
test -t 0
if [[ $? == 0 ]]; then
echo "Enter new password:"
fi
read -rs FLEET_PASS
if ! check_password "$FLEET_PASS"; then
echo "Password is invalid. Please exclude single quotes, double quotes, dollar signs, and backslashes from the password."
exit 2
fi
FLEET_HASH=$(docker exec so-soctopus python -c "import bcrypt; print(bcrypt.hashpw('$FLEET_PASS'.encode('utf-8'), bcrypt.gensalt()).decode('utf-8'));" 2>&1)
if [[ $? -ne 0 ]]; then
echo "Failed to generate Fleet password hash"
exit 2
fi
MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PASS fleet -e \
"UPDATE users SET password='$FLEET_HASH', salt='' where email='$FLEET_USER'" 2>&1)
if [[ $? -eq 0 ]]; then
echo "Successfully updated Fleet user password"
else
echo "Unable to update Fleet user password"
echo "$MYSQL_OUTPUT"
exit 2
fi

View File

@@ -0,0 +1,17 @@
# this script is used to delete the default Grafana dashboard folders that existed prior to Grafana dashboard and Salt management changes in 2.3.70
folders=$(curl -X GET http://admin:{{salt['pillar.get']('secrets:grafana_admin')}}@localhost:3000/api/folders | jq -r '.[] | @base64')
delfolder=("Manager" "Manager Search" "Sensor Nodes" "Search Nodes" "Standalone" "Eval Mode")
for row in $folders; do
title=$(echo ${row} | base64 --decode | jq -r '.title')
uid=$(echo ${row} | base64 --decode | jq -r '.uid')
if [[ " ${delfolder[@]} " =~ " ${title} " ]]; then
curl -X DELETE http://admin:{{salt['pillar.get']('secrets:grafana_admin')}}@localhost:3000/api/folders/$uid
fi
done
echo "so-grafana-dashboard-folder-delete has been run to delete default Grafana dashboard folders that existed prior to 2.3.70" > /opt/so/state/so-grafana-dashboard-folder-delete-complete
exit 0

View File

@@ -17,6 +17,7 @@
# NOTE: This script depends on so-common # NOTE: This script depends on so-common
IMAGEREPO=security-onion-solutions IMAGEREPO=security-onion-solutions
STATUS_CONF='/opt/so/conf/so-status/so-status.conf'
# shellcheck disable=SC2120 # shellcheck disable=SC2120
container_list() { container_list() {
@@ -138,6 +139,11 @@ update_docker_containers() {
cat $SIGNPATH/KEYS | gpg --import - >> "$LOG_FILE" 2>&1 cat $SIGNPATH/KEYS | gpg --import - >> "$LOG_FILE" 2>&1
fi fi
# If downloading for soup, check if any optional images need to be pulled
if [[ $CURLTYPE == 'soup' ]]; then
grep -q "so-logscan" "$STATUS_CONF" && TRUSTED_CONTAINERS+=("so-logscan")
fi
# Download the containers from the interwebs # Download the containers from the interwebs
for i in "${TRUSTED_CONTAINERS[@]}" for i in "${TRUSTED_CONTAINERS[@]}"
do do

View File

@@ -0,0 +1,58 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
. /usr/sbin/so-image-common
usage() {
read -r -d '' message <<- EOM
usage: so-image-pull [-h] IMAGE [IMAGE ...]
positional arguments:
IMAGE One or more 'so-' prefixed images to download and verify.
optional arguments:
-h, --help Show this help message and exit.
EOM
echo "$message"
exit 1
}
for arg; do
shift
[[ "$arg" = "--quiet" || "$arg" = "-q" ]] && quiet=true && continue
set -- "$@" "$arg"
done
if [[ $# -eq 0 ]] || [[ $1 == '-h' || $1 == '--help' ]]; then
usage
fi
TRUSTED_CONTAINERS=("$@")
set_version
for image in "${TRUSTED_CONTAINERS[@]}"; do
if ! docker images | grep "$image" | grep ":5000" | grep -q "$VERSION"; then
if [[ $quiet == true ]]; then
update_docker_containers "$image" "" "" "/dev/null"
else
update_docker_containers "$image" "" "" ""
fi
else
echo "$image:$VERSION image exists."
fi
done
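
A usage sketch for the script above, assuming it ships as so-image-pull (so-logscan is one of the optional images referenced elsewhere in this changeset):

```
# Download and verify a single image, skipping it if the local
# registry already has a matching tag for the current version
sudo so-image-pull so-logscan

# Same, with download output suppressed
sudo so-image-pull --quiet so-logscan
```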

View File

@@ -0,0 +1,176 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set MANAGER = salt['grains.get']('master') %}
{%- set VERSION = salt['pillar.get']('global:soversion') %}
{%- set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{%- set MANAGERIP = salt['pillar.get']('global:managerip') -%}
{%- set URLBASE = salt['pillar.get']('global:url_base') %}
{% set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{% set ES_PW = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
INDEX_DATE=$(date +'%Y.%m.%d')
RUNID=$(tr -dc 'a-z0-9' < /dev/urandom | fold -w 8 | head -n 1)
LOG_FILE=/nsm/import/evtx-import.log
. /usr/sbin/so-common
function usage {
cat << EOF
Usage: $0 <evtx-file-1> [evtx-file-2] [evtx-file-*]
Imports one or more evtx files into Security Onion. The evtx files will be analyzed and made available for review in the Security Onion toolset.
EOF
}
function evtx2es() {
EVTX=$1
HASH=$2
ES_PW=$(lookup_pillar "auth:users:so_elastic_user:pass" "elasticsearch")
ES_USER=$(lookup_pillar "auth:users:so_elastic_user:user" "elasticsearch")
docker run --rm \
-v "$EVTX:/tmp/$RUNID.evtx" \
--entrypoint evtx2es \
{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} \
--host {{ MANAGERIP }} --scheme https \
--index so-beats-$INDEX_DATE --pipeline import.wel \
--login $ES_USER --pwd $ES_PW \
"/tmp/$RUNID.evtx" >> $LOG_FILE 2>&1
docker run --rm \
-v "$EVTX:/tmp/import.evtx" \
-v "/nsm/import/evtx-end_newest:/tmp/newest" \
-v "/nsm/import/evtx-start_oldest:/tmp/oldest" \
--entrypoint '/evtx_calc_timestamps.sh' \
{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }}
}
# if no parameters supplied, display usage
if [ $# -eq 0 ]; then
usage
exit 1
fi
# ensure this is a Manager node
require_manager
# verify that all parameters are files
for i in "$@"; do
if ! [ -f "$i" ]; then
usage
echo "\"$i\" is not a valid file!"
exit 2
fi
done
# track if we have any valid or invalid evtx
INVALID_EVTXS="no"
VALID_EVTXS="no"
# track oldest start and newest end so that we can generate the Kibana search hyperlink at the end
START_OLDEST="2050-12-31"
END_NEWEST="1971-01-01"
touch /nsm/import/evtx-start_oldest
touch /nsm/import/evtx-end_newest
echo $START_OLDEST > /nsm/import/evtx-start_oldest
echo $END_NEWEST > /nsm/import/evtx-end_newest
# paths must be quoted in case they include spaces
for EVTX in "$@"; do
EVTX=$(/usr/bin/realpath "$EVTX")
echo "Processing Import: ${EVTX}"
# generate a unique hash to assist with dedupe checks
HASH=$(md5sum "${EVTX}" | awk '{ print $1 }')
HASH_DIR=/nsm/import/${HASH}
echo "- assigning unique identifier to import: $HASH"
if [ -d $HASH_DIR ]; then
echo "- this EVTX has already been imported; skipping"
INVALID_EVTXS="yes"
else
VALID_EVTXS="yes"
EVTX_DIR=$HASH_DIR/evtx
mkdir -p $EVTX_DIR
# import evtx and write them to import ingest pipeline
echo "- importing logs to Elasticsearch..."
evtx2es "${EVTX}" $HASH
# compare $START to $START_OLDEST
START=$(cat /nsm/import/evtx-start_oldest)
START_COMPARE=$(date -d $START +%s)
START_OLDEST_COMPARE=$(date -d $START_OLDEST +%s)
if [ $START_COMPARE -lt $START_OLDEST_COMPARE ]; then
START_OLDEST=$START
fi
# compare $ENDNEXT to $END_NEWEST
END=$(cat /nsm/import/evtx-end_newest)
ENDNEXT=$(date +%Y-%m-%d --date="$END 1 day")
ENDNEXT_COMPARE=$(date -d $ENDNEXT +%s)
END_NEWEST_COMPARE=$(date -d $END_NEWEST +%s)
if [ $ENDNEXT_COMPARE -gt $END_NEWEST_COMPARE ]; then
END_NEWEST=$ENDNEXT
fi
cp -f "${EVTX}" "${EVTX_DIR}"/data.evtx
chmod 644 "${EVTX_DIR}"/data.evtx
fi # end of valid evtx
echo
done # end of for-loop processing evtx files
# remove temp files
echo "Cleaning up:"
for TEMP_EVTX in "${TEMP_EVTXS[@]}"; do
echo "- removing temporary evtx $TEMP_EVTX"
rm -f $TEMP_EVTX
done
# output final messages
if [ "$INVALID_EVTXS" = "yes" ]; then
echo
echo "Please note! One or more evtx was invalid! You can scroll up to see which ones were invalid."
fi
START_OLDEST_FORMATTED=$(date +%Y-%m-%d --date="$START_OLDEST")
START_OLDEST_SLASH=$(echo $START_OLDEST_FORMATTED | sed -e 's/-/%2F/g')
END_NEWEST_SLASH=$(echo $END_NEWEST | sed -e 's/-/%2F/g')
if [ "$VALID_EVTXS" = "yes" ]; then
cat << EOF
Import complete!
You can use the following hyperlink to view data in the time range of your import. Triple-click to highlight the entire hyperlink, then copy it into your browser:
https://{{ URLBASE }}/#/hunt?q=import.id:${RUNID}%20%7C%20groupby%20event.module%20event.dataset&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC
or you can manually set your Time Range to be (in UTC):
From: $START_OLDEST_FORMATTED To: $END_NEWEST
Please note that it may take 30 seconds or more for events to appear in Hunt.
EOF
fi
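
A usage sketch, assuming the importer above ships as so-import-evtx (paths are illustrative). Multiple files can be passed at once, and any file whose MD5 hash has already been imported is skipped:

```
sudo so-import-evtx /cases/host1/Security.evtx /cases/host1/System.evtx
```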

salt/common/tools/sbin/so-influxdb-drop-autogen Normal file → Executable file
View File

View File

@@ -8,9 +8,9 @@ fi
echo "This tool will update a manager's IP address to the new IP assigned to the management network interface." echo "This tool will update a manager's IP address to the new IP assigned to the management network interface."
echo echo ""
echo "WARNING: This tool is still undergoing testing, use at your own risk!" echo "WARNING: This tool is still undergoing testing, use at your own risk!"
echo echo ""
if [ -z "$OLD_IP" ]; then if [ -z "$OLD_IP" ]; then
OLD_IP=$(lookup_pillar "managerip") OLD_IP=$(lookup_pillar "managerip")
@@ -27,7 +27,7 @@ if [ -z "$NEW_IP" ]; then
NEW_IP=$(ip -4 addr list $iface | grep inet | cut -d' ' -f6 | cut -d/ -f1) NEW_IP=$(ip -4 addr list $iface | grep inet | cut -d' ' -f6 | cut -d/ -f1)
if [ -z "$NEW_IP" ]; then if [ -z "$NEW_IP" ]; then
fail "Unable to detect new IP on interface $iface. " fail "Unable to detect new IP on interface $iface."
fi fi
echo "Detected new IP $NEW_IP on interface $iface." echo "Detected new IP $NEW_IP on interface $iface."
@@ -39,9 +39,9 @@ fi
echo "About to change old IP $OLD_IP to new IP $NEW_IP." echo "About to change old IP $OLD_IP to new IP $NEW_IP."
echo echo ""
read -n 1 -p "Would you like to continue? (y/N) " CONTINUE read -n 1 -p "Would you like to continue? (y/N) " CONTINUE
echo echo ""
if [ "$CONTINUE" == "y" ]; then if [ "$CONTINUE" == "y" ]; then
for file in $(grep -rlI $OLD_IP /opt/so/saltstack /etc); do for file in $(grep -rlI $OLD_IP /opt/so/saltstack /etc); do
@@ -49,6 +49,11 @@ if [ "$CONTINUE" == "y" ]; then
sed -i "s|$OLD_IP|$NEW_IP|g" $file sed -i "s|$OLD_IP|$NEW_IP|g" $file
done done
echo "Granting MySQL root user permissions on $NEW_IP"
docker exec -i so-mysql mysql --user=root --password=$(lookup_pillar_secret 'mysql') -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'$NEW_IP' IDENTIFIED BY '$(lookup_pillar_secret 'mysql')' WITH GRANT OPTION;" &> /dev/null
echo "Removing MySQL root user from $OLD_IP"
docker exec -i so-mysql mysql --user=root --password=$(lookup_pillar_secret 'mysql') -e "DROP USER 'root'@'$OLD_IP';" &> /dev/null
echo "The IP has been changed from $OLD_IP to $NEW_IP." echo "The IP has been changed from $OLD_IP to $NEW_IP."
echo echo

View File

@@ -15,19 +15,16 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
# Get the latest code . /usr/sbin/so-common
rm -rf /tmp/sohotfix
mkdir -p /tmp/sohotfix echo $banner
cd /tmp/sohotfix echo "Running kibana.so_savedobjects_defaults Salt state to restore default saved objects."
git clone https://github.com/Security-Onion-Solutions/securityonion printf "This could take a while if another Salt job is running. \nRun this command with --force to stop all Salt jobs before proceeding.\n"
if [ ! -d "/tmp/sohotfix/securityonion" ]; then echo $banner
echo "I was unable to get the latest code. Check your internet and try again."
exit 1 if [ "$1" = "--force" ]; then
else printf "\nForce-stopping all Salt jobs before proceeding\n\n"
echo "Looks like we have the code lets create the tarball." salt-call saltutil.kill_all_jobs
cd /tmp/sohotfix/securityonion fi
tar cvf /tmp/sohotfix/sohotfix.tar HOTFIX VERSION salt pillar
echo "" salt-call state.apply kibana.so_savedobjects_defaults -linfo queue=True
echo "Copy /tmp/sohotfix/sohotfix.tar to portable media and then copy it to your airgap manager."
exit 0
fi
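
Per the replacement script above, restoring the default Kibana saved objects now reduces to a single Salt state. A sketch of the two ways to run it by hand, mirroring the script's default and --force behavior:

```
# Default: queue behind any Salt job that is already running
salt-call state.apply kibana.so_savedobjects_defaults -linfo queue=True

# Equivalent of --force: stop running Salt jobs first
salt-call saltutil.kill_all_jobs
salt-call state.apply kibana.so_savedobjects_defaults -linfo queue=True
```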

View File

@@ -1,5 +1,5 @@
. /usr/sbin/so-common . /usr/sbin/so-common
{% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}
wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}" wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}"
## This hackery will be removed if using Elastic Auth ## ## This hackery will be removed if using Elastic Auth ##
@@ -9,5 +9,9 @@ SESSIONCOOKIE=$({{ ELASTICCURL }} -c - -X GET http://localhost:5601/ | grep sid
# Disable certain Features from showing up in the Kibana UI # Disable certain Features from showing up in the Kibana UI
echo echo
echo "Setting up default Space:" echo "Setting up default Space:"
{% if HIGHLANDER %}
{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["enterpriseSearch"]} ' >> /opt/so/log/kibana/misc.log
{% else %}
{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log {{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
{% endif %}
echo echo

salt/common/tools/sbin/so-learn Executable file
View File

@@ -0,0 +1,303 @@
#!/usr/bin/env python3
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from itertools import chain
from typing import List
import signal
import sys
import os
import re
import subprocess
import argparse
import textwrap
import yaml
import multiprocessing
import docker
import pty
minion_pillar_dir = '/opt/so/saltstack/local/pillar/minions'
so_status_conf = '/opt/so/conf/so-status/so-status.conf'
proc: subprocess.CompletedProcess = None
# Temp store of modules, will likely be broken out into salt
def get_learn_modules():
return {
'logscan': { 'cpu_period': get_cpu_period(fraction=0.25), 'enabled': False, 'description': 'Scan log files against pre-trained models to alert on anomalies.' }
}
def get_cpu_period(fraction: float):
multiplier = 10000
num_cores = multiprocessing.cpu_count()
if num_cores <= 2:
fraction = 1.
num_used_cores = int(num_cores * fraction)
cpu_period = num_used_cores * multiplier
return cpu_period
def sigint_handler(*_):
print('Exiting gracefully on Ctrl-C')
if proc is not None: proc.send_signal(signal.SIGINT)
sys.exit(1)
def find_minion_pillar() -> str:
    regex = r'^.*_(manager|managersearch|standalone|import|eval)\.sls$'
result = []
for root, _, files in os.walk(minion_pillar_dir):
for f_minion_id in files:
if re.search(regex, f_minion_id):
result.append(os.path.join(root, f_minion_id))
if len(result) == 0:
print('Could not find manager-type pillar (eval, standalone, manager, managersearch, import). Are you running this script on the manager?', file=sys.stderr)
sys.exit(3)
    elif len(result) > 1:
        res_str = ', '.join(f'"{r}"' for r in result)
        print('(This should not happen; the system is in an error state if you see this message.)\n', file=sys.stderr)
        print('More than one manager-type pillar exists; minion IDs are listed below:', file=sys.stderr)
        print(f'  {res_str}', file=sys.stderr)
        sys.exit(3)
else:
return result[0]
def read_pillar(pillar: str):
try:
with open(pillar, 'r') as pillar_file:
loaded_yaml = yaml.safe_load(pillar_file.read())
if loaded_yaml is None:
print(f'Could not parse {pillar}', file=sys.stderr)
sys.exit(3)
return loaded_yaml
except:
print(f'Could not open {pillar}', file=sys.stderr)
sys.exit(3)
def write_pillar(pillar: str, content: dict):
try:
with open(pillar, 'w') as pillar_file:
yaml.dump(content, pillar_file, default_flow_style=False)
except:
print(f'Could not open {pillar}', file=sys.stderr)
sys.exit(3)
def mod_so_status(action: str, item: str):
with open(so_status_conf, 'a+') as conf:
conf.seek(0)
containers = conf.readlines()
if f'so-{item}\n' in containers:
if action == 'remove': containers.remove(f'so-{item}\n')
if action == 'add': pass
else:
if action == 'remove': pass
if action == 'add': containers.append(f'so-{item}\n')
        containers = [c for c in containers if c != '\n']  # remove extra newlines
conf.seek(0)
conf.truncate(0)
conf.writelines(containers)
def create_pillar_if_not_exist(pillar:str, content: dict):
pillar_dict = content
if pillar_dict.get('learn', {}).get('modules') is None:
pillar_dict['learn'] = {}
pillar_dict['learn']['modules'] = get_learn_modules()
write_pillar(pillar, content)
return content
def salt_call(module: str):
salt_cmd = ['salt-call', 'state.apply', '-l', 'quiet', f'learn.{module}', 'queue=True']
print(f' Applying salt state for {module} module...')
proc = subprocess.run(salt_cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
return_code = proc.returncode
if return_code != 0:
print(f' [ERROR] Failed to apply salt state for {module} module.')
return return_code
def pull_image(module: str):
container_basename = f'so-{module}'
client = docker.from_env()
image_list = client.images.list(filters={ 'dangling': False })
tag_list = list(chain.from_iterable(list(map(lambda x: x.attrs.get('RepoTags'), image_list))))
basename_match = list(filter(lambda x: f'{container_basename}' in x, tag_list))
local_registry_match = list(filter(lambda x: ':5000' in x, basename_match))
if len(local_registry_match) == 0:
print(f'Pulling and verifying missing image for {module} (may take several minutes) ...')
pull_command = ['so-image-pull', '--quiet', container_basename]
proc = subprocess.run(pull_command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
return_code = proc.returncode
if return_code != 0:
print(f'[ERROR] Failed to pull image so-{module}, skipping state.')
else:
return_code = 0
return return_code
def apply(module_list: List):
return_code = 0
for module in module_list:
salt_ret = salt_call(module)
# Only update return_code if the command returned a non-zero return
if salt_ret != 0:
return_code = salt_ret
return return_code
def check_apply(args: dict):
if args.apply:
print('Configuration updated. Applying changes:')
return apply(args.modules)
else:
message = 'Configuration updated. Would you like to apply your changes now? (y/N) '
answer = input(message)
while answer.lower() not in [ 'y', 'n', '' ]:
answer = input(message)
if answer.lower() in [ 'n', '' ]:
return 0
else:
print('Applying changes:')
return apply(args.modules)
def enable_disable_modules(args, enable: bool):
pillar_modules = args.pillar_dict.get('learn', {}).get('modules')
pillar_mod_names = args.pillar_dict.get('learn', {}).get('modules').keys()
action_str = 'add' if enable else 'remove'
if 'all' in args.modules:
for module, details in pillar_modules.items():
details['enabled'] = enable
mod_so_status(action_str, module)
if enable: pull_image(module)
write_pillar(args.pillar, args.pillar_dict)
else:
write_needed = False
for module in args.modules:
if module in pillar_mod_names:
if pillar_modules[module]['enabled'] == enable:
state_str = 'enabled' if enable else 'disabled'
print(f'{module} module already {state_str}.', file=sys.stderr)
else:
if enable and pull_image(module) != 0:
continue
pillar_modules[module]['enabled'] = enable
mod_so_status(action_str, module)
write_needed = True
if write_needed:
write_pillar(args.pillar, args.pillar_dict)
cmd_ret = check_apply(args)
return cmd_ret
def enable_modules(args):
    return enable_disable_modules(args, enable=True)
def disable_modules(args):
    return enable_disable_modules(args, enable=False)
def list_modules(*_):
print('Available ML modules:')
for module, details in get_learn_modules().items():
print(f' - { module } : {details["description"]}')
return 0
def main():
beta_str = 'BETA - SUBJECT TO CHANGE\n'
apply_help='After ACTION the chosen modules, apply any necessary salt states.'
enable_apply_help = apply_help.replace('ACTION', 'enabling')
disable_apply_help = apply_help.replace('ACTION', 'disabling')
signal.signal(signal.SIGINT, sigint_handler)
if os.geteuid() != 0:
print('You must run this script as root', file=sys.stderr)
sys.exit(1)
main_parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter)
subcommand_desc = textwrap.dedent(
"""\
enable Enable one or more ML modules.
disable Disable one or more ML modules.
list List all available ML modules.
"""
)
subparsers = main_parser.add_subparsers(title='commands', description=subcommand_desc, metavar='', dest='command')
module_help_str = 'One or more ML modules, which can be listed using \'so-learn list\'. Use the keyword \'all\' to apply the action to all available modules.'
enable = subparsers.add_parser('enable')
enable.set_defaults(func=enable_modules)
enable.add_argument('modules', metavar='ML_MODULE', nargs='+', help=module_help_str)
enable.add_argument('--apply', action='store_const', const=True, required=False, help=enable_apply_help)
disable = subparsers.add_parser('disable')
disable.set_defaults(func=disable_modules)
disable.add_argument('modules', metavar='ML_MODULE', nargs='+', help=module_help_str)
disable.add_argument('--apply', action='store_const', const=True, required=False, help=disable_apply_help)
    list_parser = subparsers.add_parser('list')
    list_parser.set_defaults(func=list_modules)
args = main_parser.parse_args(sys.argv[1:])
args.pillar = find_minion_pillar()
args.pillar_dict = create_pillar_if_not_exist(args.pillar, read_pillar(args.pillar))
if hasattr(args, 'func'):
exit_code = args.func(args)
else:
if args.command is None:
print(beta_str)
main_parser.print_help()
sys.exit(0)
sys.exit(exit_code)
if __name__ == '__main__':
main()
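
A usage sketch for so-learn; logscan is the single module currently defined in get_learn_modules() above:

```
# List available ML modules
sudo so-learn list

# Enable the logscan module and apply the Salt state without prompting
sudo so-learn enable logscan --apply

# Disable it again
sudo so-learn disable logscan --apply
```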

View File

@@ -1,5 +1,5 @@
#!/bin/bash #!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC # Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
# #
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
@@ -17,42 +17,6 @@
. /usr/sbin/so-common . /usr/sbin/so-common
usage() { ENABLEPLAY=${1:-False}
echo "Usage: $0 <user-name>"
echo ""
echo "Enables or disables a user in Fleet"
exit 1
}
if [ $# -ne 2 ]; then docker exec so-soctopus /usr/local/bin/python -c "import playbook; print(playbook.play_import($ENABLEPLAY))"
usage
fi
USER=$1
MYSQL_PASS=$(lookup_pillar_secret mysql)
FLEET_IP=$(lookup_pillar fleet_ip)
FLEET_USER=$USER
case "${2^^}" in
FALSE | NO | 0)
FLEET_STATUS=0
;;
TRUE | YES | 1)
FLEET_STATUS=1
;;
*)
usage
;;
esac
MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PASS fleet -e \
"UPDATE users SET enabled=$FLEET_STATUS WHERE username='$FLEET_USER'" 2>&1)
if [[ $? -eq 0 ]]; then
echo "Successfully updated user in Fleet"
else
echo "Failed to update user in Fleet"
echo $resp
exit 2
fi

View File

@@ -17,53 +17,101 @@
. /usr/sbin/so-common . /usr/sbin/so-common
check_lsi_raid() { appliance_check() {
# For use for LSI on Ubuntu {%- if salt['grains.get']('sosmodel', '') %}
#MEGA=/opt/MegaRAID/MegeCli/MegaCli64 APPLIANCE=1
#LSIRC=$($MEGA -LDInfo -Lall -aALL | grep Optimal) {%- if grains['sosmodel'] in ['SO2AMI01', 'SO2GCI01', 'SO2AZI01'] %}
# Open Source Centos exit 0
MEGA=/opt/mega/megasasctl {%- endif %}
LSIRC=$($MEGA | grep optimal) DUDEYOUGOTADELL=$(dmidecode |grep Dell)
if [[ -n $DUDEYOUGOTADELL ]]; then
if [[ $LSIRC ]]; then APPTYPE=dell
# Raid is good
LSIRAID=0
else else
LSIRAID=1 APPTYPE=sm
fi
mkdir -p /opt/so/log/raid
{%- else %}
echo "This is not an appliance"
exit 0
{%- endif %}
}
check_nsm_raid() {
PERCCLI=$(/opt/raidtools/perccli/perccli64 /c0/v0 show|grep RAID|grep Optl)
MEGACTL=$(/opt/raidtools/megasasctl |grep optimal)
if [[ $APPLIANCE == '1' ]]; then
if [[ -n $PERCCLI ]]; then
HWRAID=0
elif [[ -n $MEGACTL ]]; then
HWRAID=0
else
HWRAID=1
fi
fi fi
} }
check_boss_raid() {
MVCLI=$(/usr/local/bin/mvcli info -o vd |grep status |grep functional)
if [[ -n $DUDEYOUGOTADELL ]]; then
if [[ -n $MVCLI ]]; then
BOSSRAID=0
else
BOSSRAID=1
fi
fi
}
check_software_raid() { check_software_raid() {
if [[ -n $DUDEYOUGOTADELL ]]; then
SWRC=$(grep "_" /proc/mdstat) SWRC=$(grep "_" /proc/mdstat)
if [[ $SWRC ]]; then if [[ -n $SWRC ]]; then
# RAID is failed in some way # RAID is failed in some way
SWRAID=1 SWRAID=1
else else
SWRAID=0 SWRAID=0
fi fi
fi
} }
# This script checks raid status if you use SO appliances # This script checks raid status if you use SO appliances
# See if this is an appliance # See if this is an appliance
appliance_check
check_nsm_raid
check_boss_raid
{%- if salt['grains.get']('sosmodel', '') %} {%- if salt['grains.get']('sosmodel', '') %}
mkdir -p /opt/so/log/raid {%- if grains['sosmodel'] in ['SOSMN', 'SOSSNNV'] %}
{%- if grains['sosmodel'] in ['SOSMN', 'SOSSNNV'] %}
#check_boss_raid
check_software_raid check_software_raid
echo "nsmraid=$SWRAID" > /opt/so/log/raid/status.log {%- endif %}
{%- elif grains['sosmodel'] in ['SOS1000F', 'SOS1000', 'SOSSN7200', 'SOS10K', 'SOS4000'] %}
#check_boss_raid
check_lsi_raid
echo "nsmraid=$LSIRAID" > /opt/so/log/raid/status.log
{%- else %}
exit 0
{%- endif %}
{%- else %}
exit 0
{%- endif %} {%- endif %}
if [[ -n $SWRAID ]]; then
if [[ $SWRAID == '0' && $BOSSRAID == '0' ]]; then
RAIDSTATUS=0
else
RAIDSTATUS=1
fi
elif [[ -n $DUDEYOUGOTADELL ]]; then
if [[ $BOSSRAID == '0' && $HWRAID == '0' ]]; then
RAIDSTATUS=0
else
RAIDSTATUS=1
fi
elif [[ "$APPTYPE" == 'sm' ]]; then
if [[ -n "$HWRAID" ]]; then
RAIDSTATUS=0
else
RAIDSTATUS=1
fi
fi
echo "nsmraid=$RAIDSTATUS" > /opt/so/log/raid/status.log

View File

@@ -17,4 +17,4 @@
. /usr/sbin/so-common . /usr/sbin/so-common
docker exec -it so-redis redis-cli llen logstash:unparsed docker exec so-redis redis-cli llen logstash:unparsed

View File

@@ -405,7 +405,7 @@ def main():
enabled_list.set_defaults(func=list_enabled_rules) enabled_list.set_defaults(func=list_enabled_rules)
search_term_help='A quoted regex search term (ex: "\$EXTERNAL_NET")' search_term_help='A properly escaped regex search term (ex: "\\\$EXTERNAL_NET")'
replace_term_help='The text to replace the search term with' replace_term_help='The text to replace the search term with'
# Modify actions # Modify actions

View File

@@ -1,13 +1,10 @@
#!/bin/bash #!/bin/bash
got_root() {
# Make sure you are root . /usr/sbin/so-common
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
fi
} argstr=""
for arg in "$@"; do
argstr="${argstr} \"${arg}\""
done
got_root docker exec so-idstools /bin/bash -c "cd /opt/so/idstools/etc && idstools-rulecat --force ${argstr}"
docker exec so-idstools /bin/bash -c "cd /opt/so/idstools/etc && idstools-rulecat $1"
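
The argstr loop above re-quotes every argument so that values containing spaces survive the pass through the inner /bin/bash -c shell. A minimal illustration of the pattern (the arguments are hypothetical):

```
argstr=""
for arg in "--local" "/tmp/my rules.tar.gz"; do
  argstr="${argstr} \"${arg}\""
done
# The embedded quotes keep "/tmp/my rules.tar.gz" intact as one argument:
bash -c "printf -- '%s\n' ${argstr}"
```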

View File

@@ -92,6 +92,10 @@ if [ $CURRENT_TIME -ge $((SYSTEM_START_TIME+$UPTIME_REQ)) ]; then
log "last highstate completed at `date -d @$LAST_HIGHSTATE_END`" I log "last highstate completed at `date -d @$LAST_HIGHSTATE_END`" I
log "checking if any jobs are running" I log "checking if any jobs are running" I
logCmd "salt-call --local saltutil.running" I logCmd "salt-call --local saltutil.running" I
log "ensure salt.minion-state-apply-test is enabled" I
logCmd "salt-call state.enable salt.minion-state-apply-test" I
log "ensure highstate is enabled" I
logCmd "salt-call state.enable highstate" I
log "killing all salt-minion processes" I log "killing all salt-minion processes" I
logCmd "pkill -9 -ef /usr/bin/salt-minion" I logCmd "pkill -9 -ef /usr/bin/salt-minion" I
log "starting salt-minion service" I log "starting salt-minion service" I

View File

@@ -31,7 +31,7 @@ if [[ $# -lt 1 ]]; then
echo "Usage: $0 <pcap-sample(s)>" echo "Usage: $0 <pcap-sample(s)>"
echo echo
echo "All PCAPs must be placed in the /opt/so/samples directory unless replaying" echo "All PCAPs must be placed in the /opt/so/samples directory unless replaying"
echo "a sample pcap that is included in the so-tcpreplay image. Those PCAP sampes" echo "a sample pcap that is included in the so-tcpreplay image. Those PCAP samples"
echo "are located in the /opt/samples directory inside of the image." echo "are located in the /opt/samples directory inside of the image."
echo echo
echo "Customer provided PCAP example:" echo "Customer provided PCAP example:"

View File

@@ -41,10 +41,7 @@ if [[ $? == 0 ]]; then
fi fi
read -rs THEHIVE_PASS read -rs THEHIVE_PASS
if ! check_password "$THEHIVE_PASS"; then check_password_and_exit "$THEHIVE_PASS"
echo "Password is invalid. Please exclude single quotes, double quotes and backslashes from the password."
exit 2
fi
# Create new user in TheHive # Create new user in TheHive
resp=$(curl -sk -XPOST -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" -L "https://$THEHVIE_API_URL/user" -d "{\"login\" : \"$THEHIVE_USER\",\"name\" : \"$THEHIVE_USER\",\"roles\" : [\"read\",\"alert\",\"write\",\"admin\"],\"preferences\" : \"{}\",\"password\" : \"$THEHIVE_PASS\"}") resp=$(curl -sk -XPOST -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" -L "https://$THEHVIE_API_URL/user" -d "{\"login\" : \"$THEHIVE_USER\",\"name\" : \"$THEHIVE_USER\",\"roles\" : [\"read\",\"alert\",\"write\",\"admin\"],\"preferences\" : \"{}\",\"password\" : \"$THEHIVE_PASS\"}")

View File

@@ -0,0 +1,57 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
usage() {
echo "Usage: $0 <user-name>"
echo ""
echo "Update password for an existing TheHive user. The new password will be read from STDIN."
exit 1
}
if [ $# -ne 1 ]; then
usage
fi
USER=$1
THEHIVE_KEY=$(lookup_pillar hivekey)
THEHIVE_API_URL="$(lookup_pillar url_base)/thehive/api"
THEHIVE_USER=$USER
# Read the new password from stdin; prompt only when stdin is a terminal
if [ -t 0 ]; then
echo "Enter new password:"
fi
read -rs THEHIVE_PASS
if ! check_password "$THEHIVE_PASS"; then
echo "Password is invalid. Please exclude single quotes, double quotes, dollar signs, and backslashes from the password."
exit 2
fi
# Change password for user in TheHive
resp=$(curl -sk -XPOST -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" -L "https://$THEHIVE_API_URL/user/${THEHIVE_USER}/password/set" -d "{\"password\" : \"$THEHIVE_PASS\"}")
if [[ -z "$resp" ]]; then
echo "Successfully updated TheHive user password"
else
echo "Unable to update TheHive user password"
echo "$resp"
exit 2
fi
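
As with the Fleet variant earlier in this changeset, the new password arrives on STDIN. A one-line usage sketch, assuming the script above ships as so-thehive-user-update (the email address is illustrative):

```
echo 'N3wPassw0rd' | sudo so-thehive-user-update analyst@example.com
```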

View File

@@ -18,11 +18,17 @@
source $(dirname $0)/so-common source $(dirname $0)/so-common
if [[ $# -lt 1 || $# -gt 2 ]]; then DEFAULT_ROLE=analyst
echo "Usage: $0 <list|add|update|enable|disable|validate|valemail|valpass> [email]"
if [[ $# -lt 1 || $# -gt 3 ]]; then
echo "Usage: $0 <operation> [email] [role]"
echo ""
echo " where <operation> is one of the following:"
echo "" echo ""
echo " list: Lists all user email addresses currently defined in the identity system" echo " list: Lists all user email addresses currently defined in the identity system"
echo " add: Adds a new user to the identity system; requires 'email' parameter" echo " add: Adds a new user to the identity system; requires 'email' parameter, while 'role' parameter is optional and defaults to $DEFAULT_ROLE"
echo " addrole: Grants a role to an existing user; requires 'email' and 'role' parameters"
echo " delrole: Removes a role from an existing user; requires 'email' and 'role' parameters"
echo " update: Updates a user's password; requires 'email' parameter" echo " update: Updates a user's password; requires 'email' parameter"
echo " enable: Enables a user; requires 'email' parameter" echo " enable: Enables a user; requires 'email' parameter"
echo " disable: Disables a user; requires 'email' parameter" echo " disable: Disables a user; requires 'email' parameter"
@@ -36,14 +42,18 @@ fi
operation=$1 operation=$1
email=$2 email=$2
role=$3
kratosUrl=${KRATOS_URL:-http://127.0.0.1:4434} kratosUrl=${KRATOS_URL:-http://127.0.0.1:4434}
databasePath=${KRATOS_DB_PATH:-/opt/so/conf/kratos/db/db.sqlite} databasePath=${KRATOS_DB_PATH:-/opt/so/conf/kratos/db/db.sqlite}
bcryptRounds=${BCRYPT_ROUNDS:-12} bcryptRounds=${BCRYPT_ROUNDS:-12}
elasticUsersFile=${ELASTIC_USERS_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users} elasticUsersFile=${ELASTIC_USERS_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users}
elasticRolesFile=${ELASTIC_ROLES_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users_roles} elasticRolesFile=${ELASTIC_ROLES_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users_roles}
socRolesFile=${SOC_ROLES_FILE:-/opt/so/conf/soc/soc_users_roles}
esUID=${ELASTIC_UID:-930} esUID=${ELASTIC_UID:-930}
esGID=${ELASTIC_GID:-930} esGID=${ELASTIC_GID:-930}
soUID=${SOCORE_UID:-939}
soGID=${SOCORE_GID:-939}
function lock() { function lock() {
# Obtain file descriptor lock # Obtain file descriptor lock
@@ -80,7 +90,7 @@ function findIdByEmail() {
email=$1 email=$1
response=$(curl -Ss -L ${kratosUrl}/identities) response=$(curl -Ss -L ${kratosUrl}/identities)
identityId=$(echo "${response}" | jq ".[] | select(.verifiable_addresses[0].value == \"$email\") | .id") identityId=$(echo "${response}" | jq -r ".[] | select(.verifiable_addresses[0].value == \"$email\") | .id")
echo $identityId echo $identityId
} }
@@ -89,17 +99,23 @@ function validatePassword() {
len=$(expr length "$password") len=$(expr length "$password")
if [[ $len -lt 6 ]]; then if [[ $len -lt 6 ]]; then
echo "Password does not meet the minimum requirements" fail "Password does not meet the minimum requirements"
exit 2
fi fi
if [[ $len -gt 72 ]]; then
fail "Password is too long (max: 72)"
fi
check_password_and_exit "$password"
} }
function validateEmail() { function validateEmail() {
email=$1 email=$1
# (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\]) # (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])
if [[ ! "$email" =~ ^[[:alnum:]._%+-]+@[[:alnum:].-]+\.[[:alpha:]]{2,}$ ]]; then if [[ ! "$email" =~ ^[[:alnum:]._%+-]+@[[:alnum:].-]+\.[[:alpha:]]{2,}$ ]]; then
echo "Email address is invalid" fail "Email address is invalid"
exit 3 fi
if [[ "$email" =~ [A-Z] ]]; then
fail "Email addresses cannot contain uppercase letters"
fi fi
} }
@@ -127,21 +143,51 @@ function updatePassword() {
validatePassword "$password" validatePassword "$password"
fi fi
if [[ -n $identityId ]]; then if [[ -n "$identityId" ]]; then
# Generate password hash # Generate password hash
passwordHash=$(hashPassword "$password") passwordHash=$(hashPassword "$password")
# Update DB with new hash # Update DB with new hash
echo "update identity_credentials set config=CAST('{\"hashed_password\":\"$passwordHash\"}' as BLOB) where identity_id=${identityId};" | sqlite3 "$databasePath" echo "update identity_credentials set config=CAST('{\"hashed_password\":\"$passwordHash\"}' as BLOB) where identity_id='${identityId}';" | sqlite3 "$databasePath"
[[ $? != 0 ]] && fail "Unable to update password" [[ $? != 0 ]] && fail "Unable to update password"
fi fi
} }
function createElasticFile() { function createFile() {
filename=$1 filename=$1
tmpFile=${filename} uid=$2
truncate -s 0 "$tmpFile" gid=$3
chmod 600 "$tmpFile"
chown "${esUID}:${esGID}" "$tmpFile" mkdir -p $(dirname "$filename")
truncate -s 0 "$filename"
chmod 600 "$filename"
chown "${uid}:${gid}" "$filename"
}
function ensureRoleFileExists() {
if [[ ! -f "$socRolesFile" || ! -s "$socRolesFile" ]]; then
# Generate the new users file
rolesTmpFile="${socRolesFile}.tmp"
createFile "$rolesTmpFile" "$soUID" "$soGID"
if [[ -f "$databasePath" ]]; then
echo "Migrating roles to new file: $socRolesFile"
echo "select 'superuser:' || id from identities;" | sqlite3 "$databasePath" \
>> "$rolesTmpFile"
[[ $? != 0 ]] && fail "Unable to read identities from database"
echo "The following users have all been migrated with the super user role:"
cat "${rolesTmpFile}"
else
echo "Database file does not exist yet, installation is likely not yet complete."
fi
if [[ -d "$socRolesFile" ]]; then
echo "Removing invalid roles directory created by Docker"
rm -fr "$socRolesFile"
fi
mv "${rolesTmpFile}" "${socRolesFile}"
fi
} }
function syncElasticSystemUser() { function syncElasticSystemUser() {
@@ -172,53 +218,56 @@ function syncElasticSystemRole() {
} }
function syncElastic() { function syncElastic() {
echo "Syncing users between SOC and Elastic..." echo "Syncing users and roles between SOC and Elastic..."
usersTmpFile="${elasticUsersFile}.tmp" usersTmpFile="${elasticUsersFile}.tmp"
createFile "${usersTmpFile}" "$esUID" "$esGID"
rolesTmpFile="${elasticRolesFile}.tmp" rolesTmpFile="${elasticRolesFile}.tmp"
createElasticFile "${usersTmpFile}" createFile "${rolesTmpFile}" "$esUID" "$esGID"
createElasticFile "${rolesTmpFile}"
authPillarJson=$(lookup_salt_value "auth" "elasticsearch" "pillar" "json") authPillarJson=$(lookup_salt_value "auth" "elasticsearch" "pillar" "json")
syncElasticSystemUser "$authPillarJson" "so_elastic_user" "$usersTmpFile" syncElasticSystemUser "$authPillarJson" "so_elastic_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_elastic_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_kibana_user" "$usersTmpFile" syncElasticSystemUser "$authPillarJson" "so_kibana_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_kibana_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_logstash_user" "$usersTmpFile" syncElasticSystemUser "$authPillarJson" "so_logstash_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_logstash_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_beats_user" "$usersTmpFile" syncElasticSystemUser "$authPillarJson" "so_beats_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_beats_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_monitor_user" "$usersTmpFile" syncElasticSystemUser "$authPillarJson" "so_monitor_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_elastic_user" "superuser" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_kibana_user" "superuser" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_logstash_user" "superuser" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_beats_user" "superuser" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_collector" "$rolesTmpFile" syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_collector" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_agent" "$rolesTmpFile" syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_agent" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "monitoring_user" "$rolesTmpFile" syncElasticSystemRole "$authPillarJson" "so_monitor_user" "monitoring_user" "$rolesTmpFile"
if [[ -f "$databasePath" ]]; then if [[ -f "$databasePath" && -f "$socRolesFile" ]]; then
# Generate the new users file # Append the SOC users
echo "select '{\"user\":\"' || ici.identifier || '\", \"data\":' || ic.config || '}'" \ echo "select '{\"user\":\"' || ici.identifier || '\", \"data\":' || ic.config || '}'" \
"from identity_credential_identifiers ici, identity_credentials ic " \ "from identity_credential_identifiers ici, identity_credentials ic, identities i " \
"where ici.identity_credential_id=ic.id and instr(ic.config, 'hashed_password') " \ "where " \
" ici.identity_credential_id=ic.id " \
" and ic.identity_id=i.id " \
" and instr(ic.config, 'hashed_password') " \
" and i.state == 'active' " \
"order by ici.identifier;" | \ "order by ici.identifier;" | \
sqlite3 "$databasePath" | \ sqlite3 "$databasePath" | \
jq -r '.user + ":" + .data.hashed_password' \ jq -r '.user + ":" + .data.hashed_password' \
>> "$usersTmpFile" >> "$usersTmpFile"
[[ $? != 0 ]] && fail "Unable to read credential hashes from database" [[ $? != 0 ]] && fail "Unable to read credential hashes from database"
# Generate the new users_roles file # Append the user roles
while IFS="" read -r rolePair || [ -n "$rolePair" ]; do
echo "select 'superuser:' || ici.identifier " \ userId=$(echo "$rolePair" | cut -d: -f2)
role=$(echo "$rolePair" | cut -d: -f1)
echo "select '$role:' || ici.identifier " \
"from identity_credential_identifiers ici, identity_credentials ic " \ "from identity_credential_identifiers ici, identity_credentials ic " \
"where ici.identity_credential_id=ic.id and instr(ic.config, 'hashed_password') " \ "where ici.identity_credential_id=ic.id and ic.identity_id = '$userId';" | \
"order by ici.identifier;" | \ sqlite3 "$databasePath" >> "$rolesTmpFile"
sqlite3 "$databasePath" \ done < "$socRolesFile"
>> "$rolesTmpFile"
[[ $? != 0 ]] && fail "Unable to read credential IDs from database"
else else
echo "Database file does not exist yet, skipping users export" echo "Database file or soc roles file does not exist yet, skipping users export"
fi fi
if [[ -s "${usersTmpFile}" ]]; then if [[ -s "${usersTmpFile}" ]]; then
@@ -236,15 +285,22 @@ function syncElastic() {
} }
function syncAll() { function syncAll() {
ensureRoleFileExists
# Check if a sync is needed. Sync is not needed if the following are true:
# - user database entries are all older than the elastic users file
# - soc roles file last modify date is older than the elastic roles file
if [[ -z "$FORCE_SYNC" && -f "$databasePath" && -f "$elasticUsersFile" ]]; then if [[ -z "$FORCE_SYNC" && -f "$databasePath" && -f "$elasticUsersFile" ]]; then
usersFileAgeSecs=$(echo $(($(date +%s) - $(date +%s -r "$elasticUsersFile")))) usersFileAgeSecs=$(echo $(($(date +%s) - $(date +%s -r "$elasticUsersFile"))))
staleCount=$(echo "select count(*) from identity_credentials where updated_at >= Datetime('now', '-${usersFileAgeSecs} seconds');" \ staleCount=$(echo "select count(*) from identity_credentials where updated_at >= Datetime('now', '-${usersFileAgeSecs} seconds');" \
| sqlite3 "$databasePath") | sqlite3 "$databasePath")
if [[ "$staleCount" == "0" ]]; then if [[ "$staleCount" == "0" && "$elasticRolesFile" -nt "$socRolesFile" ]]; then
return 1 return 1
fi fi
fi fi
syncElastic syncElastic
return 0 return 0
} }
@@ -252,11 +308,64 @@ function listUsers() {
response=$(curl -Ss -L ${kratosUrl}/identities) response=$(curl -Ss -L ${kratosUrl}/identities)
[[ $? != 0 ]] && fail "Unable to communicate with Kratos" [[ $? != 0 ]] && fail "Unable to communicate with Kratos"
echo "${response}" | jq -r ".[] | .verifiable_addresses[0].value" | sort users=$(echo "${response}" | jq -r ".[] | .verifiable_addresses[0].value" | sort)
for user in $users; do
roles=$(grep "$user" "$elasticRolesFile" | cut -d: -f1 | tr '\n' ' ')
echo "$user: $roles"
done
}
function addUserRole() {
email=$1
role=$2
adjustUserRole "$email" "$role" "add"
}
function deleteUserRole() {
email=$1
role=$2
adjustUserRole "$email" "$role" "del"
}
function adjustUserRole() {
email=$1
role=$2
op=$3
identityId=$(findIdByEmail "$email")
[[ ${identityId} == "" ]] && fail "User not found"
ensureRoleFileExists
filename="$socRolesFile"
hasRole=0
grep "$role:" "$socRolesFile" | grep -q "$identityId" && hasRole=1
if [[ "$op" == "add" ]]; then
if [[ "$hasRole" == "1" ]]; then
echo "User '$email' already has the role: $role"
return 1
else
echo "$role:$identityId" >> "$filename"
fi
elif [[ "$op" == "del" ]]; then
if [[ "$hasRole" -ne 1 ]]; then
fail "User '$email' does not have the role: $role"
else
sed "/^$role:$identityId\$/d" "$filename" > "$filename.tmp"
cat "$filename".tmp > "$filename"
rm -f "$filename".tmp
fi
else
fail "Unsupported role adjustment operation: $op"
fi
return 0
} }
function createUser() { function createUser() {
email=$1 email=$1
role=$2
now=$(date -u +%FT%TZ) now=$(date -u +%FT%TZ)
addUserJson=$(cat <<EOF addUserJson=$(cat <<EOF
@@ -270,16 +379,30 @@ EOF
response=$(curl -Ss -L ${kratosUrl}/identities -d "$addUserJson") response=$(curl -Ss -L ${kratosUrl}/identities -d "$addUserJson")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos" [[ $? != 0 ]] && fail "Unable to communicate with Kratos"
identityId=$(echo "${response}" | jq ".id") identityId=$(echo "${response}" | jq -r ".id")
if [[ ${identityId} == "null" ]]; then if [[ "${identityId}" == "null" ]]; then
code=$(echo "${response}" | jq ".error.code") code=$(echo "${response}" | jq ".error.code")
[[ "${code}" == "409" ]] && fail "User already exists" [[ "${code}" == "409" ]] && fail "User already exists"
reason=$(echo "${response}" | jq ".error.message") reason=$(echo "${response}" | jq ".error.message")
[[ $? == 0 ]] && fail "Unable to add user: ${reason}" [[ $? == 0 ]] && fail "Unable to add user: ${reason}"
else
updatePassword "$identityId"
addUserRole "$email" "$role"
fi fi
}
updatePassword $identityId function migrateLockedUsers() {
# This is a migration function to convert locked users from prior to 2.3.90
# to inactive users using the newer Kratos functionality. This should only
# find locked users once.
  lockedEmails=$(curl -Ss -L "${kratosUrl}/identities" | jq -r '.[] | select(.traits.status == "locked") | .traits.email')
if [[ -n "$lockedEmails" ]]; then
echo "Disabling locked users..."
for email in $lockedEmails; do
updateStatus "$email" locked
done
fi
} }
function updateStatus() { function updateStatus() {
@@ -292,24 +415,18 @@ function updateStatus() {
response=$(curl -Ss -L "${kratosUrl}/identities/$identityId") response=$(curl -Ss -L "${kratosUrl}/identities/$identityId")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos" [[ $? != 0 ]] && fail "Unable to communicate with Kratos"
oldConfig=$(echo "select config from identity_credentials where identity_id=${identityId};" | sqlite3 "$databasePath") schemaId=$(echo "$response" | jq -r .schema_id)
# Capture traits and remove obsolete 'status' trait if exists
traitBlock=$(echo "$response" | jq -c .traits | sed -re 's/,?"status":".*?"//')
state="active"
if [[ "$status" == "locked" ]]; then if [[ "$status" == "locked" ]]; then
config=$(echo $oldConfig | sed -e 's/hashed/locked/') state="inactive"
echo "update identity_credentials set config=CAST('${config}' as BLOB) where identity_id=${identityId};" | sqlite3 "$databasePath"
[[ $? != 0 ]] && fail "Unable to lock credential record"
echo "delete from sessions where identity_id=${identityId};" | sqlite3 "$databasePath"
[[ $? != 0 ]] && fail "Unable to invalidate sessions"
else
config=$(echo $oldConfig | sed -e 's/locked/hashed/')
echo "update identity_credentials set config=CAST('${config}' as BLOB) where identity_id=${identityId};" | sqlite3 "$databasePath"
[[ $? != 0 ]] && fail "Unable to unlock credential record"
fi fi
body="{ \"schema_id\": \"$schemaId\", \"state\": \"$state\", \"traits\": $traitBlock }"
updatedJson=$(echo "$response" | jq ".traits.status = \"$status\" | del(.verifiable_addresses) | del(.id) | del(.schema_url)") response=$(curl -fSsL -XPUT "${kratosUrl}/identities/$identityId" -d "$body")
response=$(curl -Ss -XPUT -L ${kratosUrl}/identities/$identityId -d "$updatedJson") [[ $? != 0 ]] && fail "Unable to update user"
[[ $? != 0 ]] && fail "Unable to mark user as locked"
} }
function updateUser() { function updateUser() {
@@ -318,7 +435,7 @@ function updateUser() {
identityId=$(findIdByEmail "$email") identityId=$(findIdByEmail "$email")
[[ ${identityId} == "" ]] && fail "User not found" [[ ${identityId} == "" ]] && fail "User not found"
updatePassword $identityId updatePassword "$identityId"
} }
function deleteUser() { function deleteUser() {
@@ -329,6 +446,11 @@ function deleteUser() {
response=$(curl -Ss -XDELETE -L "${kratosUrl}/identities/$identityId") response=$(curl -Ss -XDELETE -L "${kratosUrl}/identities/$identityId")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos" [[ $? != 0 ]] && fail "Unable to communicate with Kratos"
rolesTmpFile="${socRolesFile}.tmp"
createFile "$rolesTmpFile" "$soUID" "$soGID"
grep -v "$identityId" "$socRolesFile" > "$rolesTmpFile"
mv "$rolesTmpFile" "$socRolesFile"
} }
case "${operation}" in case "${operation}" in
@@ -339,7 +461,7 @@ case "${operation}" in
lock lock
validateEmail "$email" validateEmail "$email"
updatePassword updatePassword
createUser "$email" createUser "$email" "${role:-$DEFAULT_ROLE}"
syncAll syncAll
echo "Successfully added new user to SOC" echo "Successfully added new user to SOC"
check_container thehive && echo "$password" | so-thehive-user-add "$email" check_container thehive && echo "$password" | so-thehive-user-add "$email"
@@ -351,6 +473,31 @@ case "${operation}" in
listUsers listUsers
;; ;;
"addrole")
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
[[ "$role" == "" ]] && fail "Role must be provided"
lock
validateEmail "$email"
if addUserRole "$email" "$role"; then
syncElastic
echo "Successfully added role to user"
fi
;;
"delrole")
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
[[ "$role" == "" ]] && fail "Role must be provided"
lock
validateEmail "$email"
deleteUserRole "$email" "$role"
syncElastic
echo "Successfully removed role from user"
;;
"update") "update")
verifyEnvironment verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided" [[ "$email" == "" ]] && fail "Email address must be provided"
@@ -370,7 +517,7 @@ case "${operation}" in
syncAll syncAll
echo "Successfully enabled user" echo "Successfully enabled user"
check_container thehive && so-thehive-user-enable "$email" true check_container thehive && so-thehive-user-enable "$email" true
check_container fleet && so-fleet-user-enable "$email" true echo "Fleet user will need to be recreated manually with so-fleet-user-add"
;; ;;
"disable") "disable")
@@ -382,7 +529,7 @@ case "${operation}" in
syncAll syncAll
echo "Successfully disabled user" echo "Successfully disabled user"
check_container thehive && so-thehive-user-enable "$email" false check_container thehive && so-thehive-user-enable "$email" false
check_container fleet && so-fleet-user-enable "$email" false check_container fleet && so-fleet-user-delete "$email"
;; ;;
"delete") "delete")
@@ -394,7 +541,7 @@ case "${operation}" in
syncAll syncAll
echo "Successfully deleted user" echo "Successfully deleted user"
check_container thehive && so-thehive-user-enable "$email" false check_container thehive && so-thehive-user-enable "$email" false
check_container fleet && so-fleet-user-enable "$email" false check_container fleet && so-fleet-user-delete "$email"
;; ;;
"sync") "sync")
@@ -418,6 +565,11 @@ case "${operation}" in
echo "Password is acceptable" echo "Password is acceptable"
;; ;;
"migrate")
migrateLockedUsers
echo "User migration complete"
;;
*) *)
fail "Unsupported operation: $operation" fail "Unsupported operation: $operation"
;; ;;
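
A usage sketch for the expanded operations, assuming this script ships as so-user (email and role values are illustrative):

```
# Add a user with the default analyst role, then grant and revoke a role
sudo so-user add analyst@example.com
sudo so-user addrole analyst@example.com superuser
sudo so-user delrole analyst@example.com superuser

# List users along with their roles
sudo so-user list
```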

View File

@@ -1,5 +1,4 @@
#!/bin/bash #!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC # Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
# #
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
@@ -20,13 +19,8 @@ echo "Starting to check for yara rule updates at $(date)..."
output_dir="/opt/so/saltstack/default/salt/strelka/rules" output_dir="/opt/so/saltstack/default/salt/strelka/rules"
mkdir -p $output_dir mkdir -p $output_dir
repos="$output_dir/repos.txt" repos="$output_dir/repos.txt"
ignorefile="$output_dir/ignore.txt"
deletecounter=0
newcounter=0 newcounter=0
updatecounter=0
{% if ISAIRGAP is sameas true %} {% if ISAIRGAP is sameas true %}
@@ -35,58 +29,21 @@ echo "Airgap mode enabled."
clone_dir="/nsm/repo/rules/strelka" clone_dir="/nsm/repo/rules/strelka"
repo_name="signature-base" repo_name="signature-base"
mkdir -p /opt/so/saltstack/default/salt/strelka/rules/signature-base mkdir -p /opt/so/saltstack/default/salt/strelka/rules/signature-base
# Ensure a copy of the license is available for the rules
[ -f $clone_dir/LICENSE ] && cp $clone_dir/$repo_name/LICENSE $output_dir/$repo_name [ -f $clone_dir/LICENSE ] && cp $clone_dir/$repo_name/LICENSE $output_dir/$repo_name
# Copy over rules # Copy over rules
for i in $(find $clone_dir/yara -name "*.yar*"); do for i in $(find $clone_dir/yara -name "*.yar*"); do
rule_name=$(echo $i | awk -F '/' '{print $NF}') rule_name=$(echo $i | awk -F '/' '{print $NF}')
repo_sum=$(sha256sum $i | awk '{print $1}') echo "Adding rule: $rule_name..."
# Check rules against those in ignore list -- don't copy if ignored.
if ! grep -iq $rule_name $ignorefile; then
existing_rules=$(find $output_dir/$repo_name/ -name $rule_name | wc -l)
# For existing rules, check to see if they need to be updated, by comparing checksums
if [ $existing_rules -gt 0 ];then
local_sum=$(sha256sum $output_dir/$repo_name/$rule_name | awk '{print $1}')
if [ "$repo_sum" != "$local_sum" ]; then
echo "Checksums do not match!"
echo "Updating $rule_name..."
cp $i $output_dir/$repo_name;
((updatecounter++))
fi
else
# If rule doesn't exist already, we'll add it
echo "Adding new rule: $rule_name..."
cp $i $output_dir/$repo_name cp $i $output_dir/$repo_name
((newcounter++)) ((newcounter++))
fi
fi;
done
# Check to see if we have any old rules that need to be removed
for i in $(find $output_dir/$repo_name -name "*.yar*" | awk -F '/' '{print $NF}'); do
is_repo_rule=$(find $clone_dir -name "$i" | wc -l)
if [ $is_repo_rule -eq 0 ]; then
echo "Could not find $i in source $repo_name repo...removing from $output_dir/$repo_name..."
rm $output_dir/$repo_name/$i
((deletecounter++))
fi
done done
echo "Done!" echo "Done!"
if [ "$newcounter" -gt 0 ];then if [ "$newcounter" -gt 0 ];then
echo "$newcounter new rules added." echo "$newcounter rules added."
fi
if [ "$updatecounter" -gt 0 ];then
echo "$updatecounter rules updated."
fi
if [ "$deletecounter" -gt 0 ];then
echo "$deletecounter rules removed because they were deprecated or don't exist in the source repo."
fi fi
{% else %} {% else %}
@@ -99,50 +56,21 @@ if [ "$gh_status" == "200" ] || [ "$gh_status" == "301" ]; then
if ! $(echo "$repo" | grep -qE '^#'); then if ! $(echo "$repo" | grep -qE '^#'); then
# Remove old repo if existing bc of previous error condition or unexpected disruption # Remove old repo if existing bc of previous error condition or unexpected disruption
repo_name=`echo $repo | awk -F '/' '{print $NF}'` repo_name=`echo $repo | awk -F '/' '{print $NF}'`
[ -d $repo_name ] && rm -rf $repo_name [ -d $output_dir/$repo_name ] && rm -rf $output_dir/$repo_name
# Clone repo and make appropriate directories for rules # Clone repo and make appropriate directories for rules
git clone $repo $clone_dir/$repo_name git clone $repo $clone_dir/$repo_name
echo "Analyzing rules from $clone_dir/$repo_name..." echo "Analyzing rules from $clone_dir/$repo_name..."
mkdir -p $output_dir/$repo_name mkdir -p $output_dir/$repo_name
# Ensure a copy of the license is available for the rules
[ -f $clone_dir/$repo_name/LICENSE ] && cp $clone_dir/$repo_name/LICENSE $output_dir/$repo_name [ -f $clone_dir/$repo_name/LICENSE ] && cp $clone_dir/$repo_name/LICENSE $output_dir/$repo_name
# Copy over rules # Copy over rules
for i in $(find $clone_dir/$repo_name -name "*.yar*"); do for i in $(find $clone_dir/$repo_name -name "*.yar*"); do
rule_name=$(echo $i | awk -F '/' '{print $NF}') rule_name=$(echo $i | awk -F '/' '{print $NF}')
repo_sum=$(sha256sum $i | awk '{print $1}') echo "Adding rule: $rule_name..."
# Check rules against those in ignore list -- don't copy if ignored.
if ! grep -iq $rule_name $ignorefile; then
existing_rules=$(find $output_dir/$repo_name/ -name $rule_name | wc -l)
# For existing rules, check to see if they need to be updated, by comparing checksums
if [ $existing_rules -gt 0 ];then
local_sum=$(sha256sum $output_dir/$repo_name/$rule_name | awk '{print $1}')
if [ "$repo_sum" != "$local_sum" ]; then
echo "Checksums do not match!"
echo "Updating $rule_name..."
cp $i $output_dir/$repo_name;
((updatecounter++))
fi
else
# If rule doesn't exist already, we'll add it
echo "Adding new rule: $rule_name..."
cp $i $output_dir/$repo_name cp $i $output_dir/$repo_name
((newcounter++)) ((newcounter++))
fi
fi;
done
# Check to see if we have any old rules that need to be removed
for i in $(find $output_dir/$repo_name -name "*.yar*" | awk -F '/' '{print $NF}'); do
is_repo_rule=$(find $clone_dir/$repo_name -name "$i" | wc -l)
if [ $is_repo_rule -eq 0 ]; then
echo "Could not find $i in source $repo_name repo...removing from $output_dir/$repo_name..."
rm $output_dir/$repo_name/$i
((deletecounter++))
fi
done done
rm -rf $clone_dir/$repo_name rm -rf $clone_dir/$repo_name
fi fi
@@ -151,15 +79,7 @@ if [ "$gh_status" == "200" ] || [ "$gh_status" == "301" ]; then
echo "Done!" echo "Done!"
if [ "$newcounter" -gt 0 ];then if [ "$newcounter" -gt 0 ];then
echo "$newcounter new rules added." echo "$newcounter rules added."
fi
if [ "$updatecounter" -gt 0 ];then
echo "$updatecounter rules updated."
fi
if [ "$deletecounter" -gt 0 ];then
echo "$deletecounter rules removed because they were deprecated or don't exist in the source repo."
fi fi
else else

View File

@@ -27,6 +27,7 @@ SOUP_LOG=/root/soup.log
INFLUXDB_MIGRATION_LOG=/opt/so/log/influxdb/soup_migration.log INFLUXDB_MIGRATION_LOG=/opt/so/log/influxdb/soup_migration.log
WHATWOULDYOUSAYYAHDOHERE=soup WHATWOULDYOUSAYYAHDOHERE=soup
whiptail_title='Security Onion UPdater' whiptail_title='Security Onion UPdater'
NOTIFYCUSTOMELASTICCONFIG=false
check_err() { check_err() {
local exit_code=$1 local exit_code=$1
@@ -105,9 +106,11 @@ add_common() {
airgap_mounted() { airgap_mounted() {
# Let's see if the ISO is already mounted. # Let's see if the ISO is already mounted.
if [ -f /tmp/soagupdate/SecurityOnion/VERSION ]; then if [[ -f /tmp/soagupdate/SecurityOnion/VERSION ]]; then
echo "The ISO is already mounted" echo "The ISO is already mounted"
else else
if [[ -z $ISOLOC ]]; then
echo "This is airgap. Ask for a location."
echo "" echo ""
cat << EOF cat << EOF
In order for soup to proceed, the path to the downloaded Security Onion ISO file, or the path to the CD-ROM or equivalent device containing the ISO media must be provided. In order for soup to proceed, the path to the downloaded Security Onion ISO file, or the path to the CD-ROM or equivalent device containing the ISO media must be provided.
@@ -116,6 +119,7 @@ Or, if you have burned the new ISO onto an optical disk then the path might look
EOF EOF
read -rp 'Enter the path to the new Security Onion ISO content: ' ISOLOC read -rp 'Enter the path to the new Security Onion ISO content: ' ISOLOC
fi
if [[ -f $ISOLOC ]]; then if [[ -f $ISOLOC ]]; then
# Mounting the ISO image # Mounting the ISO image
mkdir -p /tmp/soagupdate mkdir -p /tmp/soagupdate
@@ -131,7 +135,7 @@ EOF
elif [[ -f $ISOLOC/SecurityOnion/VERSION ]]; then elif [[ -f $ISOLOC/SecurityOnion/VERSION ]]; then
ln -s $ISOLOC /tmp/soagupdate ln -s $ISOLOC /tmp/soagupdate
echo "Found the update content" echo "Found the update content"
else elif [[ -b $ISOLOC ]]; then
mkdir -p /tmp/soagupdate mkdir -p /tmp/soagupdate
mount $ISOLOC /tmp/soagupdate mount $ISOLOC /tmp/soagupdate
if [ ! -f /tmp/soagupdate/SecurityOnion/VERSION ]; then if [ ! -f /tmp/soagupdate/SecurityOnion/VERSION ]; then
@@ -141,6 +145,10 @@ EOF
else else
echo "Device has been mounted!" echo "Device has been mounted!"
fi fi
else
echo "Could not find Security Onion ISO content at ${ISOLOC}"
echo "Ensure the path you entered is correct, and that you verify the ISO that you downloaded."
exit 0
fi fi
fi fi
} }
@@ -150,7 +158,7 @@ airgap_update_dockers() {
# Let's copy the tarball # Let's copy the tarball
if [[ ! -f $AGDOCKER/registry.tar ]]; then if [[ ! -f $AGDOCKER/registry.tar ]]; then
echo "Unable to locate registry. Exiting" echo "Unable to locate registry. Exiting"
exit 1 exit 0
else else
echo "Stopping the registry docker" echo "Stopping the registry docker"
docker stop so-dockerregistry docker stop so-dockerregistry
@@ -182,6 +190,50 @@ check_airgap() {
fi fi
} }
# {% raw %}
check_local_mods() {
local salt_local=/opt/so/saltstack/local
local_mod_arr=()
while IFS= read -r -d '' local_file; do
stripped_path=${local_file#"$salt_local"}
default_file="${DEFAULT_SALT_DIR}${stripped_path}"
if [[ -f $default_file ]]; then
file_diff=$(diff "$default_file" "$local_file" )
if [[ $(echo "$file_diff" | grep -c "^<") -gt 0 ]]; then
local_mod_arr+=( "$local_file" )
fi
fi
done< <(find $salt_local -type f -print0)
if [[ ${#local_mod_arr} -gt 0 ]]; then
echo "Potentially breaking changes found in the following files (check ${DEFAULT_SALT_DIR} for original copy):"
for file_str in "${local_mod_arr[@]}"; do
echo " $file_str"
done
echo ""
echo "To reference this list later, check $SOUP_LOG"
sleep 10
fi
}
# {% endraw %}
check_pillar_items() {
local pillar_output=$(salt-call pillar.items --out=json)
cond=$(jq '.local | has("_errors")' <<< "$pillar_output")
if [[ "$cond" == "true" ]]; then
printf "\nThere is an issue rendering the manager's pillars. Please correct the issues in the sls files mentioned below before running SOUP again.\n\n"
jq '.local._errors[]' <<< "$pillar_output"
exit 0
else
printf "\nThe manager's pillars can be rendered. We can proceed with SOUP.\n\n"
fi
}
check_sudoers() { check_sudoers() {
if grep -q "so-setup" /etc/sudoers; then if grep -q "so-setup" /etc/sudoers; then
echo "There is an entry for so-setup in the sudoers file, this can be safely deleted using \"visudo\"." echo "There is an entry for so-setup in the sudoers file, this can be safely deleted using \"visudo\"."
@@ -251,25 +303,31 @@ check_os_updates() {
OSUPDATES=$(yum -q list updates | wc -l) OSUPDATES=$(yum -q list updates | wc -l)
fi fi
if [[ "$OSUPDATES" -gt 0 ]]; then if [[ "$OSUPDATES" -gt 0 ]]; then
echo $NEEDUPDATES if [[ -z $UNATTENDED ]]; then
echo "$NEEDUPDATES"
echo "" echo ""
read -p "Press U to update OS packages (recommended), C to continue without updates, or E to exit: " confirm read -rp "Press U to update OS packages (recommended), C to continue without updates, or E to exit: " confirm
if [[ "$confirm" == [cC] ]]; then if [[ "$confirm" == [cC] ]]; then
echo "Continuing without updating packages" echo "Continuing without updating packages"
elif [[ "$confirm" == [uU] ]]; then elif [[ "$confirm" == [uU] ]]; then
echo "Applying Grid Updates" echo "Applying Grid Updates"
set +e update_flag=true
run_check_net_err "salt '*' -b 5 state.apply patch.os queue=True" 'Could not apply OS updates, please check your network connection.'
set -e
else else
echo "Exiting soup" echo "Exiting soup"
exit 0 exit 0
fi fi
else
update_flag=true
fi
else else
echo "Looks like you have an updated OS" echo "Looks like you have an updated OS"
fi fi
if [[ $update_flag == true ]]; then
set +e
run_check_net_err "salt '*' -b 5 state.apply patch.os queue=True" 'Could not apply OS updates, please check your network connection.'
set -e
fi
} }
clean_dockers() { clean_dockers() {
@@ -335,12 +393,11 @@ preupgrade_changes() {
# This function is to add any new pillar items if needed. # This function is to add any new pillar items if needed.
echo "Checking to see if changes are needed." echo "Checking to see if changes are needed."
[[ "$INSTALLEDVERSION" =~ rc.1 ]] && rc1_to_rc2 [[ "$INSTALLEDVERSION" == 2.3.0 || "$INSTALLEDVERSION" == 2.3.1 || "$INSTALLEDVERSION" == 2.3.2 || "$INSTALLEDVERSION" == 2.3.10 ]] && up_to_2.3.20
[[ "$INSTALLEDVERSION" =~ rc.2 ]] && rc2_to_rc3 [[ "$INSTALLEDVERSION" == 2.3.20 || "$INSTALLEDVERSION" == 2.3.21 ]] && up_to_2.3.30
[[ "$INSTALLEDVERSION" =~ rc.3 ]] && rc3_to_2.3.0 [[ "$INSTALLEDVERSION" == 2.3.30 || "$INSTALLEDVERSION" == 2.3.40 ]] && up_to_2.3.50
[[ "$INSTALLEDVERSION" == 2.3.0 || "$INSTALLEDVERSION" == 2.3.1 || "$INSTALLEDVERSION" == 2.3.2 || "$INSTALLEDVERSION" == 2.3.10 ]] && up_2.3.0_to_2.3.20 [[ "$INSTALLEDVERSION" == 2.3.50 || "$INSTALLEDVERSION" == 2.3.51 || "$INSTALLEDVERSION" == 2.3.52 || "$INSTALLEDVERSION" == 2.3.60 || "$INSTALLEDVERSION" == 2.3.61 || "$INSTALLEDVERSION" == 2.3.70 ]] && up_to_2.3.80
[[ "$INSTALLEDVERSION" == 2.3.20 || "$INSTALLEDVERSION" == 2.3.21 ]] && up_2.3.2X_to_2.3.30 [[ "$INSTALLEDVERSION" == 2.3.80 ]] && up_to_2.3.90
[[ "$INSTALLEDVERSION" == 2.3.30 || "$INSTALLEDVERSION" == 2.3.40 ]] && up_2.3.3X_to_2.3.50
true true
} }
@@ -348,119 +405,66 @@ postupgrade_changes() {
# This function is to add any new pillar items if needed. # This function is to add any new pillar items if needed.
echo "Running post upgrade processes." echo "Running post upgrade processes."
[[ "$POSTVERSION" =~ rc.1 ]] && post_rc1_to_rc2 [[ "$POSTVERSION" == 2.3.0 || "$POSTVERSION" == 2.3.1 || "$POSTVERSION" == 2.3.2 || "$POSTVERSION" == 2.3.10 || "$POSTVERSION" == 2.3.20 ]] && post_to_2.3.21
[[ "$POSTVERSION" == 2.3.20 || "$POSTVERSION" == 2.3.21 ]] && post_2.3.2X_to_2.3.30 [[ "$POSTVERSION" == 2.3.21 || "$POSTVERSION" == 2.3.30 ]] && post_to_2.3.40
[[ "$POSTVERSION" == 2.3.30 ]] && post_2.3.30_to_2.3.40 [[ "$POSTVERSION" == 2.3.40 || "$POSTVERSION" == 2.3.50 || "$POSTVERSION" == 2.3.51 || "$POSTVERSION" == 2.3.52 ]] && post_to_2.3.60
[[ "$POSTVERSION" == 2.3.50 ]] && post_2.3.5X_to_2.3.60 [[ "$POSTVERSION" == 2.3.60 || "$POSTVERSION" == 2.3.61 || "$POSTVERSION" == 2.3.70 || "$POSTVERSION" == 2.3.80 ]] && post_to_2.3.90
true true
} }
post_rc1_to_2.3.21() { post_to_2.3.21() {
salt-call state.apply playbook.OLD_db_init salt-call state.apply playbook.OLD_db_init
rm -f /opt/so/rules/elastalert/playbook/*.yaml rm -f /opt/so/rules/elastalert/playbook/*.yaml
so-playbook-ruleupdate >> /root/soup_playbook_rule_update.log 2>&1 & so-playbook-ruleupdate >> /root/soup_playbook_rule_update.log 2>&1 &
POSTVERSION=2.3.21 POSTVERSION=2.3.21
} }
post_2.3.2X_to_2.3.30() { post_to_2.3.40() {
so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 &
POSTVERSION=2.3.30
}
post_2.3.30_to_2.3.40() {
so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 & so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 &
so-kibana-space-defaults so-kibana-space-defaults
POSTVERSION=2.3.40 POSTVERSION=2.3.40
} }
post_2.3.5X_to_2.3.60() { post_to_2.3.60() {
for table in identity_recovery_addresses selfservice_recovery_flows selfservice_registration_flows selfservice_verification_flows identities identity_verification_tokens identity_credentials selfservice_settings_flows identity_recovery_tokens continuity_containers identity_credential_identifiers identity_verifiable_addresses courier_messages selfservice_errors sessions selfservice_login_flows
do
echo "Forcing Kratos network migration: $table"
sqlite3 /opt/so/conf/kratos/db/db.sqlite "update $table set nid=(select id from networks limit 1);"
done
POSTVERSION=2.3.60 POSTVERSION=2.3.60
} }
post_to_2.3.90() {
# Do Kibana dashboard things
salt-call state.apply kibana.so_savedobjects_defaults queue=True
rc1_to_rc2() { # Create FleetDM service account
FLEET_MANAGER=$(lookup_pillar fleet_manager)
if [[ "$FLEET_MANAGER" == "True" ]]; then
FLEET_SA_EMAIL=$(lookup_pillar_secret fleet_sa_email)
FLEET_SA_PW=$(lookup_pillar_secret fleet_sa_password)
MYSQL_PW=$(lookup_pillar_secret mysql)
# Move the static file to global.sls FLEET_HASH=$(docker exec so-soctopus python -c "import bcrypt; print(bcrypt.hashpw('$FLEET_SA_PW'.encode('utf-8'), bcrypt.gensalt()).decode('utf-8'));" 2>&1)
echo "Migrating static.sls to global.sls" MYSQL_OUTPUT=$(docker exec so-mysql mysql -u root --password=$MYSQL_PW fleet -e \
mv -v /opt/so/saltstack/local/pillar/static.sls /opt/so/saltstack/local/pillar/global.sls >> "$SOUP_LOG" 2>&1 "INSERT INTO users (password,salt,email,name,global_role) VALUES ('$FLEET_HASH','','$FLEET_SA_EMAIL','$FLEET_SA_EMAIL','admin')" 2>&1)
sed -i '1c\global:' /opt/so/saltstack/local/pillar/global.sls >> "$SOUP_LOG" 2>&1
# Moving baseurl from minion sls file to inside global.sls if [[ $? -eq 0 ]]; then
local line=$(grep '^ url_base:' /opt/so/saltstack/local/pillar/minions/$MINIONID.sls) echo "Successfully added service account to Fleet"
sed -i '/^ url_base:/d' /opt/so/saltstack/local/pillar/minions/$MINIONID.sls; else
sed -i "/^global:/a \\$line" /opt/so/saltstack/local/pillar/global.sls; echo "Unable to add service account to Fleet"
echo "$MYSQL_OUTPUT"
# Adding play values to the global.sls
local HIVEPLAYSECRET=$(get_random_value)
local CORTEXPLAYSECRET=$(get_random_value)
sed -i "/^global:/a \\ hiveplaysecret: $HIVEPLAYSECRET" /opt/so/saltstack/local/pillar/global.sls;
sed -i "/^global:/a \\ cortexplaysecret: $CORTEXPLAYSECRET" /opt/so/saltstack/local/pillar/global.sls;
# Move storage nodes to hostname for SSL
# Get a list we can use:
grep -A1 searchnode /opt/so/saltstack/local/pillar/data/nodestab.sls | grep -v '\-\-' | sed '$!N;s/\n/ /' | awk '{print $1,$3}' | awk '/_searchnode:/{gsub(/\_searchnode:/, "_searchnode"); print}' >/tmp/nodes.txt
# Remove the nodes from cluster settings
while read p; do
local NAME=$(echo $p | awk '{print $1}')
local IP=$(echo $p | awk '{print $2}')
echo "Removing the old cross cluster config for $NAME"
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_cluster/settings -d '{"persistent":{"cluster":{"remote":{"'$NAME'":{"skip_unavailable":null,"seeds":null}}}}}'
done </tmp/nodes.txt
# Add the nodes back using hostname
while read p; do
local NAME=$(echo $p | awk '{print $1}')
local EHOSTNAME=$(echo $p | awk -F"_" '{print $1}')
local IP=$(echo $p | awk '{print $2}')
echo "Adding the new cross cluster config for $NAME"
curl -XPUT http://localhost:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"'$NAME'": {"skip_unavailable": "true", "seeds": ["'$EHOSTNAME':9300"]}}}}}'
done </tmp/nodes.txt
INSTALLEDVERSION=rc.2
}
rc2_to_rc3() {
# move location of local.rules
cp /opt/so/saltstack/default/salt/idstools/localrules/local.rules /opt/so/saltstack/local/salt/idstools/local.rules
if [ -f /opt/so/saltstack/local/salt/idstools/localrules/local.rules ]; then
cat /opt/so/saltstack/local/salt/idstools/localrules/local.rules >> /opt/so/saltstack/local/salt/idstools/local.rules
fi fi
rm -rf /opt/so/saltstack/local/salt/idstools/localrules
rm -rf /opt/so/saltstack/default/salt/idstools/localrules
# Rename mdengine to MDENGINE
sed -i "s/ zeekversion/ mdengine/g" /opt/so/saltstack/local/pillar/global.sls
# Enable Strelka Rules
sed -i "/ rules:/c\ rules: 1" /opt/so/saltstack/local/pillar/global.sls
INSTALLEDVERSION=rc.3
}
rc3_to_2.3.0() {
# Fix Tab Complete
if [ ! -f /etc/profile.d/securityonion.sh ]; then
echo "complete -cf sudo" > /etc/profile.d/securityonion.sh
fi fi
{
echo "redis_settings:"
echo " redis_maxmemory: 827"
echo "playbook:"
echo " api_key: de6639318502476f2fa5aa06f43f51fb389a3d7f"
} >> /opt/so/saltstack/local/pillar/global.sls
sed -i 's/playbook:/playbook_db:/' /opt/so/saltstack/local/pillar/secrets.sls
{
echo "playbook_admin: $(get_random_value)"
echo "playbook_automation: $(get_random_value)"
} >> /opt/so/saltstack/local/pillar/secrets.sls
INSTALLEDVERSION=2.3.0 POSTVERSION=2.3.90
} }
up_2.3.0_to_2.3.20(){
up_to_2.3.20(){
DOCKERSTUFFBIP=$(echo $DOCKERSTUFF | awk -F'.' '{print $1,$2,$3,1}' OFS='.')/24 DOCKERSTUFFBIP=$(echo $DOCKERSTUFF | awk -F'.' '{print $1,$2,$3,1}' OFS='.')/24
# Remove PCAP from global # Remove PCAP from global
sed '/pcap:/d' /opt/so/saltstack/local/pillar/global.sls sed '/pcap:/d' /opt/so/saltstack/local/pillar/global.sls
@@ -498,7 +502,7 @@ up_2.3.0_to_2.3.20(){
INSTALLEDVERSION=2.3.20 INSTALLEDVERSION=2.3.20
} }
up_2.3.2X_to_2.3.30() { up_to_2.3.30() {
# Replace any curly brace scalars with the same scalar in single quotes # Replace any curly brace scalars with the same scalar in single quotes
readarray -t minion_pillars <<< "$(find /opt/so/saltstack/local/pillar/minions -type f -name '*.sls')" readarray -t minion_pillars <<< "$(find /opt/so/saltstack/local/pillar/minions -type f -name '*.sls')"
for pillar in "${minion_pillars[@]}"; do for pillar in "${minion_pillars[@]}"; do
@@ -521,32 +525,7 @@ up_2.3.2X_to_2.3.30() {
INSTALLEDVERSION=2.3.30 INSTALLEDVERSION=2.3.30
} }
upgrade_to_2.3.50_repo() { up_to_2.3.50() {
echo "Performing repo changes."
if [[ "$OS" == "centos" ]]; then
# Import GPG Keys
gpg_rpm_import
echo "Disabling fastestmirror."
disable_fastestmirror
echo "Deleting unneeded repo files."
DELREPOS=('CentOS-Base' 'CentOS-CR' 'CentOS-Debuginfo' 'docker-ce' 'CentOS-fasttrack' 'CentOS-Media' 'CentOS-Sources' 'CentOS-Vault' 'CentOS-x86_64-kernel' 'epel' 'epel-testing' 'saltstack' 'wazuh')
for DELREPO in "${DELREPOS[@]}"; do
if [[ -f "/etc/yum.repos.d/$DELREPO.repo" ]]; then
echo "Deleting $DELREPO.repo"
rm -f "/etc/yum.repos.d/$DELREPO.repo"
fi
done
if [[ $is_airgap -eq 1 ]]; then
# Copy the new repo file if not airgap
cp $UPDATE_DIR/salt/repo/client/files/centos/securityonion.repo /etc/yum.repos.d/
yum clean all
yum repolist
fi
fi
}
up_2.3.3X_to_2.3.50() {
cat <<EOF > /tmp/supersed.txt cat <<EOF > /tmp/supersed.txt
/so-zeek:/ { /so-zeek:/ {
@@ -578,6 +557,89 @@ EOF
INSTALLEDVERSION=2.3.50 INSTALLEDVERSION=2.3.50
} }
up_to_2.3.80() {
# Remove watermark settings from global.sls
sed -i '/ cluster_routing_allocation_disk/d' /opt/so/saltstack/local/pillar/global.sls
# Add new indices to the global
sed -i '/ index_settings:/a \\ so-elasticsearch: \n shards: 1 \n warm: 7 \n close: 30 \n delete: 365' /opt/so/saltstack/local/pillar/global.sls
sed -i '/ index_settings:/a \\ so-logstash: \n shards: 1 \n warm: 7 \n close: 30 \n delete: 365' /opt/so/saltstack/local/pillar/global.sls
sed -i '/ index_settings:/a \\ so-kibana: \n shards: 1 \n warm: 7 \n close: 30 \n delete: 365' /opt/so/saltstack/local/pillar/global.sls
sed -i '/ index_settings:/a \\ so-redis: \n shards: 1 \n warm: 7 \n close: 30 \n delete: 365' /opt/so/saltstack/local/pillar/global.sls
# Do some pillar formatting
tc=$(grep -w true_cluster /opt/so/saltstack/local/pillar/global.sls | awk -F: {'print tolower($2)'}| xargs)
if [[ "$tc" == "true" ]]; then
tcname=$(grep -w true_cluster_name /opt/so/saltstack/local/pillar/global.sls | awk -F: {'print $2'})
sed -i "/^elasticsearch:/a \\ config: \n cluster: \n name: $tcname" /opt/so/saltstack/local/pillar/global.sls
sed -i '/ true_cluster_name/d' /opt/so/saltstack/local/pillar/global.sls
sed -i '/ esclustername/d' /opt/so/saltstack/local/pillar/global.sls
for file in /opt/so/saltstack/local/pillar/minions/*.sls; do
if [[ ${file} != *"manager.sls"* ]]; then
noderoutetype=$(grep -w node_route_type $file | awk -F: {'print $2'})
if [ -n "$noderoutetype" ]; then
sed -i "/^elasticsearch:/a \\ config: \n node: \n attr: \n box_type: $noderoutetype" $file
sed -i '/ node_route_type/d' $file
noderoutetype=''
fi
fi
done
fi
# check for local es config to inform user that the config in local is now ignored and those options need to be placed in the pillar
if [ -f "/opt/so/saltstack/local/salt/elasticsearch/files/elasticsearch.yml" ]; then
NOTIFYCUSTOMELASTICCONFIG=true
fi
INSTALLEDVERSION=2.3.80
}
up_to_2.3.90() {
for i in manager managersearch eval standalone; do
if compgen -G "/opt/so/saltstack/local/pillar/minions/*_$i.sls" > /dev/null; then
echo "soc:" >> /opt/so/saltstack/local/pillar/minions/*_$i.sls
sed -i "/^soc:/a \\ es_index_patterns: '*:so-*,*:endgame-*'" /opt/so/saltstack/local/pillar/minions/*_$i.sls
fi
done
# Create Endgame Hostgroup
so-firewall addhostgroup endgame
# Force influx to generate a new cert
mv /etc/pki/influxdb.crt /etc/pki/influxdb.crt.2390upgrade
mv /etc/pki/influxdb.key /etc/pki/influxdb.key.2390upgrade
# remove old common ingest pipeline in default
rm -vf /opt/so/saltstack/default/salt/elasticsearch/files/ingest/common
# if custom common, move from local ingest to local ingest-dynamic
mkdir -vp /opt/so/saltstack/local/salt/elasticsearch/files/ingest-dynamic
if [[ -f "/opt/so/saltstack/local/salt/elasticsearch/files/ingest/common" ]]; then
mv -v /opt/so/saltstack/local/salt/elasticsearch/files/ingest/common /opt/so/saltstack/local/salt/elasticsearch/files/ingest-dynamic/common
# since json file, we need to wrap with raw
sed -i '1s/^/{{'{% raw %}'}}\n/' /opt/so/saltstack/local/salt/elasticsearch/files/ingest-dynamic/common
sed -i -e '$a{{'{% endraw %}'}}\n' /opt/so/saltstack/local/salt/elasticsearch/files/ingest-dynamic/common
fi
# Generate FleetDM Service Account creds if they do not exist
if grep -q "fleet_sa_email" /opt/so/saltstack/local/pillar/secrets.sls; then
echo "FleetDM Service Account credentials already created..."
else
echo "Generating FleetDM Service Account credentials..."
FLEETSAPASS=$(get_random_value)
printf '%s\n'\
" fleet_sa_email: service.account@securityonion.invalid"\
" fleet_sa_password: $FLEETSAPASS"\
>> /opt/so/saltstack/local/pillar/secrets.sls
fi
INSTALLEDVERSION=2.3.90
}
verify_upgradespace() { verify_upgradespace() {
CURRENTSPACE=$(df -BG / | grep -v Avail | awk '{print $4}' | sed 's/.$//') CURRENTSPACE=$(df -BG / | grep -v Avail | awk '{print $4}' | sed 's/.$//')
if [ "$CURRENTSPACE" -lt "10" ]; then if [ "$CURRENTSPACE" -lt "10" ]; then
@@ -593,7 +655,7 @@ upgrade_space() {
clean_dockers clean_dockers
if ! verify_upgradespace; then if ! verify_upgradespace; then
echo "There is not enough space to perform the upgrade. Please free up space and try again" echo "There is not enough space to perform the upgrade. Please free up space and try again"
exit 1 exit 0
fi fi
else else
echo "You have enough space for upgrade. Proceeding with soup." echo "You have enough space for upgrade. Proceeding with soup."
@@ -618,8 +680,8 @@ thehive_maint() {
done done
if [ "$THEHIVE_CONNECTED" == "yes" ]; then if [ "$THEHIVE_CONNECTED" == "yes" ]; then
echo "Migrating thehive databases if needed." echo "Migrating thehive databases if needed."
curl -v -k -XPOST -L "https://localhost/thehive/api/maintenance/migrate" curl -v -k -XPOST -L "https://localhost/thehive/api/maintenance/migrate" >> "$SOUP_LOG" 2>&1
curl -v -k -XPOST -L "https://localhost/cortex/api/maintenance/migrate" curl -v -k -XPOST -L "https://localhost/cortex/api/maintenance/migrate" >> "$SOUP_LOG" 2>&1
fi fi
} }
@@ -719,6 +781,31 @@ upgrade_salt() {
fi fi
} }
upgrade_to_2.3.50_repo() {
echo "Performing repo changes."
if [[ "$OS" == "centos" ]]; then
# Import GPG Keys
gpg_rpm_import
echo "Disabling fastestmirror."
disable_fastestmirror
echo "Deleting unneeded repo files."
DELREPOS=('CentOS-Base' 'CentOS-CR' 'CentOS-Debuginfo' 'docker-ce' 'CentOS-fasttrack' 'CentOS-Media' 'CentOS-Sources' 'CentOS-Vault' 'CentOS-x86_64-kernel' 'epel' 'epel-testing' 'saltstack' 'wazuh')
for DELREPO in "${DELREPOS[@]}"; do
if [[ -f "/etc/yum.repos.d/$DELREPO.repo" ]]; then
echo "Deleting $DELREPO.repo"
rm -f "/etc/yum.repos.d/$DELREPO.repo"
fi
done
if [[ $is_airgap -eq 1 ]]; then
# Copy the new repo file if not airgap
cp $UPDATE_DIR/salt/repo/client/files/centos/securityonion.repo /etc/yum.repos.d/
yum clean all
yum repolist
fi
fi
}
verify_latest_update_script() { verify_latest_update_script() {
# Check to see if the update scripts match. If not run the new one. # Check to see if the update scripts match. If not run the new one.
CURRENTSOUP=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/soup | awk '{print $1}') CURRENTSOUP=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/soup | awk '{print $1}')
@@ -743,39 +830,25 @@ verify_latest_update_script() {
} }
main() { main() {
set -e
set +e
trap 'check_err $?' EXIT trap 'check_err $?' EXIT
echo "### Preparing soup at $(date) ###" check_pillar_items
while getopts ":b" opt; do
case "$opt" in
b ) # process option b
shift
BATCHSIZE=$1
if ! [[ "$BATCHSIZE" =~ ^[0-9]+$ ]]; then
echo "Batch size must be a number greater than 0."
exit 1
fi
;;
\? )
echo "Usage: cmd [-b]"
;;
esac
done
echo "Checking to see if this is an airgap install."
echo ""
check_airgap
if [[ $is_airgap -eq 0 && $UNATTENDED == true && -z $ISOLOC ]]; then
echo "Missing file argument (-f <FILENAME>) for unattended airgap upgrade."
exit 0
fi
echo "Checking to see if this is a manager." echo "Checking to see if this is a manager."
echo "" echo ""
require_manager require_manager
set_minionid set_minionid
echo "Checking to see if this is an airgap install."
echo ""
check_airgap
echo "Found that Security Onion $INSTALLEDVERSION is currently installed." echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo "" echo ""
if [[ $is_airgap -eq 0 ]]; then if [[ $is_airgap -eq 0 ]]; then
# Let's mount the ISO since this is airgap # Let's mount the ISO since this is airgap
echo "This is airgap. Ask for a location."
airgap_mounted airgap_mounted
else else
echo "Cloning Security Onion github repo into $UPDATE_DIR." echo "Cloning Security Onion github repo into $UPDATE_DIR."
@@ -863,7 +936,7 @@ main() {
echo "Once the issue is resolved, run soup again." echo "Once the issue is resolved, run soup again."
echo "Exiting." echo "Exiting."
echo "" echo ""
exit 1 exit 0
else else
echo "Salt upgrade success." echo "Salt upgrade success."
echo "" echo ""
@@ -922,8 +995,6 @@ main() {
set +e set +e
salt-call state.highstate -l info queue=True salt-call state.highstate -l info queue=True
set -e set -e
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
echo "" echo ""
echo "Stopping Salt Master to remove ACL" echo "Stopping Salt Master to remove ACL"
@@ -946,6 +1017,13 @@ main() {
[[ $is_airgap -eq 0 ]] && unmount_update [[ $is_airgap -eq 0 ]] && unmount_update
thehive_maint thehive_maint
echo ""
echo "Upgrade to $NEWVERSION complete."
# Everything beyond this is post-upgrade checking, don't fail past this point if something here causes an error
set +e
echo "Checking the number of minions."
NUM_MINIONS=$(ls /opt/so/saltstack/local/pillar/minions/*_*.sls | wc -l) NUM_MINIONS=$(ls /opt/so/saltstack/local/pillar/minions/*_*.sls | wc -l)
if [[ $UPGRADESALT -eq 1 ]] && [[ $NUM_MINIONS -gt 1 ]]; then if [[ $UPGRADESALT -eq 1 ]] && [[ $NUM_MINIONS -gt 1 ]]; then
if [[ $is_airgap -eq 0 ]]; then if [[ $is_airgap -eq 0 ]]; then
@@ -956,8 +1034,15 @@ main() {
fi fi
fi fi
echo "Checking for local modifications."
check_local_mods
echo "Checking sudoers file."
check_sudoers check_sudoers
echo "Checking for necessary user migrations."
so-user migrate
if [[ -n $lsl_msg ]]; then if [[ -n $lsl_msg ]]; then
case $lsl_msg in case $lsl_msg in
'distributed') 'distributed')
@@ -993,10 +1078,56 @@ EOF
fi fi
fi fi
if [ "$NOTIFYCUSTOMELASTICCONFIG" = true ] ; then
cat << EOF
A custom Elasticsearch configuration has been found at /opt/so/saltstack/local/elasticsearch/files/elasticsearch.yml. This file is no longer referenced in Security Onion versions >= 2.3.80.
If you still need those customizations, you'll need to manually migrate them to the new Elasticsearch config as shown at https://docs.securityonion.net/en/2.3/elasticsearch.html.
EOF
fi
echo "### soup has been served at $(date) ###" echo "### soup has been served at $(date) ###"
} }
cat << EOF while getopts ":b:f:y" opt; do
case ${opt} in
b )
BATCHSIZE="$OPTARG"
if ! [[ "$BATCHSIZE" =~ ^[1-9][0-9]*$ ]]; then
echo "Batch size must be a number greater than 0."
exit 1
fi
;;
y )
if [[ ! -f /opt/so/state/yeselastic.txt ]]; then
echo "Cannot run soup in unattended mode. You must run soup manually to accept the Elastic License."
exit 1
else
UNATTENDED=true
fi
;;
f )
ISOLOC="$OPTARG"
;;
\? )
echo "Usage: soup [-b] [-y] [-f <iso location>]"
exit 1
;;
: )
echo "Invalid option: $OPTARG requires an argument"
exit 1
;;
esac
done
shift $((OPTIND - 1))
if [[ -z $UNATTENDED ]]; then
cat << EOF
SOUP - Security Onion UPdater SOUP - Security Onion UPdater
@@ -1008,7 +1139,9 @@ Press Enter to continue or Ctrl-C to cancel.
EOF EOF
read -r input read -r input
fi
echo "### Preparing soup at $(date) ###"
main "$@" | tee -a $SOUP_LOG main "$@" | tee -a $SOUP_LOG

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-aws:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close aws indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-aws.*|so-aws.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-aws:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete aws indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-aws.*|so-aws.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-aws:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-aws
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-azure:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close azure indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-azure.*|so-azure.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-azure:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete azure indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-azure.*|so-azure.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-azure:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-azure
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-barracuda:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close barracuda indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-barracuda.*|so-barracuda.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-barracuda:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete barracuda indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-barracuda.*|so-barracuda.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-barracuda:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-barracuda
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-beats:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete beats indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-beats.*|so-beats.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-beats:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-beats
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-bluecoat:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close bluecoat indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-bluecoat.*|so-bluecoat.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-bluecoat:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete bluecoat indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-bluecoat.*|so-bluecoat.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-bluecoat:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-bluecoat
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-cef:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close cef indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cef.*|so-cef.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cef:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete cef indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cef.*|so-cef.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cef:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-cef
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-checkpoint:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close checkpoint indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-checkpoint.*|so-checkpoint.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-checkpoint:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete checkpoint indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-checkpoint.*|so-checkpoint.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-checkpoint:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-checkpoint
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-cisco:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close cisco indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cisco.*|so-cisco.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cisco:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete cisco indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cisco.*|so-cisco.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cisco:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-cisco
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-cyberark:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close cyberark indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cyberark.*|so-cyberark.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cyberark:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete cyberark indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cyberark.*|so-cyberark.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cyberark:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-cyberark
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-cylance:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close cylance indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cylance.*|so-cylance.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cylance:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete cylance indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-cylance.*|so-cylance.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-cylance:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-cylance
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-elasticsearch:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close elasticsearch indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-elasticsearch.*|so-elasticsearch.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-elasticsearch:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete elasticsearch indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-elasticsearch.*|so-elasticsearch.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-elasticsearch:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-elasticsearch
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-endgame:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close Endgame indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-endgame.*|so-endgame.*|endgame.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,27 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-endgame:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete Endgame indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-endgame.*|so-endgame.*|endgame.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,23 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-endgame:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-endgame.*|so-endgame.*|endgame.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-f5:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close f5 indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-f5.*|so-f5.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-f5:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete f5 indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-f5.*|so-f5.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-f5:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-f5
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-firewall:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete firewall indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-firewall.*|so-firewall.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-firewall:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-firewall
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-fortinet:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close fortinet indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-fortinet.*|so-fortinet.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-fortinet:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
      Delete fortinet indices older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-fortinet.*|so-fortinet.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-fortinet:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-fortinet
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}
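
Taken together, the three so-fortinet action files encode a warm/close/delete lifecycle. The values below are the `pillar.get()` fallbacks from the templates above, shown as a hypothetical pillar snippet; override any of them in the pillar to tune retention:

```
elasticsearch:
  index_settings:
    so-fortinet:
      warm: 7      # re-allocate to warm nodes after 7 days
      close: 30    # close after 30 days; data stays on disk but is unsearchable
      delete: 365  # delete after 365 days
```

Closed indices no longer consume heap, but they remain on disk until the delete action removes them.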

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-gcp:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
      Close gcp indices older than {{ cur_close_days }} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-gcp.*|so-gcp.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
      unit_count: {{ cur_close_days }}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-gcp:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
      Delete gcp indices older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-gcp.*|so-gcp.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-gcp:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-gcp
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-google_workspace:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
      Close google_workspace indices older than {{ cur_close_days }} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-google_workspace.*|so-google_workspace.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
      unit_count: {{ cur_close_days }}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-google_workspace:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
      Delete google_workspace indices older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-google_workspace.*|so-google_workspace.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

Some files were not shown because too many files have changed in this diff.