Merge branch 'mkrmerge' into escluster

This commit is contained in:
Mike Reeves
2020-11-24 10:29:57 -05:00
committed by GitHub
171 changed files with 2613 additions and 3181 deletions

.github/ISSUE_TEMPLATE

@@ -0,0 +1,12 @@
PLEASE STOP AND READ THIS INFORMATION!
If you are creating an issue just to ask a question, you will likely get faster and better responses by posting to our discussions forum instead:
https://securityonion.net/discuss
If you think you have found a possible bug or are observing a behavior that you weren't expecting, use the discussion forum to start a conversation about it instead of creating an issue.
If you are very familiar with the latest version of the product and are confident you have found a bug in Security Onion, you can continue with creating an issue here, but please make sure you have done the following:
- duplicated the issue on a fresh installation of the latest version
- provided information about your system and how you installed Security Onion
- included relevant log files
- included reproduction steps

.github/workflows/leaktest.yml

@@ -0,0 +1,15 @@
name: leak-test
on: [push,pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: '0'
- name: Gitleaks
uses: zricethezav/gitleaks-action@master
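
For contributors who want to run the same secret scan before pushing, an equivalent local run might look like this (the image tag and flags are assumptions based on current gitleaks releases, not taken from this workflow):

```
# Scan the working tree for leaked secrets using the gitleaks container
docker run --rm -v "$PWD:/repo" zricethezav/gitleaks:latest detect --source=/repo
```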

KEYS

@@ -1,4 +1,5 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF7rzwEBEADBg87uJhnC3Ls7s60hbHGaywGrPtbz2WuYA/ev3YS3X7WS75p8
PGlzTWUCujx0pEHbK2vYfExl3zksZ8ZmLyZ9VB3oSLiWBzJgKAeB7YCFEo8te+eE
P2Z+8c+kX4eOV+2waxZyewA2TipSkhWgStSI4Ow8SyVUcUWA3hCw7mo2duNVi7KO


@@ -1,7 +1,14 @@
## Security Onion 2.3.1
## Security Onion 2.3.10
Security Onion 2.3.1 is here!
Security Onion 2.3.10 is here!
## Screenshots
Alerts
![Alerts](https://raw.githubusercontent.com/security-onion-solutions/securityonion/master/screenshots/alerts-1.png)
Hunt
![Hunt](https://raw.githubusercontent.com/security-onion-solutions/securityonion/master/screenshots/hunt-1.png)
### Release Notes


@@ -1,16 +1,16 @@
### 2.3.1 ISO image built on 2020/10/22
### 2.3.10 ISO image built on 2020/11/19
### Download and Verify
2.3.1 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.3.1.iso
2.3.10 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.3.10.iso
MD5: EF2DEBCCBAE0B0BCCC906552B5FF918A
SHA1: 16AFCACB102BD217A038044D64E7A86DA351640E
SHA256: 7125F90B6323179D0D29F5745681BE995BD2615E64FA1E0046D94888A72C539E
MD5: 55E10BAE3D90DF47CA4D5DCCDCB67A96
SHA1: 01361123F35CEACE077803BC8074594D57EE653A
SHA256: 772EA4EFFFF12F026593F5D1CC93DB538CC17B9BA5F60308F1976B6ED7032A8D
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.1.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.10.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
@@ -24,22 +24,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.1.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.10.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.3.1.iso
wget https://download.securityonion.net/file/securityonion/securityonion-2.3.10.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.3.1.iso.sig securityonion-2.3.1.iso
gpg --verify securityonion-2.3.10.iso.sig securityonion-2.3.10.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Thu 22 Oct 2020 10:34:27 AM EDT using RSA key ID FE507013
gpg: Signature made Thu 19 Nov 2020 03:38:54 PM EST using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.


@@ -1 +1 @@
2.3.10
2.3.20


@@ -1,4 +1,51 @@
#!py
import logging
def status():
return __salt__['cmd.run']('/usr/sbin/so-status')
return __salt__['cmd.run']('/usr/sbin/so-status')
def mysql_conn(retry):
log = logging.getLogger(__name__)
from time import sleep
try:
from MySQLdb import _mysql
except ImportError as e:
log.error(e)
return False
mainint = __salt__['pillar.get']('sensor:mainint', __salt__['pillar.get']('manager:mainint'))
mainip = __salt__['grains.get']('ip_interfaces').get(mainint)[0]
mysql_up = False
for i in range(0, retry):
log.debug(f'Connection attempt {i+1}')
try:
db = _mysql.connect(
host=mainip,
user='root',
passwd=__salt__['pillar.get']('secrets:mysql')
)
log.debug(f'Connected to MySQL server on {mainip} after {i+1} attempts.')
db.query("""SELECT 1;""")
log.debug(f'Successfully completed query against MySQL server on {mainip}')
db.close()
mysql_up = True
break
except _mysql.OperationalError as e:
log.debug(e)
except Exception as e:
log.error('Unexpected error occurred.')
log.error(e)
break
sleep(1)
if not mysql_up:
log.error(f'Could not connect to MySQL server on {mainip} after {retry} attempts.')
return mysql_up
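
If these helpers are made available as a custom Salt execution module (a location such as salt/_modules/so.py is an assumption; this diff does not show the file's path), the connection check could be exercised from the shell, for example:

```
# Sync custom modules to the minion, then attempt up to 30 connections,
# one second apart; prints True or False
salt-call saltutil.sync_modules
salt-call so.mysql_conn 30
```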


@@ -18,6 +18,7 @@
/opt/so/log/filebeat/*.log
/opt/so/log/telegraf/*.log
/opt/so/log/redis/*.log
/opt/so/log/salt/so-salt-minion-check
{
{{ logrotate_conf | indent(width=4) }}
}


@@ -32,6 +32,18 @@ soconfperms:
- gid: 939
- dir_mode: 770
sostatusconf:
file.directory:
- name: /opt/so/conf/so-status
- uid: 939
- gid: 939
- dir_mode: 770
so-status.conf:
file.touch:
- name: /opt/so/conf/so-status/so-status.conf
- unless: ls /opt/so/conf/so-status/so-status.conf
sosaltstackperms:
file.directory:
- name: /opt/so/saltstack
@@ -158,8 +170,8 @@ Etc/UTC:
utilsyncscripts:
file.recurse:
- name: /usr/sbin
- user: 0
- group: 0
- user: root
- group: root
- file_mode: 755
- template: jinja
- source: salt://common/tools/sbin


@@ -1,5 +0,0 @@
{% set docker = {
'containers': [
'so-domainstats'
]
} %}


@@ -1,20 +0,0 @@
{% set docker = {
'containers': [
'so-filebeat',
'so-nginx',
'so-telegraf',
'so-dockerregistry',
'so-soc',
'so-kratos',
'so-idstools',
'so-elasticsearch',
'so-kibana',
'so-steno',
'so-suricata',
'so-zeek',
'so-curator',
'so-elastalert',
'so-soctopus',
'so-sensoroni'
]
} %}


@@ -1,10 +0,0 @@
{% set docker = {
'containers': [
'so-mysql',
'so-fleet',
'so-redis',
'so-filebeat',
'so-nginx',
'so-telegraf'
]
} %}


@@ -1,7 +0,0 @@
{% set docker = {
'containers': [
'so-mysql',
'so-fleet',
'so-redis'
]
} %}


@@ -1,5 +0,0 @@
{% set docker = {
'containers': [
'so-freqserver'
]
} %}


@@ -1,6 +0,0 @@
{% set docker = {
'containers': [
'so-influxdb',
'so-grafana'
]
} %}


@@ -1,15 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-steno',
'so-suricata',
'so-wazuh',
'so-filebeat',
'so-sensoroni'
]
} %}


@@ -1,12 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-idstools',
'so-steno',
'so-zeek',
'so-redis',
'so-logstash',
'so-filebeat'
]
} %}


@@ -1,9 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-logstash',
'so-elasticsearch',
'so-curator',
]
} %}


@@ -1,10 +0,0 @@
{% set docker = {
'containers': [
'so-filebeat',
'so-nginx',
'so-soc',
'so-kratos',
'so-elasticsearch',
'so-kibana'
]
} %}


@@ -1,21 +0,0 @@
{% set docker = {
'containers': [
'so-dockerregistry',
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-idstools',
'so-redis',
'so-elasticsearch',
'so-logstash',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-soctopus'
]
} %}
{% if salt['pillar.get']('global:managerupdate') == 1 %}
{% do docker.containers.append('so-aptcacherng') %}
{% endif %}


@@ -1,21 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-idstools',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-soctopus'
]
} %}
{% if salt['pillar.get']('global:managerupdate') == 1 %}
{% do docker.containers.append('so-aptcacherng') %}
{% endif %}


@@ -1,5 +0,0 @@
{% set docker = {
'containers': [
'so-zeek'
]
} %}


@@ -1,5 +0,0 @@
{% set docker = {
'containers': [
'so-playbook'
]
} %}


@@ -1,10 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-filebeat'
]
} %}


@@ -1,9 +0,0 @@
{% set docker = {
'containers': [
'so-telegraf',
'so-steno',
'so-suricata',
'so-filebeat',
'so-sensoroni'
]
} %}


@@ -1,48 +0,0 @@
{% set role = grains.id.split('_') | last %}
{% from 'common/maps/'~ role ~'.map.jinja' import docker with context %}
# Check if the service is enabled and append its required containers
# to the list predefined by the role / minion id suffix
{% macro append_containers(pillar_name, k, compare )%}
{% if salt['pillar.get'](pillar_name~':'~k, {}) != compare %}
{% if k == 'enabled' %}
{% set k = pillar_name %}
{% endif %}
{% from 'common/maps/'~k~'.map.jinja' import docker as d with context %}
{% for li in d['containers'] %}
{{ docker['containers'].append(li) }}
{% endfor %}
{% endif %}
{% endmacro %}
{% set docker = salt['grains.filter_by']({
'*_'~role: {
'containers': docker['containers']
}
},grain='id', merge=salt['pillar.get']('docker')) %}
{% if role in ['eval', 'managersearch', 'manager', 'standalone'] %}
{{ append_containers('manager', 'grafana', 0) }}
{{ append_containers('global', 'fleet_manager', 0) }}
{{ append_containers('global', 'wazuh', 0) }}
{{ append_containers('manager', 'thehive', 0) }}
{{ append_containers('manager', 'playbook', 0) }}
{{ append_containers('manager', 'freq', 0) }}
{{ append_containers('manager', 'domainstats', 0) }}
{% endif %}
{% if role in ['eval', 'heavynode', 'sensor', 'standalone'] %}
{{ append_containers('strelka', 'enabled', 0) }}
{% endif %}
{% if role in ['heavynode', 'standalone'] %}
{{ append_containers('global', 'mdengine', 'SURICATA') }}
{% endif %}
{% if role == 'searchnode' %}
{{ append_containers('manager', 'wazuh', 0) }}
{% endif %}
{% if role == 'sensor' %}
{{ append_containers('global', 'mdengine', 'SURICATA') }}
{% endif %}


@@ -1,25 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-idstools',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-suricata',
'so-steno',
'so-dockerregistry',
'so-soctopus',
'so-sensoroni'
]
} %}
{% if salt['pillar.get']('global:managerupdate') == 1 %}
{% do docker.containers.append('so-aptcacherng') %}
{% endif %}


@@ -1,9 +0,0 @@
{% set docker = {
'containers': [
'so-strelka-coordinator',
'so-strelka-gatekeeper',
'so-strelka-manager',
'so-strelka-frontend',
'so-strelka-filestream'
]
} %}


@@ -1,7 +0,0 @@
{% set docker = {
'containers': [
'so-thehive',
'so-thehive-es',
'so-cortex'
]
} %}


@@ -1,7 +0,0 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-elasticsearch'
]
} %}


@@ -1,5 +0,0 @@
{% set docker = {
'containers': [
'so-wazuh'
]
} %}


@@ -1,8 +0,0 @@
#!/bin/bash
if [ ! -f /opt/so/state/dockernet.state ]; then
docker network create -d bridge so-elastic-net
touch /opt/so/state/dockernet.state
else
exit
fi
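
As a quick sanity check after this script runs (illustrative only, not part of the change), the bridge can be inspected directly:

```
# Confirm the so-elastic-net bridge network exists
docker network inspect so-elastic-net --format '{{ .Name }}: {{ .Driver }}'
```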


@@ -15,12 +15,10 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
IMAGEREPO=securityonion
# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
echo "This script must be run using sudo!"
exit 1
fi
# Define a banner to separate sections
@@ -31,14 +29,43 @@ header() {
printf '%s\n' "$banner" "$*" "$banner"
}
lookup_salt_value() {
key=$1
group=$2
kind=$3
if [ -z "$kind" ]; then
kind=pillar
fi
if [ -n "$group" ]; then
group=${group}:
fi
salt-call --no-color ${kind}.get ${group}${key} --out=newline_values_only
}
lookup_pillar() {
key=$1
salt-call --no-color pillar.get global:${key} --out=newline_values_only
key=$1
pillar=$2
if [ -z "$pillar" ]; then
pillar=global
fi
lookup_salt_value "$key" "$pillar" "pillar"
}
lookup_pillar_secret() {
key=$1
salt-call --no-color pillar.get secrets:${key} --out=newline_values_only
lookup_pillar "$1" "secrets"
}
lookup_grain() {
lookup_salt_value "$1" "" "grains"
}
lookup_role() {
id=$(lookup_grain id)
pieces=($(echo $id | tr '_' ' '))
echo ${pieces[1]}
}
check_container() {
@@ -47,7 +74,64 @@ check_container() {
}
check_password() {
local password=$1
echo "$password" | egrep -v "'|\"|\\\\" > /dev/null 2>&1
return $?
local password=$1
echo "$password" | egrep -v "'|\"|\\$|\\\\" > /dev/null 2>&1
return $?
}
set_os() {
if [ -f /etc/redhat-release ]; then
OS=centos
else
OS=ubuntu
fi
}
set_minionid() {
MINIONID=$(lookup_grain id)
}
set_version() {
CURRENTVERSION=0.0.0
if [ -f /etc/soversion ]; then
CURRENTVERSION=$(cat /etc/soversion)
fi
if [ -z "$VERSION" ]; then
if [ -z "$NEWVERSION" ]; then
if [ "$CURRENTVERSION" == "0.0.0" ]; then
echo "ERROR: Unable to detect Security Onion version; terminating script."
exit 1
else
VERSION=$CURRENTVERSION
fi
else
VERSION="$NEWVERSION"
fi
fi
}
require_manager() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [ $MANAGERCHECK == 'so-eval' ] || [ $MANAGERCHECK == 'so-manager' ] || [ $MANAGERCHECK == 'so-managersearch' ] || [ $MANAGERCHECK == 'so-standalone' ] || [ $MANAGERCHECK == 'so-helix' ] || [ $MANAGERCHECK == 'so-import' ]; then
echo "This is a manager, We can proceed."
else
echo "Please run this command on the manager; the manager controls the grid."
exit 1
fi
}
is_single_node_grid() {
role=$(lookup_role)
if [ "$role" != "eval" ] && [ "$role" != "standalone" ] && [ "$role" != "import" ]; then
return 1
fi
return 0
}
fail() {
msg=$1
echo "ERROR: $msg"
echo "Exiting."
exit 1
}
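
Taken together, these new helpers let other scripts resolve grid configuration with one-liners; for example (an illustrative sketch, assuming so-common has been sourced):

```
. /usr/sbin/so-common
MANAGERIP=$(lookup_pillar managerip)      # global:managerip from pillar
MYSQLPASS=$(lookup_pillar_secret mysql)   # secrets:mysql from pillar
MINIONID=$(lookup_grain id)               # minion id from grains
is_single_node_grid && echo "single-node grid, role: $(lookup_role)"
```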


@@ -31,7 +31,7 @@ fi
USER=$1
CORTEX_KEY=$(lookup_pillar cortexkey)
CORTEX_IP=$(lookup_pillar managerip)
CORTEX_API_URL="$(lookup_pillar url_base)/cortex/api"
CORTEX_ORG_NAME=$(lookup_pillar cortexorgname)
CORTEX_USER=$USER
@@ -43,7 +43,7 @@ fi
read -rs CORTEX_PASS
# Create new user in Cortex
resp=$(curl -sk -XPOST -H "Authorization: Bearer $CORTEX_KEY" -H "Content-Type: application/json" "https://$CORTEX_IP/cortex/api/user" -d "{\"name\": \"$CORTEX_USER\",\"roles\": [\"read\",\"analyze\",\"orgadmin\"],\"organization\": \"$CORTEX_ORG_NAME\",\"login\": \"$CORTEX_USER\",\"password\" : \"$CORTEX_PASS\" }")
resp=$(curl -sk -XPOST -H "Authorization: Bearer $CORTEX_KEY" -H "Content-Type: application/json" -L "https://$CORTEX_API_URL/user" -d "{\"name\": \"$CORTEX_USER\",\"roles\": [\"read\",\"analyze\",\"orgadmin\"],\"organization\": \"$CORTEX_ORG_NAME\",\"login\": \"$CORTEX_USER\",\"password\" : \"$CORTEX_PASS\" }")
if [[ "$resp" =~ \"status\":\"Ok\" ]]; then
echo "Successfully added user to Cortex."
else


@@ -31,7 +31,7 @@ fi
USER=$1
CORTEX_KEY=$(lookup_pillar cortexkey)
CORTEX_IP=$(lookup_pillar managerip)
CORTEX_API_URL="$(lookup_pillar url_base)/cortex/api"
CORTEX_USER=$USER
case "${2^^}" in
@@ -46,7 +46,7 @@ case "${2^^}" in
;;
esac
resp=$(curl -sk -XPATCH -H "Authorization: Bearer $CORTEX_KEY" -H "Content-Type: application/json" "https://$CORTEX_IP/cortex/api/user/${CORTEX_USER}" -d "{\"status\":\"${CORTEX_STATUS}\" }")
resp=$(curl -sk -XPATCH -H "Authorization: Bearer $CORTEX_KEY" -H "Content-Type: application/json" -L "https://$CORTEX_API_URL/user/${CORTEX_USER}" -d "{\"status\":\"${CORTEX_STATUS}\" }")
if [[ "$resp" =~ \"status\":\"Locked\" || "$resp" =~ \"status\":\"Ok\" ]]; then
echo "Successfully updated user in Cortex."
else


@@ -16,96 +16,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
. /usr/sbin/so-image-common
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [ $MANAGERCHECK == 'so-eval' ] || [ $MANAGERCHECK == 'so-manager' ] || [ $MANAGERCHECK == 'so-managersearch' ] || [ $MANAGERCHECK == 'so-standalone' ] || [ $MANAGERCHECK == 'so-helix' ]; then
echo "This is a manager. We can proceed"
else
echo "Please run soup on the manager. The manager controls all updates."
exit 1
fi
}
update_docker_containers() {
# Download the containers from the interwebs
for i in "${TRUSTED_CONTAINERS[@]}"
do
# Pull down the trusted docker image
echo "Downloading $i"
docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
# Tag it with the new registry destination
docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
docker push $HOSTNAME:5000/$IMAGEREPO/$i
done
}
version_check() {
if [ -f /etc/soversion ]; then
VERSION=$(cat /etc/soversion)
else
echo "Unable to detect version. I will now terminate."
exit 1
fi
}
manager_check
version_check
# Use the hostname
HOSTNAME=$(hostname)
# List all the containers
if [ $MANAGERCHECK != 'so-helix' ]; then
TRUSTED_CONTAINERS=( \
"so-acng:$VERSION" \
"so-thehive-cortex:$VERSION" \
"so-curator:$VERSION" \
"so-domainstats:$VERSION" \
"so-elastalert:$VERSION" \
"so-elasticsearch:$VERSION" \
"so-filebeat:$VERSION" \
"so-fleet:$VERSION" \
"so-fleet-launcher:$VERSION" \
"so-freqserver:$VERSION" \
"so-grafana:$VERSION" \
"so-idstools:$VERSION" \
"so-influxdb:$VERSION" \
"so-kibana:$VERSION" \
"so-kratos:$VERSION" \
"so-logstash:$VERSION" \
"so-minio:$VERSION" \
"so-mysql:$VERSION" \
"so-nginx:$VERSION" \
"so-pcaptools:$VERSION" \
"so-playbook:$VERSION" \
"so-redis:$VERSION" \
"so-soc:$VERSION" \
"so-soctopus:$VERSION" \
"so-steno:$VERSION" \
"so-strelka-frontend:$VERSION" \
"so-strelka-manager:$VERSION" \
"so-strelka-backend:$VERSION" \
"so-strelka-filestream:$VERSION" \
"so-suricata:$VERSION" \
"so-telegraf:$VERSION" \
"so-thehive:$VERSION" \
"so-thehive-es:$VERSION" \
"so-wazuh:$VERSION" \
"so-zeek:$VERSION" )
else
TRUSTED_CONTAINERS=( \
"so-filebeat:$VERSION" \
"so-idstools:$VERSION" \
"so-logstash:$VERSION" \
"so-nginx:$VERSION" \
"so-redis:$VERSION" \
"so-steno:$VERSION" \
"so-suricata:$VERSION" \
"so-telegraf:$VERSION" \
"so-zeek:$VERSION" )
fi
update_docker_containers
require_manager
update_docker_containers "refresh"


@@ -19,8 +19,7 @@
#
# Purpose: This script will allow you to test your elastalert rule without entering the Docker container.
. /usr/sbin/so-elastic-common
HOST_RULE_DIR=/opt/so/rules/elastalert
OPTIONS=""
SKIP=0
RESULTS_TO_LOG="n"
@@ -29,114 +28,109 @@ FILE_SAVE_LOCATION=""
usage()
{
cat <<EOF
cat <<EOF
Test Elastalert Rule
Options:
-h This message
-a Trigger real alerts instead of the debug alert
-l <path_to_file> Write results to specified log file
-o '<options>' Specify Elastalert options ( Ex. --schema-only , --count-only, --days N )
-r <rule_name> Specify path/name of rule to test
-h This message
-a Trigger real alerts instead of the debug alert
-l <path_to_file> Write results to specified log file
-o '<options>' Specify Elastalert options ( Ex. --schema-only , --count-only, --days N )
-r <rule_name> Specify filename of rule to test (must exist in $HOST_RULE_DIR; do not include path)
EOF
}
while getopts "hal:o:r:" OPTION
do
case $OPTION in
h)
usage
exit 0
;;
a)
OPTIONS="--alert"
;;
l)
RESULTS_TO_LOG="y"
FILE_SAVE_LOCATION=$OPTARG
;;
o)
OPTIONS=$OPTARG
;;
r)
RULE_NAME=$OPTARG
SKIP=1
;;
*)
usage
exit 0
;;
esac
case $OPTION in
h)
usage
exit 0
;;
a)
OPTIONS="--alert"
;;
l)
RESULTS_TO_LOG="y"
FILE_SAVE_LOCATION=$OPTARG
;;
o)
OPTIONS=$OPTARG
;;
r)
RULE_NAME=$OPTARG
SKIP=1
;;
*)
usage
exit 0
;;
esac
done
docker_exec(){
if [ ${RESULTS_TO_LOG,,} = "y" ] ; then
docker exec -it so-elastalert bash -c "elastalert-test-rule $RULE_NAME $OPTIONS" > $FILE_SAVE_LOCATION
CMD="docker exec -it so-elastalert elastalert-test-rule /opt/elastalert/rules/$RULE_NAME --config /opt/config/elastalert_config.yaml $OPTIONS"
if [ "${RESULTS_TO_LOG,,}" = "y" ] ; then
$CMD > "$FILE_SAVE_LOCATION"
else
docker exec -it so-elastalert bash -c "elastalert-test-rule $RULE_NAME $OPTIONS"
$CMD
fi
}
rule_prompt(){
CURRENT_RULES=$(find /opt/so/rules/elastalert -name "*.yaml")
echo
echo "This script will allow you to test an Elastalert rule."
echo
echo "Below is a list of active Elastalert rules:"
echo
echo "-----------------------------------"
echo
echo "$CURRENT_RULES"
echo
CURRENT_RULES=$(cd "$HOST_RULE_DIR" && find . -type f \( -name "*.yaml" -o -name "*.yml" \) | sed -e 's/^\.\///')
if [ -z "$CURRENT_RULES" ]; then
echo "There are no rules available to test. Rule files must be placed in the $HOST_RULE_DIR directory."
exit 1
fi
echo
echo "This script will allow you to test an Elastalert rule."
echo
echo "Below is a list of available Elastalert rules:"
echo
echo "-----------------------------------"
echo
echo "Note: To test a rule it must be accessible by the Elastalert Docker container."
echo
echo "Make sure to swap the local path (/opt/so/rules/elastalert/) for the docker path (/etc/elastalert/rules/)"
echo "Example: /opt/so/rules/elastalert/nids2hive.yaml would be /etc/elastalert/rules/nids2hive.yaml"
echo
while [ -z $RULE_NAME ]; do
echo "Please enter the file path and rule name you want to test."
read -e RULE_NAME
echo "$CURRENT_RULES"
echo
echo "-----------------------------------"
echo
while [ -z "$RULE_NAME" ]; do
read -p "Please enter the rule filename you want to test (filename only, no path): " -e RULE_NAME
done
}
log_save_prompt(){
RESULTS_TO_LOG=""
while [ -z $RESULTS_TO_LOG ]; do
echo "The results can be rather long. Would you like to write the results to a file? (Y/N)"
read RESULTS_TO_LOG
done
read -p "The results can be rather long. Would you like to write the results to a file? (y/N) " -e RESULTS_TO_LOG
}
log_path_prompt(){
while [ -z $FILE_SAVE_LOCATION ]; do
echo "Please enter the file path and file name."
read -e FILE_SAVE_LOCATION
done
while [ -z "$FILE_SAVE_LOCATION" ]; do
read -p "Please enter the log file path and file name: " -e FILE_SAVE_LOCATION
done
echo "Depending on the rule this may take a while."
}
if [ $SKIP -eq 0 ]; then
rule_prompt
log_save_prompt
if [ ${RESULTS_TO_LOG,,} = "y" ] ; then
log_path_prompt
fi
fi
docker_exec
if [ $? -eq 0 ]; then
echo "Test completed successfully!"
else
echo "Something went wrong..."
if [ "${RESULTS_TO_LOG,,}" = "y" ] ; then
log_path_prompt
fi
fi
echo
docker_exec
RESULT=$?
echo
if [ $RESULT -eq 0 ]; then
echo "Test completed successfully!"
else
echo "Test failed."
fi
echo


@@ -51,9 +51,9 @@ if [ $SKIP -ne 1 ]; then
# List indices
echo
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -k https://{{ NODEIP }}:9200/_cat/indices?v
curl -k -L https://{{ NODEIP }}:9200/_cat/indices?v
{% else %}
curl {{ NODEIP }}:9200/_cat/indices?v
curl -L {{ NODEIP }}:9200/_cat/indices?v
{% endif %}
echo
# Inform user we are about to delete all data
@@ -94,16 +94,16 @@ fi
echo "Deleting data..."
{% if grains['role'] in ['so-node','so-heavynode'] %}
INDXS=$(curl -s -XGET -k https://{{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
INDXS=$(curl -s -XGET -k -L https://{{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
{% else %}
INDXS=$(curl -s -XGET {{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
INDXS=$(curl -s -XGET -L {{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
{% endif %}
for INDX in ${INDXS}
do
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -XDELETE -k https://"{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
curl -XDELETE -k -L https://"{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
{% else %}
curl -XDELETE "{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
curl -XDELETE -L "{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
{% endif %}
done


@@ -22,5 +22,5 @@ THEHIVEESPORT=9400
echo "Removing read only attributes for indices..."
echo
for p in $ESPORT $THEHIVEESPORT; do
curl -XPUT -H "Content-Type: application/json" http://$IP:$p/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was an issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
curl -XPUT -H "Content-Type: application/json" -L http://$IP:$p/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was an issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
done


@@ -20,14 +20,14 @@
if [ "$1" == "" ]; then
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
{% else %}
curl -s {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
curl -s -L {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
{% endif %}
else
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
{% else %}
curl -s {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
curl -s -L {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
{% endif %}
fi


@@ -18,14 +18,14 @@
. /usr/sbin/so-common
if [ "$1" == "" ]; then
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
{% else %}
curl -s {{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
curl -s -L {{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
{% endif %}
else
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
{% else %}
curl -s {{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
curl -s -L {{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
{% endif %}
fi


@@ -18,14 +18,14 @@
. /usr/sbin/so-common
if [ "$1" == "" ]; then
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_template/* | jq 'keys'
curl -s -k -L https://{{ NODEIP }}:9200/_template/* | jq 'keys'
{% else %}
curl -s {{ NODEIP }}:9200/_template/* | jq 'keys'
curl -s -L {{ NODEIP }}:9200/_template/* | jq 'keys'
{% endif %}
else
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ NODEIP }}:9200/_template/$1 | jq
curl -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq
{% else %}
curl -s {{ NODEIP }}:9200/_template/$1 | jq
curl -s -L {{ NODEIP }}:9200/_template/$1 | jq
{% endif %}
fi


@@ -31,9 +31,9 @@ COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -k --output /dev/null --silent --head --fail https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
curl -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{% else %}
curl --output /dev/null --silent --head --fail http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
curl --output /dev/null --silent --head --fail -L http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{% endif %}
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
@@ -56,9 +56,9 @@ cd ${ELASTICSEARCH_TEMPLATES}
echo "Loading templates..."
{% if grains['role'] in ['so-node','so-heavynode'] %}
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl -k ${ELASTICSEARCH_AUTH} -s -XPUT https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl -k ${ELASTICSEARCH_AUTH} -s -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
{% else %}
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl ${ELASTICSEARCH_AUTH} -s -XPUT http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl ${ELASTICSEARCH_AUTH} -s -XPUT -L http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
{% endif %}
echo


@@ -15,6 +15,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
. /usr/sbin/so-image-common
local_salt_dir=/opt/so/saltstack/local
cat << EOF
@@ -39,34 +40,14 @@ fi
echo "Please wait while switching to Elastic Features."
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
echo "This is a manager. We can proceed"
else
echo "Please run so-features-enable on the manager."
exit 0
fi
}
require_manager
TRUSTED_CONTAINERS=( \
"so-elasticsearch" \
"so-filebeat" \
"so-kibana" \
"so-logstash" )
update_docker_containers "features" "-features"
manager_check
VERSION=$(lookup_pillar soversion)
# Modify global.sls to enable Features
sed -i 's/features: False/features: True/' $local_salt_dir/pillar/global.sls
SUFFIX="-features"
TRUSTED_CONTAINERS=( \
"so-elasticsearch:$VERSION$SUFFIX" \
"so-filebeat:$VERSION$SUFFIX" \
"so-kibana:$VERSION$SUFFIX" \
"so-logstash:$VERSION$SUFFIX" )
for i in "${TRUSTED_CONTAINERS[@]}"
do
# Pull down the trusted docker image
echo "Downloading $i"
docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
# Tag it with the new registry destination
docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
docker push $HOSTNAME:5000/$IMAGEREPO/$i
done


@@ -59,6 +59,6 @@ if [[ $? -eq 0 ]]; then
echo "Successfully added user to Fleet"
else
echo "Unable to add user to Fleet; user might already exist"
echo $resp
echo "$MYSQL_OUTPUT"
exit 2
fi


@@ -0,0 +1,175 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# NOTE: This script depends on so-common
IMAGEREPO=securityonion
container_list() {
MANAGERCHECK=$1
if [ -z "$MANAGERCHECK" ]; then
MANAGERCHECK=so-unknown
if [ -f /etc/salt/grains ]; then
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
fi
fi
if [ $MANAGERCHECK == 'so-import' ]; then
TRUSTED_CONTAINERS=( \
"so-elasticsearch" \
"so-filebeat" \
"so-idstools" \
"so-kibana" \
"so-kratos" \
"so-nginx" \
"so-pcaptools" \
"so-soc" \
"so-steno" \
"so-suricata" \
"so-zeek" )
elif [ $MANAGERCHECK != 'so-helix' ]; then
TRUSTED_CONTAINERS=( \
"so-acng" \
"so-curator" \
"so-domainstats" \
"so-elastalert" \
"so-elasticsearch" \
"so-filebeat" \
"so-fleet" \
"so-fleet-launcher" \
"so-freqserver" \
"so-grafana" \
"so-idstools" \
"so-influxdb" \
"so-kibana" \
"so-kratos" \
"so-logstash" \
"so-minio" \
"so-mysql" \
"so-nginx" \
"so-pcaptools" \
"so-playbook" \
"so-redis" \
"so-soc" \
"so-soctopus" \
"so-steno" \
"so-strelka-backend" \
"so-strelka-filestream" \
"so-strelka-frontend" \
"so-strelka-manager" \
"so-suricata" \
"so-telegraf" \
"so-thehive" \
"so-thehive-cortex" \
"so-thehive-es" \
"so-wazuh" \
"so-zeek" )
else
TRUSTED_CONTAINERS=( \
"so-filebeat" \
"so-idstools" \
"so-logstash" \
"so-nginx" \
"so-redis" \
"so-steno" \
"so-suricata" \
"so-telegraf" \
"so-zeek" )
fi
}
update_docker_containers() {
local CURLTYPE=$1
local IMAGE_TAG_SUFFIX=$2
local PROGRESS_CALLBACK=$3
local LOG_FILE=$4
local CONTAINER_REGISTRY=quay.io
local SIGNPATH=/root/sosigs
if [ -z "$CURLTYPE" ]; then
CURLTYPE=unknown
fi
if [ -z "$LOG_FILE" ]; then
if [ -c /dev/tty ]; then
LOG_FILE=/dev/tty
else
LOG_FILE=/dev/null
fi
fi
# Recheck the version for scenarios where the VERSION wasn't known before this script was imported
set_version
set_os
if [ -z "$TRUSTED_CONTAINERS" ]; then
container_list
fi
# Let's make sure we have the public key
curl -sSL https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS | gpg --import - >> "$LOG_FILE" 2>&1
rm -rf $SIGNPATH >> "$LOG_FILE" 2>&1
mkdir -p $SIGNPATH >> "$LOG_FILE" 2>&1
# Download the containers from the interwebs
for i in "${TRUSTED_CONTAINERS[@]}"
do
if [ -z "$PROGRESS_CALLBACK" ]; then
echo "Downloading $i" >> "$LOG_FILE" 2>&1
else
$PROGRESS_CALLBACK $i
fi
# Pull down the trusted docker image
local image=$i:$VERSION$IMAGE_TAG_SUFFIX
docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
# Get signature
curl -A "$CURLTYPE/$CURRENTVERSION/$OS/$(uname -r)" https://sigs.securityonion.net/$VERSION/$i:$VERSION$IMAGE_TAG_SUFFIX.sig --output $SIGNPATH/$image.sig >> "$LOG_FILE" 2>&1
if [[ $? -ne 0 ]]; then
echo "Unable to pull signature file for $image" >> "$LOG_FILE" 2>&1
exit 1
fi
# Dump our hash values
DOCKERINSPECT=$(docker inspect $CONTAINER_REGISTRY/$IMAGEREPO/$image)
echo "$DOCKERINSPECT" | jq ".[0].RepoDigests[] | select(. | contains(\"$CONTAINER_REGISTRY\"))" > $SIGNPATH/$image.txt
echo "$DOCKERINSPECT" | jq ".[0].Created, .[0].RootFS.Layers" >> $SIGNPATH/$image.txt
if [[ $? -ne 0 ]]; then
echo "Unable to inspect $image" >> "$LOG_FILE" 2>&1
exit 1
fi
GPGTEST=$(gpg --verify $SIGNPATH/$image.sig $SIGNPATH/$image.txt 2>&1)
if [[ $? -eq 0 ]]; then
if [[ -z "$SKIP_TAGPUSH" ]]; then
# Tag it with the new registry destination
if [ -z "$HOSTNAME" ]; then
HOSTNAME=$(hostname)
fi
docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
docker push $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
fi
else
echo "There is a problem downloading the $image image. Details: " >> "$LOG_FILE" 2>&1
echo "" >> "$LOG_FILE" 2>&1
echo $GPGTEST >> "$LOG_FILE" 2>&1
exit 1
fi
done
}
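
As a usage sketch (the caller name and log path here are illustrative), a script such as soup could refresh the image set for the local role like so:

```
. /usr/sbin/so-common
. /usr/sbin/so-image-common
container_list                                   # build TRUSTED_CONTAINERS from the role grain
update_docker_containers "soup" "" "" /root/soup-images.log
```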


@@ -27,8 +27,7 @@ function usage {
cat << EOF
Usage: $0 <pcap-file-1> [pcap-file-2] [pcap-file-N]
Imports one or more PCAP files onto a sensor node. The PCAP traffic will be analyzed and
made available for review in the Security Onion toolset.
Imports one or more PCAP files onto a sensor node. The PCAP traffic will be analyzed and made available for review in the Security Onion toolset.
EOF
}


@@ -16,7 +16,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -X GET -k https://localhost:9200/_cat/indices?v
curl -X GET -k -L https://localhost:9200/_cat/indices?v
{% else %}
curl -X GET localhost:9200/_cat/indices?v
curl -X GET -L localhost:9200/_cat/indices?v
{% endif %}


@@ -0,0 +1,63 @@
#!/bin/bash
. $(dirname $0)/so-common
if [ "$FORCE_IP_UPDATE" != "1" ]; then
is_single_node_grid || fail "Cannot update the IP on a distributed grid"
fi
echo "This tool will update a manager's IP address to the new IP assigned to the management network interface."
echo
echo "WARNING: This tool is still undergoing testing, use at your own risk!"
echo
if [ -z "$OLD_IP" ]; then
OLD_IP=$(lookup_pillar "managerip")
if [ -z "$OLD_IP" ]; then
fail "Unable to find old IP; possible salt system failure"
fi
echo "Found old IP $OLD_IP."
fi
if [ -z "$NEW_IP" ]; then
iface=$(lookup_pillar "mainint" "host")
NEW_IP=$(ip -4 addr list $iface | grep inet | cut -d' ' -f6 | cut -d/ -f1)
if [ -z "$NEW_IP" ]; then
fail "Unable to detect new IP on interface $iface. "
fi
echo "Detected new IP $NEW_IP on interface $iface."
fi
if [ "$OLD_IP" == "$NEW_IP" ]; then
fail "IP address has not changed"
fi
echo "About to change old IP $OLD_IP to new IP $NEW_IP."
echo
read -n 1 -p "Would you like to continue? (y/N) " CONTINUE
echo
if [ "$CONTINUE" == "y" ]; then
for file in $(grep -rlI $OLD_IP /opt/so/saltstack /etc); do
echo "Updating file: $file"
sed -i "s|$OLD_IP|$NEW_IP|g" $file
done
echo "The IP has been changed from $OLD_IP to $NEW_IP."
echo
read -n 1 -p "The system must reboot to ensure all services have restarted with the new configuration. Reboot now? (y/N)" CONTINUE
echo
if [ "$CONTINUE" == "y" ]; then
reboot
fi
else
echo "Exiting without changes."
fi
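
The script honors a few environment overrides (FORCE_IP_UPDATE, OLD_IP, NEW_IP); assuming it is installed as so-ip-update (the filename is not shown in this diff), a test of a specific change might look like:

```
# Skip the single-node check and supply both addresses explicitly;
# the script still prompts before modifying any files
FORCE_IP_UPDATE=1 OLD_IP=10.0.0.5 NEW_IP=10.0.0.50 so-ip-update
```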


@@ -23,7 +23,7 @@
KIBANA_HOST={{ MANAGER }}
KSO_PORT=5601
OUTFILE="saved_objects.ndjson"
curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -L $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
# Clean up using PLACEHOLDER
sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE


@@ -0,0 +1,18 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
$(dirname $0)/so-import-pcap "$@"


@@ -0,0 +1,26 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
salt-call state.apply playbook.db_init,playbook,playbook.automation_user_create
/usr/sbin/so-soctopus-restart
echo "Importing Plays - this will take some time...."
sleep 5
/usr/sbin/so-playbook-ruleupdate


@@ -0,0 +1,104 @@
{% import_yaml 'salt/minion.defaults.yaml' as SALT_MINION_DEFAULTS -%}
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# this script checks the time the file /opt/so/log/salt/state-apply-test was last modified and restarts the salt-minion service if it is outside a threshold date/time
# the file is modified via file.touch using a scheduled job healthcheck.salt-minion.state-apply-test that runs a state.apply.
# by default the file should be updated every 5-8 minutes.
# this allows us to test that the minion is able to apply states and communicate with the master
# if the file is unable to be touched via the state.apply, then we assume there is a possibility that the minion is hung (though it could be possible the master is down as well)
# we then kill all salt-minion processes and start the salt-minion service back up
. /usr/sbin/so-common
QUIET=false
UPTIME_REQ=1800 #in seconds, how long the box has to be up before considering restarting salt-minion due to /opt/so/log/salt/state-apply-test not being touched
CURRENT_TIME=$(date +%s)
SYSTEM_START_TIME=$(date -d "$(</proc/uptime awk '{print $1}') seconds ago" +%s)
LAST_HIGHSTATE_END=$([ -e "/opt/so/log/salt/lasthighstate" ] && date -r /opt/so/log/salt/lasthighstate +%s || echo 0)
LAST_HEALTHCHECK_STATE_APPLY=$([ -e "/opt/so/log/salt/state-apply-test" ] && date -r /opt/so/log/salt/state-apply-test +%s || echo 0)
# SETTING THRESHOLD TO ANYTHING UNDER 600 seconds may cause a lot of salt-minion restarts since the job to touch the file occurs every 5-8 minutes by default
THRESHOLD={{SALT_MINION_DEFAULTS.salt.minion.check_threshold}} #within how many seconds the file /opt/so/log/salt/state-apply-test must have been touched/modified before the salt minion is restarted
THRESHOLD_DATE=$((LAST_HEALTHCHECK_STATE_APPLY+THRESHOLD))
logCmd() {
cmd=$1
info "Executing command: $cmd"
$cmd >> "/opt/so/log/salt/so-salt-minion-check"
}
log() {
msg=$1
level=${2:-I}
now=$(TZ=GMT date +"%Y-%m-%dT%H:%M:%SZ")
if ! $QUIET; then
echo $msg
fi
echo -e "$now | $level | $msg" >> "/opt/so/log/salt/so-salt-minion-check" 2>&1
}
error() {
log "$1" "E"
}
info() {
log "$1" "I"
}
usage()
{
cat <<EOF
Check health of salt-minion and restart it if needed
Options:
-h This message
-q Don't output to terminal
EOF
}
while getopts ":q" opt; do
case "$opt" in
q )
QUIET=true
;;
* ) usage
exit 0
;;
esac
done
log "running so-salt-minion-check"
if [ $CURRENT_TIME -ge $((SYSTEM_START_TIME+$UPTIME_REQ)) ]; then
if [ $THRESHOLD_DATE -le $CURRENT_TIME ]; then
log "salt-minion is unable to apply states" E
log "/opt/so/log/salt/healthcheck-state-apply not touched by required date: `date -d @$THRESHOLD_DATE`, last touched: `date -d @$LAST_HEALTHCHECK_STATE_APPLY`" I
log "last highstate completed at `date -d @$LAST_HIGHSTATE_END`" I
log "checking if any jobs are running" I
logCmd "salt-call --local saltutil.running" I
log "killing all salt-minion processes" I
logCmd "pkill -9 -ef /usr/bin/salt-minion" I
log "starting salt-minion service" I
logCmd "systemctl start salt-minion" I
else
log "/opt/so/log/salt/healthcheck-state-apply last touched: `date -d @$LAST_HEALTHCHECK_STATE_APPLY` must be touched by `date -d @$THRESHOLD_DATE` to avoid salt-minion restart" I
fi
else
log "system uptime only $((CURRENT_TIME-SYSTEM_START_TIME)) seconds does not meet $UPTIME_REQ second requirement." I
fi
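
Assuming the script is installed as /usr/sbin/so-salt-minion-check (consistent with its log messages), it can be run interactively or quietly:

```
/usr/sbin/so-salt-minion-check      # log to the terminal and the log file
/usr/sbin/so-salt-minion-check -q   # log file only; suited to a scheduled run
```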


@@ -0,0 +1,93 @@
#!/bin/bash
. /usr/sbin/so-common
if [[ $1 =~ ^(-q|--quiet) ]]; then
quiet=true
fi
before=
after=
reload_required=false
print_sshd_t() {
local string=$1
local state=$2
echo "${state}:"
local grep_out
grep_out=$(sshd -T | grep "^${string}")
if [[ $state == "Before" ]]; then
before=$grep_out
else
after=$grep_out
fi
echo $grep_out
}
print_msg() {
local msg=$1
if ! [[ $quiet ]]; then
printf "%s\n" \
"----" \
"$msg" \
"----" \
""
fi
}
if ! [[ $quiet ]]; then print_sshd_t "ciphers" "Before"; fi
sshd -T | grep "^ciphers" | sed -e "s/\(3des-cbc\|aes128-cbc\|aes192-cbc\|aes256-cbc\|arcfour\|arcfour128\|arcfour256\|blowfish-cbc\|cast128-cbc\|rijndael-cbc@lysator.liu.se\)\,\?//g" >> /etc/ssh/sshd_config
if ! [[ $quiet ]]; then
print_sshd_t "ciphers" "After"
echo ""
fi
if [[ $before != $after ]]; then
reload_required=true
fi
if ! [[ $quiet ]]; then print_sshd_t "kexalgorithms" "Before"; fi
sshd -T | grep "^kexalgorithms" | sed -e "s/\(diffie-hellman-group14-sha1\|ecdh-sha2-nistp256\|diffie-hellman-group-exchange-sha256\|diffie-hellman-group1-sha1\|diffie-hellman-group-exchange-sha1\|ecdh-sha2-nistp521\|ecdh-sha2-nistp384\)\,\?//g" >> /etc/ssh/sshd_config
if ! [[ $quiet ]]; then
print_sshd_t "kexalgorithms" "After"
echo ""
fi
if [[ $before != $after ]]; then
reload_required=true
fi
if ! [[ $quiet ]]; then print_sshd_t "macs" "Before"; fi
sshd -T | grep "^macs" | sed -e "s/\(hmac-sha2-512,\|umac-128@openssh.com,\|hmac-sha2-256,\|umac-64@openssh.com,\|hmac-sha1,\|hmac-sha1-etm@openssh.com,\|umac-64-etm@openssh.com,\|hmac-sha1\)//g" >> /etc/ssh/sshd_config
if ! [[ $quiet ]]; then
print_sshd_t "macs" "After"
echo ""
fi
if [[ $before != $after ]]; then
reload_required=true
fi
if ! [[ $quiet ]]; then print_sshd_t "hostkeyalgorithms" "Before"; fi
sshd -T | grep "^hostkeyalgorithms" | sed "s|ecdsa-sha2-nistp256,||g" | sed "s|ssh-rsa,||g" >> /etc/ssh/sshd_config
if ! [[ $quiet ]]; then
print_sshd_t "hostkeyalgorithms" "After"
echo ""
fi
if [[ $before != $after ]]; then
reload_required=true
fi
if [[ $reload_required == true ]]; then
print_msg "Reloading sshd to load config changes..."
systemctl reload sshd
fi
{% if grains['os'] != 'CentOS' %}
print_msg "[ WARNING ] Any new ssh sessions will need to remove and reaccept the ECDSA key for this server before reconnecting."
{% endif %}


@@ -14,8 +14,6 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- from 'common/maps/so-status.map.jinja' import docker with context %}
{%- set container_list = docker['containers'] | sort | unique %}
if ! [ "$(id -u)" = 0 ]; then
echo "This command must be run as root"
@@ -23,19 +21,24 @@ if ! [ "$(id -u)" = 0 ]; then
fi
# Constants
SYSTEM_START_TIME=$(date -d "$(</proc/uptime awk '{print $1}') seconds ago" +%s)
# file populated by salt.lasthighstate state at end of successful highstate run
LAST_HIGHSTATE_END=$([ -e "/opt/so/log/salt/lasthighstate" ] && date -r /opt/so/log/salt/lasthighstate +%s || echo 0)
HIGHSTATE_RUNNING=$(salt-call --local saltutil.running --out=json | jq -r '.local[].fun' | grep -q 'state.highstate' && echo $?)
ERROR_STRING="ERROR"
SUCCESS_STRING="OK"
PENDING_STRING="PENDING"
MISSING_STRING='MISSING'
DISABLED_STRING='DISABLED'
WAIT_START_STRING='WAIT_START'
STARTING_STRING='STARTING'
CALLER=$(ps -o comm= $PPID)
declare -a BAD_STATUSES=("removing" "paused" "exited" "dead")
declare -a PENDING_STATUSES=("paused" "created" "restarting")
declare -a GOOD_STATUSES=("running")
declare -a DISABLED_CONTAINERS=()
{%- if salt['pillar.get']('steno:enabled', 'True') is sameas false %}
DISABLED_CONTAINERS+=("so-steno")
{%- endif %}
mapfile -t DISABLED_CONTAINERS < <(sort -u /opt/so/conf/so-status/so-status.conf | grep "^\s*#" | tr -d "#")
declare -a temp_container_name_list=()
declare -a temp_container_state_list=()
@@ -77,9 +80,9 @@ compare_lists() {
# {% endraw %}
create_expected_container_list() {
{% for item in container_list -%}
expected_container_list+=("{{ item }}")
{% endfor -%}
mapfile -t expected_container_list < <(sort -u /opt/so/conf/so-status/so-status.conf | tr -d "#")
}
populate_container_lists() {
@@ -111,28 +114,42 @@ parse_status() {
local container_state=${1}
local service_name=${2}
[[ $container_state = "missing" ]] && printf $MISSING_STRING && return 1
for state in "${GOOD_STATUSES[@]}"; do
[[ $container_state = "$state" ]] && printf $SUCCESS_STRING && return 0
done
for state in "${PENDING_STATUSES[@]}"; do
[[ $container_state = "$state" ]] && printf $PENDING_STRING && return 0
done
# This is technically not needed since the default is error state
for state in "${BAD_STATUSES[@]}"; do
if [[ " ${DISABLED_CONTAINERS[@]} " =~ " ${service_name} " ]]; then
printf $DISABLED_STRING
return 0
elif [[ $container_state = "$state" ]]; then
printf $ERROR_STRING
return 1
fi
[[ " ${DISABLED_CONTAINERS[@]} " =~ " ${service_name} " ]] && printf $DISABLED_STRING && return 0
done
printf $ERROR_STRING && return 1
# if a highstate has finished running since the system has started
# then the containers should be running so let's check the status
if [ $LAST_HIGHSTATE_END -ge $SYSTEM_START_TIME ]; then
[[ $container_state = "missing" ]] && printf $MISSING_STRING && return 1
for state in "${PENDING_STATUSES[@]}"; do
[[ $container_state = "$state" ]] && printf $PENDING_STRING && return 0
done
# This is technically not needed since the default is error state
for state in "${BAD_STATUSES[@]}"; do
[[ $container_state = "$state" ]] && printf $ERROR_STRING && return 1
done
printf $ERROR_STRING && return 1
# if a highstate has not run since system start time, but a highstate is currently running
# then show that the containers are STARTING
elif [[ "$HIGHSTATE_RUNNING" == 0 ]]; then
printf $STARTING_STRING && return 0
# if a highstate has not finished running since system startup and isn't currently running
# then just show that the containers are WAIT_START; waiting to be started
else
printf $WAIT_START_STRING && return 1
fi
}
# {% raw %}
@@ -143,19 +160,19 @@ print_line() {
local columns=$(tput cols)
local state_color="\e[0m"
local PADDING_CONSTANT=14
local PADDING_CONSTANT=15
if [[ $service_state = "$ERROR_STRING" ]] || [[ $service_state = "$MISSING_STRING" ]]; then
if [[ $service_state = "$ERROR_STRING" ]] || [[ $service_state = "$MISSING_STRING" ]] || [[ $service_state = "$WAIT_START_STRING" ]]; then
state_color="\e[1;31m"
elif [[ $service_state = "$SUCCESS_STRING" ]]; then
state_color="\e[1;32m"
elif [[ $service_state = "$PENDING_STRING" ]] || [[ $service_state = "$DISABLED_STRING" ]]; then
elif [[ $service_state = "$PENDING_STRING" ]] || [[ $service_state = "$DISABLED_STRING" ]] || [[ $service_state = "$STARTING_STRING" ]]; then
state_color="\e[1;33m"
fi
printf " $service_name "
for i in $(seq 0 $(( $columns - $PADDING_CONSTANT - ${#service_name} - ${#service_state} ))); do
printf "-"
printf "${state_color}%b\e[0m" "-"
done
printf " [ "
printf "${state_color}%b\e[0m" "$service_state"
@@ -164,12 +181,10 @@ print_line() {
non_term_print_line() {
local service_name=${1}
local service_state="$( parse_status ${2} )"
local PADDING_CONSTANT=10
local service_state="$( parse_status ${2} ${1} )"
printf " $service_name "
for i in $(seq 0 $(( 40 - $PADDING_CONSTANT - ${#service_name} - ${#service_state} ))); do
for i in $(seq 0 $(( 35 - ${#service_name} - ${#service_state} ))); do
printf "-"
done
printf " [ "


@@ -31,7 +31,7 @@ fi
USER=$1
THEHIVE_KEY=$(lookup_pillar hivekey)
THEHIVE_IP=$(lookup_pillar managerip)
THEHIVE_API_URL="$(lookup_pillar url_base)/thehive/api"
THEHIVE_USER=$USER
# Read password for new user from stdin
@@ -47,7 +47,7 @@ if ! check_password "$THEHIVE_PASS"; then
fi
# Create new user in TheHive
resp=$(curl -sk -XPOST -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" "https://$THEHIVE_IP/thehive/api/user" -d "{\"login\" : \"$THEHIVE_USER\",\"name\" : \"$THEHIVE_USER\",\"roles\" : [\"read\",\"alert\",\"write\",\"admin\"],\"preferences\" : \"{}\",\"password\" : \"$THEHIVE_PASS\"}")
resp=$(curl -sk -XPOST -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" -L "https://$THEHIVE_API_URL/user" -d "{\"login\" : \"$THEHIVE_USER\",\"name\" : \"$THEHIVE_USER\",\"roles\" : [\"read\",\"alert\",\"write\",\"admin\"],\"preferences\" : \"{}\",\"password\" : \"$THEHIVE_PASS\"}")
if [[ "$resp" =~ \"status\":\"Ok\" ]]; then
echo "Successfully added user to TheHive"
else


@@ -31,7 +31,7 @@ fi
USER=$1
THEHIVE_KEY=$(lookup_pillar hivekey)
THEHIVE_IP=$(lookup_pillar managerip)
THEHIVE_API_URL="$(lookup_pillar url_base)/thehive/api"
THEHIVE_USER=$USER
case "${2^^}" in
@@ -46,7 +46,7 @@ case "${2^^}" in
;;
esac
resp=$(curl -sk -XPATCH -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" "https://$THEHIVE_IP/thehive/api/user/${THEHIVE_USER}" -d "{\"status\":\"${THEHIVE_STATUS}\" }")
resp=$(curl -sk -XPATCH -H "Authorization: Bearer $THEHIVE_KEY" -H "Content-Type: application/json" -L "https://$THEHIVE_API_URL/user/${THEHIVE_USER}" -d "{\"status\":\"${THEHIVE_STATUS}\" }")
if [[ "$resp" =~ \"status\":\"Locked\" || "$resp" =~ \"status\":\"Ok\" ]]; then
echo "Successfully updated user in TheHive"
else


@@ -8,7 +8,7 @@
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
. /usr/sbin/so-common
source $(dirname $0)/so-common
if [[ $# -lt 1 || $# -gt 2 ]]; then
echo "Usage: $0 <list|add|update|enable|disable|validate|valemail|valpass> [email]"
@@ -56,14 +56,14 @@ function verifyEnvironment() {
require "openssl"
require "sqlite3"
[[ ! -f $databasePath ]] && fail "Unable to find database file; specify path via KRATOS_DB_PATH environment variable"
response=$(curl -Ss ${kratosUrl}/)
response=$(curl -Ss -L ${kratosUrl}/)
[[ "$response" != "404 page not found" ]] && fail "Unable to communicate with Kratos; specify URL via KRATOS_URL environment variable"
}
function findIdByEmail() {
email=$1
response=$(curl -Ss ${kratosUrl}/identities)
response=$(curl -Ss -L ${kratosUrl}/identities)
identityId=$(echo "${response}" | jq ".[] | select(.verifiable_addresses[0].value == \"$email\") | .id")
echo $identityId
}
@@ -113,7 +113,7 @@ function updatePassword() {
}
function listUsers() {
response=$(curl -Ss ${kratosUrl}/identities)
response=$(curl -Ss -L ${kratosUrl}/identities)
[[ $? != 0 ]] && fail "Unable to communicate with Kratos"
echo "${response}" | jq -r ".[] | .verifiable_addresses[0].value" | sort
@@ -131,7 +131,7 @@ function createUser() {
EOF
)
response=$(curl -Ss ${kratosUrl}/identities -d "$addUserJson")
response=$(curl -Ss -L ${kratosUrl}/identities -d "$addUserJson")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos"
identityId=$(echo "${response}" | jq ".id")
@@ -153,7 +153,7 @@ function updateStatus() {
identityId=$(findIdByEmail "$email")
[[ ${identityId} == "" ]] && fail "User not found"
response=$(curl -Ss "${kratosUrl}/identities/$identityId")
response=$(curl -Ss -L "${kratosUrl}/identities/$identityId")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos"
oldConfig=$(echo "select config from identity_credentials where identity_id=${identityId};" | sqlite3 "$databasePath")
@@ -171,7 +171,7 @@ function updateStatus() {
fi
updatedJson=$(echo "$response" | jq ".traits.status = \"$status\" | del(.verifiable_addresses) | del(.id) | del(.schema_url)")
response=$(curl -Ss -XPUT ${kratosUrl}/identities/$identityId -d "$updatedJson")
response=$(curl -Ss -XPUT -L ${kratosUrl}/identities/$identityId -d "$updatedJson")
[[ $? != 0 ]] && fail "Unable to mark user as locked"
}
@@ -191,7 +191,7 @@ function deleteUser() {
identityId=$(findIdByEmail "$email")
[[ ${identityId} == "" ]] && fail "User not found"
response=$(curl -Ss -XDELETE "${kratosUrl}/identities/$identityId")
response=$(curl -Ss -XDELETE -L "${kratosUrl}/identities/$identityId")
[[ $? != 0 ]] && fail "Unable to communicate with Kratos"
}
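
Typical invocations follow the usage line above; assuming the script is installed as so-user (the filename is not shown in this diff), and with an illustrative email address:

```
so-user list                         # enumerate identities known to Kratos
so-user add analyst@example.com      # create a new identity
so-user disable analyst@example.com  # lock the account
```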


@@ -0,0 +1,17 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
docker exec -it so-wazuh /usr/bin/node /var/ossec/api/configuration/auth/htpasswd /var/ossec/api/configuration/auth/user $1


@@ -0,0 +1,17 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
docker exec -it so-wazuh /usr/bin/node /var/ossec/api/configuration/auth/htpasswd /var/ossec/api/configuration/auth/user $1

View File

@@ -0,0 +1,17 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
docker exec -it so-wazuh /usr/bin/node /var/ossec/api/configuration/auth/htpasswd -D /var/ossec/api/configuration/auth/user $1

View File

@@ -16,24 +16,22 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
UPDATE_DIR=/tmp/sogh/securityonion
INSTALLEDVERSION=$(cat /etc/soversion)
INSTALLEDSALTVERSION=$(salt --versions-report | grep Salt: | awk {'print $2'})
DEFAULT_SALT_DIR=/opt/so/saltstack/default
BATCHSIZE=5
SOUP_LOG=/root/soup.log
exec 3>&1 1>${SOUP_LOG} 2>&1
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch'|'so-import')$ ]]; then
echo "This is a manager. We can proceed."
MINIONID=$(salt-call grains.get id --out=txt|awk -F: {'print $2'}|tr -d ' ')
else
echo "Please run soup on the manager. The manager controls all updates."
exit 0
fi
add_common() {
cp $UPDATE_DIR/salt/common/tools/sbin/so-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
cp $UPDATE_DIR/salt/common/tools/sbin/so-image-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
salt-call state.apply common queue=True
echo "Core scripts were updated. Run soup one more time to continue the upgrade."
exit 0
}
airgap_mounted() {
@@ -79,6 +77,30 @@ airgap_mounted() {
fi
}
airgap_update_dockers() {
if [ $is_airgap -eq 0 ]; then
# Let's copy the tarball
if [ ! -f $AGDOCKER/registry.tar ]; then
echo "Unable to locate registry. Exiting"
exit 1
else
echo "Stopping the registry docker"
docker stop so-dockerregistry
docker rm so-dockerregistry
echo "Copying the new dockers over"
tar xvf $AGDOCKER/registry.tar -C /nsm/docker-registry/docker
echo "Add Registry back"
docker load -i $AGDOCKER/registry_image.tar
fi
fi
}
update_registry() {
docker stop so-dockerregistry
docker rm so-dockerregistry
salt-call state.apply registry queue=True
}
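
After update_registry re-applies the registry state, the swap can be sanity-checked by hand; a sketch assuming the registry listens on its usual port 5000:

```bash
# Hand check (assumptions: default registry port 5000, Docker Registry v2 API).
docker ps --filter name=so-dockerregistry --format '{{.Names}}: {{.Status}}'
curl -s http://localhost:5000/v2/_catalog
```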
check_airgap() {
# See if this is an airgap install
AIRGAP=$(cat /opt/so/saltstack/local/pillar/global.sls | grep airgap | awk '{print $2}')
@@ -92,6 +114,12 @@ check_airgap() {
fi
}
check_sudoers() {
if grep -q "so-setup" /etc/sudoers; then
echo "There is an entry for so-setup in the sudoers file; it can be safely deleted using \"visudo\"."
fi
}
clean_dockers() {
# Place Holder for cleaning up old docker images
echo "Trying to clean up old dockers."
@@ -100,7 +128,6 @@ clean_dockers() {
}
clone_to_tmp() {
# TODO Need to add a air gap option
# Clean old files
rm -rf /tmp/sogh
# Make a temp location for the files
@@ -128,21 +155,9 @@ copy_new_files() {
cd /tmp
}
detect_os() {
# Detect Base OS
echo "Determining Base OS." >> "$SOUP_LOG" 2>&1
if [ -f /etc/redhat-release ]; then
OS="centos"
elif [ -f /etc/os-release ]; then
OS="ubuntu"
fi
echo "Found OS: $OS" >> "$SOUP_LOG" 2>&1
}
highstate() {
# Run a highstate but first cancel a running one.
salt-call saltutil.kill_all_jobs
salt-call state.highstate -l info
# Run a highstate.
salt-call state.highstate -l info queue=True
}
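
The queue=True flag makes salt-call wait behind any in-flight state run instead of aborting with a "state run already in progress" conflict. An optional pre-flight check (not part of soup) to see what a queued highstate would wait behind:

```bash
# List state jobs currently running on this minion (optional hand check).
salt-call saltutil.running
```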
masterlock() {
@@ -182,7 +197,6 @@ pillar_changes() {
[[ "$INSTALLEDVERSION" =~ rc.1 ]] && rc1_to_rc2
[[ "$INSTALLEDVERSION" =~ rc.2 ]] && rc2_to_rc3
[[ "$INSTALLEDVERSION" =~ rc.3 ]] && rc3_to_2.3.0
}
rc1_to_rc2() {
@@ -283,113 +297,15 @@ unmount_update() {
umount /tmp/soagupdate
}
update_centos_repo() {
# Update the files in the repo
echo "Syncing new updates to /nsm/repo"
rsync -a $AGDOCKER/repo /nsm/repo
rsync -av $AGREPO/* /nsm/repo/
echo "Creating repo"
createrepo /nsm/repo
}
update_dockers() {
if [ $is_airgap -eq 0 ]; then
# Let's copy the tarball
if [ ! -f $AGDOCKER/registry.tar ]; then
echo "Unable to locate registry. Exiting"
exit 0
else
echo "Stopping the registry docker"
docker stop so-dockerregistry
docker rm so-dockerregistry
echo "Copying the new dockers over"
tar xvf $AGDOCKER/registry.tar -C /nsm/docker-registry/docker
fi
else
# List all the containers
if [ $MANAGERCHECK == 'so-import' ]; then
TRUSTED_CONTAINERS=( \
"so-idstools" \
"so-nginx" \
"so-filebeat" \
"so-suricata" \
"so-soc" \
"so-elasticsearch" \
"so-kibana" \
"so-kratos" \
"so-suricata" \
"so-registry" \
"so-pcaptools" \
"so-zeek" )
elif [ $MANAGERCHECK != 'so-helix' ]; then
TRUSTED_CONTAINERS=( \
"so-acng" \
"so-thehive-cortex" \
"so-curator" \
"so-domainstats" \
"so-elastalert" \
"so-elasticsearch" \
"so-filebeat" \
"so-fleet" \
"so-fleet-launcher" \
"so-freqserver" \
"so-grafana" \
"so-idstools" \
"so-influxdb" \
"so-kibana" \
"so-kratos" \
"so-logstash" \
"so-minio" \
"so-mysql" \
"so-nginx" \
"so-pcaptools" \
"so-playbook" \
"so-redis" \
"so-soc" \
"so-soctopus" \
"so-steno" \
"so-strelka-frontend" \
"so-strelka-manager" \
"so-strelka-backend" \
"so-strelka-filestream" \
"so-suricata" \
"so-telegraf" \
"so-thehive" \
"so-thehive-es" \
"so-wazuh" \
"so-zeek" )
else
TRUSTED_CONTAINERS=( \
"so-filebeat" \
"so-idstools" \
"so-logstash" \
"so-nginx" \
"so-redis" \
"so-steno" \
"so-suricata" \
"so-telegraf" \
"so-zeek" )
fi
# Download the containers from the interwebs
for i in "${TRUSTED_CONTAINERS[@]}"
do
# Pull down the trusted docker image
echo "Downloading $i:$NEWVERSION"
docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i:$NEWVERSION
# Tag it with the new registry destination
docker tag $IMAGEREPO/$i:$NEWVERSION $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
docker push $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
done
fi
# Cleanup on Aisle 4
clean_dockers
echo "Add Registry back if airgap"
if [ $is_airgap -eq 0 ]; then
docker load -i $AGDOCKER/registry_image.tar
fi
}
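
A hypothetical spot check after the pull/tag/push loop, asking the local registry which tags it now holds for one image (Docker Registry v2 API; the image name is picked arbitrarily):

```bash
# Spot check (assumption: so-nginx was in TRUSTED_CONTAINERS for this install type).
curl -s "http://$HOSTNAME:5000/v2/$IMAGEREPO/so-nginx/tags/list"
```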
update_version() {
# Update the version to the latest
echo "Updating the Security Onion version file."
@@ -411,6 +327,10 @@ upgrade_check_salt() {
if [ "$INSTALLEDSALTVERSION" == "$NEWSALTVERSION" ]; then
echo "You are already running the correct version of Salt for Security Onion."
else
UPGRADESALT=1
fi
}
upgrade_salt() {
SALTUPGRADED=True
echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
echo ""
@@ -421,7 +341,11 @@ upgrade_check_salt() {
yum versionlock delete "salt-*"
echo "Updating Salt packages and restarting services."
echo ""
sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
if [ $is_airgap -eq 0 ]; then
sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -r -F -M -x python3 stable "$NEWSALTVERSION"
else
sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
fi
echo "Applying yum versionlock for Salt."
echo ""
yum versionlock add "salt-*"
@@ -441,7 +365,6 @@ upgrade_check_salt() {
apt-mark hold "salt-master"
apt-mark hold "salt-minion"
fi
fi
}
verify_latest_update_script() {
@@ -478,13 +401,14 @@ done
echo "Checking to see if this is a manager."
echo ""
manager_check
require_manager
set_minionid
echo "Checking to see if this is an airgap install."
echo ""
check_airgap
echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo ""
detect_os
set_os
echo ""
if [ $is_airgap -eq 0 ]; then
# Let's mount the ISO since this is airgap
@@ -493,6 +417,12 @@ else
echo "Cloning Security Onion github repo into $UPDATE_DIR."
clone_to_tmp
fi
if [ -f /usr/sbin/so-image-common ]; then
. /usr/sbin/so-image-common
else
add_common
fi
echo ""
echo "Verifying we have the latest soup script."
verify_latest_update_script
@@ -502,29 +432,60 @@ echo "Let's see if we need to update Security Onion."
upgrade_check
space_check
echo "Checking for Salt Master and Minion updates."
upgrade_check_salt
echo ""
echo "Performing upgrade from Security Onion $INSTALLEDVERSION to Security Onion $NEWVERSION."
echo ""
echo "Updating dockers to $NEWVERSION."
if [ $is_airgap -eq 0 ]; then
airgap_update_dockers
else
update_registry
update_docker_containers "soup"
fi
echo ""
echo "Stopping Salt Minion service."
systemctl stop salt-minion
echo "Killing any remaining Salt Minion processes."
pkill -9 -ef /usr/bin/salt-minion
echo ""
echo "Stopping Salt Master service."
systemctl stop salt-master
echo ""
echo "Checking for Salt Master and Minion updates."
upgrade_check_salt
# Does Salt need to be upgraded? If so, update it.
if [ "$UPGRADESALT" == "1" ]; then
echo "Upgrading Salt"
# Update the repo files so it can actually upgrade
if [ $is_airgap -eq 0 ]; then
update_centos_repo
yum clean all
fi
upgrade_salt
fi
echo "Checking if Salt was upgraded."
echo ""
# Check that Salt was upgraded
if [[ $(salt --versions-report | grep Salt: | awk {'print $2'}) != "$NEWSALTVERSION" ]]; then
echo "Salt upgrade failed. Check for indicators of failure in $SOUP_LOG."
echo "Once the issue is resolved, run soup again."
echo "Exiting."
echo ""
exit 1
else
echo "Salt upgrade success."
echo ""
fi
echo "Making pillar changes."
pillar_changes
echo ""
echo ""
echo "Updating dockers to $NEWVERSION."
update_dockers
# Only update the repo if it's airgap and Salt wasn't upgraded above (that path already refreshed the repo)
if [ $is_airgap -eq 0 ]; then
if [[ $is_airgap -eq 0 ]] && [[ "$UPGRADESALT" != "1" ]]; then
update_centos_repo
fi
@@ -542,9 +503,19 @@ echo ""
echo "Starting Salt Master service."
systemctl start salt-master
# Only regenerate osquery packages if Fleet is enabled
FLEET_MANAGER=$(lookup_pillar fleet_manager)
FLEET_NODE=$(lookup_pillar fleet_node)
if [[ "$FLEET_MANAGER" == "True" || "$FLEET_NODE" == "True" ]]; then
echo ""
echo "Regenerating Osquery Packages.... This will take several minutes."
salt-call state.apply fleet.event_gen-packages -l info queue=True
echo ""
fi
echo ""
echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
highstate
salt-call state.highstate -l info queue=True
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
@@ -557,18 +528,23 @@ masterunlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
highstate
echo "Running a highstate. This could take several minutes."
salt-call state.highstate -l info queue=True
playbook
unmount_update
SALTUPGRADED="True"
if [[ "$SALTUPGRADED" == "True" ]]; then
if [ "$UPGRADESALT" == "1" ]; then
echo ""
echo "Upgrading Salt on the remaining Security Onion nodes from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
salt -C 'not *_eval and not *_helix and not *_manager and not *_managersearch and not *_standalone' -b $BATCHSIZE state.apply salt.minion
if [ $is_airgap -eq 0 ]; then
salt -C 'not *_eval and not *_helix and not *_manager and not *_managersearch and not *_standalone' cmd.run "yum clean all"
fi
salt -C 'not *_eval and not *_helix and not *_manager and not *_managersearch and not *_standalone' -b $BATCHSIZE state.apply salt.minion queue=True
echo ""
fi
check_sudoers
}
main "$@" | tee /dev/fd/3
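
The closing pipe relies on the exec 3>&1 1>${SOUP_LOG} 2>&1 line near the top of the script: fd 3 keeps a copy of the original console while fds 1 and 2 go to the log, so tee /dev/fd/3 writes to both. A minimal standalone sketch of the same pattern:

```bash
# fd 3 = original console; fd 1/2 = log file; tee duplicates to both.
exec 3>&1 1>/tmp/demo.log 2>&1
echo "this line lands in /tmp/demo.log and on the console" | tee /dev/fd/3
```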

View File

@@ -1,2 +1,27 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
APP=close
lf=/tmp/$APP-pidLockFile
# create empty lock file if none exists
cat /dev/null >> $lf
read lastPID < $lf
# if lastPID is not null and a process with that pid exists, exit
[ -n "$lastPID" ] && [ -d "/proc/$lastPID" ] && exit
echo $$ > $lf
/usr/sbin/so-curator-closed-delete > /dev/null 2>&1
for action in so-zeek-close so-beats-close so-firewall-close so-ids-close so-import-close so-osquery-close so-ossec-close so-strelka-close so-syslog-close; do
  docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/${action}.yml > /dev/null 2>&1
done
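
The PID lock above is advisory (and naive about PID reuse) but cheap and adequate for a cron job. A quick hand test, assuming the script installs as /usr/sbin/so-curator-close (the diff does not show the filename):

```bash
# Hypothetical hand test: the second copy sees the first PID alive in /proc
# and exits before doing any work.
/usr/sbin/so-curator-close & /usr/sbin/so-curator-close; wait
```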

View File

@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018 Security Onion Solutions, LLC
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -34,6 +34,13 @@
#fi
# Avoid starting multiple instances
if ! pgrep -f "so-curator-closed-delete-delete" >/dev/null; then
/usr/sbin/so-curator-closed-delete-delete
fi
APP=closeddelete
lf=/tmp/$APP-pidLockFile
# create empty lock file if none exists
cat /dev/null >> $lf
read lastPID < $lf
# if lastPID is not null and a process with that pid exists, exit
[ -n "$lastPID" ] && [ -d "/proc/$lastPID" ] && exit
echo $$ > $lf
/usr/sbin/so-curator-closed-delete-delete

View File

@@ -26,39 +26,34 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#. /usr/sbin/so-elastic-common
#. /etc/nsm/securityonion.conf
LOG="/opt/so/log/curator/so-curator-closed-delete.log"
overlimit() {
[[ $(du -hs --block-size=1GB /nsm/elasticsearch/nodes | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]]
}
closedindices() {
INDICES=$(curl -s -k {% if grains['role'] in ['so-node','so-heavynode'] %}https://{% endif %}{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed 2> /dev/null)
[ $? -eq 1 ] && return 1
echo ${INDICES} | grep -q -E "(logstash-|so-)"
}
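
The expand_wildcards=closed query returns one bare index name per line, which makes the grep's behavior easy to see in a hand run; a sketch assuming a local node on 9200 and the hypothetical index names shown:

```bash
# Example run (host/port and index names are assumptions):
#   so-ids-2020.11.01
#   logstash-ossec-2020.11.02
curl -s 'localhost:9200/_cat/indices?h=index&expand_wildcards=closed' | grep -E '(logstash-|so-)'
```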
# Check for 2 conditions:
# 1. Are Elasticsearch indices using more disk space than LOG_SIZE_LIMIT?
# 2. Are there any closed logstash- or so- indices that we can delete?
# 2. Are there any closed indices that we can delete?
# If both conditions are true, keep on looping until one of the conditions is false.
while [[ $(du -hs --block-size=1GB /nsm/elasticsearch/nodes | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]] &&
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E " close (logstash-|so-)" > /dev/null; do
{% else %}
curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E " close (logstash-|so-)" > /dev/null; do
{% endif %}
while overlimit && closedindices; do
# We need to determine OLDEST_INDEX.
# First, get the list of closed indices that are prefixed with "logstash-" or "so-".
# For example: logstash-ids-YYYY.MM.DD
# We need to determine OLDEST_INDEX:
# First, get the list of closed indices using _cat/indices?h=index\&expand_wildcards=closed.
# Then, sort by date by telling sort to use hyphen as delimiter and then sort on the third field.
# Finally, select the first entry in that sorted list.
{% if grains['role'] in ['so-node','so-heavynode'] %}
OLDEST_INDEX=$(curl -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E " close (logstash-|so-)" | awk '{print $2}' | sort -t- -k3 | head -1)
{% else %}
OLDEST_INDEX=$(curl -s {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices | grep -E " close (logstash-|so-)" | awk '{print $2}' | sort -t- -k3 | head -1)
{% endif %}
OLDEST_INDEX=$(curl -s -k {% if grains['role'] in ['so-node','so-heavynode'] %}https://{% endif %}{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed | grep -E "(logstash-|so-)" | sort -t- -k3 | head -1)
# Now that we've determined OLDEST_INDEX, ask Elasticsearch to delete it.
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl -XDELETE -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
{% else %}
curl -XDELETE {{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
{% endif %}
curl -XDELETE -k {% if grains['role'] in ['so-node','so-heavynode'] %}https://{% endif %}{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
# Finally, write a log entry that says we deleted it.
echo "$(date) - Used disk space exceeds LOG_SIZE_LIMIT ({{LOG_SIZE_LIMIT}} GB) - Index ${OLDEST_INDEX} deleted ..." >> ${LOG}

View File

@@ -1,2 +1,27 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
APP=delete
lf=/tmp/$APP-pidLockFile
# create empty lock file if none exists
cat /dev/null >> $lf
read lastPID < $lf
# if lastPID is not null and a process with that pid exists, exit
[ -n "$lastPID" ] && [ -d "/proc/$lastPID" ] && exit
echo $$ > $lf
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/delete.yml > /dev/null 2>&1

View File

@@ -127,6 +127,12 @@ so-curator:
- /opt/so/conf/curator/curator.yml:/etc/curator/config/curator.yml:ro
- /opt/so/conf/curator/action/:/etc/curator/action:ro
- /opt/so/log/curator:/var/log/curator:rw
append_so-curator_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-curator
# Begin Curator Cron Jobs
# Close

View File

@@ -1,6 +1,6 @@
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% set OLDVERSIONS = ['2.0.0-rc.1','2.0.1-rc.1','2.0.2-rc.1','2.0.3-rc.1','2.1.0-rc.2','2.2.0-rc.3','2.3.0']%}
{% set OLDVERSIONS = ['2.0.0-rc.1','2.0.1-rc.1','2.0.2-rc.1','2.0.3-rc.1','2.1.0-rc.2','2.2.0-rc.3','2.3.0','2.3.1']%}
{% for VERSION in OLDVERSIONS %}
remove_images_{{ VERSION }}:

View File

@@ -43,19 +43,24 @@ dstatslogdir:
so-domainstatsimage:
cmd.run:
- name: docker pull --disable-content-trust=false docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
- name: docker pull {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-domainstats:{{ VERSION }}
so-domainstats:
docker_container.running:
- require:
- so-domainstatsimage
- image: docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-domainstats:{{ VERSION }}
- hostname: domainstats
- name: so-domainstats
- user: domainstats
- binds:
- /opt/so/log/domainstats:/var/log/domain_stats
append_so-domainstats_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-domainstats
{% else %}
domainstats_state_not_allowed:

View File

@@ -16,7 +16,7 @@ class PlaybookESAlerter(Alerter):
today = strftime("%Y.%m.%d", gmtime())
timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S", gmtime())
headers = {"Content-Type": "application/json"}
payload = {"rule": { "name": self.rule['play_title'],"uuid": self.rule['play_id'],"category": self.rule['rule.category']},"event":{ "severity": self.rule['event.severity'],"module": self.rule['event.module'],"dataset": self.rule['event.dataset'],"severity_label": self.rule['sigma_level']},"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"event_data": match, "@timestamp": timestamp}
payload = {"rule": { "name": self.rule['play_title'],"case_template": self.rule['play_id'],"uuid": self.rule['play_id'],"category": self.rule['rule.category']},"event":{ "severity": self.rule['event.severity'],"module": self.rule['event.module'],"dataset": self.rule['event.dataset'],"severity_label": self.rule['sigma_level']},"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"event_data": match, "@timestamp": timestamp}
url = f"http://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False)

View File

@@ -121,6 +121,12 @@ so-elastalert:
- {{MANAGER_URL}}:{{MANAGER_IP}}
- require:
- module: wait_for_elasticsearch
append_so-elastalert_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-elastalert
{% endif %}
{% else %}

View File

@@ -6,7 +6,7 @@
{ "gsub": { "field": "message2.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } },
{ "rename": { "if": "ctx.message2.columns?.eventid != null", "field": "message2.columns", "target_field": "winlog", "ignore_missing": true } },
{ "json": { "field": "winlog.data", "target_field": "temp", "ignore_failure": true } },
{ "rename": { "field": "temp.Data", "target_field": "winlog.event_data", "ignore_missing": true } },
{ "rename": { "field": "temp.EventData", "target_field": "winlog.event_data", "ignore_missing": true } },
{ "rename": { "field": "winlog.source", "target_field": "winlog.channel", "ignore_missing": true } },
{ "rename": { "field": "winlog.eventid", "target_field": "winlog.event_id", "ignore_missing": true } },
{ "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },

View File

@@ -6,15 +6,27 @@
{ "rename": { "field": "message2.scan", "target_field": "scan", "ignore_missing": true } },
{ "rename": { "field": "message2.request", "target_field": "request", "ignore_missing": true } },
{ "rename": { "field": "scan.hash", "target_field": "hash", "ignore_missing": true } },
{ "rename": { "field": "scan.exiftool", "target_field": "exiftool", "ignore_missing": true } },
{ "grok": { "if": "ctx.request?.attributes?.filename != null", "field": "request.attributes.filename", "patterns": ["-%{WORD:log.id.fuid}-"], "ignore_failure": true } },
{ "foreach":
{
"if": "ctx.scan?.exiftool?.keys !=null",
"field": "scan.exiftool.keys",
"processor":{
"if": "ctx.exiftool?.keys !=null",
"field": "exiftool.keys",
"processor": {
"append": {
"field": "scan.exiftool",
"value": "{{_ingest._value.key}}={{_ingest._value.value}}"
}
}
}
},
{ "foreach":
{
"if": "ctx.exiftool?.keys !=null",
"field": "exiftool.keys",
"processor": {
"set": {
"field": "scan.exiftool.{{_ingest._value.key}}",
"field": "exiftool.{{_ingest._value.key}}",
"value": "{{_ingest._value.value}}"
}
}
@@ -32,6 +44,14 @@
}
}
},
{ "set": { "if": "ctx.exiftool?.SourceFile != null", "field": "file.source", "value": "{{exiftool.SourceFile}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FilePermissions != null", "field": "file.permissions", "value": "{{exiftool.FilePermissions}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FileName != null", "field": "file.name", "value": "{{exiftool.FileName}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FileModifyDate != null", "field": "file.mtime", "value": "{{exiftool.FileModifyDate}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FileAccessDate != null", "field": "file.accessed", "value": "{{exiftool.FileAccessDate}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FileInodeChangeDate != null", "field": "file.ctime", "value": "{{exiftool.FileInodeChangeDate}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.FileDirectory != null", "field": "file.directory", "value": "{{exiftool.FileDirectory}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.Subsystem != null", "field": "host.subsystem", "value": "{{exiftool.Subsystem}}", "ignore_failure": true }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "rule.name", "value": "{{scan.yara.matches.0}}" }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "dataset", "value": "alert", "override": true }},
{ "rename": { "field": "file.flavors.mime", "target_field": "file.mime_type", "ignore_missing": true }},
@@ -42,7 +62,8 @@
{ "set": { "if": "ctx.rule?.score != null && ctx.rule?.score >= 70 && ctx.rule?.score <=89", "field": "event.severity", "value": 3, "override": true } },
{ "set": { "if": "ctx.rule?.score != null && ctx.rule?.score >= 90", "field": "event.severity", "value": 4, "override": true } },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" }},
{ "remove": { "field": ["host", "path", "message", "scan.exiftool.keys", "scan.yara.meta"], "ignore_missing": true } },
{ "convert" : { "field" : "scan.exiftool","type": "string", "ignore_missing":true }},
{ "remove": { "field": ["host", "path", "message", "exiftool", "scan.yara.meta"], "ignore_missing": true } },
{ "pipeline": { "name": "common" } }
]
}
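
For clarity, the two foreach processors above reshape the exiftool key/value list in two different ways before the convert processor stringifies scan.exiftool for the strict text mapping. The same transforms, sketched with jq over a hypothetical payload:

```bash
# Hypothetical exiftool.keys payload (shape assumed from the processors above).
keys='[{"key":"FileName","value":"a.exe"},{"key":"FileSize","value":"12345"}]'
echo "$keys" | jq 'map("\(.key)=\(.value)")'     # append foreach: ["FileName=a.exe", ...]
echo "$keys" | jq 'map({(.key): .value}) | add'  # set foreach: {"FileName":"a.exe", ...}
```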

View File

@@ -12,9 +12,25 @@
"ignore_failure": true
}
},
{ "grok": { "field": "message", "patterns": ["<%{INT:syslog.priority}>%{DATA:syslog.timestamp} %{WORD:source.application}: %{GREEDYDATA:real_message}"], "ignore_failure": false } },
{ "set": { "if": "ctx.source.application == 'filterlog'", "field": "dataset", "value": "firewall" } },
{ "pipeline": { "if": "ctx.dataset == 'firewall'", "name": "filterlog" } },
{
"grok":
{
"field": "message",
"patterns": [
"^<%{INT:syslog.priority}>%{DATA:syslog.timestamp} %{WORD:source.application}: %{GREEDYDATA:real_message}$",
"^%{SYSLOGTIMESTAMP:syslog.timestamp} %{SYSLOGHOST:syslog.host} %{SYSLOGPROG:syslog.program}: CEF:0\\|%{DATA:vendor}\\|%{DATA:product}\\|%{GREEDYDATA:message2}$"
],
"ignore_failure": true
}
},
{ "set": { "if": "ctx.source?.application == 'filterlog'", "field": "dataset", "value": "firewall", "ignore_failure": true } },
{ "set": { "if": "ctx.vendor != null", "field": "module", "value": "{{ vendor }}", "ignore_failure": true } },
{ "set": { "if": "ctx.product != null", "field": "dataset", "value": "{{ product }}", "ignore_failure": true } },
{ "set": { "field": "ingest.timestamp", "value": "{{ @timestamp }}" } },
{ "date": { "if": "ctx.syslog?.timestamp != null", "field": "syslog.timestamp", "target_field": "@timestamp", "formats": ["MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601", "UNIX"], "ignore_failure": true } },
{ "remove": { "field": ["pid", "program"], "ignore_missing": true, "ignore_failure": true } },
{ "pipeline": { "if": "ctx.vendor != null && ctx.product != null", "name": "{{ vendor }}.{{ product }}", "ignore_failure": true } },
{ "pipeline": { "if": "ctx.dataset == 'firewall'", "name": "filterlog", "ignore_failure": true } },
{ "pipeline": { "name": "common" } }
]
}
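
A hedged smoke test for the new grok patterns via the ingest simulate API, assuming this file installs as a pipeline named syslog and that a local node answers on 9200:

```bash
# Simulate the pipeline against a plain syslog line (message content is made up).
curl -s -XPOST 'localhost:9200/_ingest/pipeline/syslog/_simulate' \
  -H 'Content-Type: application/json' \
  -d '{"docs":[{"_source":{"message":"<134>Nov 19 10:02:33 filterlog: 5,,,match,block,in"}}]}'
```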

View File

@@ -30,40 +30,40 @@
{ "rename": { "field": "winlog.event_data.DestinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationIp", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationPort", "target_field": "destination.port", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.image", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.image", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Image", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.processID", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ProcessID", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.processGuid", "target_field": "process.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.processID", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ProcessId", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.processGuid", "target_field": "process.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ProcessGuid", "target_field": "process.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.commandLine", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.commandLine", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.CommandLine", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.currentDirectory", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.currentDirectory", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.CurrentDirectory", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.description", "target_field": "process.pe.description", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.description", "target_field": "process.pe.description", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Description", "target_field": "process.pe.description", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.product", "target_field": "process.pe.product", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.product", "target_field": "process.pe.product", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Product", "target_field": "process.pe.product", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.company", "target_field": "process.pe.company", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.company", "target_field": "process.pe.company", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Company", "target_field": "process.pe.company", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.originalFileName", "target_field": "process.pe.original_file_name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.originalFileName", "target_field": "process.pe.original_file_name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.OriginalFileName", "target_field": "process.pe.original_file_name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.fileVersion", "target_field": "process.pe.file_version", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.fileVersion", "target_field": "process.pe.file_version", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.FileVersion", "target_field": "process.pe.file_version", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentCommandLine", "target_field": "process.parent.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentCommandLine", "target_field": "process.parent.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentCommandLine", "target_field": "process.parent.command_line", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentImage", "target_field": "process.parent.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentImage", "target_field": "process.parent.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentImage", "target_field": "process.parent.executable", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentProcessGuid", "target_field": "process.parent.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentProcessGuid", "target_field": "process.parent.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentProcessGuid", "target_field": "process.parent.entity_id", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentProcessId", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.parentProcessId", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.ParentProcessId", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Protocol", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.Protocol", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.User", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourceHostname", "target_field": "source.hostname", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourceIp", "target_field": "source.ip", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourcePort", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.targetFilename", "target_field": "file.target", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.TargetFilename", "target_field": "file.target", "ignore_missing": true } }
{ "rename": { "field": "winlog.event_data.TargetFilename", "target_field": "file.target", "ignore_missing": true } }
]
}

View File

@@ -28,9 +28,9 @@ COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
{% if grains['role'] in ['so-node','so-heavynode'] %}
curl ${ELASTICSEARCH_AUTH} -k --output /dev/null --silent --head --fail https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
curl ${ELASTICSEARCH_AUTH} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{% else %}
curl ${ELASTICSEARCH_AUTH} --output /dev/null --silent --head --fail http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
curl ${ELASTICSEARCH_AUTH} --output /dev/null --silent --head --fail -L http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{% endif %}
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
@@ -52,9 +52,9 @@ cd ${ELASTICSEARCH_INGEST_PIPELINES}
echo "Loading pipelines..."
{% if grains['role'] in ['so-node','so-heavynode'] %}
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -k -XPUT https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -k -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
{% else %}
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -XPUT http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -XPUT -L http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
{% endif %}
echo

View File

@@ -215,13 +215,17 @@ so-elasticsearch:
- /etc/pki/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
- /opt/so/conf/elasticsearch/sotls.yml:/usr/share/elasticsearch/config/sotls.yml:ro
- watch:
- file: cacertz
- file: esyml
- file: esingestconf
- file: so-elasticsearch-pipelines-file
append_so-elasticsearch_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-elasticsearch
so-elasticsearch-pipelines-file:
file.managed:
- name: /opt/so/conf/elasticsearch/so-elasticsearch-pipelines

View File

@@ -379,9 +379,14 @@
}
}
},
"scan":{
"scan":{
"type":"object",
"dynamic": true
"dynamic": true,
"properties":{
"exiftool":{
"type":"text"
}
}
},
"server":{
"type":"object",

View File

@@ -115,7 +115,7 @@ filebeat.inputs:
fields: ["source", "prospector", "input", "offset", "beat"]
fields_under_root: true
clean_removed: false
clean_removed: true
close_removed: false
- type: log

View File

@@ -58,8 +58,8 @@ filebeatconfsync:
file.managed:
- name: /opt/so/conf/filebeat/etc/filebeat.yml
- source: salt://filebeat/etc/filebeat.yml
- user: 0
- group: 0
- user: root
- group: root
- template: jinja
- defaults:
INPUTS: {{ salt['pillar.get']('filebeat:config:inputs', {}) }}
@@ -86,6 +86,11 @@ so-filebeat:
- watch:
- file: /opt/so/conf/filebeat/etc/filebeat.yml
append_so-filebeat_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-filebeat
{% else %}
filebeat_state_not_allowed:

View File

@@ -1,3 +1,4 @@
{%- set DNET = salt['pillar.get']('global:dockernet', '172.17.0.0') %}
firewall:
hostgroups:
anywhere:
@@ -9,7 +10,7 @@ firewall:
ips:
delete:
insert:
- 172.17.0.0/24
- {{ DNET }}/24
localhost:
ips:
delete:

View File

@@ -1,4 +1,10 @@
{% set ENROLLSECRET = salt['cmd.run']('docker exec so-fleet fleetctl get enroll-secret default') %}
{% set FLEETMANAGER = salt['pillar.get']('global:fleet_manager', False) %}
{% set FLEETNODE = salt['pillar.get']('global:fleet_node', False) %}
{% if FLEETNODE or FLEETMANAGER %}
{% set ENROLLSECRET = salt['cmd.run']('docker exec so-fleet fleetctl get enroll-secret default') %}
{% else %}
{% set ENROLLSECRET = '' %}
{% endif %}
{% set MAININT = salt['pillar.get']('host:mainint') %}
{% set MAINIP = salt['grains.get']('ip_interfaces').get(MAININT)[0] %}

View File

@@ -12,6 +12,8 @@
{% else %}
{% set MAINIP = salt['pillar.get']('global:managerip') %}
{% endif %}
{% set DNET = salt['pillar.get']('global:dockernet', '172.17.0.0') %}
include:
- mysql
@@ -71,7 +73,7 @@ fleetdb:
fleetdbuser:
mysql_user.present:
- host: 172.17.0.0/255.255.0.0
- host: {{ DNET }}/255.255.0.0
- password: {{ FLEETPASS }}
- connection_host: {{ MAINIP }}
- connection_port: 3306
@@ -85,7 +87,7 @@ fleetdbpriv:
- grant: all privileges
- database: fleet.*
- user: fleetdbuser
- host: 172.17.0.0/255.255.0.0
- host: {{ DNET }}/255.255.0.0
- connection_host: {{ MAINIP }}
- connection_port: 3306
- connection_user: root
@@ -132,4 +134,9 @@ so-fleet:
- watch:
- /opt/so/conf/fleet/etc
append_so-fleet_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-fleet
{% endif %}

View File

@@ -43,19 +43,24 @@ freqlogdir:
so-freqimage:
cmd.run:
- name: docker pull --disable-content-trust=false docker.io/{{ IMAGEREPO }}/so-freqserver:HH1.0.3
- name: docker pull {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-freqserver:{{ VERSION }}
so-freq:
docker_container.running:
- require:
- so-freqimage
- image: docker.io/{{ IMAGEREPO }}/so-freqserver:HH1.0.3
- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-freqserver:{{ VERSION }}
- hostname: freqserver
- name: so-freqserver
- user: freqserver
- binds:
- /opt/so/log/freq_server:/var/log/freq_server:rw
append_so-freq_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-freq
{% else %}
freqserver_state_not_allowed:

View File

@@ -3565,7 +3565,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -3636,7 +3636,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3656,7 +3656,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4036,7 +4036,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -4084,7 +4084,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -4143,7 +4143,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -4214,7 +4214,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -4234,7 +4234,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4278,7 +4278,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -4298,7 +4298,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [

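The derivative to non_negative_derivative swap (and difference to non_negative_difference) keeps counter resets, e.g. after a telegraf or host restart, from plotting as huge negative spikes. A hand comparison, assuming the InfluxDB 1.x CLI inside the so-influxdb container and the default telegraf database:

```bash
# Counter resets make derivative() dip negative; non_negative_derivative()
# drops those samples instead. (Database name and container are assumptions.)
docker exec so-influxdb influx -database telegraf -execute \
  'SELECT non_negative_derivative(mean("bytes_recv"),1s) FROM "net" WHERE time > now() - 5m GROUP BY time(10s)'
```
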
View File

@@ -1795,7 +1795,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -1860,7 +1860,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -1880,7 +1880,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -1924,7 +1924,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -1944,7 +1944,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2459,7 +2459,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2524,7 +2524,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2544,7 +2544,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2588,7 +2588,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -2608,7 +2608,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3168,7 +3168,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -3233,7 +3233,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3253,7 +3253,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3297,7 +3297,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -3317,7 +3317,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3463,7 +3463,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -3510,7 +3510,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -3700,7 +3700,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -3765,7 +3765,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3785,7 +3785,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3829,7 +3829,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -3849,7 +3849,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [

View File

@@ -1799,7 +1799,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -1864,7 +1864,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -1884,7 +1884,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -1928,7 +1928,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -1948,7 +1948,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2546,7 +2546,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2611,7 +2611,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2631,7 +2631,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2675,7 +2675,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -2695,7 +2695,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3299,7 +3299,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT derivative(mean(\"rx_bytes\"), 1s) *8 FROM \"docker_container_net\" WHERE (\"host\" = '{{ SERVERNAME }}' AND \"container_name\" = 'so-influxdb') AND $timeFilter GROUP BY time($__interval) fill(null)",
"query": "SELECT non_negative_derivative(mean(\"rx_bytes\"), 1s) *8 FROM \"docker_container_net\" WHERE (\"host\" = '{{ SERVERNAME }}' AND \"container_name\" = 'so-influxdb') AND $timeFilter GROUP BY time($__interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3319,7 +3319,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3380,7 +3380,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3785,7 +3785,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3846,7 +3846,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4164,7 +4164,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -4211,7 +4211,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],

View File

@@ -2135,7 +2135,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -2182,7 +2182,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -2781,7 +2781,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2846,7 +2846,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2866,7 +2866,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2910,7 +2910,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -2930,7 +2930,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3353,7 +3353,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -3418,7 +3418,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3438,7 +3438,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3482,7 +3482,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -3502,7 +3502,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [

View File

@@ -2729,7 +2729,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2800,7 +2800,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2820,7 +2820,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2864,7 +2864,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -2884,7 +2884,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3311,7 +3311,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -3359,7 +3359,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -3418,7 +3418,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -3489,7 +3489,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -3509,7 +3509,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4085,7 +4085,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -4156,7 +4156,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -4176,7 +4176,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4220,7 +4220,7 @@
"measurement": "docker_container_net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -4240,7 +4240,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [

View File

@@ -2010,7 +2010,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2081,7 +2081,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2101,7 +2101,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2145,7 +2145,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_sent\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "B",
"resultFormat": "time_series",
@@ -2165,7 +2165,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -2794,7 +2794,7 @@
"aliasColors": {
"InBound": "#629E51",
"OutBound": "#5195CE",
"net.derivative": "#1F78C1"
"net.non_negative_derivative": "#1F78C1"
},
"bars": false,
"dashLength": 10,
@@ -2865,7 +2865,7 @@
"measurement": "net",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT 8 * derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"query": "SELECT 8 * non_negative_derivative(mean(\"bytes_recv\"),1s) FROM \"net\" WHERE \"host\" = 'JumpHost' AND \"interface\" = 'eth0' AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
@@ -2885,7 +2885,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3466,7 +3466,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -3527,7 +3527,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4102,7 +4102,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4163,7 +4163,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4854,7 +4854,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -4915,7 +4915,7 @@
"params": [
"1s"
],
"type": "derivative"
"type": "non_negative_derivative"
},
{
"params": [
@@ -5202,7 +5202,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],
@@ -5250,7 +5250,7 @@
},
{
"params": [],
"type": "difference"
"type": "non_negative_difference"
}
]
],

View File

@@ -236,6 +236,11 @@ so-grafana:
- watch:
- file: /opt/so/conf/grafana/*
append_so-grafana_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-grafana
{% endif %}
{% else %}
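
The new `append_so-grafana_so-status.conf` state (and its counterparts in the other service states below) registers each container with `so-status`. Salt's `file.append` only adds the line if it is not already present, so repeated highstate runs stay idempotent. A quick sanity check on a node where these states have applied (illustrative output, not an exhaustive list):

```
cat /opt/so/conf/so-status/so-status.conf
# so-grafana
# so-idstools
# so-influxdb
# ...
```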

View File

@@ -58,11 +58,12 @@ rulesdir:
- makedirs: True
synclocalnidsrules:
file.managed:
- name: /opt/so/rules/nids/local.rules
- source: salt://idstools/local.rules
file.recurse:
- name: /opt/so/rules/nids/
- source: salt://idstools/
- user: 939
- group: 939
- include_pat: 'E@.rules'
so-idstools:
docker_container.running:
@@ -75,6 +76,11 @@ so-idstools:
- watch:
- file: idstoolsetcsync
append_so-idstools_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-idstools
{% else %}
idstools_state_not_allowed:
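
Switching `synclocalnidsrules` from `file.managed` to `file.recurse` syncs every rules file under `salt://idstools/` into `/opt/so/rules/nids/` rather than only `local.rules`; `E@.rules` is Salt's regex form of `include_pat`, restricting the copy to rules files, with ownership forced to uid/gid 939. A dry run to preview the change, a sketch assuming `salt-call` is available locally on the node:

```
# test=True shows what file.recurse would copy without touching anything
sudo salt-call state.sls_id synclocalnidsrules idstools test=True

# After a real run, confirm the synced files carry uid/gid 939
ls -ln /opt/so/rules/nids/
```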

View File

@@ -54,6 +54,11 @@ so-influxdb:
- watch:
- file: influxdbconf
append_so-influxdb_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-influxdb
{% endif %}
{% else %}

View File

@@ -4,7 +4,7 @@ echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 30 ]]; do
curl --output /dev/null --silent --head --fail http://{{ ES }}:9200
curl --output /dev/null --silent --head --fail -L http://{{ ES }}:9200
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
@@ -28,7 +28,7 @@ MAX_WAIT=240
# Check to see if Kibana is available
wait_step=0
until curl -s -XGET http://{{ ES }}:5601 > /dev/null ; do
until curl -s -XGET -L http://{{ ES }}:5601 > /dev/null ; do
wait_step=$(( ${wait_step} + 1 ))
echo "Waiting on Kibana...Attempt #$wait_step"
if [ ${wait_step} -gt ${MAX_WAIT} ]; then
@@ -42,12 +42,12 @@ wait_step=0
# Apply Kibana template
echo
echo "Applying Kibana template..."
curl -s -XPUT http://{{ ES }}:9200/_template/kibana \
curl -s -XPUT -L http://{{ ES }}:9200/_template/kibana \
-H 'Content-Type: application/json' \
-d'{"index_patterns" : ".kibana", "settings": { "number_of_shards" : 1, "number_of_replicas" : 0 }, "mappings" : { "search": {"properties": {"hits": {"type": "integer"}, "version": {"type": "integer"}}}}}'
echo
curl -s -XPUT "{{ ES }}:9200/.kibana/_settings" \
curl -s -XPUT -L "{{ ES }}:9200/.kibana/_settings" \
-H 'Content-Type: application/json' \
-d'{"index" : {"number_of_replicas" : 0}}'
echo
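
Each health check and template push in this script gains curl's `-L` flag, which follows HTTP 3xx redirects. Without it, an Elasticsearch or Kibana endpoint that answers with a redirect (for example, one sitting behind a proxy) is never actually probed: curl stops at the redirect response instead of the final target. A quick illustration against a hypothetical redirecting host:

```
# Without -L, curl reports the redirect itself:
curl -s -o /dev/null -w '%{http_code}\n' http://es.example:9200      # e.g. 301
# With -L, curl follows it and reports the final target's status:
curl -s -o /dev/null -w '%{http_code}\n' -L http://es.example:9200   # e.g. 200
```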

View File

@@ -90,6 +90,11 @@ so-kibana:
- port_bindings:
- 0.0.0.0:5601:5601
append_so-kibana_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-kibana
kibanadashtemplate:
file.managed:
- name: /opt/so/conf/kibana/saved_objects.ndjson.template

View File

@@ -173,6 +173,7 @@ so-logstash:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
- /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
- /opt/so/conf/logstash/etc/certs:/usr/share/logstash/certs:ro
{% if grains['role'] == 'so-heavynode' %}
- /etc/ssl/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
{% else %}
@@ -201,6 +202,11 @@ so-logstash:
- file: es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}
{% endfor %}
append_so-logstash_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-logstash
{% else %}
logstash_state_not_allowed:
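
The added bind mount exposes host certificates from `/opt/so/conf/logstash/etc/certs` read-only at `/usr/share/logstash/certs` inside the container. One way to confirm the mount once the state has applied and `so-logstash` is running (a sketch using docker's Go-template output):

```
docker inspect \
  --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }} ({{ .Mode }}){{ "\n" }}{{ end }}' \
  so-logstash | grep certs
```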

View File

@@ -81,6 +81,11 @@ so-aptcacherng:
- /opt/so/log/aptcacher-ng:/var/log/apt-cacher-ng:rw
- /opt/so/conf/aptcacher-ng/etc/acng.conf:/etc/apt-cacher-ng/acng.conf:ro
append_so-aptcacherng_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-aptcacherng
{% endif %}
strelka_yara_update:

View File

@@ -62,6 +62,11 @@ so-minio:
- /etc/pki/minio.crt:/.minio/certs/public.crt:ro
- entrypoint: "/usr/bin/docker-entrypoint.sh server --certs-dir /.minio/certs --address :9595 /data"
append_so-minio_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-minio
{% else %}
minio_state_not_allowed:

View File

@@ -94,9 +94,20 @@ so-mysql:
- /opt/so/conf/mysql/etc
cmd.run:
- name: until nc -z {{ MAINIP }} 3306; do sleep 1; done
- timeout: 900
- timeout: 600
- onchanges:
- docker_container: so-mysql
module.run:
- so.mysql_conn:
- retry: 300
- onchanges:
- cmd: so-mysql
append_so-mysql_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-mysql
{% endif %}
{% else %}
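
Two changes here: the raw TCP wait on port 3306 drops from a 900-second to a 600-second timeout, and a new `module.run` gate calls the custom `so.mysql_conn` execution module (passing `retry: 300`) so the state only proceeds once MySQL is actually accepting connections, not merely listening. A rough manual equivalent, assuming the custom module is synced to the minion (hypothetical direct invocation):

```
# An open port does not guarantee MySQL is ready to serve queries;
# mirror the state's two-step check by hand:
until nc -z 192.0.2.10 3306; do sleep 1; done   # substitute your manager IP
sudo salt-call so.mysql_conn retry=300          # custom Security Onion module; assumed callable via salt-call
```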

Some files were not shown because too many files have changed in this diff