Merge remote-tracking branch 'remotes/origin/master' into feature/users

This commit is contained in:
m0duspwnens
2021-07-02 08:13:56 -04:00
142 changed files with 16416 additions and 1920 deletions

39
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,39 @@
# Contributing to Security Onion
### Questions, suggestions, and general comments
* Security Onion uses GitHub's [Discussions](https://github.com/Security-Onion-Solutions/securityonion/discussions) to provide a forum where the community and developers can interact as well as ask and answer questions.
### Reporting a bug
* The primary place to report unexpected behavior or possible bugs is the repo's [Discussions forum](https://github.com/Security-Onion-Solutions/securityonion/discussions).
* **If you are familiar with the current version of Security Onion and are confident you've discovered a bug**, first ensure there is not already an issue present by searching the open [issues](https://github.com/Security-Onion-Solutions/securityonion/issues). If there is, a thumbs up :+1: is a great way to show this bug is affecting you too.
* If an issue doesn't exist, [open a new one](https://github.com/Security-Onion-Solutions/securityonion/issues/new), following the directions in the issue template. This means including:
* **System information** and how Security Onion was installed
* **Log files** relevant to the bug report
* **Reproduction steps**
### Contributing code
* **All commits must be signed** with a valid key that has been added to your GitHub account. All commits should show the "**Verified**" tag when viewed on GitHub, as shown below:
<img src="./assets/images/verified-commit-1.png" width="450">
* If an issue does not already exist for the bug or feature for which you are submitting a pull request, [create one](https://github.com/Security-Onion-Solutions/securityonion/issues/new) with the relevant prefix. (**`FIX:`** for bug fixes, **`FEATURE:`** for new features.)
* Link the PR to the related issue, either using [keywords](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in the PR description, or [manually](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#manually-linking-a-pull-request-to-an-issue).
* **Pull requests should be opened against the `dev` branch of this repo**, and should clearly describe the problem and solution.
* Be sure you have tested your changes and are confident they will not break other parts of the product.
* See this document's [code styling and conventions section](#code-style-and-conventions) below to be sure your PR fits our code requirements prior to submitting.
### Code style and conventions
* **Keep code [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)**. For example, Bash code used by multiple scripts will likely best be added to <span style="white-space: nowrap;">[`so-common`](salt/common/tools/sbin/so-common)</span>.
* All new Bash code should pass [ShellCheck](https://www.shellcheck.net/) analysis. Where errors can be *safely* [ignored](https://github.com/koalaman/shellcheck/wiki/Ignore), the relevant disable directive should be accompanied by a brief explanation of why the error is being ignored (see the example after this list).
* **Ensure all YAML (this includes Salt states and pillars) is properly formatted**. The spec for YAML v1.2 can be found [here](https://yaml.org/spec/1.2/spec.html); however, there are numerous online resources with simpler descriptions of its formatting rules.
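As an illustration of the ShellCheck guidance above, a disable directive should carry a short justification comment. The variable and rule shown here are a hypothetical sketch, not taken from this repo:

```
#!/bin/bash
# shellcheck disable=SC2034  # VERSION is unused in this file but is consumed by scripts that source it
VERSION="2.3.60"
```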

2
HOTFIX
View File

@@ -1 +1 @@
SALTYSOUP

View File

@@ -1,14 +1,14 @@
## Security Onion 2.3.52
## Security Onion 2.3.60
Security Onion 2.3.52 is here!
Security Onion 2.3.60 is here!
## Screenshots
Alerts
![Alerts](https://raw.githubusercontent.com/security-onion-solutions/securityonion/master/screenshots/alerts-1.png)
![Alerts](./assets/images/screenshots/alerts-1.png)
Hunt
![Hunt](https://raw.githubusercontent.com/security-onion-solutions/securityonion/master/screenshots/hunt-1.png)
![Hunt](./assets/images/screenshots/hunt-1.png)
### Release Notes

21
SECURITY.md Normal file
View File

@@ -0,0 +1,21 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 2.x.x | :white_check_mark: |
| 16.04.x | :x: |
Security Onion 16.04 has reached End Of Life and is no longer supported.
## Reporting a Vulnerability
If you have any security concerns regarding Security Onion or believe you have uncovered a vulnerability, please follow these steps:
- send an email to security@securityonion.net
- include a description of the issue and steps to reproduce
- please use plain text format (no Word documents or PDF files)
- please do not disclose publicly until we have had sufficient time to resolve the issue
This security address should be used only for undisclosed vulnerabilities. Fixed issues and general questions on how to use Security Onion should be handled via the normal support channels.

View File

@@ -1,17 +1,17 @@
### 2.3.52 ISO image built on 2021/04/27
### 2.3.60 ISO image built on 2021/04/27
### Download and Verify
2.3.52 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.3.52.iso
2.3.60 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.3.60.iso
MD5: DF0CCCB0331780F472CC167AEAB55652
SHA1: 71FAE87E6C0AD99FCC27C50A5E5767D3F2332260
SHA256: 30E7C4206CC86E94D1657CBE420D2F41C28BC4CC63C51F27C448109EBAF09121
MD5: 0470325615C42C206B028EE37A1AD897
SHA1: 496E70BD529D3B8A02D0B32F68B8F7527C953612
SHA256: 417E34DFCD63D84A16FF2041DC712F02D9E0515C8B78BDF0EE1037DD13C32030
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.52.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.60.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
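In addition to the GPG signature verification shown below, the published 2.3.60 SHA256 value above can be compared against a locally computed hash. This is a supplementary check and not part of the documented procedure; it assumes the ISO has already been downloaded to the current directory:

```
sha256sum securityonion-2.3.60.iso
# Compare the printed hash (case-insensitive) against the published value:
# 417E34DFCD63D84A16FF2041DC712F02D9E0515C8B78BDF0EE1037DD13C32030
```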
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.52.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.60.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.3.52.iso
wget https://download.securityonion.net/file/securityonion/securityonion-2.3.60.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.3.52.iso.sig securityonion-2.3.52.iso
gpg --verify securityonion-2.3.60.iso.sig securityonion-2.3.60.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Sat 05 Jun 2021 06:56:04 PM EDT using RSA key ID FE507013
gpg: Signature made Thu 01 Jul 2021 10:59:24 AM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.

View File

@@ -1 +1 @@
2.3.52
2.3.60

View File

Image diff: Before 245 KiB, After 245 KiB (image content not shown)

View File

Image diff: Before 168 KiB, After 168 KiB (image content not shown)

Binary file not shown.

New image: After 24 KiB

View File

@@ -67,3 +67,7 @@ peer:
reactor:
- 'so/fleet':
- salt://reactor/fleet.sls
- 'salt/beacon/*/watch_sqlite_db//opt/so/conf/kratos/db/sqlite.db':
- salt://reactor/kratos.sls
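The referenced salt://reactor/kratos.sls file is not shown in this changeset. For context, a hypothetical sketch of a reactor responding to this beacon event; the state ID, target, and command are assumptions, loosely modeled on the `so-user sync` operation added later in this diff:

```
# salt://reactor/kratos.sls (hypothetical sketch, not the actual file)
sync_soc_users:
  local.cmd.run:
    - tgt: {{ data['id'] }}
    - args:
      - cmd: 'so-user sync'
```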

View File

@@ -7,6 +7,7 @@ logstash:
- so/9000_output_zeek.conf.jinja
- so/9002_output_import.conf.jinja
- so/9034_output_syslog.conf.jinja
- so/9050_output_filebeatmodules.conf.jinja
- so/9100_output_osquery.conf.jinja
- so/9400_output_suricata.conf.jinja
- so/9500_output_beats.conf.jinja

View File

@@ -23,6 +23,9 @@ base:
'*_manager or *_managersearch':
- match: compound
- data.*
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
- secrets
- global
- minions.{{ grains.id }}
@@ -39,6 +42,9 @@ base:
- secrets
- healthcheck.eval
- elasticsearch.eval
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
- global
- minions.{{ grains.id }}
@@ -47,6 +53,9 @@ base:
- logstash.manager
- logstash.search
- elasticsearch.search
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
- data.*
- zeeklogs
- secrets
@@ -60,6 +69,7 @@ base:
'*_heavynode':
- zeeklogs
- elasticsearch.auth
- global
- minions.{{ grains.id }}
@@ -81,6 +91,7 @@ base:
- logstash
- logstash.search
- elasticsearch.search
- elasticsearch.auth
- global
- minions.{{ grains.id }}
- data.nodestab
@@ -89,5 +100,8 @@ base:
- zeeklogs
- secrets
- elasticsearch.eval
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
- global
- minions.{{ grains.id }}

View File

@@ -52,5 +52,4 @@ zeek:
- frameworks/signatures/detect-windows-shells
redef:
- LogAscii::use_json = T;
- LogAscii::json_timestamps = JSON::TS_ISO8601;
- CaptureLoss::watch_interval = 5 mins;
- CaptureLoss::watch_interval = 5 mins;

View File

@@ -2,6 +2,7 @@
{% if sls in allowed_states %}
{% set role = grains.id.split('_') | last %}
{% from 'elasticsearch/auth.map.jinja' import ELASTICAUTH with context %}
# Remove variables.txt from /tmp - This is temp
rmvariablesfile:
@@ -95,7 +96,6 @@ commonpkgs:
- netcat
- python3-mysqldb
- sqlite3
- argon2
- libssl-dev
- python3-dateutil
- python3-m2crypto
@@ -128,7 +128,6 @@ commonpkgs:
- net-tools
- curl
- sqlite
- argon2
- mariadb-devel
- nmap-ncat
- python3
@@ -169,6 +168,14 @@ alwaysupdated:
Etc/UTC:
timezone.system
elastic_curl_config:
file.managed:
- name: /opt/so/conf/elasticsearch/curl.config
- source: salt://elasticsearch/curl.config
- mode: 600
- show_changes: False
- makedirs: True
# Sync some Utilities
utilsyncscripts:
file.recurse:
@@ -178,6 +185,10 @@ utilsyncscripts:
- file_mode: 755
- template: jinja
- source: salt://common/tools/sbin
- defaults:
ELASTICCURL: 'curl'
- context:
ELASTICCURL: {{ ELASTICAUTH.elasticcurl }}
{% if role in ['eval', 'standalone', 'sensor', 'heavynode'] %}
# Add sensor cleanup

0
salt/common/tools/sbin/so-airgap-hotfixapply Normal file → Executable file
View File

0
salt/common/tools/sbin/so-airgap-hotfixdownload Normal file → Executable file
View File

View File

@@ -17,4 +17,4 @@
. /usr/sbin/so-common
salt-call state.highstate
salt-call state.highstate -linfo

View File

@@ -88,19 +88,6 @@ add_interface_bond0() {
fi
}
check_airgap() {
# See if this is an airgap install
AIRGAP=$(cat /opt/so/saltstack/local/pillar/global.sls | grep airgap: | awk '{print $2}')
if [[ "$AIRGAP" == "True" ]]; then
is_airgap=0
UPDATE_DIR=/tmp/soagupdate/SecurityOnion
AGDOCKER=/tmp/soagupdate/docker
AGREPO=/tmp/soagupdate/Packages
else
is_airgap=1
fi
}
check_container() {
docker ps | grep "$1:" > /dev/null 2>&1
return $?
@@ -153,16 +140,16 @@ Do you agree to the terms of the Elastic License?
If so, type AGREE to accept the Elastic License and continue. Otherwise, press Enter to exit this program without making any changes.
EOM
AGREED=$(whiptail --title "Security Onion Setup" --inputbox \
"$message" 20 75 3>&1 1>&2 2>&3)
AGREED=$(whiptail --title "$whiptail_title" --inputbox \
"$message" 20 75 3>&1 1>&2 2>&3)
if [ "${AGREED^^}" = 'AGREE' ]; then
mkdir -p /opt/so/state
touch /opt/so/state/yeselastic.txt
else
echo "Starting in 2.3.40 you must accept the Elastic license if you want to run Security Onion."
exit 1
fi
if [ "${AGREED^^}" = 'AGREE' ]; then
mkdir -p /opt/so/state
touch /opt/so/state/yeselastic.txt
else
echo "Starting in 2.3.40 you must accept the Elastic license if you want to run Security Onion."
exit 1
fi
}
@@ -252,6 +239,7 @@ lookup_salt_value() {
key=$1
group=$2
kind=$3
output=${4:-newline_values_only}
if [ -z "$kind" ]; then
kind=pillar
@@ -261,7 +249,7 @@ lookup_salt_value() {
group=${group}:
fi
salt-call --no-color ${kind}.get ${group}${key} --out=newline_values_only
salt-call --no-color ${kind}.get ${group}${key} --out=${output}
}
lookup_pillar() {
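A short usage sketch of the new optional output argument; the default call is unchanged, and the JSON form mirrors the call added to so-user later in this changeset (the `elasticsearch:mainip` and `elasticsearch:auth` pillar keys are taken from elsewhere in this diff):

```
# Default behavior is unchanged (newline_values_only):
MAINIP=$(lookup_salt_value "mainip" "elasticsearch" "pillar")

# New: request an alternate Salt outputter, e.g. JSON for structured parsing with jq
authPillarJson=$(lookup_salt_value "auth" "elasticsearch" "pillar" "json")
```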
@@ -289,7 +277,7 @@ lookup_role() {
require_manager() {
if is_manager_node; then
echo "This is a manager, We can proceed."
echo "This is a manager, so we can proceed."
else
echo "Please run this command on the manager; the manager controls the grid."
exit 1
@@ -302,6 +290,7 @@ retry() {
cmd=$3
expectedOutput=$4
attempt=0
local exitcode=0
while [[ $attempt -lt $maxAttempts ]]; do
attempt=$((attempt+1))
echo "Executing command with retry support: $cmd"
@@ -321,7 +310,29 @@ retry() {
sleep $sleepDelay
done
echo "Command continues to fail; giving up."
return 1
return $exitcode
}
run_check_net_err() {
local cmd=$1
local err_msg=${2:-"Unknown error occured, please check /root/$WHATWOULDYOUSAYYAHDOHERE.log for details."} # Really need to rename that variable
local no_retry=$3
local exit_code
if [[ -z $no_retry ]]; then
retry 5 60 "$cmd"
exit_code=$?
else
eval "$cmd"
exit_code=$?
fi
if [[ $exit_code -ne 0 ]]; then
ERR_HANDLED=true
[[ -z $no_retry ]] || echo "Command failed with error $exit_code"
echo "$err_msg"
exit $exit_code
fi
}
set_os() {
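A usage sketch for the new helper, modeled on how so-docker-refresh calls it later in this changeset (the variables shown are the ones used there):

```
# Wrapped in retry logic by default (retry 5 60 "$cmd"):
run_check_net_err \
  "docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image" \
  "Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY"

# Pass any non-empty third argument to run the command only once (no retry wrapper):
run_check_net_err \
  "curl --retry 5 --retry-delay 60 -sSL https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS -o $SIGNPATH/KEYS" \
  "Could not pull signature key file, please ensure connectivity to https://raw.githubusercontent.com" \
  noretry
```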
@@ -486,13 +497,14 @@ wait_for_web_response() {
url=$1
expected=$2
maxAttempts=${3:-300}
curlcmd=${4:-curl}
logfile=/root/wait_for_web_response.log
truncate -s 0 "$logfile"
attempt=0
while [[ $attempt -lt $maxAttempts ]]; do
attempt=$((attempt+1))
echo "Waiting for value '$expected' at '$url' ($attempt/$maxAttempts)"
result=$(curl -ks -L $url)
result=$($curlcmd -ks -L $url)
exitcode=$?
echo "--------------------------------------------------" >> $logfile

View File

@@ -35,6 +35,7 @@ if [ ! -f $BACKUPFILE ]; then
{%- endfor %}
tar -rf $BACKUPFILE /etc/pki
tar -rf $BACKUPFILE /etc/salt
tar -rf $BACKUPFILE /opt/so/conf/kratos
fi

View File

@@ -32,19 +32,25 @@ def get_image_version(string) -> str:
ver = string.split(':')[-1]
if ver == 'latest':
# Version doesn't like "latest", so use a high semver
return '999999.9.9'
return '99999.9.9'
else:
try:
Version(ver)
except InvalidVersion:
# Strip the last substring following a hyphen for automated branches
ver = '-'.join(ver.split('-')[:-1])
# Also return a very high semver for any version
# with a dash in it since it will likely be a dev version of some kind
if '-' in ver:
return '999999.9.9'
return ver
def main(quiet):
client = docker.from_env()
# Prune old/stopped containers
if not quiet: print('Pruning old containers')
client.containers.prune()
image_list = client.images.list(filters={ 'dangling': False })
# Map list of image objects to flattened list of tags (format: "name:version")
@@ -72,9 +78,16 @@ def main(quiet):
for group in grouped_t_list[2:]:
for tag in group:
if not quiet: print(f'Removing image {tag}')
client.images.remove(tag)
except InvalidVersion as e:
print(f'so-{get_so_image_basename(t_list[0])}: {e.args[0]}', file=sys.stderr)
try:
client.images.remove(tag, force=True)
except docker.errors.ClientError as e:
print(f'Could not remove image {tag}, continuing...')
except (docker.errors.APIError, InvalidVersion) as e:
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
except Exception as e:
print('Unhandled exception occurred:')
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
if no_prunable and not quiet:
@@ -86,4 +99,4 @@ if __name__ == "__main__":
main_parser.add_argument('-q', '--quiet', action='store_const', const=True, required=False)
args = main_parser.parse_args(sys.argv[1:])
main(args.quiet)
main(args.quiet)

View File

@@ -145,9 +145,9 @@ EOF
rulename=$(echo ${raw_rulename,,} | sed 's/ /_/g')
cat << EOF >> "$rulename.yaml"
# Elasticsearch Host
es_host: elasticsearch
es_port: 9200
# Elasticsearch Host Override (optional)
# es_host: elasticsearch
# es_port: 9200
# (Required)
# Rule name, must be unique

View File

@@ -0,0 +1,67 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
if [ -f "/usr/sbin/so-common" ]; then
. /usr/sbin/so-common
fi
ES_AUTH_PILLAR=${ELASTIC_AUTH_PILLAR:-/opt/so/saltstack/local/pillar/elasticsearch/auth.sls}
ES_USERS_FILE=${ELASTIC_USERS_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users}
authEnable=$1
if ! grep -q "enabled: " "$ES_AUTH_PILLAR"; then
echo "Elastic auth pillar file is invalid. Unable to proceed."
exit 1
fi
function restart() {
if [[ -z "$ELASTIC_AUTH_SKIP_HIGHSTATE" ]]; then
echo "Elasticsearch on all affected minions will now be stopped and then restarted..."
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' cmd.run so-elastic-stop queue=True
echo "Applying highstate to all affected minions..."
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' state.highstate queue=True
fi
}
if [[ "$authEnable" == "true" ]]; then
if grep -q "enabled: False" "$ES_AUTH_PILLAR"; then
sed -i 's/enabled: False/enabled: True/g' "$ES_AUTH_PILLAR"
restart
echo "Elastic auth is now enabled."
if grep -q "argon" "$ES_USERS_FILE"; then
echo ""
echo "IMPORTANT: The following users will need to change their password, after logging into SOC, in order to access Kibana:"
grep argon "$ES_USERS_FILE" | cut -d ":" -f 1
fi
else
echo "Auth is already enabled."
fi
elif [[ "$authEnable" == "false" ]]; then
if grep -q "enabled: True" "$ES_AUTH_PILLAR"; then
sed -i 's/enabled: True/enabled: False/g' "$ES_AUTH_PILLAR"
restart
echo "Elastic auth is now disabled."
else
echo "Auth is already disabled."
fi
else
echo "Usage: $0 <true|false>"
echo ""
echo "Toggles Elastic authentication. Elasticsearch will be restarted on each affected minion."
echo ""
fi
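An example invocation based on the usage text above. The script name is assumed here, since the filename is not shown in this hunk; it should be run on the manager because it targets minions via salt:

```
# Enable Elastic authentication (Elasticsearch is restarted on each affected minion):
sudo so-elastic-auth true

# Disable it again:
sudo so-elastic-auth false
```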

View File

@@ -50,7 +50,7 @@ done
if [ $SKIP -ne 1 ]; then
# List indices
echo
curl -k -L https://{{ NODEIP }}:9200/_cat/indices?v
{{ ELASTICCURL }} -k -L https://{{ NODEIP }}:9200/_cat/indices?v
echo
# Inform user we are about to delete all data
echo
@@ -89,10 +89,10 @@ fi
# Delete data
echo "Deleting data..."
INDXS=$(curl -s -XGET -k -L https://{{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
INDXS=$({{ ELASTICCURL }} -s -XGET -k -L https://{{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
for INDX in ${INDXS}
do
curl -XDELETE -k -L https://"{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
{{ ELASTICCURL }} -XDELETE -k -L https://"{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
done
#Start Logstash/Filebeat

View File

@@ -18,4 +18,4 @@
. /usr/sbin/so-common
curl -s -k -L https://{{ NODEIP }}:9200/_cat/indices?pretty
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_cat/indices?pretty

View File

@@ -21,5 +21,5 @@ THEHIVEESPORT=9400
echo "Removing read only attributes for indices..."
echo
curl -s -k -XPUT -H "Content-Type: application/json" -L https://$IP:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was any issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
curl -XPUT -H "Content-Type: application/json" -L http://$IP:9400/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was any issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
{{ ELASTICCURL }} -s -k -XPUT -H "Content-Type: application/json" -L https://$IP:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was an issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
{{ ELASTICCURL }} -XPUT -H "Content-Type: application/json" -L http://$IP:9400/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was an issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;

View File

@@ -19,7 +19,7 @@
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
else
curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
fi

View File

@@ -19,7 +19,7 @@
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq .
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq .
else
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq .
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq .[]
fi

View File

@@ -17,7 +17,7 @@
{%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
else
curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
fi

View File

@@ -0,0 +1,37 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>
. /usr/sbin/so-common
if [[ $# -lt 1 ]]; then
echo "Submit a cURL request to the local Security Onion Elasticsearch host."
echo ""
echo "Usage: $0 <PATH> [ARGS,...]"
echo ""
echo "Where "
echo " PATH represents the elastic function being requested."
echo " ARGS is used to specify additional, optional curl parameters."
echo ""
echo "Examples:"
echo " $0 /"
echo " $0 '*:so-*/_search' -d '{\"query\": {\"match_all\": {}},\"size\": 1}' | jq"
exit 1
fi
QUERYPATH=$1
shift
{{ ELASTICCURL }} -s -k -L -H "Content-Type: application/json" "https://localhost:9200/${QUERYPATH}" "$@"

View File

@@ -18,4 +18,4 @@
. /usr/sbin/so-common
curl -s -k -L https://{{ NODEIP }}:9200/_cat/shards?pretty
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_cat/shards?pretty

View File

@@ -18,4 +18,4 @@
. /usr/sbin/so-common
curl -s -k -L -XDELETE https://{{ NODEIP }}:9200/_template/$1
{{ ELASTICCURL }} -s -k -L -XDELETE https://{{ NODEIP }}:9200/_template/$1

View File

@@ -19,7 +19,7 @@
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -s -k -L https://{{ NODEIP }}:9200/_template/* | jq .
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_template/* | jq .
else
curl -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq .
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq .
fi

View File

@@ -17,7 +17,7 @@
{%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -s -k -L https://{{ NODEIP }}:9200/_template/* | jq 'keys'
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_template/* | jq 'keys'
else
curl -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq
{{ ELASTICCURL }} -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq
fi

View File

@@ -1,8 +1,5 @@
{%- set mainint = salt['pillar.get']('host:mainint') %}
{%- set MYIP = salt['grains.get']('ip_interfaces:' ~ mainint)[0] %}
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,6 +14,9 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set mainint = salt['pillar.get']('host:mainint') %}
{%- set MYIP = salt['grains.get']('ip_interfaces:' ~ mainint)[0] %}
default_conf_dir=/opt/so/conf
ELASTICSEARCH_HOST="{{ MYIP }}"
ELASTICSEARCH_PORT=9200
@@ -30,7 +30,7 @@ echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
curl -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{{ ELASTICCURL }} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
@@ -51,7 +51,7 @@ cd ${ELASTICSEARCH_TEMPLATES}
echo "Loading templates..."
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl -k ${ELASTICSEARCH_AUTH} -s -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; {{ ELASTICCURL }} -k ${ELASTICSEARCH_AUTH} -s -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
echo
cd - >/dev/null

View File

@@ -0,0 +1,5 @@
#!/bin/bash
. /usr/sbin/so-common
wait_for_web_response "https://localhost:9200/_cat/indices/.kibana*" "green open" 300 "{{ ELASTICCURL }}"

View File

@@ -0,0 +1,67 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set mainint = salt['pillar.get']('host:mainint') %}
{%- set MYIP = salt['grains.get']('ip_interfaces:' ~ mainint)[0] %}
default_conf_dir=/opt/so/conf
ELASTICSEARCH_HOST="{{ MYIP }}"
ELASTICSEARCH_PORT=9200
#ELASTICSEARCH_AUTH=""
# Define a default directory to load pipelines from
FB_MODULE_YML="/usr/share/filebeat/module-setup.yml"
# Wait for ElasticSearch to initialize
echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
{{ ELASTICCURL }} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
break
else
((COUNT+=1))
sleep 1
echo -n "."
fi
done
if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
echo
echo -e "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
echo
fi
echo "Testing to see if the pipelines are already applied"
ESVER=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT" |jq .version.number |tr -d \")
PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"/_ingest/pipeline/filebeat-$ESVER-suricata-eve-pipeline | jq . | wc -c)
if [[ "$PIPELINES" -lt 5 ]]; then
echo "Setting up ingest pipeline(s)"
for MODULE in activemq apache auditd aws azure barracuda bluecoat cef checkpoint cisco coredns crowdstrike cyberark cylance elasticsearch envoyproxy f5 fortinet gcp google_workspace googlecloud gsuite haproxy ibmmq icinga iis imperva infoblox iptables juniper kafka kibana logstash microsoft misp mongodb mssql mysql nats netscout nginx o365 okta osquery panw postgresql rabbitmq radware redis santa snort snyk sonicwall sophos squid suricata system tomcat traefik zeek zscaler
do
echo "Loading $MODULE"
docker exec -i so-filebeat filebeat setup modules -pipelines -modules $MODULE -c $FB_MODULE_YML
sleep 2
done
else
exit 0
fi

View File

@@ -18,6 +18,7 @@
# NOTE: This script depends on so-common
IMAGEREPO=security-onion-solutions
# shellcheck disable=SC2120
container_list() {
MANAGERCHECK=$1
@@ -128,13 +129,13 @@ update_docker_containers() {
mkdir -p $SIGNPATH >> "$LOG_FILE" 2>&1
# Let's make sure we have the public key
retry 50 10 "curl -sSL https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS -o $SIGNPATH/KEYS" >> "$LOG_FILE" 2>&1
run_check_net_err \
"curl --retry 5 --retry-delay 60 -sSL https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS -o $SIGNPATH/KEYS" \
"Could not pull signature key file, please ensure connectivity to https://raw.gihubusercontent.com" \
noretry >> "$LOG_FILE" 2>&1
result=$?
if [[ $result -eq 0 ]]; then
cat $SIGNPATH/KEYS | gpg --import - >> "$LOG_FILE" 2>&1
else
echo "Failed to pull signature key file: $result"
exit 1
fi
# Download the containers from the interwebs
@@ -148,14 +149,15 @@ update_docker_containers() {
# Pull down the trusted docker image
local image=$i:$VERSION$IMAGE_TAG_SUFFIX
retry 50 10 "docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image" >> "$LOG_FILE" 2>&1
run_check_net_err \
"docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image" \
"Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY" >> "$LOG_FILE" 2>&1
# Get signature
retry 50 10 "curl -A '$CURLTYPE/$CURRENTVERSION/$OS/$(uname -r)' https://sigs.securityonion.net/$VERSION/$i:$VERSION$IMAGE_TAG_SUFFIX.sig --output $SIGNPATH/$image.sig" >> "$LOG_FILE" 2>&1
if [[ $? -ne 0 ]]; then
echo "Unable to pull signature file for $image" >> "$LOG_FILE" 2>&1
exit 1
fi
run_check_net_err \
"curl --retry 5 --retry-delay 60 -A '$CURLTYPE/$CURRENTVERSION/$OS/$(uname -r)' https://sigs.securityonion.net/$VERSION/$i:$VERSION$IMAGE_TAG_SUFFIX.sig --output $SIGNPATH/$image.sig" \
"Could not pull signature file for $image, please ensure connectivity to https://sigs.securityonion.net " \
noretry >> "$LOG_FILE" 2>&1
# Dump our hash values
DOCKERINSPECT=$(docker inspect $CONTAINER_REGISTRY/$IMAGEREPO/$image)

View File

@@ -132,6 +132,8 @@ for PCAP in "$@"; do
PCAP_FIXED=`mktemp /tmp/so-import-pcap-XXXXXXXXXX.pcap`
echo "- attempting to recover corrupted PCAP file"
pcapfix "${PCAP}" "${PCAP_FIXED}"
# Make fixed file world readable since the Suricata docker container will run as a non-root user
chmod a+r "${PCAP_FIXED}"
PCAP="${PCAP_FIXED}"
TEMP_PCAPS+=(${PCAP_FIXED})
fi

View File

@@ -15,4 +15,4 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
curl -X GET -k -L "https://localhost:9200/_cat/indices?v&s=index"
{{ ELASTICCURL }} -X GET -k -L "https://localhost:9200/_cat/indices?v&s=index"

View File

@@ -0,0 +1,53 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
wdurregex="^[0-9]+w$"
ddurregex="^[0-9]+d$"
echo -e "\nThis script is used to reduce the size of InfluxDB by removing old data and retaining only the duration specified."
echo "The duration will need to be specified as an integer followed by the duration unit without a space."
echo -e "\nFor example, to purge all data but retain the past 12 weeks, specify 12w for the duration."
echo "The duration units are as follows:"
echo " w - week(s)"
echo " d - day(s)"
while true; do
echo ""
read -p 'Enter the duration of past data that you would like to retain: ' duration
duration=$(echo $duration | tr '[:upper:]' '[:lower:]')
if [[ "$duration" =~ $wdurregex ]] || [[ "$duration" =~ $ddurregex ]]; then
break
fi
echo -e "\nInvalid duration."
done
echo -e "\nInfluxDB will now be cleaned and leave only the past $duration worth of data."
read -r -p "Are you sure you want to continue? [y/N] " yorn
if [[ "$yorn" =~ ^([yY][eE][sS]|[yY])$ ]]; then
echo -e "\nCleaning InfluxDb and saving only the past $duration. This may could take several minutes depending on how much data needs to be cleaned."
if docker exec -t so-influxdb /bin/bash -c "influx -ssl -unsafeSsl -database telegraf -execute \"DELETE FROM /.*/ WHERE \"time\" >= '2020-01-01T00:00:00.0000000Z' AND \"time\" <= now() - $duration\""; then
echo -e "\nInfluxDb clean complete."
else
echo -e "\nSomething went wrong with cleaning InfluxDB. Please verify that the so-influxdb Docker container is running, and check the log at /opt/so/log/influxdb/influxdb.log for any details."
fi
else
echo -e "\nExiting as requested."
fi

View File

@@ -0,0 +1,63 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- set role = grains.id.split('_') | last %}
{%- if role in ['manager', 'managersearch', 'eval', 'standalone'] %}
{%- import_yaml 'influxdb/defaults.yaml' as default_settings %}
{%- set influxdb = salt['grains.filter_by'](default_settings, default='influxdb', merge=salt['pillar.get']('influxdb', {})) %}
. /usr/sbin/so-common
echo -e "\nThis script is used to reduce the size of InfluxDB by downsampling old data into the so_long_term retention policy."
echo -e "\nInfluxDB will now be downsampled. This could take a few hours depending on how large the database is and hardware resources available."
read -r -p "Are you sure you want to continue? [y/N] " yorn
if [[ "$yorn" =~ ^([yY][eE][sS]|[yY])$ ]]; then
echo -e "\nDownsampling InfluxDb started at `date`. This may take several hours depending on how much data needs to be downsampled."
{% for dest_rp in influxdb.downsample.keys() -%}
{% for measurement in influxdb.downsample[dest_rp].get('measurements', []) -%}
day=0
startdate=`date`
while docker exec -t so-influxdb /bin/bash -c "influx -ssl -unsafeSsl -database telegraf -execute \"SELECT mean(*) INTO \"so_long_term\".\"{{measurement}}\" FROM \"autogen\".\"{{measurement}}\" WHERE \"time\" >= '2020-07-21T00:00:00.0000000Z' + ${day}d AND \"time\" <= '2020-07-21T00:00:00.0000000Z' + $((day+1))d GROUP BY time(5m),*\""; do
# why 2020-07-21?
migrationdate=`date -d "2020-07-21 + ${day} days" +"%y-%m-%d"`
echo "Downsampling of measurement: {{measurement}} from $migrationdate started at $startdate and completed at `date`."
newdaytomigrate=$(date -d "$migrationdate + 1 days" +"%s")
today=$(date +"%s")
if [ $newdaytomigrate -ge $today ]; then
break
else
((day=day+1))
startdate=`date`
echo -e "\nDownsampling the next day's worth of data for measurement: {{measurement}}."
fi
done
{% endfor -%}
{% endfor -%}
echo -e "\nInfluxDb data downsampling complete."
else
echo -e "\nExiting as requested."
fi
{%- else %}
echo -e "\nThis script can only be run on a node running InfluxDB."
{%- endif %}

View File

@@ -0,0 +1,34 @@
#!/bin/bash
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
. /usr/sbin/so-common
echo -e "\nThis script is used to reduce the size of InfluxDB by dropping the autogen retention policy."
echo "If you want to retain historical data prior to 2.3.60, then this should only be run after you have downsampled your data using so-influxdb-downsample."
echo -e "\nThe autogen retention policy will now be dropped from InfluxDB."
read -r -p "Are you sure you want to continue? [y/N] " yorn
if [[ "$yorn" =~ ^([yY][eE][sS]|[yY])$ ]]; then
echo -e "\nDropping autogen retention policy."
if docker exec -t so-influxdb influx -format json -ssl -unsafeSsl -execute "drop retention policy autogen on telegraf"; then
echo -e "\nAutogen retention policy dropped from InfluxDb."
else
echo -e "\nSomething went wrong dropping then autogen retention policy from InfluxDB. Please verify that the so-influxdb Docker container is running, and check the log at /opt/so/log/influxdb/influxdb.log for any details."
fi
else
echo -e "\nExiting as requested."
fi

View File

@@ -23,7 +23,9 @@
KIBANA_HOST={{ MANAGER }}
KSO_PORT=5601
OUTFILE="saved_objects.ndjson"
curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -L $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
SESSIONCOOKIE=$({{ ELASTICCURL }} -c - -X GET http://$KIBANA_HOST:$KSO_PORT/ | grep sid | awk '{print $7}')
{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -L $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
# Clean up using PLACEHOLDER
sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE

View File

@@ -1,13 +1,13 @@
. /usr/sbin/so-common
wait_for_web_response "http://localhost:5601/app/kibana" "Elastic"
wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}"
## This hackery will be removed if using Elastic Auth ##
# Let's snag a cookie from Kibana
THECOOKIE=$(curl -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
SESSIONCOOKIE=$({{ ELASTICCURL }} -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
# Disable certain Features from showing up in the Kibana UI
echo
echo "Setting up default Space:"
curl -b "sid=$THECOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
echo
{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
echo

View File

@@ -0,0 +1,26 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
if [ $# -lt 2 ]; then
echo "Usage: $0 <steno-query> Output-Filename"
exit 1
fi
docker exec -it so-sensoroni scripts/stenoquery.sh "$1" -w /nsm/pcapout/$2.pcap
echo ""
echo "If successful, the output was written to: /nsm/pcapout/$2.pcap"

View File

@@ -22,5 +22,5 @@ salt-call state.apply playbook.db_init,playbook,playbook.automation_user_create
/usr/sbin/so-soctopus-restart
echo "Importing Plays - this will take some time...."
wait 5
/usr/sbin/so-playbook-ruleupdate
sleep 5
/usr/sbin/so-playbook-ruleupdate

View File

@@ -17,18 +17,6 @@
. /usr/sbin/so-common
#check_boss_raid() {
# BOSSBIN=/opt/boss/mvcli
# BOSSRC=$($BOSSBIN info -o vd | grep functional)
#
# if [[ $BOSSRC ]]; then
# # Raid is good
# BOSSRAID=0
# else
# BOSSRAID=1
# fi
#}
check_lsi_raid() {
# For use for LSI on Ubuntu
#MEGA=/opt/MegaRAID/MegeCli/MegaCli64
@@ -66,13 +54,11 @@ mkdir -p /opt/so/log/raid
{%- if grains['sosmodel'] in ['SOSMN', 'SOSSNNV'] %}
#check_boss_raid
check_software_raid
#echo "osraid=$BOSSRAID nsmraid=$SWRAID" > /opt/so/log/raid/status.log
echo "osraid=1 nsmraid=$SWRAID" > /opt/so/log/raid/status.log
echo "nsmraid=$SWRAID" > /opt/so/log/raid/status.log
{%- elif grains['sosmodel'] in ['SOS1000F', 'SOS1000', 'SOSSN7200', 'SOS10K', 'SOS4000'] %}
#check_boss_raid
check_lsi_raid
#echo "osraid=$BOSSRAID nsmraid=$LSIRAID" > /opt/so/log/raid/status.log
echo "osraid=1 nsmraid=$LSIRAID" > /opt/so/log/raid/status.log
echo "nsmraid=$LSIRAID" > /opt/so/log/raid/status.log
{%- else %}
exit 0
{%- endif %}

View File

@@ -23,6 +23,11 @@ TESTPCAP=$2
. /usr/sbin/so-common
if [ $# -lt 2 ]; then
echo "Usage: $0 <CustomRule> <TargetPCAP>"
exit 1
fi
echo ""
echo "==============="
echo "Running all.rules and $TESTRULE against the following pcap: $TESTPCAP"

View File

@@ -39,10 +39,18 @@ email=$2
kratosUrl=${KRATOS_URL:-http://127.0.0.1:4434}
databasePath=${KRATOS_DB_PATH:-/opt/so/conf/kratos/db/db.sqlite}
argon2Iterations=${ARGON2_ITERATIONS:-3}
argon2Memory=${ARGON2_MEMORY:-14}
argon2Parallelism=${ARGON2_PARALLELISM:-2}
argon2HashSize=${ARGON2_HASH_SIZE:-32}
bcryptRounds=${BCRYPT_ROUNDS:-12}
elasticUsersFile=${ELASTIC_USERS_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users}
elasticRolesFile=${ELASTIC_ROLES_FILE:-/opt/so/saltstack/local/salt/elasticsearch/files/users_roles}
esUID=${ELASTIC_UID:-930}
esGID=${ELASTIC_GID:-930}
function lock() {
# Obtain file descriptor lock
exec 99>/var/tmp/so-user.lock || fail "Unable to create lock descriptor; if the system was not shutdown gracefully you may need to remove /var/tmp/so-user.lock manually."
flock -w 10 99 || fail "Another process is using so-user; if the system was not shutdown gracefully you may need to remove /var/tmp/so-user.lock manually."
trap 'rm -f /var/tmp/so-user.lock' EXIT
}
function fail() {
msg=$1
@@ -58,7 +66,7 @@ function require() {
# Verify this environment is capable of running this script
function verifyEnvironment() {
require "argon2"
require "htpasswd"
require "jq"
require "curl"
require "openssl"
@@ -95,6 +103,16 @@ function validateEmail() {
fi
}
function hashPassword() {
password=$1
passwordHash=$(echo "${password}" | htpasswd -niBC $bcryptRounds SOUSER)
passwordHash=$(echo "$passwordHash" | cut -c 11-)
passwordHash="\$2a${passwordHash}" # still waiting for https://github.com/elastic/elasticsearch/issues/51132
echo "$passwordHash"
}
function updatePassword() {
identityId=$1
@@ -111,15 +129,125 @@ function updatePassword() {
if [[ -n $identityId ]]; then
# Generate password hash
salt=$(openssl rand -hex 8)
passwordHash=$(echo "${password}" | argon2 ${salt} -id -t $argon2Iterations -m $argon2Memory -p $argon2Parallelism -l $argon2HashSize -e)
passwordHash=$(hashPassword "$password")
# Update DB with new hash
echo "update identity_credentials set config=CAST('{\"hashed_password\":\"${passwordHash}\"}' as BLOB) where identity_id=${identityId};" | sqlite3 "$databasePath"
echo "update identity_credentials set config=CAST('{\"hashed_password\":\"$passwordHash\"}' as BLOB) where identity_id=${identityId};" | sqlite3 "$databasePath"
[[ $? != 0 ]] && fail "Unable to update password"
fi
}
function createElasticFile() {
filename=$1
tmpFile=${filename}
truncate -s 0 "$tmpFile"
chmod 600 "$tmpFile"
chown "${esUID}:${esGID}" "$tmpFile"
}
function syncElasticSystemUser() {
json=$1
userid=$2
usersFile=$3
user=$(echo "$json" | jq -r ".local.users.$userid.user")
pass=$(echo "$json" | jq -r ".local.users.$userid.pass")
[[ -z "$user" || -z "$pass" ]] && fail "Elastic auth credentials for system user '$userid' are missing"
hash=$(hashPassword "$pass")
echo "${user}:${hash}" >> "$usersFile"
}
function syncElasticSystemRole() {
json=$1
userid=$2
role=$3
rolesFile=$4
user=$(echo "$json" | jq -r ".local.users.$userid.user")
[[ -z "$user" ]] && fail "Elastic auth credentials for system user '$userid' are missing"
echo "${role}:${user}" >> "$rolesFile"
}
function syncElastic() {
echo "Syncing users between SOC and Elastic..."
usersTmpFile="${elasticUsersFile}.tmp"
rolesTmpFile="${elasticRolesFile}.tmp"
createElasticFile "${usersTmpFile}"
createElasticFile "${rolesTmpFile}"
authPillarJson=$(lookup_salt_value "auth" "elasticsearch" "pillar" "json")
syncElasticSystemUser "$authPillarJson" "so_elastic_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_elastic_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_kibana_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_kibana_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_logstash_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_logstash_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_beats_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_beats_user" "superuser" "$rolesTmpFile"
syncElasticSystemUser "$authPillarJson" "so_monitor_user" "$usersTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_collector" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_agent" "$rolesTmpFile"
syncElasticSystemRole "$authPillarJson" "so_monitor_user" "monitoring_user" "$rolesTmpFile"
if [[ -f "$databasePath" ]]; then
# Generate the new users file
echo "select '{\"user\":\"' || ici.identifier || '\", \"data\":' || ic.config || '}'" \
"from identity_credential_identifiers ici, identity_credentials ic " \
"where ici.identity_credential_id=ic.id and instr(ic.config, 'hashed_password') " \
"order by ici.identifier;" | \
sqlite3 "$databasePath" | \
jq -r '.user + ":" + .data.hashed_password' \
>> "$usersTmpFile"
[[ $? != 0 ]] && fail "Unable to read credential hashes from database"
# Generate the new users_roles file
echo "select 'superuser:' || ici.identifier " \
"from identity_credential_identifiers ici, identity_credentials ic " \
"where ici.identity_credential_id=ic.id and instr(ic.config, 'hashed_password') " \
"order by ici.identifier;" | \
sqlite3 "$databasePath" \
>> "$rolesTmpFile"
[[ $? != 0 ]] && fail "Unable to read credential IDs from database"
else
echo "Database file does not exist yet, skipping users export"
fi
if [[ -s "${usersTmpFile}" ]]; then
mv "${usersTmpFile}" "${elasticUsersFile}"
mv "${rolesTmpFile}" "${elasticRolesFile}"
if [[ -z "$SKIP_STATE_APPLY" ]]; then
echo "Elastic state will be re-applied to affected minions. This may take several minutes..."
echo "Applying elastic state to elastic minions at $(date)" >> /opt/so/log/soc/sync.log 2>&1
salt -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-node or G@role:so-heavynode' state.apply elasticsearch queue=True >> /opt/so/log/soc/sync.log 2>&1
fi
else
echo "Newly generated users/roles files are incomplete; aborting."
fi
}
function syncAll() {
if [[ -z "$FORCE_SYNC" && -f "$databasePath" && -f "$elasticUsersFile" ]]; then
usersFileAgeSecs=$(echo $(($(date +%s) - $(date +%s -r "$elasticUsersFile"))))
staleCount=$(echo "select count(*) from identity_credentials where updated_at >= Datetime('now', '-${usersFileAgeSecs} seconds');" \
| sqlite3 "$databasePath")
if [[ "$staleCount" == "0" ]]; then
return 1
fi
fi
syncElastic
return 0
}
function listUsers() {
response=$(curl -Ss -L ${kratosUrl}/identities)
[[ $? != 0 ]] && fail "Unable to communicate with Kratos"
@@ -208,12 +336,14 @@ case "${operation}" in
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
lock
validateEmail "$email"
updatePassword
createUser "$email"
syncAll
echo "Successfully added new user to SOC"
check_container thehive && echo $password | so-thehive-user-add "$email"
check_container fleet && echo $password | so-fleet-user-add "$email"
check_container thehive && echo "$password" | so-thehive-user-add "$email"
check_container fleet && echo "$password" | so-fleet-user-add "$email"
;;
"list")
@@ -225,7 +355,9 @@ case "${operation}" in
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
lock
updateUser "$email"
syncAll
echo "Successfully updated user"
;;
@@ -233,7 +365,9 @@ case "${operation}" in
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
lock
updateStatus "$email" 'active'
syncAll
echo "Successfully enabled user"
check_container thehive && so-thehive-user-enable "$email" true
check_container fleet && so-fleet-user-enable "$email" true
@@ -243,7 +377,9 @@ case "${operation}" in
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
lock
updateStatus "$email" 'locked'
syncAll
echo "Successfully disabled user"
check_container thehive && so-thehive-user-enable "$email" false
check_container fleet && so-fleet-user-enable "$email" false
@@ -253,12 +389,19 @@ case "${operation}" in
verifyEnvironment
[[ "$email" == "" ]] && fail "Email address must be provided"
lock
deleteUser "$email"
syncAll
echo "Successfully deleted user"
check_container thehive && so-thehive-user-enable "$email" false
check_container fleet && so-fleet-user-enable "$email" false
;;
"sync")
lock
syncAll
;;
"validate")
validateEmail "$email"
updatePassword
@@ -280,4 +423,4 @@ case "${operation}" in
;;
esac
exit 0
exit 0

View File

@@ -10,11 +10,10 @@ zeek_logs_enabled() {
}
whiptail_manager_adv_service_zeeklogs() {
BLOGS=$(whiptail --title "Security Onion Setup" --checklist "Please Select Logs to Send:" 24 78 12 \
BLOGS=$(whiptail --title "so-zeek-logs" --checklist "Please Select Logs to Send:" 24 78 12 \
"conn" "Connection Logging" ON \
"dce_rpc" "RPC Logs" ON \
"dhcp" "DHCP Logs" ON \
"dhcpv6" "DHCP IPv6 Logs" ON \
"dnp3" "DNP3 Logs" ON \
"dns" "DNS Logs" ON \
"dpd" "DPD Logs" ON \
@@ -25,25 +24,20 @@ whiptail_manager_adv_service_zeeklogs() {
"irc" "IRC Chat Logs" ON \
"kerberos" "Kerberos Logs" ON \
"modbus" "MODBUS Logs" ON \
"mqtt" "MQTT Logs" ON \
"notice" "Zeek Notice Logs" ON \
"ntlm" "NTLM Logs" ON \
"openvpn" "OPENVPN Logs" ON \
"pe" "PE Logs" ON \
"radius" "Radius Logs" ON \
"rfb" "RFB Logs" ON \
"rdp" "RDP Logs" ON \
"signatures" "Signatures Logs" ON \
"sip" "SIP Logs" ON \
"smb_files" "SMB Files Logs" ON \
"smb_mapping" "SMB Mapping Logs" ON \
"smtp" "SMTP Logs" ON \
"snmp" "SNMP Logs" ON \
"software" "Software Logs" ON \
"ssh" "SSH Logs" ON \
"ssl" "SSL Logs" ON \
"syslog" "Syslog Logs" ON \
"telnet" "Telnet Logs" ON \
"tunnel" "Tunnel Logs" ON \
"weird" "Zeek Weird Logs" ON \
"mysql" "MySQL Logs" ON \
@@ -61,10 +55,10 @@ whiptail_manager_adv_service_zeeklogs
return_code=$?
case $return_code in
1)
whiptail --title "Security Onion Setup" --msgbox "Cancelling. No changes have been made." 8 75
whiptail --title "so-zeek-logs" --msgbox "Cancelling. No changes have been made." 8 75
;;
255)
whiptail --title "Security Onion Setup" --msgbox "Whiptail error occured, exiting." 8 75
whiptail --title "so-zeek-logs" --msgbox "Whiptail error occured, exiting." 8 75
;;
*)
zeek_logs_enabled

View File

@@ -18,12 +18,82 @@
. /usr/sbin/so-common
UPDATE_DIR=/tmp/sogh/securityonion
DEFAULT_SALT_DIR=/opt/so/saltstack/default
INSTALLEDVERSION=$(cat /etc/soversion)
POSTVERSION=$INSTALLEDVERSION
INSTALLEDSALTVERSION=$(salt --versions-report | grep Salt: | awk {'print $2'})
INSTALLEDSALTVERSION=$(salt --versions-report | grep Salt: | awk '{print $2}')
BATCHSIZE=5
SOUP_LOG=/root/soup.log
INFLUXDB_MIGRATION_LOG=/opt/so/log/influxdb/soup_migration.log
WHATWOULDYOUSAYYAHDOHERE=soup
whiptail_title='Security Onion UPdater'
check_err() {
local exit_code=$1
local err_msg="Unhandled error occured, please check $SOUP_LOG for details."
[[ $ERR_HANDLED == true ]] && exit $exit_code
if [[ $exit_code -ne 0 ]]; then
printf '%s' "Soup failed with error $exit_code: "
case $exit_code in
2)
echo 'No such file or directory'
;;
5)
echo 'Interrupted system call'
;;
12)
echo 'Out of memory'
;;
28)
echo 'No space left on device'
echo 'Likely ran out of space on disk, please review hardware requirements for Security Onion: https://docs.securityonion.net/en/2.3/hardware.html'
;;
30)
echo 'Read-only file system'
;;
35)
echo 'Resource temporarily unavailable'
;;
64)
echo 'Machine is not on the network'
;;
67)
echo 'Link has been severed'
;;
100)
echo 'Network is down'
;;
101)
echo 'Network is unreachable'
;;
102)
echo 'Network reset'
;;
110)
echo 'Connection timed out'
;;
111)
echo 'Connection refused'
;;
112)
echo 'Host is down'
;;
113)
echo 'No route to host'
;;
*)
echo 'Unhandled error'
echo "$err_msg"
;;
esac
if [[ $exit_code -ge 64 && $exit_code -le 113 ]]; then
echo "$err_msg"
fi
exit $exit_code
fi
}
add_common() {
cp $UPDATE_DIR/salt/common/tools/sbin/so-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
@@ -39,15 +109,14 @@ airgap_mounted() {
echo "The ISO is already mounted"
else
echo ""
echo "Looks like we need access to the upgrade content"
echo ""
echo "If you just copied the .iso file over you can specify the path."
echo "If you burned the ISO to a disk the standard way you can specify the device."
echo "Example: /home/user/securityonion-2.X.0.iso"
echo "Example: /dev/sdx1"
echo ""
read -p 'Enter the location of the iso: ' ISOLOC
if [ -f $ISOLOC ]; then
cat << EOF
In order for soup to proceed, the path to the downloaded Security Onion ISO file, or the path to the CD-ROM or equivalent device containing the ISO media must be provided.
For example, if you have copied the new Security Onion ISO file to your home directory, then the path might look like /home/myuser/securityonion-2.x.y.iso.
Or, if you have burned the new ISO onto an optical disk then the path might look like /dev/cdrom.
EOF
read -rp 'Enter the path to the new Security Onion ISO content: ' ISOLOC
if [[ -f $ISOLOC ]]; then
# Mounting the ISO image
mkdir -p /tmp/soagupdate
mount -t iso9660 -o loop $ISOLOC /tmp/soagupdate
@@ -59,7 +128,7 @@ airgap_mounted() {
else
echo "ISO has been mounted!"
fi
elif [ -f $ISOLOC/SecurityOnion/VERSION ]; then
elif [[ -f $ISOLOC/SecurityOnion/VERSION ]]; then
ln -s $ISOLOC /tmp/soagupdate
echo "Found the update content"
else
@@ -77,9 +146,9 @@ airgap_mounted() {
}
airgap_update_dockers() {
if [ $is_airgap -eq 0 ]; then
if [[ $is_airgap -eq 0 ]]; then
# Let's copy the tarball
if [ ! -f $AGDOCKER/registry.tar ]; then
if [[ ! -f $AGDOCKER/registry.tar ]]; then
echo "Unable to locate registry. Exiting"
exit 1
else
@@ -87,9 +156,9 @@ airgap_update_dockers() {
docker stop so-dockerregistry
docker rm so-dockerregistry
echo "Copying the new dockers over"
tar xvf $AGDOCKER/registry.tar -C /nsm/docker-registry/docker
tar xvf "$AGDOCKER/registry.tar" -C /nsm/docker-registry/docker
echo "Add Registry back"
docker load -i $AGDOCKER/registry_image.tar
docker load -i "$AGDOCKER/registry_image.tar"
fi
fi
}
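As a quick post-restore sanity check (not part of soup itself), the re-created registry can be confirmed from the manager; the container name and port 5000 are taken from elsewhere in this diff, and HTTPS on that port is an assumption:
docker ps --filter name=so-dockerregistry          # container is back up
curl -sk https://localhost:5000/v2/_catalog        # Docker Registry HTTP API v2 catalog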
@@ -100,10 +169,23 @@ update_registry() {
salt-call state.apply registry queue=True
}
check_airgap() {
# See if this is an airgap install
AIRGAP=$(cat /opt/so/saltstack/local/pillar/global.sls | grep airgap: | awk '{print $2}')
if [[ "$AIRGAP" == "True" ]]; then
is_airgap=0
UPDATE_DIR=/tmp/soagupdate/SecurityOnion
AGDOCKER=/tmp/soagupdate/docker
AGREPO=/tmp/soagupdate/Packages
else
is_airgap=1
fi
}
check_sudoers() {
if grep -q "so-setup" /etc/sudoers; then
echo "There is an entry for so-setup in the sudoers file, this can be safely deleted using \"visudo\"."
fi
if grep -q "so-setup" /etc/sudoers; then
echo "There is an entry for so-setup in the sudoers file, this can be safely deleted using \"visudo\"."
fi
}
check_log_size_limit() {
@@ -177,7 +259,9 @@ check_os_updates() {
echo "Continuing without updating packages"
elif [[ "$confirm" == [uU] ]]; then
echo "Applying Grid Updates"
salt \* -b 5 state.apply patch.os queue=True
set +e
run_check_net_err "salt '*' -b 5 state.apply patch.os queue=True" 'Could not apply OS updates, please check your network connection.'
set -e
else
echo "Exiting soup"
exit 0
@@ -257,6 +341,7 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.3.0 || "$INSTALLEDVERSION" == 2.3.1 || "$INSTALLEDVERSION" == 2.3.2 || "$INSTALLEDVERSION" == 2.3.10 ]] && up_2.3.0_to_2.3.20
[[ "$INSTALLEDVERSION" == 2.3.20 || "$INSTALLEDVERSION" == 2.3.21 ]] && up_2.3.2X_to_2.3.30
[[ "$INSTALLEDVERSION" == 2.3.30 || "$INSTALLEDVERSION" == 2.3.40 ]] && up_2.3.3X_to_2.3.50
true
}
postupgrade_changes() {
@@ -266,6 +351,8 @@ postupgrade_changes() {
[[ "$POSTVERSION" =~ rc.1 ]] && post_rc1_to_rc2
[[ "$POSTVERSION" == 2.3.20 || "$POSTVERSION" == 2.3.21 ]] && post_2.3.2X_to_2.3.30
[[ "$POSTVERSION" == 2.3.30 ]] && post_2.3.30_to_2.3.40
[[ "$POSTVERSION" == 2.3.50 ]] && post_2.3.5X_to_2.3.60
true
}
post_rc1_to_2.3.21() {
@@ -286,6 +373,10 @@ post_2.3.30_to_2.3.40() {
POSTVERSION=2.3.40
}
post_2.3.5X_to_2.3.60() {
POSTVERSION=2.3.60
}
rc1_to_rc2() {
@@ -419,7 +510,7 @@ up_2.3.2X_to_2.3.30() {
sed -i "/ imagerepo: securityonion/c\ imagerepo: 'security-onion-solutions'" /opt/so/saltstack/local/pillar/global.sls
# Strelka rule repo pillar addition
if [ $is_airgap -eq 0 ]; then
if [[ $is_airgap -eq 0 ]]; then
# Add manager as default Strelka YARA rule repo
sed -i "/^strelka:/a \\ repos: \n - https://$HOSTNAME/repo/rules/strelka" /opt/so/saltstack/local/pillar/global.sls;
else
@@ -446,7 +537,7 @@ upgrade_to_2.3.50_repo() {
rm -f "/etc/yum.repos.d/$DELREPO.repo"
fi
done
if [ $is_airgap -eq 1 ]; then
if [[ $is_airgap -eq 1 ]]; then
# Copy the new repo file if not airgap
cp $UPDATE_DIR/salt/repo/client/files/centos/securityonion.repo /etc/yum.repos.d/
yum clean all
@@ -562,7 +653,7 @@ upgrade_check() {
# Let's make sure we actually need to update.
NEWVERSION=$(cat $UPDATE_DIR/VERSION)
HOTFIXVERSION=$(cat $UPDATE_DIR/HOTFIX)
CURRENTHOTFIX=$(cat /etc/sohotfix 2>/dev/null)
[[ -f /etc/sohotfix ]] && CURRENTHOTFIX=$(cat /etc/sohotfix)
if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
echo "Checking to see if there are hotfixes needed"
if [ "$HOTFIXVERSION" == "$CURRENTHOTFIX" ]; then
@@ -579,13 +670,14 @@ upgrade_check() {
}
upgrade_check_salt() {
NEWSALTVERSION=$(grep version: $UPDATE_DIR/salt/salt/master.defaults.yaml | awk {'print $2'})
NEWSALTVERSION=$(grep version: $UPDATE_DIR/salt/salt/master.defaults.yaml | awk '{print $2}')
if [ "$INSTALLEDSALTVERSION" == "$NEWSALTVERSION" ]; then
echo "You are already running the correct version of Salt for Security Onion."
else
UPGRADESALT=1
fi
}
upgrade_salt() {
SALTUPGRADED=True
echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
@@ -597,7 +689,11 @@ upgrade_salt() {
yum versionlock delete "salt-*"
echo "Updating Salt packages and restarting services."
echo ""
sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -r -F -M -x python3 stable "$NEWSALTVERSION"
set +e
run_check_net_err \
"sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -r -F -M -x python3 stable \"$NEWSALTVERSION\"" \
"Could not update salt, please check $SOUP_LOG for details."
set -e
echo "Applying yum versionlock for Salt."
echo ""
yum versionlock add "salt-*"
@@ -610,7 +706,11 @@ upgrade_salt() {
apt-mark unhold "salt-minion"
echo "Updating Salt packages and restarting services."
echo ""
sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
set +e
run_check_net_err \
"sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable \"$NEWSALTVERSION\"" \
"Could not update salt, please check $SOUP_LOG for details."
set -e
echo "Applying apt hold for Salt."
echo ""
apt-mark hold "salt-common"
@@ -635,222 +735,248 @@ verify_latest_update_script() {
cp $UPDATE_DIR/salt/common/tools/sbin/soup $DEFAULT_SALT_DIR/salt/common/tools/sbin/
cp $UPDATE_DIR/salt/common/tools/sbin/so-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
cp $UPDATE_DIR/salt/common/tools/sbin/so-image-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
salt-call state.apply common queue=True
salt-call state.apply -l info common queue=True
echo ""
echo "soup has been updated. Please run soup again."
exit 0
fi
}
main () {
echo "### Preparing soup at `date` ###"
while getopts ":b" opt; do
case "$opt" in
b ) # process option b
shift
BATCHSIZE=$1
if ! [[ "$BATCHSIZE" =~ ^[0-9]+$ ]]; then
echo "Batch size must be a number greater than 0."
exit 1
fi
;;
\? )
echo "Usage: cmd [-b]"
;;
esac
done
echo "Checking to see if this is a manager."
echo ""
require_manager
set_minionid
echo "Checking to see if this is an airgap install"
echo ""
check_airgap
echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo ""
set_os
set_palette
check_elastic_license
echo ""
if [ $is_airgap -eq 0 ]; then
# Let's mount the ISO since this is airgap
airgap_mounted
else
echo "Cloning Security Onion github repo into $UPDATE_DIR."
echo "Removing previous upgrade sources."
rm -rf $UPDATE_DIR
clone_to_tmp
fi
check_os_updates
echo ""
echo "Verifying we have the latest soup script."
verify_latest_update_script
echo ""
echo "Generating new repo archive"
generate_and_clean_tarballs
if [ -f /usr/sbin/so-image-common ]; then
. /usr/sbin/so-image-common
else
add_common
fi
echo "Let's see if we need to update Security Onion."
upgrade_check
upgrade_space
echo "Checking for Salt Master and Minion updates."
upgrade_check_salt
if [ "$is_hotfix" == "true" ]; then
echo "Applying $HOTFIXVERSION"
copy_new_files
echo ""
update_version
salt-call state.highstate -l info queue=True
main() {
set -e
set +e
trap 'check_err $?' EXIT
else
echo ""
echo "Performing upgrade from Security Onion $INSTALLEDVERSION to Security Onion $NEWVERSION."
echo ""
echo "Updating dockers to $NEWVERSION."
if [ $is_airgap -eq 0 ]; then
airgap_update_dockers
update_centos_repo
yum clean all
check_os_updates
else
update_registry
update_docker_containers "soup"
fi
echo ""
echo "Stopping Salt Minion service."
systemctl stop salt-minion
echo "Killing any remaining Salt Minion processes."
pkill -9 -ef /usr/bin/salt-minion
echo ""
echo "Stopping Salt Master service."
systemctl stop salt-master
echo ""
upgrade_to_2.3.50_repo
# Does Salt need to be upgraded? If so, update it.
if [ "$UPGRADESALT" == "1" ]; then
echo "Upgrading Salt"
# Update the repo files so it can actually upgrade
upgrade_salt
fi
echo "Checking if Salt was upgraded."
echo ""
# Check that Salt was upgraded
SALTVERSIONPOSTUPGRADE=$(salt --versions-report | grep Salt: | awk {'print $2'})
if [[ "$SALTVERSIONPOSTUPGRADE" != "$NEWSALTVERSION" ]]; then
echo "Salt upgrade failed. Check of indicators of failure in $SOUP_LOG."
echo "Once the issue is resolved, run soup again."
echo "Exiting."
echo ""
exit 1
else
echo "Salt upgrade success."
echo ""
fi
preupgrade_changes
echo ""
if [ $is_airgap -eq 0 ]; then
echo "Updating Rule Files to the Latest."
update_airgap_rules
fi
# Only update the repo if it's airgap
if [[ $is_airgap -eq 0 ]] && [[ "$UPGRADESALT" != "1" ]]; then
update_centos_repo
fi
echo ""
echo "Copying new Security Onion code from $UPDATE_DIR to $DEFAULT_SALT_DIR."
copy_new_files
echo ""
update_version
echo ""
echo "Locking down Salt Master for upgrade"
masterlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
# Only regenerate osquery packages if Fleet is enabled
FLEET_MANAGER=$(lookup_pillar fleet_manager)
FLEET_NODE=$(lookup_pillar fleet_node)
if [[ "$FLEET_MANAGER" == "True" || "$FLEET_NODE" == "True" ]]; then
echo ""
echo "Regenerating Osquery Packages.... This will take several minutes."
salt-call state.apply fleet.event_gen-packages -l info queue=True
echo ""
fi
echo ""
echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
salt-call state.highstate -l info queue=True
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
echo ""
echo "Stopping Salt Master to remove ACL"
systemctl stop salt-master
masterunlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
echo "Running a highstate. This could take several minutes."
salt-call state.highstate -l info queue=True
postupgrade_changes
unmount_update
thehive_maint
if [ "$UPGRADESALT" == "1" ]; then
if [ $is_airgap -eq 0 ]; then
echo ""
echo "Cleaning repos on remote Security Onion nodes."
salt -C 'not *_eval and not *_helixsensor and not *_manager and not *_managersearch and not *_standalone and G@os:CentOS' cmd.run "yum clean all"
echo ""
fi
fi
check_sudoers
if [[ -n $lsl_msg ]]; then
case $lsl_msg in
'distributed')
echo "[INFO] The value of log_size_limit in any heavy node minion pillars may be incorrect."
echo " -> We recommend checking and adjusting the values as necessary."
echo " -> Minion pillar directory: /opt/so/saltstack/local/pillar/minions/"
echo "### Preparing soup at $(date) ###"
while getopts ":b" opt; do
case "$opt" in
b ) # process option b
shift
BATCHSIZE=$1
if ! [[ "$BATCHSIZE" =~ ^[0-9]+$ ]]; then
echo "Batch size must be a number greater than 0."
exit 1
fi
;;
'single-node')
# We can assume the lsl_details array has been set if lsl_msg has this value
echo "[WARNING] The value of log_size_limit (${lsl_details[0]}) does not match the recommended value of ${lsl_details[1]}."
echo " -> We recommend checking and adjusting the value as necessary."
echo " -> File: /opt/so/saltstack/local/pillar/minions/${lsl_details[2]}.sls"
\? )
echo "Usage: cmd [-b]"
;;
esac
done
echo "Checking to see if this is a manager."
echo ""
require_manager
set_minionid
echo "Checking to see if this is an airgap install."
echo ""
check_airgap
echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo ""
if [[ $is_airgap -eq 0 ]]; then
# Let's mount the ISO since this is airgap
echo "This is airgap. Ask for a location."
airgap_mounted
else
echo "Cloning Security Onion github repo into $UPDATE_DIR."
echo "Removing previous upgrade sources."
rm -rf $UPDATE_DIR
echo "Cloning the Security Onion Repo."
clone_to_tmp
fi
echo "Verifying we have the latest soup script."
verify_latest_update_script
echo ""
set_os
set_palette
check_elastic_license
echo ""
check_os_updates
echo "Generating new repo archive"
generate_and_clean_tarballs
if [ -f /usr/sbin/so-image-common ]; then
. /usr/sbin/so-image-common
else
add_common
fi
NUM_MINIONS=$(ls /opt/so/saltstack/local/pillar/minions/*_*.sls | wc -l)
echo "Let's see if we need to update Security Onion."
upgrade_check
upgrade_space
if [ $NUM_MINIONS -gt 1 ]; then
echo "Checking for Salt Master and Minion updates."
upgrade_check_salt
set -e
cat << EOF
if [ "$is_hotfix" == "true" ]; then
echo "Applying $HOTFIXVERSION"
copy_new_files
echo ""
update_version
salt-call state.highstate -l info queue=True
else
echo ""
echo "Performing upgrade from Security Onion $INSTALLEDVERSION to Security Onion $NEWVERSION."
echo ""
echo "Updating dockers to $NEWVERSION."
if [[ $is_airgap -eq 0 ]]; then
airgap_update_dockers
update_centos_repo
yum clean all
check_os_updates
else
update_registry
set +e
update_docker_containers "soup"
set -e
fi
echo ""
echo "Stopping Salt Minion service."
systemctl stop salt-minion
echo "Killing any remaining Salt Minion processes."
set +e
pkill -9 -ef /usr/bin/salt-minion
set -e
echo ""
echo "Stopping Salt Master service."
systemctl stop salt-master
echo ""
upgrade_to_2.3.50_repo
# Does Salt need to be upgraded? If so, update it.
if [[ $UPGRADESALT -eq 1 ]]; then
echo "Upgrading Salt"
# Update the repo files so it can actually upgrade
upgrade_salt
fi
echo "Checking if Salt was upgraded."
echo ""
# Check that Salt was upgraded
SALTVERSIONPOSTUPGRADE=$(salt --versions-report | grep Salt: | awk '{print $2}')
if [[ "$SALTVERSIONPOSTUPGRADE" != "$NEWSALTVERSION" ]]; then
echo "Salt upgrade failed. Check of indicators of failure in $SOUP_LOG."
echo "Once the issue is resolved, run soup again."
echo "Exiting."
echo ""
exit 1
else
echo "Salt upgrade success."
echo ""
fi
preupgrade_changes
echo ""
if [[ $is_airgap -eq 0 ]]; then
echo "Updating Rule Files to the Latest."
update_airgap_rules
fi
# Only update the repo if it's airgap
if [[ $is_airgap -eq 0 && $UPGRADESALT -ne 1 ]]; then
update_centos_repo
fi
echo ""
echo "Copying new Security Onion code from $UPDATE_DIR to $DEFAULT_SALT_DIR."
copy_new_files
echo ""
update_version
echo ""
echo "Locking down Salt Master for upgrade"
masterlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
# Testing that salt-master is up by checking that it is connected to itself
set +e
echo "Waiting on the Salt Master service to be ready."
salt-call state.show_top -l error queue=True || fail "salt-master could not be reached. Check $SOUP_LOG for details."
set -e
echo ""
echo "Ensuring python modules for Salt are installed and patched."
salt-call state.apply salt.python3-influxdb -l info queue=True
echo ""
# Only regenerate osquery packages if Fleet is enabled
FLEET_MANAGER=$(lookup_pillar fleet_manager)
FLEET_NODE=$(lookup_pillar fleet_node)
if [[ "$FLEET_MANAGER" == "True" || "$FLEET_NODE" == "True" ]]; then
echo ""
echo "Regenerating Osquery Packages.... This will take several minutes."
salt-call state.apply fleet.event_gen-packages -l info queue=True
echo ""
fi
echo ""
echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
set +e
salt-call state.highstate -l info queue=True
set -e
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
echo ""
echo "Stopping Salt Master to remove ACL"
systemctl stop salt-master
masterunlock
echo ""
echo "Starting Salt Master service."
systemctl start salt-master
set +e
echo "Waiting on the Salt Master service to be ready."
salt-call state.show_top -l error queue=True || fail "salt-master could not be reached. Check $SOUP_LOG for details."
set -e
echo "Running a highstate. This could take several minutes."
salt-call state.highstate -l info queue=True
postupgrade_changes
[[ $is_airgap -eq 0 ]] && unmount_update
thehive_maint
NUM_MINIONS=$(ls /opt/so/saltstack/local/pillar/minions/*_*.sls | wc -l)
if [[ $UPGRADESALT -eq 1 ]] && [[ $NUM_MINIONS -gt 1 ]]; then
if [[ $is_airgap -eq 0 ]]; then
echo ""
echo "Cleaning repos on remote Security Onion nodes."
salt -C 'not *_eval and not *_helixsensor and not *_manager and not *_managersearch and not *_standalone and G@os:CentOS' cmd.run "yum clean all"
echo ""
fi
fi
check_sudoers
if [[ -n $lsl_msg ]]; then
case $lsl_msg in
'distributed')
echo "[INFO] The value of log_size_limit in any heavy node minion pillars may be incorrect."
echo " -> We recommend checking and adjusting the values as necessary."
echo " -> Minion pillar directory: /opt/so/saltstack/local/pillar/minions/"
;;
'single-node')
# We can assume the lsl_details array has been set if lsl_msg has this value
echo "[WARNING] The value of log_size_limit (${lsl_details[0]}) does not match the recommended value of ${lsl_details[1]}."
echo " -> We recommend checking and adjusting the value as necessary."
echo " -> File: /opt/so/saltstack/local/pillar/minions/${lsl_details[2]}.sls"
;;
esac
fi
if [[ $NUM_MINIONS -gt 1 ]]; then
cat << EOF
@@ -864,10 +990,10 @@ For more information, please see https://docs.securityonion.net/en/2.3/soup.html
EOF
fi
fi
fi
echo "### soup has been served at `date` ###"
echo "### soup has been served at $(date) ###"
}
cat << EOF
@@ -882,6 +1008,7 @@ Press Enter to continue or Ctrl-C to cancel.
EOF
read input
read -r input
main "$@" | tee -a $SOUP_LOG

View File

@@ -34,7 +34,7 @@ overlimit() {
closedindices() {
INDICES=$(curl -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed 2> /dev/null)
INDICES=$({{ ELASTICCURL }} -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed 2> /dev/null)
[ $? -eq 1 ] && return false
echo ${INDICES} | grep -q -E "(logstash-|so-)"
}
@@ -49,10 +49,10 @@ while overlimit && closedindices; do
# First, get the list of closed indices using _cat/indices?h=index\&expand_wildcards=closed.
# Then, sort by date by telling sort to use hyphen as delimiter and then sort on the third field.
# Finally, select the first entry in that sorted list.
OLDEST_INDEX=$(curl -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed | grep -E "(logstash-|so-)" | sort -t- -k3 | head -1)
OLDEST_INDEX=$({{ ELASTICCURL }} -s -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/_cat/indices?h=index\&expand_wildcards=closed | grep -E "(logstash-|so-)" | sort -t- -k3 | head -1)
# Now that we've determined OLDEST_INDEX, ask Elasticsearch to delete it.
curl -XDELETE -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
{{ ELASTICCURL }} -XDELETE -k https://{{ELASTICSEARCH_HOST}}:{{ELASTICSEARCH_PORT}}/${OLDEST_INDEX}
# Finally, write a log entry that says we deleted it.
echo "$(date) - Used disk space exceeds LOG_SIZE_LIMIT ({{LOG_SIZE_LIMIT}} GB) - Index ${OLDEST_INDEX} deleted ..." >> ${LOG}

View File

@@ -3,6 +3,13 @@
{% elif grains['role'] in ['so-eval', 'so-managersearch', 'so-standalone'] %}
{%- set elasticsearch = salt['pillar.get']('manager:mainip', '') -%}
{%- endif %}
{%- if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
{%- else %}
{%- set ES_USER = '' %}
{%- set ES_PASS = '' %}
{%- endif %}
---
# Remember, leave a key empty if there is no value. None will be a string,
@@ -11,6 +18,10 @@ client:
hosts:
- {{elasticsearch}}
port: 9200
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
username: {{ ES_USER }}
password: {{ ES_PASS }}
{% endif %}
url_prefix:
use_ssl: True
certificate:

View File

@@ -5,6 +5,7 @@
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-eval', 'so-node', 'so-managersearch', 'so-heavynode', 'so-standalone'] %}
{% from 'elasticsearch/auth.map.jinja' import ELASTICAUTH with context %}
# Curator
# Create the group
curatorgroup:
@@ -48,6 +49,7 @@ curconf:
- source: salt://curator/files/curator.yml
- user: 934
- group: 939
- mode: 660
- template: jinja
curcloseddel:
@@ -66,6 +68,8 @@ curcloseddeldel:
- group: 939
- mode: 755
- template: jinja
- defaults:
ELASTICCURL: {{ ELASTICAUTH.elasticcurl }}
curclose:
file.managed:
@@ -147,4 +151,4 @@ append_so-curator_so-status.conf:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
{% endif %}

View File

@@ -1,3 +1,5 @@
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
elastalert:
config:
rules_folder: /opt/elastalert/rules/
@@ -19,8 +21,10 @@ elastalert:
use_ssl: true
verify_certs: false
#es_send_get_body_as: GET
#es_username: someusername
#es_password: somepassword
{%- if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
es_username: {{ ES_USER }}
es_password: {{ ES_PASS }}
{%- endif %}
writeback_index: elastalert_status
alert_time_limit:
days: 2
@@ -45,4 +49,4 @@ elastalert:
level: INFO
handlers:
- file
propagate: false
propagate: false

View File

@@ -12,16 +12,21 @@ class PlaybookESAlerter(Alerter):
Use matched data to create alerts in elasticsearch
"""
required_options = set(['play_title','play_url','sigma_level','elasticsearch_host'])
required_options = set(['play_title','play_url','sigma_level'])
def alert(self, matches):
for match in matches:
today = strftime("%Y.%m.%d", gmtime())
timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S"'.000Z', gmtime())
headers = {"Content-Type": "application/json"}
creds = None
if 'es_username' in self.rule and 'es_password' in self.rule:
creds = (self.rule['es_username'], self.rule['es_password'])
payload = {"rule": { "name": self.rule['play_title'],"case_template": self.rule['play_id'],"uuid": self.rule['play_id'],"category": self.rule['rule.category']},"event":{ "severity": self.rule['event.severity'],"module": self.rule['event.module'],"dataset": self.rule['event.dataset'],"severity_label": self.rule['sigma_level']},"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"event_data": match, "@timestamp": timestamp}
url = f"https://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
url = f"https://{self.rule['es_host']}:{self.rule['es_port']}/so-playbook-alerts-{today}/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False, auth=creds)
def get_info(self):
return {'type': 'PlaybookESAlerter'}
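The request built by the alerter is roughly equivalent to the following curl call; the host, port, credentials, and payload values here are placeholders, with the real ones coming from the ElastAlert rule:
TODAY=$(date -u +%Y.%m.%d)
curl -k -u 'someusername:somepassword' -H 'Content-Type: application/json' \
  -XPOST "https://es_host:9200/so-playbook-alerts-$TODAY/_doc/" \
  -d '{"rule":{"name":"Example Play"},"sigma_level":"high","@timestamp":"2021-07-02T12:00:00.000Z"}'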

View File

@@ -99,14 +99,12 @@ elastaconf:
elastalert_config: {{ elastalert_config.elastalert.config }}
- user: 933
- group: 933
- mode: 660
- template: jinja
wait_for_elasticsearch:
module.run:
- http.wait_for_successful_query:
- url: 'https://{{MANAGER}}:9200/_cat/indices/.kibana*'
- wait_for: 180
- verify_ssl: False
cmd.run:
- name: so-elasticsearch-wait
so-elastalert:
docker_container.running:
@@ -123,7 +121,7 @@ so-elastalert:
- extra_hosts:
- {{MANAGER_URL}}:{{MANAGER_IP}}
- require:
- module: wait_for_elasticsearch
- cmd: wait_for_elasticsearch
- watch:
- file: elastaconf

View File

@@ -0,0 +1,7 @@
{% set ELASTICAUTH = salt['pillar.filter_by']({
True: {
'user': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user'),
'pass': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass'),
'elasticcurl':'curl -K /opt/so/conf/elasticsearch/curl.config' },
False: {'elasticcurl': 'curl'},
}, pillar='elasticsearch:auth:enabled', default=False) %}
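Depending on the elasticsearch:auth:enabled pillar, templates that take ELASTICAUTH.elasticcurl as a default render one of two commands; a sketch of both forms against a placeholder manager host:
# auth disabled: plain curl
curl -s -k https://manager:9200/_cat/indices
# auth enabled: curl -K reads the basic-auth credentials from the rendered curl.config
curl -K /opt/so/conf/elasticsearch/curl.config -s -k https://manager:9200/_cat/indices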

View File

@@ -0,0 +1,39 @@
{% set so_elastic_user_pass = salt['random.get_str'](20) %}
{% set so_kibana_user_pass = salt['random.get_str'](20) %}
{% set so_logstash_user_pass = salt['random.get_str'](20) %}
{% set so_beats_user_pass = salt['random.get_str'](20) %}
{% set so_monitor_user_pass = salt['random.get_str'](20) %}
elastic_auth_pillar:
file.managed:
- name: /opt/so/saltstack/local/pillar/elasticsearch/auth.sls
- mode: 600
- reload_pillar: True
- contents: |
elasticsearch:
auth:
enabled: False
users:
so_elastic_user:
user: so_elastic
pass: {{ so_elastic_user_pass }}
so_kibana_user:
user: so_kibana
pass: {{ so_kibana_user_pass }}
so_logstash_user:
user: so_logstash
pass: {{ so_logstash_user_pass }}
so_beats_user:
user: so_beats
pass: {{ so_beats_user_pass }}
so_monitor_user:
user: so_monitor
pass: {{ so_monitor_user_pass }}
# since we are generating random passwords, and we don't want that to happen every time
# a highstate runs, we only manage the file when one of the users isn't already present in it. if the
# pillar file doesn't exist, then the default value provided to pillar.get should not
# be within the file either, so it should then be created
- unless:
{% for so_app_user, values in salt['pillar.get']('elasticsearch:auth:users', {'so_noapp_user': {'user': 'r@NDumu53Rd0NtDOoP'}}).items() %}
- grep {{ values.user }} /opt/so/saltstack/local/pillar/elasticsearch/auth.sls
{% endfor%}

View File

@@ -0,0 +1 @@
user = "{{ salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user') }}:{{ salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass') }}"

View File

@@ -30,13 +30,15 @@ xpack.security.http.ssl.client_authentication: none
xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: /usr/share/elasticsearch/config/ca.crt
{% if not salt['pillar.get']('elasticsearch:auth:enabled', False) %}
xpack.security.authc:
anonymous:
username: anonymous_user
roles: superuser
authz_exception: true
{% endif %}
node.name: {{ grains.host }}
script.max_compilations_rate: 1000/1m
script.max_compilations_rate: 20000/1m
{%- if TRUECLUSTER is sameas true %}
{%- if grains.role == 'so-manager' %}
{%- if salt['pillar.get']('nodestab', {}) %}

View File

@@ -51,7 +51,7 @@
},
{ "set": { "field": "_index", "value": "so-firewall", "override": true } },
{ "set": { "if": "ctx.network?.transport_id == '0'", "field": "network.transport", "value": "icmp", "override": true } },
{"community_id": { "if": "ctx.network?.transport != null", "field":["source.ip","source.port","destination.ip","destination.port","network.transport"],"target_field":"network.community_id"}},
{"community_id": {} },
{ "set": { "field": "module", "value": "pfsense", "override": true } },
{ "set": { "field": "dataset", "value": "firewall", "override": true } },
{ "remove": { "field": ["real_message", "ip_sub_msg", "firewall.sub_message"], "ignore_failure": true } }

View File

@@ -63,7 +63,8 @@
{ "rename": { "field": "fields.module", "target_field": "event.module", "ignore_failure": true, "ignore_missing": true } },
{ "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },
{ "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "name":"win.eventlogs" } },
{ "set": { "if": "ctx.rule != null && ctx.rule.name != null", "field": "event.dataset", "value": "alert", "override": true } },
{ "set": { "if": "ctx.rule != null && ctx.rule.name != null", "field": "event.dataset", "value": "ossec.alert", "override": true } },
{ "set": { "if": "ctx.rule != null && ctx.rule.name != null", "field": "event.kind", "value": "alert", "override": true } },
{ "pipeline": { "name": "common" } }
]
}

View File

@@ -53,7 +53,8 @@
{ "set": { "if": "ctx.exiftool?.FileDirectory != null", "field": "file.directory", "value": "{{exiftool.FileDirectory}}", "ignore_failure": true }},
{ "set": { "if": "ctx.exiftool?.Subsystem != null", "field": "host.subsystem", "value": "{{exiftool.Subsystem}}", "ignore_failure": true }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "rule.name", "value": "{{scan.yara.matches.0}}" }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "dataset", "value": "alert", "override": true }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "dataset", "value": "strelka.alert", "override": true }},
{ "set": { "if": "ctx.scan?.yara?.matches != null", "field": "event.kind", "value": "alert", "override": true }},
{ "rename": { "field": "file.flavors.mime", "target_field": "file.mime_type", "ignore_missing": true }},
{ "set": { "if": "ctx.rule?.name != null && ctx.rule?.score == null", "field": "event.severity", "value": 3, "override": true } },
{ "convert" : { "if": "ctx.rule?.score != null", "field" : "rule.score","type": "integer"}},

View File

@@ -1,7 +1,6 @@
{
"description" : "sysmon",
"processors" : [
{"community_id": {"if": "ctx.winlog.event_data?.Protocol != null", "field":["winlog.event_data.SourceIp","winlog.event_data.SourcePort","winlog.event_data.DestinationIp","winlog.event_data.DestinationPort","winlog.event_data.Protocol"],"target_field":"network.community_id"}},
{ "set": { "field": "event.code", "value": "{{winlog.event_id}}", "override": true } },
{ "set": { "field": "event.module", "value": "sysmon", "override": true } },
{ "set": { "if": "ctx.winlog?.computer_name != null", "field": "observer.name", "value": "{{winlog.computer_name}}", "override": true } },
@@ -64,6 +63,7 @@
{ "rename": { "field": "winlog.event_data.SourceIp", "target_field": "source.ip", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SourcePort", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.targetFilename", "target_field": "file.target", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.TargetFilename", "target_field": "file.target", "ignore_missing": true } }
{ "rename": { "field": "winlog.event_data.TargetFilename", "target_field": "file.target", "ignore_missing": true } },
{ "community_id": {} }
]
}

View File

@@ -8,11 +8,11 @@
{ "dot_expander": { "field": "id.orig_p", "path": "message2", "ignore_failure": true } },
{ "dot_expander": { "field": "id.resp_h", "path": "message2", "ignore_failure": true } },
{ "dot_expander": { "field": "id.resp_p", "path": "message2", "ignore_failure": true } },
{"community_id": {"if": "ctx.network?.transport != null", "field":["message2.id.orig_h","message2.id.orig_p","message2.id.resp_h","message2.id.resp_p","network.transport"],"target_field":"network.community_id"}},
{ "rename": { "field": "message2.id.orig_h", "target_field": "source.ip", "ignore_missing": true } },
{ "rename": { "field": "message2.id.orig_p", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
{ "community_id": {} },
{ "set": { "if": "ctx.source?.ip != null", "field": "client.ip", "value": "{{source.ip}}" } },
{ "set": { "if": "ctx.source?.port != null", "field": "client.port", "value": "{{source.port}}" } },
{ "set": { "if": "ctx.destination?.ip != null", "field": "server.ip", "value": "{{destination.ip}}" } },

View File

@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

View File

@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -27,7 +27,7 @@ echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
curl ${ELASTICSEARCH_AUTH} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
{{ ELASTICCURL }} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
@@ -47,9 +47,9 @@ fi
cd ${ELASTICSEARCH_INGEST_PIPELINES}
echo "Loading pipelines..."
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -k -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
for i in *; do echo $i; RESPONSE=$({{ ELASTICCURL }} -k -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
echo
cd - >/dev/null
exit $RETURN_CODE
exit $RETURN_CODE

View File

@@ -35,6 +35,8 @@
{% endif %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
{% from 'elasticsearch/auth.map.jinja' import ELASTICAUTH with context %}
vm.max_map_count:
sysctl.present:
@@ -169,6 +171,40 @@ eslogdir:
- group: 939
- makedirs: True
auth_users:
file.managed:
- name: /opt/so/conf/elasticsearch/users.tmp
- source: salt://elasticsearch/files/users
- user: 930
- group: 930
- mode: 600
- show_changes: False
auth_users_roles:
file.managed:
- name: /opt/so/conf/elasticsearch/users_roles.tmp
- source: salt://elasticsearch/files/users_roles
- user: 930
- group: 930
- mode: 600
- show_changes: False
auth_users_inode:
require:
- file: auth_users
cmd.run:
- name: cat /opt/so/conf/elasticsearch/users.tmp > /opt/so/conf/elasticsearch/users && chown 930:930 /opt/so/conf/elasticsearch/users && chmod 600 /opt/so/conf/elasticsearch/users
- onchanges:
- file: /opt/so/conf/elasticsearch/users.tmp
auth_users_roles_inode:
require:
- file: auth_users_roles
cmd.run:
- name: cat /opt/so/conf/elasticsearch/users_roles.tmp > /opt/so/conf/elasticsearch/users_roles && chown 930:930 /opt/so/conf/elasticsearch/users_roles && chmod 600 /opt/so/conf/elasticsearch/users_roles
- onchanges:
- file: /opt/so/conf/elasticsearch/users_roles.tmp
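The cat-into-place pattern above appears intended to update the users and users_roles files without changing their inodes, so the read-only bind mounts into so-elasticsearch (declared just below) keep tracking the live file; a minimal sketch of the difference:
cat users.tmp > users   # rewrites the existing file: same inode, the container sees the new contents
mv users.tmp users      # replaces the file: new inode, an existing bind mount keeps the old one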
so-elasticsearch:
docker_container.running:
- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}
@@ -213,6 +249,10 @@ so-elasticsearch:
- /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro
- /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
{% if salt['pillar.get']('elasticsearch:auth:enabled', False) %}
- /opt/so/conf/elasticsearch/users_roles:/usr/share/elasticsearch/config/users_roles:ro
- /opt/so/conf/elasticsearch/users:/usr/share/elasticsearch/config/users:ro
{% endif %}
- watch:
- file: cacertz
- file: esyml
@@ -232,6 +272,8 @@ so-elasticsearch-pipelines-file:
- group: 939
- mode: 754
- template: jinja
- defaults:
ELASTICCURL: {{ ELASTICAUTH.elasticcurl }}
so-elasticsearch-pipelines:
cmd.run:

View File

@@ -1,5 +1,5 @@
{
"index_patterns": ["so-ids-*", "so-firewall-*", "so-syslog-*", "so-zeek-*", "so-import-*", "so-ossec-*", "so-strelka-*", "so-beats-*", "so-osquery-*","so-playbook-*"],
"index_patterns": ["so-*"],
"version":50001,
"order":10,
"settings":{
@@ -228,7 +228,11 @@
"event":{
"type":"object",
"dynamic": true
},
},
"event_data":{
"type":"object",
"dynamic": true
},
"file":{
"type":"object",
"dynamic": true
@@ -316,7 +320,8 @@
"type":"text",
"fields":{
"keyword":{
"type":"keyword"
"type":"keyword",
"ignore_above": 32766
}
}
},
@@ -522,11 +527,159 @@
"version":{
"type":"long"
}
}
},
}
},
"x509":{
"type":"object",
"dynamic": true
},
"suricata":{
"type":"object",
"dynamic": true
},
"zeek":{
"type":"object",
"dynamic": true
},
"aws":{
"type":"object",
"dynamic": true
},
"azure":{
"type":"object",
"dynamic": true
},
"barracuda":{
"type":"object",
"dynamic": true
},
"bluecoat":{
"type":"object",
"dynamic": true
},
"cef":{
"type":"object",
"dynamic": true
},
"checkpoint":{
"type":"object",
"dynamic": true
},
"cisco":{
"type":"object",
"dynamic": true
},
"cyberark":{
"type":"object",
"dynamic": true
},
"cylance":{
"type":"object",
"dynamic": true
},
"f5":{
"type":"object",
"dynamic": true
},
"fortinet":{
"type":"object",
"dynamic": true
},
"gcp":{
"type":"object",
"dynamic": true
},
"google_workspace":{
"type":"object",
"dynamic": true
},
"imperva":{
"type":"object",
"dynamic": true
},
"infoblox":{
"type":"object",
"dynamic": true
},
"juniper":{
"type":"object",
"dynamic": true
},
"microsoft":{
"type":"object",
"dynamic": true
},
"misp":{
"type":"object",
"dynamic": true
},
"netflow":{
"type":"object",
"dynamic": true
},
"netscout":{
"type":"object",
"dynamic": true
},
"o365":{
"type":"object",
"dynamic": true
},
"okta":{
"type":"object",
"dynamic": true
},
"proofpoint":{
"type":"object",
"dynamic": true
},
"radware":{
"type":"object",
"dynamic": true
},
"snort":{
"type":"object",
"dynamic": true
},
"snyk":{
"type":"object",
"dynamic": true
},
"sonicwall":{
"type":"object",
"dynamic": true
},
"sophos":{
"type":"object",
"dynamic": true
},
"squid":{
"type":"object",
"dynamic": true
},
"tomcat":{
"type":"object",
"dynamic": true
},
"zcaler":{
"type":"object",
"dynamic": true
},
"elasticsearch":{
"type":"object",
"dynamic": true
},
"kibana":{
"type":"object",
"dynamic": true
},
"logstash":{
"type":"object",
"dynamic": true
},
"redis":{
"type":"object",
"dynamic": true
}
}
}

View File

@@ -3,7 +3,8 @@
{%- else %}
{%- set MANAGER = salt['grains.get']('master') %}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
{%- set HOSTNAME = salt['grains.get']('host', '') %}
{%- set ZEEKVER = salt['pillar.get']('global:mdengine', 'COMMUNITY') %}
@@ -71,7 +72,13 @@ logging.files:
# Set to true to log messages in json format.
#logging.json: false
#========================== Modules configuration ============================
filebeat.config.modules:
enabled: true
path: ${path.config}/modules.d/*.yml
filebeat.modules:
#=========================== Filebeat prospectors =============================
@@ -183,7 +190,6 @@ filebeat.inputs:
fields_under_root: true
clean_removed: false
close_removed: false
{%- if STRELKAENABLED == 1 %}
- type: log
paths:
@@ -261,6 +267,10 @@ output.{{ type }}:
output.elasticsearch:
enabled: true
hosts: ["https://{{ MANAGER }}:9200"]
{%- if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
username: "{{ ES_USER }}"
password: "{{ ES_PASS }}"
{%- endif %}
ssl.certificate_authorities: ["/usr/share/filebeat/intraca.crt"]
pipelines:
- pipeline: "%{[module]}.%{[dataset]}"

View File

@@ -0,0 +1,16 @@
{%- if grains['role'] in ['so-managersearch', 'so-heavynode', 'so-node'] %}
{%- set MANAGER = salt['grains.get']('host', '') %}
{%- else %}
{%- set MANAGER = salt['grains.get']('master') %}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output.elasticsearch:
enabled: true
hosts: ["https://{{ MANAGER }}:9200"]
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
username: "{{ ES_USER }}"
password: "{{ ES_PASS }}"
{% endif %}
ssl.certificate_authorities: ["/usr/share/filebeat/intraca.crt"]

View File

@@ -0,0 +1,18 @@
# DO NOT EDIT THIS FILE
{%- if MODULES.modules is iterable and MODULES.modules is not string and MODULES.modules|length > 0%}
{%- for module in MODULES.modules.keys() %}
- module: {{ module }}
{%- for fileset in MODULES.modules[module] %}
{{ fileset }}:
enabled: {{ MODULES.modules[module][fileset].enabled|string|lower }}
{#- only manage the settings if the fileset is enabled #}
{%- if MODULES.modules[module][fileset].enabled %}
{%- for var, value in MODULES.modules[module][fileset].items() %}
{%- if var|lower != 'enabled' %}
{{ var }}: {{ value }}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- endfor %}
{% endif %}

View File

@@ -1,3 +1,4 @@
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -20,18 +21,37 @@
{% set LOCALHOSTIP = salt['grains.get']('ip_interfaces').get(MAININT)[0] %}
{% set MANAGER = salt['grains.get']('master') %}
{% set MANAGERIP = salt['pillar.get']('global:managerip', '') %}
{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
{% from 'filebeat/map.jinja' import SO with context %}
{% set ES_INCLUDED_NODES = ['so-eval', 'so-standalone', 'so-managersearch', 'so-node', 'so-heavynode', 'so-import'] %}
#only include elastic state for certain nodes
{% if grains.role in ES_INCLUDED_NODES %}
include:
- elasticsearch
{% endif %}
filebeatetcdir:
file.directory:
- name: /opt/so/conf/filebeat/etc
- user: 939
- group: 939
- makedirs: True
filebeatmoduledir:
file.directory:
- name: /opt/so/conf/filebeat/modules
- user: root
- group: root
- makedirs: True
filebeatlogdir:
file.directory:
- name: /opt/so/log/filebeat
- user: 939
- group: 939
- makedirs: True
filebeatpkidir:
file.directory:
- name: /opt/so/conf/filebeat/etc/pki
@@ -44,6 +64,7 @@ fileregistrydir:
- user: 939
- group: 939
- makedirs: True
# This needs to be owned by root
filebeatconfsync:
file.managed:
@@ -55,6 +76,33 @@ filebeatconfsync:
- defaults:
INPUTS: {{ salt['pillar.get']('filebeat:config:inputs', {}) }}
OUTPUT: {{ salt['pillar.get']('filebeat:config:output', {}) }}
# Filebeat module config file
filebeatmoduleconfsync:
file.managed:
- name: /opt/so/conf/filebeat/etc/module-setup.yml
- source: salt://filebeat/etc/module-setup.yml
- user: root
- group: root
- mode: 640
- template: jinja
sodefaults_module_conf:
file.managed:
- name: /opt/so/conf/filebeat/modules/securityonion.yml
- source: salt://filebeat/etc/module_config.yml.jinja
- template: jinja
- defaults:
MODULES: {{ SO }}
thirdparty_module_conf:
file.managed:
- name: /opt/so/conf/filebeat/modules/thirdparty.yml
- source: salt://filebeat/etc/module_config.yml.jinja
- template: jinja
- defaults:
MODULES: {{ THIRDPARTY }}
so-filebeat:
docker_container.running:
- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-filebeat:{{ VERSION }}
@@ -65,19 +113,41 @@ so-filebeat:
- /nsm:/nsm:ro
- /opt/so/log/filebeat:/usr/share/filebeat/logs:rw
- /opt/so/conf/filebeat/etc/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- /opt/so/conf/filebeat/etc/module-setup.yml:/usr/share/filebeat/module-setup.yml:ro
- /nsm/wazuh/logs/alerts:/wazuh/alerts:ro
- /nsm/wazuh/logs/archives:/wazuh/archives:ro
- /opt/so/conf/filebeat/modules:/usr/share/filebeat/modules.d
- /opt/so/conf/filebeat/etc/pki/filebeat.crt:/usr/share/filebeat/filebeat.crt:ro
- /opt/so/conf/filebeat/etc/pki/filebeat.key:/usr/share/filebeat/filebeat.key:ro
- /opt/so/conf/filebeat/registry:/usr/share/filebeat/data/registry:rw
- /etc/ssl/certs/intca.crt:/usr/share/filebeat/intraca.crt:ro
- /opt/so/log:/logs:ro
- port_bindings:
- 0.0.0.0:514:514/udp
- 0.0.0.0:514:514/tcp
- 0.0.0.0:5066:5066/tcp
{% for module in THIRDPARTY.modules.keys() %}
{% for submodule in THIRDPARTY.modules[module] %}
{% if THIRDPARTY.modules[module][submodule].enabled and THIRDPARTY.modules[module][submodule]["var.syslog_port"] is defined %}
- {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/tcp
- {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/udp
{% endif %}
{% endfor %}
{% endfor %}
- watch:
- file: /opt/so/conf/filebeat/etc/filebeat.yml
{% if grains.role in ES_INCLUDED_NODES %}
run_module_setup:
cmd.run:
- name: /usr/sbin/so-filebeat-module-setup
- require:
- file: filebeatmoduleconfsync
- docker_container: so-filebeat
- onchanges:
- docker_container: so-elasticsearch
{% endif %}
append_so-filebeat_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf

6
salt/filebeat/map.jinja Normal file
View File

@@ -0,0 +1,6 @@
{% import_yaml 'filebeat/thirdpartydefaults.yaml' as TPDEFAULTS %}
{% set THIRDPARTY = salt['pillar.get']('filebeat:third_party_filebeat', default=TPDEFAULTS.third_party_filebeat, merge=True) %}
{% import_yaml 'filebeat/securityoniondefaults.yaml' as SODEFAULTS %}
{% set SO = SODEFAULTS.securityonion_filebeat %}
{#% set SO = salt['pillar.get']('filebeat:third_party_filebeat', default=SODEFAULTS.third_party_filebeat, merge=True) %#}

View File

@@ -0,0 +1,31 @@
{%- set ZEEKVER = salt['pillar.get']('global:mdengine', '') %}
{% set ZEEKLOGLOOKUP = {
'conn': 'connection',
} %}
securityonion_filebeat:
modules:
{%- if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone','so-node', 'so-hotnode', 'so-warmnode', 'so-heavynode'] %}
elasticsearch:
server:
enabled: true
var.paths: ["/logs/elasticsearch/*.log"]
logstash:
log:
enabled: true
var.paths: ["/logs/logstash.log"]
{%- endif %}
{%- if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone'] %}
kibana:
log:
enabled: true
var.paths: ["/logs/kibana/kibana.log"]
{%- endif %}
{%- if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone', 'so-heavynode'] %}
redis:
log:
enabled: true
var.paths: ["/logs/redis.log"]
slowlog:
enabled: false
{%- endif %}

View File

@@ -0,0 +1,252 @@
third_party_filebeat:
modules:
aws:
cloudtrail:
enabled: false
cloudwatch:
enabled: false
ec2:
enabled: false
elb:
enabled: false
s3access:
enabled: false
vpcflow:
enabled: false
azure:
activitylogs:
enabled: false
platformlogs:
enabled: false
auditlogs:
enabled: false
signinlogs:
enabled: false
barracuda:
waf:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9503
spamfirewall:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9524
bluecoat:
director:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9505
cef:
log:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9003
checkpoint:
firewall:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9505
cisco:
asa:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9001
ftd:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9003
ios:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9002
nexus:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9506
meraki:
enabled: false
var.syslog_host: 0.0.0.0
var.syslog_port: 9525
umbrella:
enabled: false
amp:
enabled: false
cyberark:
corepas:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9527
cylance:
protect:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9508
f5:
bigipapm:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9504
bigipafm:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9528
fortinet:
firewall:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9004
clientendpoint:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9510
fortimail:
enabled: false
var.input: udp
var.syslog_port: 9350
gcp:
vpcflow:
enabled: false
firewall:
enabled: false
audit:
enabled: false
google_workspace:
saml:
enabled: false
user_accounts:
enabled: false
login:
enabled: false
admin:
enabled: false
drive:
enabled: false
groups:
enabled: false
imperva:
securesphere:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9511
infoblox:
nios:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9512
juniper:
junos:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9513
netscreen:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9523
srx:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9006
microsoft:
defender_atp:
enabled: false
m365_defender:
enabled: false
dhcp:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9515
misp:
threat:
enabled: false
netflow:
log:
enabled: false
var.netflow_host: 0.0.0.0
var.netflow_port: 2055
var.internal_networks:
- private
netscout:
sightline:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9502
o365:
audit:
enabled: false
okta:
system:
enabled: false
proofpoint:
emailsecurity:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9531
radware:
defensepro:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9518
snort:
log:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9532
snyk:
audit:
enabled: false
vulnerabilities:
enabled: false
sonicwall:
firewall:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9519
sophos:
xg:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9005
utm:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9533
squid:
log:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9520
tomcat:
log:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9501
zscaler:
zia:
enabled: false
var.input: udp
var.syslog_host: 0.0.0.0
var.syslog_port: 9521

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
{% set measurements = salt['cmd.shell']('docker exec -t so-influxdb influx -format json -ssl -unsafeSsl -database telegraf -execute "show measurements" 2> /root/measurement_query.log | jq -r .results[0].series[0].values[]?[0] 2>> /root/measurement_query.log') %}
influxdb:
retention_policies:
so_short_term:
default: True
duration: 30d
shard_duration: 1d
so_long_term:
default: False
duration: 0d
shard_duration: 7d
downsample:
so_long_term:
resolution: 5m
{% if measurements|length > 0 %}
measurements:
{% for measurement in measurements.splitlines() %}
- {{ measurement }}
{% endfor %}
{% endif %}
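After the influxdb state that consumes these defaults (shown next in this diff) has applied, the resulting retention policies and downsampling continuous queries can be listed with the same influx CLI invocation style used for the measurement query above (sketch only):
docker exec -t so-influxdb influx -ssl -unsafeSsl -database telegraf -execute "SHOW RETENTION POLICIES"
docker exec -t so-influxdb influx -ssl -unsafeSsl -database telegraf -execute "SHOW CONTINUOUS QUERIES"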

View File

@@ -2,12 +2,21 @@
{% if sls in allowed_states %}
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
{% set MANAGER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone'] or (grains.role == 'so-eval' and GRAFANA == 1) %}
{% set MANAGER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% import_yaml 'influxdb/defaults.yaml' as default_settings %}
{% set influxdb = salt['grains.filter_by'](default_settings, default='influxdb', merge=salt['pillar.get']('influxdb', {})) %}
{% from 'salt/map.jinja' import PYTHON3INFLUX with context %}
{% from 'salt/map.jinja' import PYTHONINFLUXVERSION with context %}
{% set PYTHONINFLUXVERSIONINSTALLED = salt['cmd.run']("python3 -c \"exec('try:import influxdb; print (influxdb.__version__)\\nexcept:print(\\'Module Not Found\\')')\"", python_shell=True) %}
include:
- salt.minion
- salt.python3-influxdb
# Influx DB
influxconfdir:
file.directory:
@@ -57,6 +66,71 @@ append_so-influxdb_so-status.conf:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-influxdb
# We have to make sure the influxdb module is the right version prior to state run since reload_modules is bugged
{% if PYTHONINFLUXVERSIONINSTALLED == PYTHONINFLUXVERSION %}
wait_for_influxdb:
http.query:
- name: 'https://{{MANAGER}}:8086/query?q=SHOW+DATABASES'
- ssl: True
- verify_ssl: False
- status: 200
- timeout: 30
- retry:
attempts: 5
interval: 60
telegraf_database:
influxdb_database.present:
- name: telegraf
- database: telegraf
- ssl: True
- verify_ssl: /etc/pki/ca.crt
- cert: ['/etc/pki/influxdb.crt', '/etc/pki/influxdb.key']
- influxdb_host: {{ MANAGER }}
- require:
- docker_container: so-influxdb
- sls: salt.python3-influxdb
- http: wait_for_influxdb
{% for rp in influxdb.retention_policies.keys() %}
{{rp}}_retention_policy:
influxdb_retention_policy.present:
- name: {{rp}}
- database: telegraf
- duration: {{influxdb.retention_policies[rp].duration}}
- shard_duration: {{influxdb.retention_policies[rp].shard_duration}}
- replication: 1
- default: {{influxdb.retention_policies[rp].get('default', 'False')}}
- ssl: True
- verify_ssl: /etc/pki/ca.crt
- cert: ['/etc/pki/influxdb.crt', '/etc/pki/influxdb.key']
- influxdb_host: {{ MANAGER }}
- require:
- docker_container: so-influxdb
- influxdb_database: telegraf_database
- file: influxdb_retention_policy.present_patch
- sls: salt.python3-influxdb
{% endfor %}
{% for dest_rp in influxdb.downsample.keys() %}
{% for measurement in influxdb.downsample[dest_rp].get('measurements', []) %}
so_downsample_{{measurement}}_cq:
influxdb_continuous_query.present:
- name: so_downsample_{{measurement}}_cq
- database: telegraf
- query: SELECT mean(*) INTO "{{dest_rp}}"."{{measurement}}" FROM "{{measurement}}" GROUP BY time({{influxdb.downsample[dest_rp].resolution}}),*
- ssl: True
- verify_ssl: /etc/pki/ca.crt
- cert: ['/etc/pki/influxdb.crt', '/etc/pki/influxdb.key']
- influxdb_host: {{ MANAGER }}
- require:
- docker_container: so-influxdb
- influxdb_database: telegraf_database
- file: influxdb_continuous_query.present_patch
{% endfor %}
{% endfor %}
{% endif %}
{% endif %}
{% else %}

View File

@@ -1,6 +1,4 @@
#!/bin/bash
# {%- set FLEET_MANAGER = salt['pillar.get']('global:fleet_manager', False) -%}
# {%- set FLEET_NODE = salt['pillar.get']('global:fleet_node', False) -%}
# {%- set MANAGER = salt['pillar.get']('global:url_base', '') %}
. /usr/sbin/so-common
@@ -8,19 +6,12 @@
# Copy template file
cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_objects.ndjson
# {% if FLEET_NODE or FLEET_MANAGER %}
# Fleet IP
#sed -i "s/FLEETPLACEHOLDER/{{ MANAGER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
# {% endif %}
# SOCtopus and Manager
sed -i "s/PLACEHOLDER/{{ MANAGER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
wait_for_web_response "http://localhost:5601/app/kibana" "Elastic"
## This hackery will be removed if using Elastic Auth ##
wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}"
# Let's snag a cookie from Kibana
THECOOKIE=$(curl -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
SESSIONCOOKIE=$({{ ELASTICCURL }} -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
# Load saved objects
curl -b "sid=$THECOOKIE" -L -X POST "localhost:5601/api/saved_objects/_import?overwrite=true" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson >> /opt/so/log/kibana/misc.log
{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X POST "localhost:5601/api/saved_objects/_import?overwrite=true" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson >> /opt/so/log/kibana/misc.log

View File

@@ -1,10 +0,0 @@
{ "attributes":
{
"defaultIndex": "2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29",
"defaultRoute":"/app/kibana#/dashboard/a8411b30-6d03-11ea-b301-3d6c35840645",
"discover:sampleSize":"100",
"dashboard:defaultDarkTheme":true,
"theme:darkMode":true,
"timepicker:timeDefaults":"{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"
}
}

View File

@@ -1,20 +1,26 @@
---
# Default Kibana configuration from kibana-docker.
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
server.name: kibana
server.host: "0"
server.basePath: /kibana
elasticsearch.hosts: [ "https://{{ ES }}:9200" ]
elasticsearch.ssl.verificationMode: none
#kibana.index: ".kibana"
#elasticsearch.username: elastic
#elasticsearch.password: changeme
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
elasticsearch.username: {{ ES_USER }}
elasticsearch.password: {{ ES_PASS }}
{% endif %}
#xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.requestTimeout: 90000
logging.dest: /var/log/kibana/kibana.log
telemetry.enabled: false
security.showInsecureClusterWarning: false
{% if not salt['pillar.get']('elasticsearch:auth:enabled', False) %}
xpack.security.authc.providers:
anonymous.anonymous1:
order: 0
credentials: "elasticsearch_anonymous_user"
{% endif %}

View File

@@ -460,7 +460,7 @@
{"attributes":{"description":"","hits":0,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"highlightAll\":true,\"version\":true,\"query\":{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\",\"default_field\":\"*\",\"time_zone\":\"America/New_York\"}},\"language\":\"lucene\"},\"filter\":[]}"},"optionsJSON":"{\"darkTheme\":true,\"useMargins\":true}","panelsJSON":"[{\"version\":\"7.9.0\",\"gridData\":{\"w\":8,\"h\":48,\"x\":0,\"y\":0,\"i\":\"1\"},\"panelIndex\":\"1\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_0\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":32,\"h\":8,\"x\":16,\"y\":0,\"i\":\"2\"},\"panelIndex\":\"2\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_1\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":48,\"h\":24,\"x\":0,\"y\":96,\"i\":\"6\"},\"panelIndex\":\"6\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_2\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":20,\"h\":28,\"x\":8,\"y\":20,\"i\":\"7\"},\"panelIndex\":\"7\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_3\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":20,\"h\":28,\"x\":28,\"y\":20,\"i\":\"8\"},\"panelIndex\":\"8\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_4\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":16,\"h\":24,\"x\":0,\"y\":72,\"i\":\"9\"},\"panelIndex\":\"9\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_5\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":48,\"h\":24,\"x\":0,\"y\":48,\"i\":\"10\"},\"panelIndex\":\"10\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_6\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":20,\"h\":12,\"x\":8,\"y\":8,\"i\":\"11\"},\"panelIndex\":\"11\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_7\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":32,\"h\":24,\"x\":16,\"y\":72,\"i\":\"12\"},\"panelIndex\":\"12\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_8\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":20,\"h\":12,\"x\":28,\"y\":8,\"i\":\"13\"},\"panelIndex\":\"13\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_9\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":48,\"h\":24,\"x\":0,\"y\":120,\"i\":\"14\"},\"panelIndex\":\"14\",\"embeddableConfig\":{\"columns\":[\"event_type\",\"source_ip\",\"source_port\",\"destination_ip\",\"destination_port\",\"_id\"],\"sort\":[\"@timestamp\",\"desc\"],\"enhancements\":{}},\"panelRefName\":\"panel_10\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":8,\"h\":8,\"x\":8,\"y\":0,\"i\":\"15\"},\"panelIndex\":\"15\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_11\"}]","timeRestore":false,"title":"z16.04 - Sysmon - 
Logs","version":1},"id":"6d189680-6d62-11e7-8ddb-e71eb260f4a3","migrationVersion":{"dashboard":"7.11.0"},"references":[{"id":"b3b449d0-3429-11e7-9d52-4f090484f59e","name":"panel_0","type":"visualization"},{"id":"8cfdeff0-6d6b-11e7-ad64-15aa071374a6","name":"panel_1","type":"visualization"},{"id":"0eb1fd80-6d70-11e7-b09b-f57b22df6524","name":"panel_2","type":"visualization"},{"id":"3072c750-6d71-11e7-b09b-f57b22df6524","name":"panel_3","type":"visualization"},{"id":"7bc74b40-6d71-11e7-b09b-f57b22df6524","name":"panel_4","type":"visualization"},{"id":"13ed0810-6d72-11e7-b09b-f57b22df6524","name":"panel_5","type":"visualization"},{"id":"3b6c92c0-6d72-11e7-b09b-f57b22df6524","name":"panel_6","type":"visualization"},{"id":"e09f6010-6d72-11e7-b09b-f57b22df6524","name":"panel_7","type":"visualization"},{"id":"29611940-6d75-11e7-b09b-f57b22df6524","name":"panel_8","type":"visualization"},{"id":"6b70b840-6d75-11e7-b09b-f57b22df6524","name":"panel_9","type":"visualization"},{"id":"248c1d20-6d6b-11e7-ad64-15aa071374a6","name":"panel_10","type":"search"},{"id":"AWDHHk1sxQT5EBNmq43Y","name":"panel_11","type":"visualization"}],"type":"dashboard","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQyLDRd"}
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"},"savedSearchRefName":"search_0","title":"SMB - Action (Pie Chart)","uiStateJSON":"{}","version":1,"visState":"{\"title\":\"SMB - Action (Pie Chart)\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"type\":\"pie\",\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"action.keyword\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"size\":20,\"order\":\"desc\",\"orderBy\":\"1\"}}]}"},"id":"6f883480-3aad-11e7-8b17-0d8709b02c80","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"19849f30-3aab-11e7-8b17-0d8709b02c80","name":"search_0","type":"search"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQzLDRd"}
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"Security Onion - SSL - Subject","uiStateJSON":"{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}","version":1,"visState":"{\"title\":\"Security Onion - SSL - Subject\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"percentageCol\":\"\",\"dimensions\":{\"metrics\":[{\"accessor\":1,\"format\":{\"id\":\"number\"},\"params\":{},\"label\":\"Count\",\"aggType\":\"count\"}],\"buckets\":[{\"accessor\":0,\"format\":{\"id\":\"terms\",\"params\":{\"id\":\"string\",\"otherBucketLabel\":\"Other\",\"missingBucketLabel\":\"Missing\",\"parsedUrl\":{\"origin\":\"https://PLACEHOLDER\",\"pathname\":\"/kibana/app/kibana\",\"basePath\":\"/kibana\"}}},\"params\":{},\"label\":\"ssl.certificate.subject.keyword: Descending\",\"aggType\":\"terms\"}]},\"showToolbar\":true},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"ssl.certificate.subject.keyword\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":100,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Subject\"}}]}"},"id":"6fccb600-75ec-11ea-9565-7315f4ee5cac","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQ0LDRd"}
{"attributes":{"buildNum":33984,"dashboard:defaultDarkTheme":true,"defaultIndex":"2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute":"/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize":"100","theme:darkMode":true,"timepicker:timeDefaults":"{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"id":"7.11.2","migrationVersion":{"config":"7.9.0"},"references":[],"type":"config","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQ1LDRd"}
{"attributes":{"buildNum":39457,"defaultIndex":"2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute":"/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize":100,"theme:darkMode":true,"timepicker:timeDefaults":"{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion":"7.13.2","id":"7.13.2","migrationVersion":{"config":"7.12.0"},"references":[],"type":"config","updated_at":"2021-04-29T21:42:52.430Z","version":"WzY3NTUsM10="}
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"Strelka - File - MIME Flavors","uiStateJSON":"{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}","version":1,"visState":"{\"title\":\"Strelka - File - MIME Flavors\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"percentageCol\":\"\",\"dimensions\":{\"metrics\":[{\"accessor\":0,\"format\":{\"id\":\"number\"},\"params\":{},\"label\":\"Count\",\"aggType\":\"count\"}],\"buckets\":[]},\"showToolbar\":true},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"file.flavors.mime.keyword\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":100,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\"}}]}"},"id":"70243970-772c-11ea-bee5-af7f7c7b8e05","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQ2LDRd"}
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"filter\":[]}"},"savedSearchRefName":"search_0","title":"Modbus - Log Count","uiStateJSON":"{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}}","version":1,"visState":"{\"title\":\"Modbus - Log Count\",\"type\":\"metric\",\"params\":{\"addTooltip\":true,\"addLegend\":false,\"type\":\"gauge\",\"gauge\":{\"verticalSplit\":false,\"autoExtend\":false,\"percentageMode\":false,\"gaugeType\":\"Metric\",\"gaugeStyle\":\"Full\",\"backStyle\":\"Full\",\"orientation\":\"vertical\",\"colorSchema\":\"Green to Red\",\"gaugeColorMode\":\"None\",\"useRange\":false,\"colorsRange\":[{\"from\":0,\"to\":100}],\"invertColors\":false,\"labels\":{\"show\":false,\"color\":\"black\"},\"scale\":{\"show\":false,\"labels\":false,\"color\":\"#333\",\"width\":2},\"type\":\"simple\",\"style\":{\"fontSize\":\"30\",\"bgColor\":false,\"labelColor\":false,\"subText\":\"\",\"bgFill\":\"#FB9E00\"}}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}}],\"listeners\":{}}"},"id":"AWDG_9KpxQT5EBNmq4Oo","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"52dc9fe0-342e-11e7-9e93-53b62e1857b2","name":"search_0","type":"search"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQ3LDRd"}
{"attributes":{"description":"","hits":0,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"highlightAll\":true,\"version\":true,\"query\":{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\",\"default_field\":\"*\"}},\"language\":\"lucene\"},\"filter\":[]}"},"optionsJSON":"{\"darkTheme\":true,\"useMargins\":true}","panelsJSON":"[{\"version\":\"7.9.0\",\"gridData\":{\"w\":8,\"h\":56,\"x\":0,\"y\":0,\"i\":\"2\"},\"panelIndex\":\"2\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_0\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":32,\"h\":8,\"x\":16,\"y\":0,\"i\":\"3\"},\"panelIndex\":\"3\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_1\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":12,\"h\":24,\"x\":8,\"y\":32,\"i\":\"5\"},\"panelIndex\":\"5\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_2\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":12,\"h\":24,\"x\":20,\"y\":32,\"i\":\"6\"},\"panelIndex\":\"6\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_3\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":48,\"h\":24,\"x\":0,\"y\":56,\"i\":\"8\"},\"panelIndex\":\"8\",\"embeddableConfig\":{\"columns\":[\"source_ip\",\"source_port\",\"destination_ip\",\"destination_port\",\"uid\",\"_id\"],\"sort\":[\"@timestamp\",\"desc\"],\"enhancements\":{}},\"panelRefName\":\"panel_4\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":16,\"h\":24,\"x\":32,\"y\":32,\"i\":\"9\"},\"panelIndex\":\"9\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_5\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":8,\"h\":8,\"x\":8,\"y\":0,\"i\":\"10\"},\"panelIndex\":\"10\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_6\"},{\"version\":\"7.9.0\",\"gridData\":{\"w\":40,\"h\":24,\"x\":8,\"y\":8,\"i\":\"11\"},\"panelIndex\":\"11\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_7\"}]","timeRestore":false,"title":"z16.04 - Bro - Modbus","version":1},"id":"70c005f0-3583-11e7-a588-05992195c551","migrationVersion":{"dashboard":"7.11.0"},"references":[{"id":"b3b449d0-3429-11e7-9d52-4f090484f59e","name":"panel_0","type":"visualization"},{"id":"0d168a30-363f-11e7-a6f7-4f44d7bf1c33","name":"panel_1","type":"visualization"},{"id":"20eabd60-380b-11e7-a1cc-ebc6a7e70e84","name":"panel_2","type":"visualization"},{"id":"3c65f500-380b-11e7-a1cc-ebc6a7e70e84","name":"panel_3","type":"visualization"},{"id":"52dc9fe0-342e-11e7-9e93-53b62e1857b2","name":"panel_4","type":"search"},{"id":"178209e0-6e1b-11e7-b553-7f80727663c1","name":"panel_5","type":"visualization"},{"id":"AWDG_9KpxQT5EBNmq4Oo","name":"panel_6","type":"visualization"},{"id":"453f8b90-4a58-11e8-9b0a-f1d33346f773","name":"panel_7","type":"visualization"}],"type":"dashboard","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwMTQ4LDRd"}
@@ -730,4 +730,4 @@
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[],\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"Security Onion - Syslog - Severity (Donut)","uiStateJSON":"{}","version":1,"visState":"{\"type\":\"pie\",\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"syslog.severity.keyword\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":25,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Severity\"}}],\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100},\"dimensions\":{\"metric\":{\"accessor\":1,\"format\":{\"id\":\"number\"},\"params\":{},\"label\":\"Count\",\"aggType\":\"count\"},\"buckets\":[{\"accessor\":0,\"format\":{\"id\":\"terms\",\"params\":{\"id\":\"string\",\"otherBucketLabel\":\"Other\",\"missingBucketLabel\":\"Missing\",\"parsedUrl\":{\"origin\":\"https://PLACEHOLDER\",\"pathname\":\"/kibana/app/kibana\",\"basePath\":\"/kibana\"}}},\"params\":{},\"label\":\"syslog.severity.keyword: Descending\",\"aggType\":\"terms\"}]}},\"title\":\"Security Onion - Syslog - Severity (Donut)\"}"},"id":"fc8d41a0-777b-11ea-bee5-af7f7c7b8e05","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwNDExLDRd"}
{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{}"},"savedSearchRefName":"search_0","title":"Security Onion - Connections - Top Source IPs","uiStateJSON":"{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}","version":1,"visState":"{\"title\":\"Security Onion - Connections - Top Source IPs\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\",\"percentageCol\":\"\",\"dimensions\":{\"metrics\":[{\"accessor\":1,\"format\":{\"id\":\"number\"},\"params\":{},\"label\":\"Count\",\"aggType\":\"count\"}],\"buckets\":[{\"accessor\":0,\"format\":{\"id\":\"terms\",\"params\":{\"id\":\"string\",\"otherBucketLabel\":\"Other\",\"missingBucketLabel\":\"Missing\",\"parsedUrl\":{\"origin\":\"https://PLACEHOLDER\",\"pathname\":\"/kibana/app/kibana\",\"basePath\":\"/kibana\"}}},\"params\":{},\"label\":\"source.ip: Descending\",\"aggType\":\"terms\"}]},\"showToolbar\":true},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"source.ip\",\"orderBy\":\"1\",\"order\":\"desc\",\"size\":10,\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Source IP\"}}]}"},"id":"fd8b4640-6e9f-11ea-9266-1fd14ca6af34","migrationVersion":{"visualization":"7.11.0"},"references":[{"id":"9b333020-6e9f-11ea-9266-1fd14ca6af34","name":"search_0","type":"search"}],"type":"visualization","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwNDEyLDRd"}
{"attributes":{"description":"","hits":0,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"language\":\"kuery\",\"query\":\"event.module:strelka\"},\"filter\":[]}"},"optionsJSON":"{\"hidePanelTitles\":false,\"useMargins\":true}","panelsJSON":"[{\"version\":\"7.6.1\",\"gridData\":{\"x\":0,\"y\":0,\"w\":8,\"h\":7,\"i\":\"a2e0a619-a5c5-40d9-8593-e60f13ae22bf\"},\"panelIndex\":\"a2e0a619-a5c5-40d9-8593-e60f13ae22bf\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_0\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":8,\"y\":0,\"w\":21,\"h\":7,\"i\":\"566a9d04-f2dc-4868-9625-97a19d985703\"},\"panelIndex\":\"566a9d04-f2dc-4868-9625-97a19d985703\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_1\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":29,\"y\":0,\"w\":19,\"h\":7,\"i\":\"f247ec64-c278-4e05-ac4d-983bea9dfb7d\"},\"panelIndex\":\"f247ec64-c278-4e05-ac4d-983bea9dfb7d\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_2\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":0,\"y\":7,\"w\":12,\"h\":20,\"i\":\"6e80a142-ab0e-4fd3-891c-e495b78a1625\"},\"panelIndex\":\"6e80a142-ab0e-4fd3-891c-e495b78a1625\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_3\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":12,\"y\":7,\"w\":11,\"h\":20,\"i\":\"292cc879-6bc0-4541-ba92-3b3c5f4e3368\"},\"panelIndex\":\"292cc879-6bc0-4541-ba92-3b3c5f4e3368\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_4\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":23,\"y\":7,\"w\":14,\"h\":20,\"i\":\"66979b2c-e7c1-4291-91ac-16537b7f9ec3\"},\"panelIndex\":\"66979b2c-e7c1-4291-91ac-16537b7f9ec3\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_5\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":37,\"y\":7,\"w\":11,\"h\":20,\"i\":\"8bb1cf98-0401-4a2d-9dd8-deca08205a22\"},\"panelIndex\":\"8bb1cf98-0401-4a2d-9dd8-deca08205a22\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_6\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":0,\"y\":27,\"w\":8,\"h\":20,\"i\":\"393f3cec-3ee0-4275-b319-f307e7a260c6\"},\"panelIndex\":\"393f3cec-3ee0-4275-b319-f307e7a260c6\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_7\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":8,\"y\":27,\"w\":15,\"h\":20,\"i\":\"0e8800a9-a6f5-4a79-8370-61713f584886\"},\"panelIndex\":\"0e8800a9-a6f5-4a79-8370-61713f584886\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_8\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":23,\"y\":27,\"w\":25,\"h\":20,\"i\":\"be9a0a2a-d8c6-4d15-b5d7-d5599d0482a3\"},\"panelIndex\":\"be9a0a2a-d8c6-4d15-b5d7-d5599d0482a3\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_9\"},{\"version\":\"7.6.1\",\"gridData\":{\"x\":0,\"y\":47,\"w\":48,\"h\":27,\"i\":\"40296d2b-cb6f-423f-989c-3fdaa82d2aad\"},\"panelIndex\":\"40296d2b-cb6f-423f-989c-3fdaa82d2aad\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_10\"}]","timeRestore":false,"title":"Security Onion - 
Strelka","version":1},"id":"ff689c50-75f3-11ea-9565-7315f4ee5cac","migrationVersion":{"dashboard":"7.11.0"},"references":[{"id":"8cfec8c0-6ec2-11ea-9266-1fd14ca6af34","name":"panel_0","type":"visualization"},{"id":"d04b5130-6e99-11ea-9266-1fd14ca6af34","name":"panel_1","type":"visualization"},{"id":"23ed13a0-6e9a-11ea-9266-1fd14ca6af34","name":"panel_2","type":"visualization"},{"id":"7a88adc0-75f0-11ea-9565-7315f4ee5cac","name":"panel_3","type":"visualization"},{"id":"49cfe850-772c-11ea-bee5-af7f7c7b8e05","name":"panel_4","type":"visualization"},{"id":"70243970-772c-11ea-bee5-af7f7c7b8e05","name":"panel_5","type":"visualization"},{"id":"ce9e03f0-772c-11ea-bee5-af7f7c7b8e05","name":"panel_6","type":"visualization"},{"id":"a7ebb450-772c-11ea-bee5-af7f7c7b8e05","name":"panel_7","type":"visualization"},{"id":"08c0b770-772e-11ea-bee5-af7f7c7b8e05","name":"panel_8","type":"visualization"},{"id":"e087c7d0-772d-11ea-bee5-af7f7c7b8e05","name":"panel_9","type":"visualization"},{"id":"8b6f3150-72a2-11ea-8dd2-9d8795a1200b","name":"panel_10","type":"search"}],"type":"dashboard","updated_at":"2021-03-19T14:35:12.119Z","version":"WzcwNDEzLDRd"}
{"exportedCount":732,"missingRefCount":0,"missingReferences":[]}
{"exportedCount":732,"missingRefCount":0,"missingReferences":[]}

View File

@@ -4,6 +4,7 @@
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
{% from 'elasticsearch/auth.map.jinja' import ELASTICAUTH with context %}
# Add ES Group
kibanasearchgroup:
@@ -34,6 +35,7 @@ synckibanaconfig:
- source: salt://kibana/etc
- user: 932
- group: 939
- file_mode: 660
- template: jinja
kibanalogdir:
@@ -63,6 +65,8 @@ kibanabin:
- source: salt://kibana/bin/so-kibana-config-load
- mode: 755
- template: jinja
- defaults:
ELASTICCURL: {{ ELASTICAUTH.elasticcurl }}
# Start the kibana docker
so-kibana:
@@ -113,4 +117,4 @@ so-kibana-config-load:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
{% endif %}

View File

@@ -78,6 +78,7 @@ ls_pipeline_{{PL}}_{{CONFIGFILE.split('.')[0] | replace("/","_") }}:
{% endif %}
- user: 931
- group: 939
- mode: 660
- makedirs: True
{% endfor %}

View File

@@ -3,4 +3,9 @@ input {
port => "5044"
tags => [ "beat-ext" ]
}
}
filter {
mutate {
rename => {"@metadata" => "metadata"}
}
}
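The added filter presumably exists because Logstash drops `@metadata` when an event leaves the pipeline; renaming it to `metadata` keeps whatever Beats attached (such as the target ingest pipeline name) in the stored document, which the metadata-pipeline output added later in this diff depends on. A rough way to see the effect in isolation:

```bash
# Rough standalone demonstration (run from a Logstash install directory, not the
# production pipeline): create one event, put a value under [@metadata][pipeline],
# apply the same rename, and print the result.
bin/logstash -e '
  input  { generator { count => 1 message => "hello" } }
  filter {
    mutate { add_field => { "[@metadata][pipeline]" => "demo" } }
    mutate { rename    => { "@metadata" => "metadata" } }
  }
  output { stdout { codec => rubydebug } }'
```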

View File

@@ -3,11 +3,17 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [module] =~ "zeek" and "import" not in [tags] {
elasticsearch {
pipeline => "%{module}.%{dataset}"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-zeek"
template_name => "so-zeek"
template => "/templates/so-zeek-template.json"

View File

@@ -3,11 +3,17 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if "import" in [tags] {
elasticsearch {
pipeline => "%{module}.%{dataset}"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-import"
template_name => "so-import"
template => "/templates/so-import-template.json"

View File

@@ -3,10 +3,16 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [event_type] == "sflow" {
elasticsearch {
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-flow"
template_name => "so-flow"
template => "/templates/so-flow-template.json"

View File

@@ -3,10 +3,16 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [event_type] == "ids" and "import" not in [tags] {
elasticsearch {
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-ids"
template_name => "so-ids"
template => "/templates/so-ids-template.json"

View File

@@ -3,11 +3,17 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [module] =~ "syslog" {
elasticsearch {
pipeline => "%{module}"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-syslog"
template_name => "so-syslog"
template => "/templates/so-syslog-template.json"

View File

@@ -0,0 +1,26 @@
{%- if grains['role'] == 'so-eval' -%}
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [metadata][pipeline] {
elasticsearch {
id => "filebeat_modules_metadata_pipeline"
pipeline => "%{[metadata][pipeline]}"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-%{[event][module]}-%{+YYYY.MM.dd}"
template_name => "so-common"
template => "/templates/so-common-template.json"
template_overwrite => true
ssl => true
ssl_certificate_verification => false
}
}
}
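This new output routes any event carrying `[metadata][pipeline]` through the matching ingest pipeline and into a daily per-module index (`so-<module>-YYYY.MM.dd`). Once events flow, a check along these lines (again with placeholder host and credentials) should show those indices being created:

```bash
# Placeholder host/credentials; lists the daily so-<module>-* indices.
ES=192.0.2.10
curl -sk -u 'so_elastic_user:example_password' "https://${ES}:9200/_cat/indices/so-*?v&s=index"
```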

View File

@@ -3,11 +3,17 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [module] =~ "osquery" and "live_query" not in [dataset] {
elasticsearch {
pipeline => "%{module}.%{dataset}"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-osquery"
template_name => "so-osquery"
template => "/templates/so-osquery-template.json"

View File

@@ -3,7 +3,9 @@
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
{%- set FEATURES = salt['pillar.get']('elastic:features', False) %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
filter {
if [type] =~ "live_query" {
@@ -30,6 +32,10 @@ output {
elasticsearch {
pipeline => "osquery.live_query"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-osquery"
template_name => "so-osquery"
template => "/templates/so-osquery-template.json"

Some files were not shown because too many files have changed in this diff.