Merge branch 'dev' into feature/nginx-update

Committed by William Wernert on 2020-07-20 13:13:32 -04:00
151 changed files with 1735 additions and 6655 deletions

View File

@@ -1,3 +1,49 @@
## Security Onion 2.0.0.rc1
Security Onion 2.0.0 RC1 is here! This version requires a fresh install, but there is good news - we have brought back soup! From now on, you should be able to run soup on the manager to upgrade your environment to RC2 and beyond!
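A minimal usage sketch, assuming soup is installed to the manager's sbin path as laid out elsewhere in this release:

```bash
# Run on the manager only; soup exits early on any other node type
sudo soup
```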
### Changes:
- Re-branded 2.0 to give it a fresh look
- All documentation has moved to our [docs site](https://docs.securityonion.net/en/2.0)
- soup is alive! Note: This tool only updates Security Onion components. Please use the built-in OS update process to keep the operating system and other packages up to date.
- so-import-pcap is back! See the docs [here](http://docs.securityonion.net/en/2.0/so-import-pcap).
- Fixed issue with so-features-enable
- Users can now pivot to PCAP from Suricata alerts
- ISO install now prompts users to create an admin/sudo user instead of using a default account name
- The web email & password set during setup is now used to create the initial accounts for TheHive, Cortex, and Fleet
- Fixed issue with disk cleanup
- Changed the default permissions for /opt/so to keep non-privileged users from accessing salt and related files
- Locked down access to certain SSL keys
- Suricata logs now compress after they roll over
- Users can now easily customize shard counts per index
- Improved Elastic ingest parsers, including Windows event logs and Sysmon logs shipped with Winlogbeat and Osquery (ECS)
- Elastic nodes are now "hot" by default, making it easier to add a warm node later
- so-allow now runs at the end of an install so users can enable access right away
- Alert severities across Wazuh, Suricata and Playbook (Sigma) have been standardized and copied to `event.severity`:
- 1-Low / 2-Medium / 3-High / 4-Critical
- Initial implementation of alerting queues:
- Low & Medium alerts are accessible through Kibana & Hunt
- High & Critical alerts are accessible through Kibana, Hunt and sent to TheHive for immediate analysis
- ATT&CK Navigator is now a statically-hosted site in the nginx container
- Playbook
- All Sigma rules in the community repo (500+) are now imported and kept up to date
- Initial implementation of automated testing when a Play's detection logic has been edited (i.e., Unit Testing)
- Updated UI Theme
- Once authenticated through SOC, users can now access Playbook with analyst permissions without logging in again
- Kolide Launcher has been updated to include the ability to pass arbitrary flags - new functionality sponsored by SOS
- Fixed issue with Wazuh authd registration service port not being correctly exposed
- Added option for exposure of the Elasticsearch REST API (port 9200) to so-allow for easier external querying/integration with other tools (see the sketch after this list)
- Added option to so-allow for external Strelka file uploads (e.g., via `strelka-fileshot`)
- Added default YARA rules for Strelka -- default rules are maintained by Florian Roth and pulled from https://github.com/Neo23x0/signature-base
- Added the ability to use custom Zeek scripts
- Renamed "master server" to "manager node"
- Improved unification of Zeek and Strelka file data
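A hedged sketch of the new Elasticsearch access option, assuming so-allow is run on the manager and the default, unauthenticated HTTP listener on port 9200 is reachable once the analyst host is whitelisted:

```bash
# On the manager: open port 9200 for an external analyst/integration host
sudo so-allow    # choose the Elasticsearch REST API option, then enter the host or CIDR

# From the allowed host: confirm connectivity and list indices
curl -s "http://<manager-ip>:9200/_cat/indices?v"
```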
## Hybrid Hunter Beta 1.4.1 - Beta 3
- Fix install script to handle hostnames properly.
## Hybrid Hunter Beta 1.4.0 - Beta 3
- Complete overhaul of the way we handle custom and default settings and data. You will now see a default and local directory under the saltstack directory. All customizations are stored in local.
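A quick way to see the split described above, assuming the standard /opt/so/saltstack paths used throughout this diff:

```bash
# Shipped defaults; replaced whenever soup/so-saltstack-update syncs new code
ls /opt/so/saltstack/default/{salt,pillar}

# Site customizations; the update scripts only write to default/, so overrides here survive upgrades
ls /opt/so/saltstack/local/{salt,pillar}
```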

View File

View File

@@ -5,7 +5,7 @@
{% set PLAYBOOK = salt['pillar.get']('manager:playbook', '0') %} {% set PLAYBOOK = salt['pillar.get']('manager:playbook', '0') %}
{% set FREQSERVER = salt['pillar.get']('manager:freq', '0') %} {% set FREQSERVER = salt['pillar.get']('manager:freq', '0') %}
{% set DOMAINSTATS = salt['pillar.get']('manager:domainstats', '0') %} {% set DOMAINSTATS = salt['pillar.get']('manager:domainstats', '0') %}
{% set BROVER = salt['pillar.get']('static:broversion', 'COMMUNITY') %} {% set ZEEKVER = salt['pillar.get']('static:zeekversion', 'COMMUNITY') %}
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %} {% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
eval: eval:
@@ -63,7 +63,7 @@ heavy_node:
- so-suricata - so-suricata
- so-wazuh - so-wazuh
- so-filebeat - so-filebeat
{% if BROVER != 'SURICATA' %} {% if ZEEKVER != 'SURICATA' %}
- so-zeek - so-zeek
{% endif %} {% endif %}
helix: helix:
@@ -186,7 +186,7 @@ sensor:
- so-telegraf - so-telegraf
- so-steno - so-steno
- so-suricata - so-suricata
{% if BROVER != 'SURICATA' %} {% if ZEEKVER != 'SURICATA' %}
- so-zeek - so-zeek
{% endif %} {% endif %}
- so-wazuh - so-wazuh

View File

@@ -0,0 +1,13 @@
elasticsearch:
templates:
- so/so-beats-template.json.jinja
- so/so-common-template.json
- so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja
- so/so-import-template.json.jinja
- so/so-osquery-template.json.jinja
- so/so-ossec-template.json.jinja
- so/so-strelka-template.json.jinja
- so/so-syslog-template.json.jinja
- so/so-zeek-template.json.jinja

View File

@@ -0,0 +1,13 @@
elasticsearch:
templates:
- so/so-beats-template.json.jinja
- so/so-common-template.json
- so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja
- so/so-import-template.json.jinja
- so/so-osquery-template.json.jinja
- so/so-ossec-template.json.jinja
- so/so-strelka-template.json.jinja
- so/so-syslog-template.json.jinja
- so/so-zeek-template.json.jinja

View File

@@ -1,29 +0,0 @@
logstash:
pipelines:
eval:
config:
- so/0800_input_eval.conf
- so/1002_preprocess_json.conf
- so/1033_preprocess_snort.conf
- so/7100_osquery_wel.conf
- so/8999_postprocess_rename_type.conf
- so/9000_output_bro.conf.jinja
- so/9002_output_import.conf.jinja
- so/9033_output_snort.conf.jinja
- so/9100_output_osquery.conf.jinja
- so/9400_output_suricata.conf.jinja
- so/9500_output_beats.conf.jinja
- so/9600_output_ossec.conf.jinja
- so/9700_output_strelka.conf.jinja
templates:
- so/so-beats-template.json.jinja
- so/so-common-template.json
- so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja
- so/so-import-template.json.jinja
- so/so-osquery-template.json.jinja
- so/so-ossec-template.json.jinja
- so/so-strelka-template.json.jinja
- so/so-syslog-template.json.jinja
- so/so-zeek-template.json.jinja

View File

@@ -11,15 +11,3 @@ logstash:
- so/9500_output_beats.conf.jinja - so/9500_output_beats.conf.jinja
- so/9600_output_ossec.conf.jinja - so/9600_output_ossec.conf.jinja
- so/9700_output_strelka.conf.jinja - so/9700_output_strelka.conf.jinja
templates:
- so/so-beats-template.json.jinja
- so/so-common-template.json
- so/so-firewall-template.json.jinja
- so/so-flow-template.json.jinja
- so/so-ids-template.json.jinja
- so/so-import-template.json.jinja
- so/so-osquery-template.json.jinja
- so/so-ossec-template.json.jinja
- so/so-strelka-template.json.jinja
- so/so-syslog-template.json.jinja
- so/so-zeek-template.json.jinja

View File

@@ -11,10 +11,11 @@ base:
- logstash - logstash
- logstash.manager - logstash.manager
- logstash.search - logstash.search
- elasticsearch.search
'*_sensor': '*_sensor':
- static - static
- brologs - zeeklogs
- healthcheck.sensor - healthcheck.sensor
- minions.{{ grains.id }} - minions.{{ grains.id }}
@@ -30,19 +31,21 @@ base:
- logstash.manager - logstash.manager
'*_eval': '*_eval':
- static
- data.* - data.*
- brologs - zeeklogs
- secrets - secrets
- healthcheck.eval - healthcheck.eval
- elasticsearch.eval
- static
- minions.{{ grains.id }} - minions.{{ grains.id }}
'*_standalone': '*_standalone':
- logstash - logstash
- logstash.manager - logstash.manager
- logstash.search - logstash.search
- elasticsearch.search
- data.* - data.*
- brologs - zeeklogs
- secrets - secrets
- healthcheck.standalone - healthcheck.standalone
- static - static
@@ -54,13 +57,13 @@ base:
'*_heavynode': '*_heavynode':
- static - static
- brologs - zeeklogs
- minions.{{ grains.id }} - minions.{{ grains.id }}
'*_helix': '*_helix':
- static - static
- fireeye - fireeye
- brologs - zeeklogs
- logstash - logstash
- logstash.helix - logstash.helix
- minions.{{ grains.id }} - minions.{{ grains.id }}
@@ -75,4 +78,5 @@ base:
- static - static
- logstash - logstash
- logstash.search - logstash.search
- elasticsearch.search
- minions.{{ grains.id }} - minions.{{ grains.id }}

View File

@@ -1,4 +1,4 @@
brologs: zeeklogs:
enabled: enabled:
- conn - conn
- dce_rpc - dce_rpc

View File

@@ -15,6 +15,20 @@ socore:
- createhome: True - createhome: True
- shell: /bin/bash - shell: /bin/bash
soconfperms:
file.directory:
- name: /opt/so/conf
- uid: 939
- gid: 939
- dir_mode: 770
sosaltstackperms:
file.directory:
- name: /opt/so/saltstack
- uid: 939
- gid: 939
- dir_mode: 770
# Create a state directory # Create a state directory
statedir: statedir:
file.directory: file.directory:

View File

@@ -33,7 +33,7 @@
{% endif %} {% endif %}
{% if role in ['heavynode', 'standalone'] %} {% if role in ['heavynode', 'standalone'] %}
{{ append_containers('static', 'broversion', 'SURICATA') }} {{ append_containers('static', 'zeekversion', 'SURICATA') }}
{% endif %} {% endif %}
{% if role == 'searchnode' %} {% if role == 'searchnode' %}
@@ -41,5 +41,5 @@
{% endif %} {% endif %}
{% if role == 'sensor' %} {% if role == 'sensor' %}
{{ append_containers('static', 'broversion', 'SURICATA') }} {{ append_containers('static', 'zeekversion', 'SURICATA') }}
{% endif %} {% endif %}

View File

@@ -89,7 +89,7 @@ if [ "$SKIP" -eq 0 ]; then
echo "[p] - Wazuh API - port 55000/tcp" echo "[p] - Wazuh API - port 55000/tcp"
echo "[r] - Wazuh registration service - 1515/tcp" echo "[r] - Wazuh registration service - 1515/tcp"
echo "" echo ""
echo "Please enter your selection (a - analyst, b - beats, o - osquery, w - wazuh):" echo "Please enter your selection:"
read -r ROLE read -r ROLE
echo "Enter a single ip address or range to allow (example: 10.10.10.10 or 10.10.0.0/16):" echo "Enter a single ip address or range to allow (example: 10.10.10.10 or 10.10.0.0/16):"
read -r IP read -r IP

View File

@@ -15,6 +15,8 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
IMAGEREPO=securityonion
# Check for prerequisites # Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!" echo "This script must be run using sudo!"

17
salt/common/tools/sbin/so-docker-refresh Normal file → Executable file
View File

@@ -14,12 +14,8 @@
# #
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
got_root(){
if [ "$(id -u)" -ne 0 ]; then . /usr/sbin/so-common
echo "This script must be run using sudo!"
exit 1
fi
}
manager_check() { manager_check() {
# Check to see if this is a manager # Check to see if this is a manager
@@ -39,10 +35,10 @@ update_docker_containers() {
do do
# Pull down the trusted docker image # Pull down the trusted docker image
echo "Downloading $i" echo "Downloading $i"
docker pull --disable-content-trust=false docker.io/soshybridhunter/$i docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
# Tag it with the new registry destination # Tag it with the new registry destination
docker tag soshybridhunter/$i $HOSTNAME:5000/soshybridhunter/$i docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
docker push $HOSTNAME:5000/soshybridhunter/$i docker push $HOSTNAME:5000/$IMAGEREPO/$i
done done
} }
@@ -55,7 +51,7 @@ version_check() {
exit 1 exit 1
fi fi
} }
got_root
manager_check manager_check
version_check version_check
@@ -82,6 +78,7 @@ if [ $MANAGERCHECK != 'so-helix' ]; then
"so-logstash:$VERSION" \ "so-logstash:$VERSION" \
"so-mysql:$VERSION" \ "so-mysql:$VERSION" \
"so-nginx:$VERSION" \ "so-nginx:$VERSION" \
"so-pcaptools:$VERSION" \
"so-playbook:$VERSION" \ "so-playbook:$VERSION" \
"so-redis:$VERSION" \ "so-redis:$VERSION" \
"so-soc:$VERSION" \ "so-soc:$VERSION" \

View File

@@ -1,43 +0,0 @@
#!/bin/bash
MANAGER=MANAGER
VERSION="HH1.1.4"
TRUSTED_CONTAINERS=( \
"so-nginx:$VERSION" \
"so-thehive-cortex:$VERSION" \
"so-curator:$VERSION" \
"so-domainstats:$VERSION" \
"so-elastalert:$VERSION" \
"so-elasticsearch:$VERSION" \
"so-filebeat:$VERSION" \
"so-fleet:$VERSION" \
"so-fleet-launcher:$VERSION" \
"so-freqserver:$VERSION" \
"so-grafana:$VERSION" \
"so-idstools:$VERSION" \
"so-influxdb:$VERSION" \
"so-kibana:$VERSION" \
"so-logstash:$VERSION" \
"so-mysql:$VERSION" \
"so-playbook:$VERSION" \
"so-redis:$VERSION" \
"so-sensoroni:$VERSION" \
"so-soctopus:$VERSION" \
"so-steno:$VERSION" \
#"so-strelka:$VERSION" \
"so-suricata:$VERSION" \
"so-telegraf:$VERSION" \
"so-thehive:$VERSION" \
"so-thehive-es:$VERSION" \
"so-wazuh:$VERSION" \
"so-zeek:$VERSION" )
for i in "${TRUSTED_CONTAINERS[@]}"
do
# Pull down the trusted docker image
echo "Downloading $i"
docker pull --disable-content-trust=false docker.io/soshybridhunter/$i
# Tag it with the new registry destination
docker tag soshybridhunter/$i $MANAGER:5000/soshybridhunter/$i
docker push $MANAGER:5000/soshybridhunter/$i
docker rmi soshybridhunter/$i
done

0
salt/common/tools/sbin/so-elasticsearch-indices-rw Normal file → Executable file
View File

View File

@@ -15,13 +15,13 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
default_salt_dir=/opt/so/saltstack/default default_conf_dir=/opt/so/conf
ELASTICSEARCH_HOST="{{ MANAGERIP}}" ELASTICSEARCH_HOST="{{ MANAGERIP}}"
ELASTICSEARCH_PORT=9200 ELASTICSEARCH_PORT=9200
#ELASTICSEARCH_AUTH="" #ELASTICSEARCH_AUTH=""
# Define a default directory to load pipelines from # Define a default directory to load pipelines from
ELASTICSEARCH_TEMPLATES="$default_salt_dir/salt/logstash/pipelines/templates/so/" ELASTICSEARCH_TEMPLATES="$default_conf_dir/elasticsearch/templates/"
# Wait for ElasticSearch to initialize # Wait for ElasticSearch to initialize
echo -n "Waiting for ElasticSearch..." echo -n "Waiting for ElasticSearch..."

View File

@@ -17,6 +17,18 @@
. /usr/sbin/so-common . /usr/sbin/so-common
local_salt_dir=/opt/so/saltstack/local local_salt_dir=/opt/so/saltstack/local
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
echo "This is a manager. We can proceed"
else
echo "Please run so-features-enable on the manager."
exit 0
fi
}
manager_check
VERSION=$(grep soversion $local_salt_dir/pillar/static.sls | cut -d':' -f2|sed 's/ //g') VERSION=$(grep soversion $local_salt_dir/pillar/static.sls | cut -d':' -f2|sed 's/ //g')
# Modify static.sls to enable Features # Modify static.sls to enable Features
sed -i 's/features: False/features: True/' $local_salt_dir/pillar/static.sls sed -i 's/features: False/features: True/' $local_salt_dir/pillar/static.sls
@@ -31,13 +43,8 @@ for i in "${TRUSTED_CONTAINERS[@]}"
do do
# Pull down the trusted docker image # Pull down the trusted docker image
echo "Downloading $i" echo "Downloading $i"
docker pull --disable-content-trust=false docker.io/soshybridhunter/$i docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
# Tag it with the new registry destination # Tag it with the new registry destination
docker tag soshybridhunter/$i $HOSTNAME:5000/soshybridhunter/$i docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
docker push $HOSTNAME:5000/soshybridhunter/$i docker push $HOSTNAME:5000/$IMAGEREPO/$i
done
for i in "${TRUSTED_CONTAINERS[@]}"
do
echo "Removing $i locally"
docker rmi soshybridhunter/$i
done done

0
salt/common/tools/sbin/so-fleet-setup Normal file → Executable file
View File

View File

@@ -17,6 +17,7 @@
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion') %} {% set VERSION = salt['pillar.get']('static:soversion') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{%- set MANAGERIP = salt['pillar.get']('static:managerip') -%} {%- set MANAGERIP = salt['pillar.get']('static:managerip') -%}
function usage { function usage {
@@ -31,13 +32,13 @@ EOF
function pcapinfo() { function pcapinfo() {
PCAP=$1 PCAP=$1
ARGS=$2 ARGS=$2
docker run --rm -v $PCAP:/input.pcap --entrypoint capinfos {{ MANAGER }}:5000/soshybridhunter/so-pcaptools:{{ VERSION }} /input.pcap $ARGS docker run --rm -v $PCAP:/input.pcap --entrypoint capinfos {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap $ARGS
} }
function pcapfix() { function pcapfix() {
PCAP=$1 PCAP=$1
PCAP_OUT=$2 PCAP_OUT=$2
docker run --rm -v $PCAP:/input.pcap -v $PCAP_OUT:$PCAP_OUT --entrypoint pcapfix {{ MANAGER }}:5000/soshybridhunter/so-pcaptools:{{ VERSION }} /input.pcap -o $PCAP_OUT > /dev/null 2>&1 docker run --rm -v $PCAP:/input.pcap -v $PCAP_OUT:$PCAP_OUT --entrypoint pcapfix {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap -o $PCAP_OUT > /dev/null 2>&1
} }
function suricata() { function suricata() {
@@ -58,7 +59,7 @@ function suricata() {
-v ${NSM_PATH}/:/nsm/:rw \ -v ${NSM_PATH}/:/nsm/:rw \
-v $PCAP:/input.pcap:ro \ -v $PCAP:/input.pcap:ro \
-v /opt/so/conf/suricata/bpf:/etc/suricata/bpf:ro \ -v /opt/so/conf/suricata/bpf:/etc/suricata/bpf:ro \
{{ MANAGER }}:5000/soshybridhunter/so-suricata:{{ VERSION }} \ {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-suricata:{{ VERSION }} \
--runmode single -k none -r /input.pcap > $LOG_PATH/console.log 2>&1 --runmode single -k none -r /input.pcap > $LOG_PATH/console.log 2>&1
} }
@@ -86,7 +87,7 @@ function zeek() {
-v /opt/so/conf/zeek/bpf:/opt/zeek/etc/bpf:ro \ -v /opt/so/conf/zeek/bpf:/opt/zeek/etc/bpf:ro \
--entrypoint /opt/zeek/bin/zeek \ --entrypoint /opt/zeek/bin/zeek \
-w /nsm/zeek/logs \ -w /nsm/zeek/logs \
{{ MANAGER }}:5000/soshybridhunter/so-zeek:{{ VERSION }} \ {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-zeek:{{ VERSION }} \
-C -r /input.pcap local > $NSM_PATH/logs/console.log 2>&1 -C -r /input.pcap local > $NSM_PATH/logs/console.log 2>&1
} }
@@ -179,6 +180,7 @@ for PCAP in "$@"; do
fi fi
cp -f "${PCAP}" "${PCAP_DIR}"/data.pcap cp -f "${PCAP}" "${PCAP_DIR}"/data.pcap
chmod 644 "${PCAP_DIR}"/data.pcap
fi # end of valid pcap fi # end of valid pcap
@@ -208,7 +210,7 @@ cat << EOF
Import complete! Import complete!
You can use the following hyperlink to view data in the time range of your import. You can triple-click to quickly highlight the entire hyperlink and you can then copy it into your browser: You can use the following hyperlink to view data in the time range of your import. You can triple-click to quickly highlight the entire hyperlink and you can then copy it into your browser:
https://{{ MANAGERIP }}/kibana/app/kibana#/dashboard/a8411b30-6d03-11ea-b301-3d6c35840645?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'${START_OLDEST}T00:00:00.000Z',mode:absolute,to:'${END_NEWEST}T00:00:00.000Z')) https://{{ MANAGERIP }}/#/hunt?q=%2a%20%7C%20groupby%20event.module%20event.dataset&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM
or you can manually set your Time Range to be: or you can manually set your Time Range to be:
From: $START_OLDEST To: $END_NEWEST From: $START_OLDEST To: $END_NEWEST

4
salt/common/tools/sbin/so-saltstack-update Normal file → Executable file
View File

@@ -32,8 +32,8 @@ copy_new_files() {
# Copy new files over to the salt dir # Copy new files over to the salt dir
cd /tmp/sogh/securityonion cd /tmp/sogh/securityonion
git checkout $BRANCH git checkout $BRANCH
rsync -a --exclude-from 'exclude-list.txt' salt $default_salt_dir/ rsync -a salt $default_salt_dir/
rsync -a --exclude-from 'exclude-list.txt' pillar $default_salt_dir/ rsync -a pillar $default_salt_dir/
chown -R socore:socore $default_salt_dir/salt chown -R socore:socore $default_salt_dir/salt
chown -R socore:socore $default_salt_dir/pillar chown -R socore:socore $default_salt_dir/pillar
chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh

0
salt/common/tools/sbin/so-sensor-clean Normal file → Executable file
View File

View File

@@ -1,17 +1,17 @@
#!/bin/bash #!/bin/bash
local_salt_dir=/opt/so/saltstack/local local_salt_dir=/opt/so/saltstack/local
bro_logs_enabled() { zeek_logs_enabled() {
echo "brologs:" > $local_salt_dir/pillar/brologs.sls echo "zeeklogs:" > $local_salt_dir/pillar/zeeklogs.sls
echo " enabled:" >> $local_salt_dir/pillar/brologs.sls echo " enabled:" >> $local_salt_dir/pillar/zeeklogs.sls
for BLOG in ${BLOGS[@]}; do for BLOG in ${BLOGS[@]}; do
echo " - $BLOG" | tr -d '"' >> $local_salt_dir/pillar/brologs.sls echo " - $BLOG" | tr -d '"' >> $local_salt_dir/pillar/zeeklogs.sls
done done
} }
whiptail_manager_adv_service_brologs() { whiptail_manager_adv_service_zeeklogs() {
BLOGS=$(whiptail --title "Security Onion Setup" --checklist "Please Select Logs to Send:" 24 78 12 \ BLOGS=$(whiptail --title "Security Onion Setup" --checklist "Please Select Logs to Send:" 24 78 12 \
"conn" "Connection Logging" ON \ "conn" "Connection Logging" ON \
@@ -54,5 +54,5 @@ whiptail_manager_adv_service_brologs() {
"x509" "x.509 Logs" ON 3>&1 1>&2 2>&3 ) "x509" "x.509 Logs" ON 3>&1 1>&2 2>&3 )
} }
whiptail_manager_adv_service_brologs whiptail_manager_adv_service_zeeklogs
bro_logs_enabled zeek_logs_enabled

0
salt/common/tools/sbin/so-zeek-stats Normal file → Executable file
View File

188
salt/common/tools/sbin/soup Normal file → Executable file
View File

@@ -15,23 +15,187 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
clone_to_tmp() { . /usr/sbin/so-common
UPDATE_DIR=/tmp/sogh/securityonion
INSTALLEDVERSION=$(cat /etc/soversion)
default_salt_dir=/opt/so/saltstack/default
manager_check() {
# Check to see if this is a manager
MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
echo "This is a manager. We can proceed"
else
echo "Please run soup on the manager. The manager controls all updates."
exit 0
fi
}
clean_dockers() {
# Place Holder for cleaning up old docker images
echo ""
}
clone_to_tmp() {
# TODO Need to add a air gap option # TODO Need to add a air gap option
# Clean old files
rm -rf /tmp/sogh
# Make a temp location for the files # Make a temp location for the files
rm -rf /tmp/soup mkdir -p /tmp/sogh
mkdir -p /tmp/soup cd /tmp/sogh
cd /tmp/soup #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion.git
#git clone -b dev https://github.com/Security-Onion-Solutions/securityonion-saltstack.git git clone https://github.com/Security-Onion-Solutions/securityonion.git
git clone https://github.com/Security-Onion-Solutions/securityonion-saltstack.git cd /tmp
if [ ! -f $UPDATE_DIR/VERSION ]; then
echo "Update was unable to pull from github. Please check your internet."
exit 0
fi
}
copy_new_files() {
# Copy new files over to the salt dir
cd /tmp/sogh/securityonion
rsync -a salt $default_salt_dir/
rsync -a pillar $default_salt_dir/
chown -R socore:socore $default_salt_dir/
chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh
cd /tmp
}
highstate() {
# Run a highstate but first cancel a running one.
salt-call saltutil.kill_all_jobs
salt-call state.highstate
}
pillar_changes() {
# This function is to add any new pillar items if needed.
echo "Checking to see if pillar changes are needed"
} }
# Prompt the user that this requires internets update_dockers() {
# List all the containers
if [ $MANAGERCHECK != 'so-helix' ]; then
TRUSTED_CONTAINERS=( \
"so-acng" \
"so-thehive-cortex" \
"so-curator" \
"so-domainstats" \
"so-elastalert" \
"so-elasticsearch" \
"so-filebeat" \
"so-fleet" \
"so-fleet-launcher" \
"so-freqserver" \
"so-grafana" \
"so-idstools" \
"so-influxdb" \
"so-kibana" \
"so-kratos" \
"so-logstash" \
"so-mysql" \
"so-nginx" \
"so-pcaptools" \
"so-playbook" \
"so-redis" \
"so-soc" \
"so-soctopus" \
"so-steno" \
"so-strelka" \
"so-suricata" \
"so-telegraf" \
"so-thehive" \
"so-thehive-es" \
"so-wazuh" \
"so-zeek" )
else
TRUSTED_CONTAINERS=( \
"so-filebeat" \
"so-idstools" \
"so-logstash" \
"so-nginx" \
"so-redis" \
"so-steno" \
"so-suricata" \
"so-telegraf" \
"so-zeek" )
fi
# Download the containers from the interwebs
for i in "${TRUSTED_CONTAINERS[@]}"
do
# Pull down the trusted docker image
echo "Downloading $i:$NEWVERSION"
docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i:$NEWVERSION
# Tag it with the new registry destination
docker tag $IMAGEREPO/$i:$NEWVERSION $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
docker push $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
done
}
update_version() {
# Update the version to the latest
echo "Updating the version file."
echo $NEWVERSION > /etc/soversion
sed -i "s/$INSTALLEDVERSION/$NEWVERSION/g" /opt/so/saltstack/local/pillar/static.sls  # double quotes so the shell expands the version variables
}
upgrade_check() {
# Let's make sure we actually need to update.
NEWVERSION=$(cat $UPDATE_DIR/VERSION)
if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
echo "You are already running the latest version of Security Onion."
exit 0
else
echo "Performing Upgrade from $INSTALLEDVERSION to $NEWVERSION"
fi
}
verify_latest_update_script() {
# Check to see if the update scripts match. If not run the new one.
CURRENTSOUP=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/soup | awk '{print $1}')
GITSOUP=$(md5sum /tmp/sogh/securityonion/salt/common/tools/sbin/soup | awk '{print $1}')
if [[ "$CURRENTSOUP" == "$GITSOUP" ]]; then
echo "This version of the soup script is up to date. Proceeding."
else
echo "You are not running the latest soup version. Updating soup."
cp $UPDATE_DIR/salt/common/tools/sbin/soup $default_salt_dir/salt/common/tools/sbin/
salt-call state.apply common queue=True
echo ""
echo "soup has been updated. Please run soup again"
exit 0
fi
}
echo "Checking to see if this is a manager"
manager_check
echo "Cloning latest code to a temporary location"
clone_to_tmp clone_to_tmp
cd /tmp/soup/securityonion-saltstack/update echo ""
chmod +x soup echo "Verifying we have the latest script"
./soup verify_latest_update_script
echo ""
echo "Let's see if we need to update"
upgrade_check
echo ""
echo "Making pillar changes"
pillar_changes
echo ""
echo "Cleaning up old dockers"
clean_dockers
echo ""
echo "Updating docker to $NEWVERSION"
update_dockers
echo ""
echo "Copying new code"
copy_new_files
echo ""
echo "Running a highstate to complete upgrade"
highstate
echo ""
echo "Updating version"
update_version
echo ""
echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-beats:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-beats:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-firewall:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-firewall:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-ids:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-ids:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-import:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-import:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-osquery:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-osquery:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-ossec:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-ossec:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-strelka:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-strelka:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-syslog:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-syslog:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,4 +1,4 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settins:so-zeek:close', 30) -%} {%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-zeek:close', 30) -%}
--- ---
# Remember, leave a key empty if there is no value. None will be a string, # Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType" # not a Python "NoneType"

View File

@@ -1,2 +1,2 @@
#!/bin/bash #!/bin/bash
/usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /dev/null 2>&1 /usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1

View File

@@ -1,4 +1,5 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-eval', 'so-node', 'so-managersearch', 'so-heavynode', 'so-standalone'] %} {% if grains['role'] in ['so-eval', 'so-node', 'so-managersearch', 'so-heavynode', 'so-standalone'] %}
# Curator # Curator
@@ -111,7 +112,7 @@ so-curatordeletecron:
so-curator: so-curator:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-curator:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-curator:{{ VERSION }}
- hostname: curator - hostname: curator
- name: so-curator - name: so-curator
- user: curator - user: curator

View File

@@ -1,2 +0,0 @@
#!/bin/bash
/usr/bin/docker exec so-bro /opt/bro/bin/broctl netstats | awk '{print $(NF-2),$(NF-1),$NF}' | awk -F '[ =]' '{RCVD += $2;DRP += $4;TTL += $6} END { print "rcvd: " RCVD, "dropped: " DRP, "total: " TTL}' >> /nsm/bro/logs/packetloss.log

View File

@@ -1,64 +0,0 @@
#!/bin/bash
# Delete Zeek Logs based on defined CRIT_DISK_USAGE value
# Copyright 2014,2015,2016,2017,2018, 2019 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
clean () {
SENSOR_DIR='/nsm'
CRIT_DISK_USAGE=90
CUR_USAGE=$(df -P $SENSOR_DIR | tail -1 | awk '{print $5}' | tr -d %)
LOG="/nsm/bro/logs/zeek_clean.log"
if [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ]; then
while [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ];
do
TODAY=$(date -u "+%Y-%m-%d")
# find the oldest Zeek logs directory and exclude today
OLDEST_DIR=$(ls /nsm/bro/logs/ | grep -v "current" | grep -v "stats" | grep -v "packetloss" | grep -v "zeek_clean" | sort | grep -v $TODAY | head -n 1)
if [ -z "$OLDEST_DIR" -o "$OLDEST_DIR" == ".." -o "$OLDEST_DIR" == "." ]
then
echo "$(date) - No old Zeek logs available to clean up in /nsm/bro/logs/" >> $LOG
exit 0
else
echo "$(date) - Removing directory: /nsm/bro/logs/$OLDEST_DIR" >> $LOG
rm -rf /nsm/bro/logs/"$OLDEST_DIR"
fi
# find oldest files in extracted directory and exclude today
OLDEST_EXTRACT=$(find /nsm/bro/extracted -type f -printf '%T+ %p\n' 2>/dev/null | sort | grep -v $TODAY | head -n 1)
if [ -z "$OLDEST_EXTRACT" -o "$OLDEST_EXTRACT" == ".." -o "$OLDEST_EXTRACT" == "." ]
then
echo "$(date) - No old extracted files available to clean up in /nsm/bro/extracted/" >> $LOG
else
OLDEST_EXTRACT_DATE=`echo $OLDEST_EXTRACT | awk '{print $1}' | cut -d+ -f1`
OLDEST_EXTRACT_FILE=`echo $OLDEST_EXTRACT | awk '{print $2}'`
echo "$(date) - Removing extracted files for $OLDEST_EXTRACT_DATE" >> $LOG
find /nsm/bro/extracted -type f -printf '%T+ %p\n' | grep $OLDEST_EXTRACT_DATE | awk '{print $2}' |while read FILE
do
echo "$(date) - Removing extracted file: $FILE" >> $LOG
rm -f "$FILE"
done
fi
done
else
echo "$(date) - CRIT_DISK_USAGE value of $CRIT_DISK_USAGE not greater than current usage of $CUR_USAGE..." >> $LOG
fi
}
clean

View File

@@ -1,139 +0,0 @@
##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!
# This script logs which scripts were loaded during each run.
@load misc/loaded-scripts
# Apply the default tuning scripts for common tuning settings.
@load tuning/defaults
# Estimate and log capture loss.
@load misc/capture-loss
# Enable logging of memory, packet and lag statistics.
@load misc/stats
# Load the scan detection script.
@load misc/scan
# Detect traceroute being run on the network. This could possibly cause
# performance trouble when there are a lot of traceroutes on your network.
# Enable cautiously.
#@load misc/detect-traceroute
# Generate notices when vulnerable versions of software are discovered.
# The default is to only monitor software found in the address space defined
# as "local". Refer to the software framework's documentation for more
# information.
@load frameworks/software/vulnerable
# Detect software changing (e.g. attacker installing hacked SSHD).
@load frameworks/software/version-changes
# This adds signatures to detect cleartext forward and reverse windows shells.
@load-sigs frameworks/signatures/detect-windows-shells
# Load all of the scripts that detect software in various protocols.
@load protocols/ftp/software
@load protocols/smtp/software
@load protocols/ssh/software
@load protocols/http/software
# The detect-webapps script could possibly cause performance trouble when
# running on live traffic. Enable it cautiously.
#@load protocols/http/detect-webapps
# This script detects DNS results pointing toward your Site::local_nets
# where the name is not part of your local DNS zone and is being hosted
# externally. Requires that the Site::local_zones variable is defined.
@load protocols/dns/detect-external-names
# Script to detect various activity in FTP sessions.
@load protocols/ftp/detect
# Scripts that do asset tracking.
@load protocols/conn/known-hosts
@load protocols/conn/known-services
@load protocols/ssl/known-certs
# This script enables SSL/TLS certificate validation.
@load protocols/ssl/validate-certs
# This script prevents the logging of SSL CA certificates in x509.log
@load protocols/ssl/log-hostcerts-only
# Uncomment the following line to check each SSL certificate hash against the ICSI
# certificate notary service; see http://notary.icsi.berkeley.edu .
# @load protocols/ssl/notary
# If you have libGeoIP support built in, do some geographic detections and
# logging for SSH traffic.
@load protocols/ssh/geo-data
# Detect hosts doing SSH bruteforce attacks.
@load protocols/ssh/detect-bruteforcing
# Detect logins using "interesting" hostnames.
@load protocols/ssh/interesting-hostnames
# Detect SQL injection attacks.
@load protocols/http/detect-sqli
#### Network File Handling ####
# Enable MD5 and SHA1 hashing for all files.
@load frameworks/files/hash-all-files
# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
@load frameworks/files/detect-MHR
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed
# Uncomment the following line to enable logging of connection VLANs. Enabling
# this adds two VLAN fields to the conn.log file. This may not work properly
# since we use AF_PACKET and it strips VLAN tags.
# @load policy/protocols/conn/vlan-logging
# Uncomment the following line to enable logging of link-layer addresses. Enabling
# this adds the link-layer address for each connection endpoint to the conn.log file.
# @load policy/protocols/conn/mac-logging
# Uncomment the following line to enable the SMB analyzer. The analyzer
# is currently considered a preview and therefore not loaded by default.
@load base/protocols/smb
# BPF Configuration
@load securityonion/bpfconf
# Add the interface to the log event
#@load securityonion/add-interface-to-logs.bro
# Add Sensor Name to the conn.log
#@load securityonion/conn-add-sensorname.bro
# File Extraction
#@load securityonion/file-extraction
# Intel from Mandiant APT1 Report
#@load securityonion/apt1
# ShellShock - detects successful exploitation of Bash vulnerability CVE-2014-6271
#@load securityonion/shellshock
# JA3 - SSL Detection Goodness
@load policy/ja3
# HASSH
@load policy/hassh
# You can load your own intel into:
# /opt/so/saltstack/bro/policy/intel/ on the manager
@load intel
# Load a custom Bro policy
# /opt/so/saltstack/bro/policy/custom/ on the manager
#@load custom/somebropolicy.bro
# Write logs in JSON
redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;

View File

@@ -1,133 +0,0 @@
##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!
# This script logs which scripts were loaded during each run.
@load misc/loaded-scripts
# Apply the default tuning scripts for common tuning settings.
@load tuning/defaults
# Estimate and log capture loss.
@load misc/capture-loss
# Enable logging of memory, packet and lag statistics.
@load misc/stats
# Load the scan detection script.
@load misc/scan
# Detect traceroute being run on the network. This could possibly cause
# performance trouble when there are a lot of traceroutes on your network.
# Enable cautiously.
#@load misc/detect-traceroute
# Generate notices when vulnerable versions of software are discovered.
# The default is to only monitor software found in the address space defined
# as "local". Refer to the software framework's documentation for more
# information.
@load frameworks/software/vulnerable
# Detect software changing (e.g. attacker installing hacked SSHD).
@load frameworks/software/version-changes
# This adds signatures to detect cleartext forward and reverse windows shells.
@load-sigs frameworks/signatures/detect-windows-shells
# Load all of the scripts that detect software in various protocols.
@load protocols/ftp/software
@load protocols/smtp/software
@load protocols/ssh/software
@load protocols/http/software
# The detect-webapps script could possibly cause performance trouble when
# running on live traffic. Enable it cautiously.
#@load protocols/http/detect-webapps
# This script detects DNS results pointing toward your Site::local_nets
# where the name is not part of your local DNS zone and is being hosted
# externally. Requires that the Site::local_zones variable is defined.
@load protocols/dns/detect-external-names
# Script to detect various activity in FTP sessions.
@load protocols/ftp/detect
# Scripts that do asset tracking.
@load protocols/conn/known-hosts
@load protocols/conn/known-services
@load protocols/ssl/known-certs
# This script enables SSL/TLS certificate validation.
@load protocols/ssl/validate-certs
# This script prevents the logging of SSL CA certificates in x509.log
@load protocols/ssl/log-hostcerts-only
# Uncomment the following line to check each SSL certificate hash against the ICSI
# certificate notary service; see http://notary.icsi.berkeley.edu .
# @load protocols/ssl/notary
# If you have libGeoIP support built in, do some geographic detections and
# logging for SSH traffic.
@load protocols/ssh/geo-data
# Detect hosts doing SSH bruteforce attacks.
@load protocols/ssh/detect-bruteforcing
# Detect logins using "interesting" hostnames.
@load protocols/ssh/interesting-hostnames
# Detect SQL injection attacks.
@load protocols/http/detect-sqli
#### Network File Handling ####
# Enable MD5 and SHA1 hashing for all files.
@load frameworks/files/hash-all-files
# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
@load frameworks/files/detect-MHR
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed
# Uncomment the following line to enable logging of connection VLANs. Enabling
# this adds two VLAN fields to the conn.log file. This may not work properly
# since we use AF_PACKET and it strips VLAN tags.
# @load policy/protocols/conn/vlan-logging
# Uncomment the following line to enable logging of link-layer addresses. Enabling
# this adds the link-layer address for each connection endpoint to the conn.log file.
# @load policy/protocols/conn/mac-logging
# Uncomment the following line to enable the SMB analyzer. The analyzer
# is currently considered a preview and therefore not loaded by default.
# @load policy/protocols/smb
# Add the interface to the log event
#@load securityonion/add-interface-to-logs.bro
# Add Sensor Name to the conn.log
#@load securityonion/conn-add-sensorname.bro
# File Extraction
#@load securityonion/file-extraction
# Intel from Mandiant APT1 Report
#@load securityonion/apt1
# ShellShock - detects successful exploitation of Bash vulnerability CVE-2014-6271
#@load securityonion/shellshock
# JA3 - SSL Detection Goodness
@load policy/ja3
# You can load your own intel into:
# /opt/so/saltstack/bro/policy/intel/ on the manager
@load intel
# Load a custom Bro policy
# /opt/so/saltstack/bro/policy/custom/ on the manager
#@load custom/somebropolicy.bro
# Use JSON
redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;

View File

@@ -1,47 +0,0 @@
{%- set interface = salt['pillar.get']('sensor:interface', 'bond0') %}
{%- if salt['pillar.get']('sensor:zeek_pins') or salt['pillar.get']('sensor:zeek_lbprocs') %}
{%- if salt['pillar.get']('sensor:zeek_proxies') %}
{%- set proxies = salt['pillar.get']('sensor:zeek_proxies', '1') %}
{%- else %}
{%- if salt['pillar.get']('sensor:zeek_pins') %}
{%- set proxies = (salt['pillar.get']('sensor:zeek_pins')|length/10)|round(0, 'ceil')|int %}
{%- else %}
{%- set proxies = (salt['pillar.get']('sensor:zeek_lbprocs')/10)|round(0, 'ceil')|int %}
{%- endif %}
{%- endif %}
[manager]
type=manager
host=localhost
[logger]
type=logger
host=localhost
[proxy]
type=proxy
host=localhost
[worker-1]
type=worker
host=localhost
interface=af_packet::{{ interface }}
lb_method=custom
{%- if salt['pillar.get']('sensor:zeek_lbprocs') %}
lb_procs={{ salt['pillar.get']('sensor:zeek_lbprocs', '1') }}
{%- else %}
lb_procs={{ salt['pillar.get']('sensor:zeek_pins')|length }}
{%- endif %}
{%- if salt['pillar.get']('sensor:zeek_pins') %}
pin_cpus={{ salt['pillar.get']('sensor:zeek_pins')|join(", ") }}
{%- endif %}
af_packet_fanout_id=23
af_packet_fanout_mode=AF_Packet::FANOUT_HASH
af_packet_buffer_size=128*1024*1024
{%- else %}
[brosa]
type=standalone
host=localhost
interface={{ interface }}
{%- endif %}

View File

@@ -1,206 +0,0 @@
{% set interface = salt['pillar.get']('sensor:interface', 'bond0') %}
{% set BPF_ZEEK = salt['pillar.get']('zeek:bpf') %}
{% set BPF_STATUS = 0 %}
# Bro Salt State
# Add Bro group
brogroup:
group.present:
- name: bro
- gid: 937
# Add Bro User
bro:
user.present:
- uid: 937
- gid: 937
- home: /home/bro
# Create some directories
bropolicydir:
file.directory:
- name: /opt/so/conf/bro/policy
- user: 937
- group: 939
- makedirs: True
# Bro Log Directory
brologdir:
file.directory:
- name: /nsm/bro/logs
- user: 937
- group: 939
- makedirs: True
# Bro Spool Directory
brospooldir:
file.directory:
- name: /nsm/bro/spool/manager
- user: 937
- makedirs: true
# Bro extracted directory
broextractdir:
file.directory:
- name: /nsm/bro/extracted
- user: 937
- group: 939
- makedirs: True
brosfafincompletedir:
file.directory:
- name: /nsm/faf/files/incomplete
- user: 937
- makedirs: true
brosfafcompletedir:
file.directory:
- name: /nsm/faf/files/complete
- user: 937
- makedirs: true
# Sync the policies
bropolicysync:
file.recurse:
- name: /opt/so/conf/bro/policy
- source: salt://bro/policy
- user: 937
- group: 939
- template: jinja
# Sync node.cfg
nodecfgsync:
file.managed:
- name: /opt/so/conf/bro/node.cfg
- source: salt://bro/files/node.cfg
- user: 937
- group: 939
- template: jinja
plcronscript:
file.managed:
- name: /usr/local/bin/packetloss.sh
- source: salt://bro/cron/packetloss.sh
- mode: 755
zeekcleanscript:
file.managed:
- name: /usr/local/bin/zeek_clean
- source: salt://bro/cron/zeek_clean
- mode: 755
/usr/local/bin/zeek_clean:
cron.present:
- user: root
- minute: '*'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
/usr/local/bin/packetloss.sh:
cron.present:
- user: root
- minute: '*/10'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
# BPF compilation and configuration
{% if BPF_ZEEK %}
{% set BPF_CALC = salt['cmd.script']('/usr/sbin/so-bpf-compile', interface + ' ' + BPF_ZEEK|join(" ") ) %}
{% if BPF_CALC['stderr'] == "" %}
{% set BPF_STATUS = 1 %}
{% else %}
zeekbpfcompilationfailure:
test.configurable_test_state:
- changes: False
- result: False
- comment: "BPF Syntax Error - Discarding Specified BPF"
{% endif %}
{% endif %}
zeekbpf:
file.managed:
- name: /opt/so/conf/bro/bpf
- user: 940
- group: 940
{% if BPF_STATUS %}
- contents_pillar: zeek:bpf
{% else %}
- contents:
- "ip or not ip"
{% endif %}
# Sync local.bro
{% if salt['pillar.get']('static:broversion', '') == 'COMMUNITY' %}
localbrosync:
file.managed:
- name: /opt/so/conf/bro/local.bro
- source: salt://bro/files/local.bro.community
- user: 937
- group: 939
- template: jinja
so-communitybroimage:
cmd.run:
- name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-communitybro:HH1.0.3
so-bro:
docker_container.running:
- require:
- so-communitybroimage
- image: docker.io/soshybridhunter/so-communitybro:HH1.0.3
- privileged: True
- binds:
- /nsm/bro/logs:/nsm/bro/logs:rw
- /nsm/bro/spool:/nsm/bro/spool:rw
- /nsm/bro/extracted:/nsm/bro/extracted:rw
- /opt/so/conf/bro/local.bro:/opt/bro/share/bro/site/local.bro:ro
- /opt/so/conf/bro/node.cfg:/opt/bro/etc/node.cfg:ro
- /opt/so/conf/bro/policy/securityonion:/opt/bro/share/bro/policy/securityonion:ro
- /opt/so/conf/bro/policy/custom:/opt/bro/share/bro/policy/custom:ro
- /opt/so/conf/bro/policy/intel:/opt/bro/share/bro/policy/intel:rw
- network_mode: host
- watch:
- file: /opt/so/conf/bro/local.bro
- file: /opt/so/conf/bro/node.cfg
- file: /opt/so/conf/bro/policy
{% else %}
localbrosync:
file.managed:
- name: /opt/so/conf/bro/local.bro
- source: salt://bro/files/local.bro
- user: 937
- group: 939
- template: jinja
so-broimage:
cmd.run:
- name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-bro:HH1.1.1
so-bro:
docker_container.running:
- require:
- so-broimage
- image: docker.io/soshybridhunter/so-bro:HH1.1.1
- privileged: True
- binds:
- /nsm/bro/logs:/nsm/bro/logs:rw
- /nsm/bro/spool:/nsm/bro/spool:rw
- /nsm/bro/extracted:/nsm/bro/extracted:rw
- /opt/so/conf/bro/local.bro:/opt/bro/share/bro/site/local.bro:ro
- /opt/so/conf/bro/node.cfg:/opt/bro/etc/node.cfg:ro
- /opt/so/conf/bro/bpf:/opt/bro/share/bro/site/bpf:ro
- /opt/so/conf/bro/policy/securityonion:/opt/bro/share/bro/policy/securityonion:ro
- /opt/so/conf/bro/policy/custom:/opt/bro/share/bro/policy/custom:ro
- /opt/so/conf/bro/policy/intel:/opt/bro/share/bro/policy/intel:rw
- network_mode: host
- watch:
- file: /opt/so/conf/bro/local.bro
- file: /opt/so/conf/bro/node.cfg
- file: /opt/so/conf/bro/policy
- file: /opt/so/conf/bro/bpf
{% endif %}

View File

@@ -1 +0,0 @@
#Intel

View File

@@ -1,20 +0,0 @@
{%- set interface = salt['pillar.get']('sensor:interface', '0') %}
global interface = "{{ interface }}";
event bro_init()
{
if ( ! reading_live_traffic() )
return;
Log::remove_default_filter(HTTP::LOG);
Log::add_filter(HTTP::LOG, [$name = "http-interfaces",
$path_func(id: Log::ID, path: string, rec: HTTP::Info) =
{
local peer = get_event_peer()$descr;
if ( peer in Cluster::nodes && Cluster::nodes[peer]?$interface )
return cat("http_", Cluster::nodes[peer]$interface);
else
return "http";
}
]);
}

View File

@@ -1,9 +0,0 @@
@load frameworks/intel/seen
@load frameworks/intel/do_notice
@load frameworks/files/hash-all-files
redef Intel::read_files += {
fmt("%s/apt1-fqdn.dat", @DIR),
fmt("%s/apt1-md5.dat", @DIR),
fmt("%s/apt1-certs.dat", @DIR)
};

View File

@@ -1,26 +0,0 @@
#fields indicator indicator_type meta.source meta.desc meta.do_notice
b054e26ef827fbbf5829f84a9bdbb697a5b042fc Intel::CERT_HASH Mandiant APT1 Report ALPHA T
7bc0cc2cf7c3a996c32dbe7e938993f7087105b4 Intel::CERT_HASH Mandiant APT1 Report AOL T
7855c132af1390413d4e4ff4ead321f8802d8243 Intel::CERT_HASH Mandiant APT1 Report AOL T
f3e3c590d7126bd227733e9d8313d2575c421243 Intel::CERT_HASH Mandiant APT1 Report AOL T
d4d4e896ce7d73b573f0a0006080a246aec61fe7 Intel::CERT_HASH Mandiant APT1 Report AOL T
bcdf4809c1886ac95478bbafde246d0603934298 Intel::CERT_HASH Mandiant APT1 Report AOL T
6b4855df8afc8d57a671fe5ed628f6d88852a922 Intel::CERT_HASH Mandiant APT1 Report AOL T
d50fdc82c328319ac60f256d3119b8708cd5717b Intel::CERT_HASH Mandiant APT1 Report AOL T
70b48d5177eebe9c762e9a37ecabebfd10e1b7e9 Intel::CERT_HASH Mandiant APT1 Report AOL T
3a6a299b764500ce1b6e58a32a257139d61a3543 Intel::CERT_HASH Mandiant APT1 Report AOL T
bf4f90e0029b2263af1141963ddf2a0c71a6b5fb Intel::CERT_HASH Mandiant APT1 Report AOL T
b21139583dec0dae344cca530690ec1f344acc79 Intel::CERT_HASH Mandiant APT1 Report AOL T
21971ffef58baf6f638df2f7e2cceb4c58b173c8 Intel::CERT_HASH Mandiant APT1 Report EMAIL T
04ecff66973c92a1c348666d5a4738557cce0cfc Intel::CERT_HASH Mandiant APT1 Report IBM T
f97d1a703aec44d0f53a3a294e33acda43a49de1 Intel::CERT_HASH Mandiant APT1 Report IBM T
c0d32301a7c96ecb0bc8e381ec19e6b4eaf5d2fe Intel::CERT_HASH Mandiant APT1 Report IBM T
1b27a897cda019da2c3a6dc838761871e8bf5b5d Intel::CERT_HASH Mandiant APT1 Report LAME T
d515996e8696612dc78fc6db39006466fc6550df Intel::CERT_HASH Mandiant APT1 Report MOON-NIGHT T
8f79315659e59c79f1301ef4aee67b18ae2d9f1c Intel::CERT_HASH Mandiant APT1 Report NONAME T
a57a84975e31e376e3512da7b05ad06ef6441f53 Intel::CERT_HASH Mandiant APT1 Report NS T
b3db37a0edde97b3c3c15da5f2d81d27af82f583 Intel::CERT_HASH Mandiant APT1 Report SERVER (PEM) T
6d8f1454f6392361fb2464b744d4fc09eee5fcfd Intel::CERT_HASH Mandiant APT1 Report SUR T
b66e230f404b2cc1c033ccacda5d0a14b74a2752 Intel::CERT_HASH Mandiant APT1 Report VIRTUALLYTHERE T
4acbadb86a91834493dde276736cdf8f7ef5d497 Intel::CERT_HASH Mandiant APT1 Report WEBMAIL T
86a48093d9b577955c4c9bd19e30536aae5543d4 Intel::CERT_HASH Mandiant APT1 Report YAHOO T

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,106 +0,0 @@
##! This script is to support the bpf.conf file like other network monitoring tools use.
##! Please don't try to learn from this script right now, there are a large number of
##! hacks in it to work around bugs discovered in Bro.
@load base/frameworks/notice
module BPFConf;
export {
## The file that is watched on disk for BPF filter changes.
## Two templated variables are available; "sensorname" and "interface".
## They can be used by surrounding the term by doubled curly braces.
const filename = "/opt/bro/share/bro/site/bpf" &redef;
redef enum Notice::Type += {
## Invalid filter notice.
InvalidFilter
};
}
global filter_parts: vector of string = vector();
global current_filter_filename = "";
type FilterLine: record {
s: string;
};
redef enum PcapFilterID += {
BPFConfPcapFilter,
};
event BPFConf::line(description: Input::EventDescription, tpe: Input::Event, s: string)
{
local part = sub(s, /[[:blank:]]*#.*$/, "");
# We don't want any blank parts.
if ( part != "" )
filter_parts[|filter_parts|] = part;
}
event Input::end_of_data(name: string, source:string)
{
if ( name == "bpfconf" )
{
local filter = join_string_vec(filter_parts, " ");
capture_filters["bpf.conf"] = filter;
if ( Pcap::precompile_pcap_filter(BPFConfPcapFilter, filter) )
{
PacketFilter::install();
}
else
{
NOTICE([$note=InvalidFilter,
$msg=fmt("Compiling packet filter from %s failed", filename),
$sub=filter]);
}
filter_parts=vector();
}
}
function add_filter_file()
{
local real_filter_filename = BPFConf::filename;
# Support the interface template value.
#if ( SecurityOnion::sensorname != "" )
# real_filter_filename = gsub(real_filter_filename, /\{\{sensorname\}\}/, SecurityOnion::sensorname);
# Support the interface template value.
#if ( SecurityOnion::interface != "" )
# real_filter_filename = gsub(real_filter_filename, /\{\{interface\}\}/, SecurityOnion::interface);
#if ( /\{\{/ in real_filter_filename )
# {
# return;
# }
#else
# Reporter::info(fmt("BPFConf filename set: %s (%s)", real_filter_filename, Cluster::node));
if ( real_filter_filename != current_filter_filename )
{
current_filter_filename = real_filter_filename;
Input::add_event([$source=real_filter_filename,
$name="bpfconf",
$reader=Input::READER_RAW,
$mode=Input::REREAD,
$want_record=F,
$fields=FilterLine,
$ev=BPFConf::line]);
}
}
#event SecurityOnion::found_sensorname(name: string)
# {
# add_filter_file();
# }
event bro_init() &priority=5
{
if ( BPFConf::filename != "" )
add_filter_file();
}
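For context on the removed script above: BPFConf watches a bpf.conf-style file, strips trailing comments from each line, and joins the surviving fragments into one capture filter before precompiling it. A rough Python sketch of that comment-stripping and joining step (the sample filter lines are illustrative, not from this repo):

```python
import re

def build_bpf_filter(lines):
    """Mimic BPFConf::line / Input::end_of_data: drop comments, join the rest."""
    parts = []
    for line in lines:
        # Same idea as sub(s, /[[:blank:]]*#.*$/, ""): strip a trailing comment.
        part = re.sub(r"[ \t]*#.*$", "", line)
        if part != "":                      # skip lines that were only comments
            parts.append(part)
    return " ".join(parts)                  # join_string_vec(filter_parts, " ")

sample = [
    "# example bpf.conf contents (hypothetical)",
    "not (host 10.0.0.5 and port 873) and",
    "not port 9200   # skip Elasticsearch traffic",
]
print(build_bpf_filter(sample))
# not (host 10.0.0.5 and port 873) and not port 9200
```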

View File

@@ -1,10 +0,0 @@
global sensorname = "{{ grains.host }}";
redef record Conn::Info += {
sensorname: string &log &optional;
};
event connection_state_remove(c: connection)
{
c$conn$sensorname = sensorname;
}
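This removed extension is itself a Jinja template: Salt fills in {{ grains.host }} with the minion's hostname before the script reaches Zeek, and the rendered script then stamps that name onto every conn record. A tiny sketch of the substitution, using a hypothetical hostname:

```python
# Hypothetical minion hostname; the real value comes from the Salt 'host' grain.
grains = {"host": "sensor01"}

template = 'global sensorname = "{{ grains.host }}";'
rendered = template.replace("{{ grains.host }}", grains["host"])
print(rendered)   # global sensorname = "sensor01";
```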

View File

@@ -1,21 +0,0 @@
global ext_map: table[string] of string = {
["application/x-dosexec"] = "exe",
["text/plain"] = "txt",
["image/jpeg"] = "jpg",
["image/png"] = "png",
["text/html"] = "html",
} &default ="";
event file_sniff(f: fa_file, meta: fa_metadata)
{
if ( ! meta?$mime_type || meta$mime_type != "application/x-dosexec" )
return;
local ext = "";
if ( meta?$mime_type )
ext = ext_map[meta$mime_type];
local fname = fmt("/nsm/bro/extracted/%s-%s.%s", f$source, f$id, ext);
Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
}
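The removed extraction policy above only ever extracts Windows executables: the early return rejects everything that is not application/x-dosexec, and the ext_map lookup then picks the extension used in the extracted filename. A small Python sketch of the filename it builds (the source and file ID values are illustrative):

```python
ext_map = {
    "application/x-dosexec": "exe",
    "text/plain": "txt",
    "image/jpeg": "jpg",
    "image/png": "png",
    "text/html": "html",
}

def extracted_filename(source, file_id, mime_type):
    """Mirror the removed file_sniff handler's filename construction."""
    if mime_type != "application/x-dosexec":
        return None                                  # the early return in Zeek
    ext = ext_map.get(mime_type, "")
    return f"/nsm/bro/extracted/{source}-{file_id}.{ext}"

# Hypothetical values for illustration only.
print(extracted_filename("HTTP", "F1a2b3c4d5", "application/x-dosexec"))
# /nsm/bro/extracted/HTTP-F1a2b3c4d5.exe
```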

View File

@@ -1,3 +0,0 @@
@load tuning/json-logs
redef LogAscii::json_timestamps = JSON::TS_ISO8601;
redef LogAscii::use_json = T;

View File

@@ -13,6 +13,8 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
# Create the group # Create the group
dstatsgroup: dstatsgroup:
group.present: group.present:
@@ -37,13 +39,13 @@ dstatslogdir:
so-domainstatsimage: so-domainstatsimage:
cmd.run: cmd.run:
- name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-domainstats:HH1.0.3 - name: docker pull --disable-content-trust=false docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
so-domainstats: so-domainstats:
docker_container.running: docker_container.running:
- require: - require:
- so-domainstatsimage - so-domainstatsimage
- image: docker.io/soshybridhunter/so-domainstats:HH1.0.3 - image: docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
- hostname: domainstats - hostname: domainstats
- name: so-domainstats - name: so-domainstats
- user: domainstats - user: domainstats
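This is the first of many hunks that replace the hard-coded soshybridhunter namespace with the static:imagerepo pillar value, so the Docker registry namespace is defined in one place. A rough sketch of the substitution Salt performs when rendering the state (the pillar value shown is an assumption, not taken from this diff):

```python
# Hypothetical pillar data; the real value comes from pillar key static:imagerepo.
pillar = {"static": {"imagerepo": "securityonion"}}

IMAGEREPO = pillar["static"]["imagerepo"]
print(f"docker.io/{IMAGEREPO}/so-domainstats:HH1.0.3")
# docker.io/securityonion/so-domainstats:HH1.0.3
```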

View File

@@ -13,6 +13,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone'] %} {% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone'] %}
@@ -101,7 +102,7 @@ elastaconf:
so-elastalert: so-elastalert:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-elastalert:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elastalert:{{ VERSION }}
- hostname: elastalert - hostname: elastalert
- name: so-elastalert - name: so-elastalert
- user: elastalert - user: elastalert

View File

@@ -3,16 +3,27 @@
"processors" : [ "processors" : [
{"community_id": {"if": "ctx.winlog.event_data?.Protocol != null", "field":["winlog.event_data.SourceIp","winlog.event_data.SourcePort","winlog.event_data.DestinationIp","winlog.event_data.DestinationPort","winlog.event_data.Protocol"],"target_field":"network.community_id"}}, {"community_id": {"if": "ctx.winlog.event_data?.Protocol != null", "field":["winlog.event_data.SourceIp","winlog.event_data.SourcePort","winlog.event_data.DestinationIp","winlog.event_data.DestinationPort","winlog.event_data.Protocol"],"target_field":"network.community_id"}},
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "field": "event.module", "value": "sysmon", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "field": "event.module", "value": "sysmon", "override": true } },
{ "set": { "if": "ctx.winlog?.channel!= null", "field": "event.module", "value": "win_eventlog", "override": true, "ignore_failure": true } }, { "set": { "if": "ctx.winlog?.channel != null", "field": "event.module", "value": "windows_eventlog", "override": false, "ignore_failure": true } },
{ "set": { "if": "ctx.winlog?.channel != null", "field": "dataset", "value": "{{winlog.channel}}", "override": true } },
{ "set": { "if": "ctx.agent?.type != null", "field": "module", "value": "{{agent.type}}", "override": true } }, { "set": { "if": "ctx.agent?.type != null", "field": "module", "value": "{{agent.type}}", "override": true } },
{ "set": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "field": "event.dataset", "value": "{{winlog.channel}}", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.category", "value": "host,process,network", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.category", "value": "host,process,network", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.category", "value": "host,process", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.category", "value": "host,process", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.dataset", "value": "process_creation", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.dataset", "value": "process_creation", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 2", "field": "event.dataset", "value": "process_changed_file", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 2", "field": "event.dataset", "value": "process_changed_file", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.dataset", "value": "network_connection", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.dataset", "value": "network_connection", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 5", "field": "event.dataset", "value": "driver_loaded", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 5", "field": "event.dataset", "value": "process_terminated", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 6", "field": "event.dataset", "value": "image_loaded", "override": true } }, { "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 6", "field": "event.dataset", "value": "driver_loaded", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 7", "field": "event.dataset", "value": "image_loaded", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 8", "field": "event.dataset", "value": "create_remote_thread", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 9", "field": "event.dataset", "value": "raw_file_access_read", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 10", "field": "event.dataset", "value": "process_access", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 11", "field": "event.dataset", "value": "file_create", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 12", "field": "event.dataset", "value": "registry_create_delete", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 13", "field": "event.dataset", "value": "registry_value_set", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 14", "field": "event.dataset", "value": "registry_key_value_rename", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 15", "field": "event.dataset", "value": "file_create_stream_hash", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 16", "field": "event.dataset", "value": "config_change", "override": true } },
{ "set": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 22", "field": "event.dataset", "value": "dns_query", "override": true } },
{ "rename": { "field": "agent.hostname", "target_field": "agent.name", "ignore_missing": true } }, { "rename": { "field": "agent.hostname", "target_field": "agent.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.SubjectUserName", "target_field": "user.name", "ignore_missing": true } }, { "rename": { "field": "winlog.event_data.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "winlog.event_data.DestinationHostname", "target_field": "destination.hostname", "ignore_missing": true } }, { "rename": { "field": "winlog.event_data.DestinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },

View File

@@ -37,14 +37,15 @@
"index_name_format": "yyyy.MM.dd" "index_name_format": "yyyy.MM.dd"
} }
}, },
{ "set": { "if": "ctx.event?.severity == 3", "field": "event.severity_label", "value": "low", "override": true } }, { "set": { "if": "ctx.event?.severity == 1", "field": "event.severity_label", "value": "low", "override": true } },
{ "set": { "if": "ctx.event?.severity == 5", "field": "event.severity_label", "value": "medium", "override": true } }, { "set": { "if": "ctx.event?.severity == 2", "field": "event.severity_label", "value": "medium", "override": true } },
{ "set": { "if": "ctx.event?.severity == 7", "field": "event.severity_label", "value": "high", "override": true } }, { "set": { "if": "ctx.event?.severity == 3", "field": "event.severity_label", "value": "high", "override": true } },
{ "set": { "if": "ctx.event?.severity == 10", "field": "event.severity_label", "value": "critical", "override": true } }, { "set": { "if": "ctx.event?.severity == 4", "field": "event.severity_label", "value": "critical", "override": true } },
{ "rename": { "field": "module", "target_field": "event.module", "ignore_failure": true, "ignore_missing": true } }, { "rename": { "field": "module", "target_field": "event.module", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "dataset", "target_field": "event.dataset", "ignore_failure": true, "ignore_missing": true } }, { "rename": { "field": "dataset", "target_field": "event.dataset", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "category", "target_field": "event.category", "ignore_missing": true } }, { "rename": { "field": "category", "target_field": "event.category", "ignore_missing": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } }, { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } },
{ "lowercase": { "field": "event.dataset", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "destination.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "destination.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "source.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "source.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "log.id.uid", "type": "string", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "log.id.uid", "type": "string", "ignore_failure": true, "ignore_missing": true } },

View File

@@ -31,7 +31,25 @@
{ "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } }, { "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } }, { "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } }, { "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } },
{ "set": { "if": "ctx.message3.columns?.data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } }, { "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational'", "field": "event.module", "value": "sysmon", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source != null", "field": "event.module", "value": "windows_eventlog", "override": false, "ignore_failure": true } },
{ "set": { "if": "ctx.message3.columns?.source != 'Microsoft-Windows-Sysmon/Operational'", "field": "event.dataset", "value": "{{message3.columns.source}}", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 1", "field": "event.dataset", "value": "process_creation", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 2", "field": "event.dataset", "value": "process_changed_file", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 3", "field": "event.dataset", "value": "network_connection", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 5", "field": "event.dataset", "value": "process_terminated", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 6", "field": "event.dataset", "value": "driver_loaded", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 7", "field": "event.dataset", "value": "image_loaded", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 8", "field": "event.dataset", "value": "create_remote_thread", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 9", "field": "event.dataset", "value": "raw_file_access_read", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 10", "field": "event.dataset", "value": "process_access", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 11", "field": "event.dataset", "value": "file_create", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 12", "field": "event.dataset", "value": "registry_create_delete", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 13", "field": "event.dataset", "value": "registry_value_set", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 14", "field": "event.dataset", "value": "registry_key_value_rename", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 15", "field": "event.dataset", "value": "file_create_stream_hash", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 16", "field": "event.dataset", "value": "config_change", "override": true } },
{ "set": { "if": "ctx.message3.columns?.source == 'Microsoft-Windows-Sysmon/Operational' && ctx.event?.code == 22", "field": "event.dataset", "value": "dns_query", "override": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } },

View File

@@ -19,6 +19,7 @@
} }
} }
}, },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" }},
{ "remove": { "field": ["host", "path", "message", "scan.exiftool.keys"], "ignore_missing": true } }, { "remove": { "field": ["host", "path", "message", "scan.exiftool.keys"], "ignore_missing": true } },
{ "pipeline": { "name": "common" } } { "pipeline": { "name": "common" } }
] ]

View File

@@ -12,6 +12,8 @@
{ "remove":{ "field": "dataset", "ignore_failure": true } }, { "remove":{ "field": "dataset", "ignore_failure": true } },
{ "rename":{ "field": "message2.event_type", "target_field": "dataset", "ignore_failure": true } }, { "rename":{ "field": "message2.event_type", "target_field": "dataset", "ignore_failure": true } },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" } }, { "set": { "field": "observer.name", "value": "{{agent.name}}" } },
{ "set": { "field": "ingest.timestamp", "value": "{{@timestamp}}" } },
{ "date": { "field": "message2.timestamp", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "timezone": "UTC", "ignore_failure": true } },
{ "remove":{ "field": "agent", "ignore_failure": true } }, { "remove":{ "field": "agent", "ignore_failure": true } },
{ "pipeline": { "name": "suricata.{{dataset}}" } } { "pipeline": { "name": "suricata.{{dataset}}" } }
] ]
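Two additions in this Suricata pipeline hunk: the original ingest time is preserved in ingest.timestamp, and @timestamp is then re-pointed at the timestamp carried in the EVE record itself, accepting either ISO8601 or UNIX epoch. A rough Python analogue of that ordering (the real work is done by Elasticsearch 'set' and 'date' processors; the sample document is illustrative):

```python
from datetime import datetime, timezone

def apply_timestamps(doc):
    """Keep the ingest time, then prefer the event's own timestamp."""
    doc.setdefault("ingest", {})["timestamp"] = doc["@timestamp"]
    raw = str(doc["message2"]["timestamp"])
    try:
        parsed = datetime.fromtimestamp(float(raw), tz=timezone.utc)   # "UNIX" format
    except ValueError:
        parsed = datetime.fromisoformat(raw)                           # "ISO8601" format
    doc["@timestamp"] = parsed.isoformat()
    return doc

doc = {"@timestamp": "2020-07-20T13:13:32+00:00",
       "message2": {"timestamp": "2020-07-20T12:58:01+00:00"}}
print(apply_timestamps(doc)["@timestamp"])   # 2020-07-20T12:58:01+00:00
```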

View File

@@ -13,6 +13,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %} {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
@@ -30,6 +31,8 @@
{% set esheap = salt['pillar.get']('elasticsearch:esheap', '') %} {% set esheap = salt['pillar.get']('elasticsearch:esheap', '') %}
{% endif %} {% endif %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
vm.max_map_count: vm.max_map_count:
sysctl.present: sysctl.present:
- value: 262144 - value: 262144
@@ -62,6 +65,13 @@ esingestdir:
- group: 939 - group: 939
- makedirs: True - makedirs: True
estemplatedir:
file.directory:
- name: /opt/so/conf/elasticsearch/templates
- user: 930
- group: 939
- makedirs: True
esingestconf: esingestconf:
file.recurse: file.recurse:
- name: /opt/so/conf/elasticsearch/ingest - name: /opt/so/conf/elasticsearch/ingest
@@ -85,6 +95,21 @@ esyml:
- group: 939 - group: 939
- template: jinja - template: jinja
#sync templates to /opt/so/conf/elasticsearch/templates
{% for TEMPLATE in TEMPLATES %}
es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
file.managed:
- source: salt://elasticsearch/templates/{{TEMPLATE}}
{% if 'jinja' in TEMPLATE.split('.')[-1] %}
- name: /opt/so/conf/elasticsearch/templates/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}
- template: jinja
{% else %}
- name: /opt/so/conf/elasticsearch/templates/{{TEMPLATE.split('/')[1]}}
{% endif %}
- user: 930
- group: 939
{% endfor %}
nsmesdir: nsmesdir:
file.directory: file.directory:
- name: /nsm/elasticsearch - name: /nsm/elasticsearch
@@ -101,7 +126,7 @@ eslogdir:
so-elasticsearch: so-elasticsearch:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-elasticsearch:{{ VERSION }}{{ FEATURES }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}{{ FEATURES }}
- hostname: elasticsearch - hostname: elasticsearch
- name: so-elasticsearch - name: so-elasticsearch
- user: elasticsearch - user: elasticsearch
@@ -141,7 +166,7 @@ so-elasticsearch-pipelines:
- file: esyml - file: esyml
- file: so-elasticsearch-pipelines-file - file: so-elasticsearch-pipelines-file
{% if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone'] %} {% if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone'] and TEMPLATES %}
so-elasticsearch-templates: so-elasticsearch-templates:
cmd.run: cmd.run:
- name: /usr/sbin/so-elasticsearch-templates - name: /usr/sbin/so-elasticsearch-templates
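The new TEMPLATES loop copies every file listed under the elasticsearch:templates pillar into /opt/so/conf/elasticsearch/templates, rendering it first when the name ends in .jinja, and the so-elasticsearch-templates command now only runs when that list is non-empty. How the state ID and destination filename are derived from a pillar entry, sketched in Python (the example entry is hypothetical):

```python
def template_state(entry):
    """Mimic the split/replace name handling in the Jinja loop above."""
    state_id = "es_template_" + entry.split(".")[0].replace("/", "_")
    filename = entry.split("/")[1]
    if "jinja" in entry.split(".")[-1]:        # a .jinja suffix means "render first"
        filename = filename.replace(".jinja", "")
    return state_id, f"/opt/so/conf/elasticsearch/templates/{filename}"

# Hypothetical pillar entry of the form "<dir>/<file>".
print(template_state("templates/so-zeek-template.json.jinja"))
# ('es_template_templates_so-zeek-template',
#  '/opt/so/conf/elasticsearch/templates/so-zeek-template.json')
```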

View File

@@ -1,5 +1,5 @@
{ {
"index_patterns": ["so-ids-*", "so-firewall-*", "so-syslog-*", "so-zeek-*", "so-import-*", "so-ossec-*", "so-strelka-*", "so-beats-*", "so-osquery-*"], "index_patterns": ["so-ids-*", "so-firewall-*", "so-syslog-*", "so-zeek-*", "so-import-*", "so-ossec-*", "so-strelka-*", "so-beats-*", "so-osquery-*","so-playbook-*"],
"version":50001, "version":50001,
"order":10, "order":10,
"settings":{ "settings":{
@@ -381,6 +381,10 @@
"type":"object", "type":"object",
"dynamic": true "dynamic": true
}, },
"winlog":{
"type":"object",
"dynamic": true
},
"x509":{ "x509":{
"type":"object", "type":"object",
"dynamic": true "dynamic": true

View File

@@ -6,7 +6,7 @@
{%- set HOSTNAME = salt['grains.get']('host', '') %} {%- set HOSTNAME = salt['grains.get']('host', '') %}
{%- set BROVER = salt['pillar.get']('static:broversion', 'COMMUNITY') %} {%- set ZEEKVER = salt['pillar.get']('static:zeekversion', 'COMMUNITY') %}
{%- set WAZUHENABLED = salt['pillar.get']('static:wazuh', '0') %} {%- set WAZUHENABLED = salt['pillar.get']('static:wazuh', '0') %}
{%- set STRELKAENABLED = salt['pillar.get']('strelka:enabled', '0') %} {%- set STRELKAENABLED = salt['pillar.get']('strelka:enabled', '0') %}
{%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%} {%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
@@ -100,8 +100,8 @@ filebeat.inputs:
- drop_fields: - drop_fields:
fields: ["source", "prospector", "input", "offset", "beat"] fields: ["source", "prospector", "input", "offset", "beat"]
fields_under_root: true fields_under_root: true
{%- if BROVER != 'SURICATA' %} {%- if ZEEKVER != 'SURICATA' %}
{%- for LOGNAME in salt['pillar.get']('brologs:enabled', '') %} {%- for LOGNAME in salt['pillar.get']('zeeklogs:enabled', '') %}
- type: log - type: log
paths: paths:
- /nsm/zeek/logs/current/{{ LOGNAME }}.log - /nsm/zeek/logs/current/{{ LOGNAME }}.log
@@ -127,7 +127,7 @@ filebeat.inputs:
imported: true imported: true
processors: processors:
- add_tags: - add_tags:
tags: [import] tags: ["import"]
- dissect: - dissect:
tokenizer: "/nsm/import/%{import.id}/zeek/logs/%{import.file}" tokenizer: "/nsm/import/%{import.id}/zeek/logs/%{import.file}"
field: "log.file.path" field: "log.file.path"
@@ -167,7 +167,7 @@ filebeat.inputs:
imported: true imported: true
processors: processors:
- add_tags: - add_tags:
tags: [import] tags: ["import"]
- dissect: - dissect:
tokenizer: "/nsm/import/%{import.id}/suricata/%{import.file}" tokenizer: "/nsm/import/%{import.id}/suricata/%{import.file}"
field: "log.file.path" field: "log.file.path"
@@ -260,6 +260,9 @@ output.elasticsearch:
pipelines: pipelines:
- pipeline: "%{[module]}.%{[dataset]}" - pipeline: "%{[module]}.%{[dataset]}"
indices: indices:
- index: "so-import-%{+yyyy.MM.dd}"
when.contains:
tags: "import"
- index: "so-zeek-%{+yyyy.MM.dd}" - index: "so-zeek-%{+yyyy.MM.dd}"
when.contains: when.contains:
module: "zeek" module: "zeek"

View File

@@ -12,6 +12,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set MANAGERIP = salt['pillar.get']('static:managerip', '') %} {% set MANAGERIP = salt['pillar.get']('static:managerip', '') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %} {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
@@ -51,7 +52,7 @@ filebeatconfsync:
OUTPUT: {{ salt['pillar.get']('filebeat:config:output', {}) }} OUTPUT: {{ salt['pillar.get']('filebeat:config:output', {}) }}
so-filebeat: so-filebeat:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-filebeat:{{ VERSION }}{{ FEATURES }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-filebeat:{{ VERSION }}{{ FEATURES }}
- hostname: so-filebeat - hostname: so-filebeat
- user: root - user: root
- extra_hosts: {{ MANAGER }}:{{ MANAGERIP }} - extra_hosts: {{ MANAGER }}:{{ MANAGERIP }}

View File

@@ -3,11 +3,15 @@
{% set CURRENTPACKAGEVERSION = salt['pillar.get']('static:fleet_packages-version') %} {% set CURRENTPACKAGEVERSION = salt['pillar.get']('static:fleet_packages-version') %}
{% set VERSION = salt['pillar.get']('static:soversion') %} {% set VERSION = salt['pillar.get']('static:soversion') %}
{% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('static:fleet_custom_hostname', None) %} {% set CUSTOM_FLEET_HOSTNAME = salt['pillar.get']('static:fleet_custom_hostname', None) %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{%- set FLEETNODE = salt['pillar.get']('static:fleet_node') -%}
{% if CUSTOM_FLEET_HOSTNAME != None and CUSTOM_FLEET_HOSTNAME != '' %} {% if CUSTOM_FLEET_HOSTNAME != None and CUSTOM_FLEET_HOSTNAME != '' %}
{% set HOSTNAME = CUSTOM_FLEET_HOSTNAME %} {% set HOSTNAME = CUSTOM_FLEET_HOSTNAME %}
{% else %} {% elif FLEETNODE %}
{% set HOSTNAME = grains.host %} {% set HOSTNAME = grains.host %}
{% else %}
{% set HOSTNAME = salt['pillar.get']('manager:url_base') %}
{% endif %} {% endif %}
so/fleet: so/fleet:
@@ -21,4 +25,4 @@ so/fleet:
current-package-version: {{ CURRENTPACKAGEVERSION }} current-package-version: {{ CURRENTPACKAGEVERSION }}
manager: {{ MANAGER }} manager: {{ MANAGER }}
version: {{ VERSION }} version: {{ VERSION }}
imagerepo: {{ IMAGEREPO }}
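Fleet hostname selection now has three tiers: an explicit fleet_custom_hostname wins, a dedicated Fleet node falls back to its own host grain, and everything else uses the manager's url_base. That precedence, sketched in Python (the sample values are assumptions):

```python
def fleet_hostname(custom_hostname, is_fleet_node, grain_host, manager_url_base):
    """Mirror the if/elif/else added to the Fleet pillar template."""
    if custom_hostname:            # static:fleet_custom_hostname set and non-empty
        return custom_hostname
    if is_fleet_node:              # static:fleet_node
        return grain_host
    return manager_url_base        # manager:url_base

print(fleet_hostname(None, False, "fleet01", "manager.example.com"))
# manager.example.com
```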

View File

@@ -2,6 +2,7 @@
{%- set FLEETPASS = salt['pillar.get']('secrets:fleet', None) -%} {%- set FLEETPASS = salt['pillar.get']('secrets:fleet', None) -%}
{%- set FLEETJWT = salt['pillar.get']('secrets:fleet_jwt', None) -%} {%- set FLEETJWT = salt['pillar.get']('secrets:fleet_jwt', None) -%}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set FLEETARCH = salt['grains.get']('role') %} {% set FLEETARCH = salt['grains.get']('role') %}
@@ -105,7 +106,7 @@ fleet_password_none:
so-fleet: so-fleet:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-fleet:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-fleet:{{ VERSION }}
- hostname: so-fleet - hostname: so-fleet
- port_bindings: - port_bindings:
- 0.0.0.0:8080:8080 - 0.0.0.0:8080:8080

View File

@@ -13,6 +13,8 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
# Create the user # Create the user
fservergroup: fservergroup:
group.present: group.present:
@@ -37,13 +39,13 @@ freqlogdir:
so-freqimage: so-freqimage:
cmd.run: cmd.run:
- name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-freqserver:HH1.0.3 - name: docker pull --disable-content-trust=false docker.io/{{ IMAGEREPO }}/so-freqserver:HH1.0.3
so-freq: so-freq:
docker_container.running: docker_container.running:
- require: - require:
- so-freqimage - so-freqimage
- image: docker.io/soshybridhunter/so-freqserver:HH1.0.3 - image: docker.io/{{ IMAGEREPO }}/so-freqserver:HH1.0.3
- hostname: freqserver - hostname: freqserver
- name: so-freqserver - name: so-freqserver
- user: freqserver - user: freqserver

View File

@@ -1,6 +1,7 @@
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %} {% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %} {% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
@@ -92,7 +93,7 @@ dashboard-manager:
MANINT: {{ SNDATA.manint }} MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }} MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }} CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }} UID: so_overview
ROOTFS: {{ SNDATA.rootfs }} ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }} NSMFS: {{ SNDATA.nsmfs }}
@@ -115,7 +116,7 @@ dashboard-managersearch:
MANINT: {{ SNDATA.manint }} MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }} MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }} CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }} UID: so_overview
ROOTFS: {{ SNDATA.rootfs }} ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }} NSMFS: {{ SNDATA.nsmfs }}
@@ -138,7 +139,7 @@ dashboard-standalone:
MANINT: {{ SNDATA.manint }} MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }} MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }} CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }} UID: so_overview
ROOTFS: {{ SNDATA.rootfs }} ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }} NSMFS: {{ SNDATA.nsmfs }}
@@ -207,7 +208,7 @@ dashboard-{{ SN }}:
MANINT: {{ SNDATA.manint }} MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.monint }} MONINT: {{ SNDATA.monint }}
CPUS: {{ SNDATA.totalcpus }} CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }} UID: so_overview
ROOTFS: {{ SNDATA.rootfs }} ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }} NSMFS: {{ SNDATA.nsmfs }}
@@ -216,7 +217,7 @@ dashboard-{{ SN }}:
so-grafana: so-grafana:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-grafana:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-grafana:{{ VERSION }}
- hostname: grafana - hostname: grafana
- user: socore - user: socore
- binds: - binds:

View File

@@ -1,4 +1,4 @@
{% set disabled_sids = salt['pillar.get']('idstools:sids:disabled', {}) -%} {%- set disabled_sids = salt['pillar.get']('idstools:sids:disabled', {}) -%}
# idstools - disable.conf # idstools - disable.conf
# Example of disabling a rule by signature ID (gid is optional). # Example of disabling a rule by signature ID (gid is optional).
@@ -9,7 +9,8 @@
# - All regular expression matches are case insensitive. # - All regular expression matches are case insensitive.
# re:hearbleed # re:hearbleed
# re:MS(0[7-9]|10)-\d+ # re:MS(0[7-9]|10)-\d+
{%- if disabled_sids != None %}
{%- for sid in disabled_sids %} {%- for sid in disabled_sids %}
{{ sid }} {{ sid }}
{%- endfor %} {%- endfor %}
{%- endif %}

View File

@@ -1,4 +1,4 @@
{% set enabled_sids = salt['pillar.get']('idstools:sids:enabled', {}) -%} {%- set enabled_sids = salt['pillar.get']('idstools:sids:enabled', {}) -%}
# idstools-rulecat - enable.conf # idstools-rulecat - enable.conf
# Example of enabling a rule by signature ID (gid is optional). # Example of enabling a rule by signature ID (gid is optional).
@@ -9,7 +9,8 @@
# - All regular expression matches are case insensitive. # - All regular expression matches are case insensitive.
# re:hearbleed # re:hearbleed
# re:MS(0[7-9]|10)-\d+ # re:MS(0[7-9]|10)-\d+
{%- if enabled_sids != None %}
{%- for sid in enabled_sids %} {%- for sid in enabled_sids %}
{{ sid }} {{ sid }}
{%- endfor %} {%- endfor %}
{%- endif %}
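Both the disable.conf and enable.conf templates gain the same guard: if the sids pillar key exists but is empty, pillar.get can return None rather than the {} default, and iterating over None would break the render. Roughly, in Python terms (Jinja behaves the same way here):

```python
disabled_sids = None   # what pillar.get can return when the key is present but empty

# Without the guard, this loop would raise: TypeError: 'NoneType' object is not iterable.
if disabled_sids is not None:
    for sid in disabled_sids:
        print(sid)
```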

View File

@@ -1,14 +1,12 @@
{%- set modify_sids = salt['pillar.get']('idstools:sids:modify', {}) -%}
# idstools-rulecat - modify.conf # idstools-rulecat - modify.conf
# Format: <sid> "<from>" "<to>" # Format: <sid> "<from>" "<to>"
# Example changing the seconds for rule 2019401 to 3600. # Example changing the seconds for rule 2019401 to 3600.
#2019401 "seconds \d+" "seconds 3600" #2019401 "seconds \d+" "seconds 3600"
{%- if modify_sids != None %}
# Change all trojan-activity rules to drop. Its better to setup a {%- for sid in modify_sids %}
# drop.conf for this, but this does show the use of back references. {{ sid }}
#re:classtype:trojan-activity "(alert)(.*)" "drop\\2" {%- endfor %}
{%- endif %}
# For compatibility, most Oinkmaster modifysid lines should work as
# well.
#modifysid * "^drop(.*)noalert(.*)" | "alert${1}noalert${2}"

View File

@@ -1,6 +1,21 @@
--suricata-version=4.0 {%- set URLS = salt['pillar.get']('idstools:config:urls') -%}
{%- set RULESET = salt['pillar.get']('idstools:config:ruleset') -%}
{%- set OINKCODE = salt['pillar.get']('idstools:config:oinkcode', '' ) -%}
--suricata-version=5.0
--merged=/opt/so/rules/nids/all.rules --merged=/opt/so/rules/nids/all.rules
--local=/opt/so/rules/nids/local.rules --local=/opt/so/rules/nids/local.rules
--disable=/opt/so/idstools/etc/disable.conf --disable=/opt/so/idstools/etc/disable.conf
--enable=/opt/so/idstools/etc/enable.conf --enable=/opt/so/idstools/etc/enable.conf
--modify=/opt/so/idstools/etc/modify.conf --modify=/opt/so/idstools/etc/modify.conf
{%- if RULESET == 'ETOPEN' %}
--etopen
{%- elif RULESET == 'ETPRO' %}
--etpro={{ OINKCODE }}
{%- elif RULESET == 'TALOS' %}
--url=https://www.snort.org/rules/snortrules-snapshot-2983.tar.gz?oinkcode={{ OINKCODE }}
{%- endif %}
{%- if URLS != None %}
{%- for URL in URLS %}
--url={{ URL }}
{%- endfor %}
{%- endif %}
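rulecat.conf is now rendered from pillar data: the Suricata target moves to 5.0, the configured ruleset picks the matching idstools-rulecat flag (--etopen, --etpro with the oinkcode, or the Talos snapshot URL), and any extra idstools:config:urls entries are appended as additional --url flags. The flag selection, sketched in Python (the sample ruleset and URL are assumptions):

```python
def rulecat_args(ruleset, oinkcode="", urls=None):
    """Build the ruleset-specific flags the template emits."""
    args = ["--suricata-version=5.0"]
    if ruleset == "ETOPEN":
        args.append("--etopen")
    elif ruleset == "ETPRO":
        args.append(f"--etpro={oinkcode}")
    elif ruleset == "TALOS":
        args.append(f"--url=https://www.snort.org/rules/snortrules-snapshot-2983.tar.gz?oinkcode={oinkcode}")
    for url in urls or []:
        args.append(f"--url={url}")
    return args

print(rulecat_args("ETOPEN", urls=["https://rules.example.com/local.rules.tar.gz"]))
```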

View File

@@ -13,6 +13,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
# IDSTools Setup # IDSTools Setup
idstoolsdir: idstoolsdir:
@@ -60,7 +61,7 @@ synclocalnidsrules:
so-idstools: so-idstools:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-idstools:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-idstools:{{ VERSION }}
- hostname: so-idstools - hostname: so-idstools
- user: socore - user: socore
- binds: - binds:

View File

@@ -1,7 +1,7 @@
{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %} {% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %} {% if grains['role'] in ['so-manager', 'so-managersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
@@ -26,7 +26,7 @@ influxdbconf:
so-influxdb: so-influxdb:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-influxdb:{{ VERSION }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-influxdb:{{ VERSION }}
- hostname: influxdb - hostname: influxdb
- environment: - environment:
- INFLUXDB_HTTP_LOG_ENABLED=false - INFLUXDB_HTTP_LOG_ENABLED=false

View File

@@ -10,7 +10,7 @@ cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_o
# {% if FLEET_NODE or FLEET_MANAGER %} # {% if FLEET_NODE or FLEET_MANAGER %}
# Fleet IP # Fleet IP
sed -i "s/FLEETPLACEHOLDER/{{ MANAGER }}/g" /opt/so/conf/kibana/saved_objects.ndjson #sed -i "s/FLEETPLACEHOLDER/{{ MANAGER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
# {% endif %} # {% endif %}
# SOCtopus and Manager # SOCtopus and Manager

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,5 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %} {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
{% if FEATURES %} {% if FEATURES %}
@@ -69,7 +70,7 @@ kibanabin:
# Start the kibana docker # Start the kibana docker
so-kibana: so-kibana:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-kibana:{{ VERSION }}{{ FEATURES }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-kibana:{{ VERSION }}{{ FEATURES }}
- hostname: kibana - hostname: kibana
- user: kibana - user: kibana
- environment: - environment:

View File

@@ -13,6 +13,7 @@
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %} {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %} {% set MANAGER = salt['grains.get']('master') %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %} {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
@@ -35,8 +36,11 @@
{% endif %} {% endif %}
{% set PIPELINES = salt['pillar.get']('logstash:pipelines', {}) %} {% set PIPELINES = salt['pillar.get']('logstash:pipelines', {}) %}
{% set TEMPLATES = salt['pillar.get']('logstash:templates', {}) %}
{% set DOCKER_OPTIONS = salt['pillar.get']('logstash:docker_options', {}) %} {% set DOCKER_OPTIONS = salt['pillar.get']('logstash:docker_options', {}) %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
include:
- elasticsearch
# Create the logstash group # Create the logstash group
logstashgroup: logstashgroup:
@@ -93,21 +97,6 @@ ls_pipeline_{{PL}}:
{% endfor %} {% endfor %}
#sync templates to /opt/so/conf/logstash/etc
{% for TEMPLATE in TEMPLATES %}
ls_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
file.managed:
- source: salt://logstash/pipelines/templates/{{TEMPLATE}}
{% if 'jinja' in TEMPLATE.split('.')[-1] %}
- name: /opt/so/conf/logstash/etc/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}
- template: jinja
{% else %}
- name: /opt/so/conf/logstash/etc/{{TEMPLATE.split('/')[1]}}
{% endif %}
- user: 931
- group: 939
{% endfor %}
lspipelinesyml: lspipelinesyml:
file.managed: file.managed:
- name: /opt/so/conf/logstash/etc/pipelines.yml - name: /opt/so/conf/logstash/etc/pipelines.yml
@@ -125,12 +114,6 @@ lsetcsync:
- group: 939 - group: 939
- template: jinja - template: jinja
- clean: True - clean: True
{% if TEMPLATES %}
- require:
{% for TEMPLATE in TEMPLATES %}
- file: ls_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}
{% endfor %}
{% endif %}
- exclude_pat: pipelines* - exclude_pat: pipelines*
# Create the import directory # Create the import directory
@@ -159,7 +142,7 @@ lslogdir:
so-logstash: so-logstash:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/soshybridhunter/so-logstash:{{ VERSION }}{{ FEATURES }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-logstash:{{ VERSION }}{{ FEATURES }}
- hostname: so-logstash - hostname: so-logstash
- name: so-logstash - name: so-logstash
- user: logstash - user: logstash
@@ -170,13 +153,7 @@ so-logstash:
- {{ BINDING }} - {{ BINDING }}
{% endfor %} {% endfor %}
- binds: - binds:
{% for TEMPLATE in TEMPLATES %} - /opt/so/conf/elasticsearch/templates/:/templates/:ro
{% if 'jinja' in TEMPLATE.split('.')[-1] %}
- /opt/so/conf/logstash/etc/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}:/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}:ro
{% else %}
- /opt/so/conf/logstash/etc/{{TEMPLATE.split('/')[1]}}:/{{TEMPLATE.split('/')[1]}}:ro
{% endif %}
{% endfor %}
- /opt/so/conf/logstash/etc/log4j2.properties:/usr/share/logstash/config/log4j2.properties:ro - /opt/so/conf/logstash/etc/log4j2.properties:/usr/share/logstash/config/log4j2.properties:ro
- /opt/so/conf/logstash/etc/logstash.yml:/usr/share/logstash/config/logstash.yml:ro - /opt/so/conf/logstash/etc/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
- /opt/so/conf/logstash/etc/pipelines.yml:/usr/share/logstash/config/pipelines.yml - /opt/so/conf/logstash/etc/pipelines.yml:/usr/share/logstash/config/pipelines.yml
@@ -206,6 +183,5 @@ so-logstash:
{% endfor %} {% endfor %}
{% endfor %} {% endfor %}
{% for TEMPLATE in TEMPLATES %} {% for TEMPLATE in TEMPLATES %}
- file: ls_template_{{TEMPLATE.split('.')[0] | replace("/","_") }} - file: es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}
{% endfor %} {% endfor %}
# - file: /opt/so/conf/logstash/rulesets
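Template files are no longer copied into the Logstash config tree; the directory populated by the elasticsearch state is bind-mounted read-only at /templates inside the so-logstash container, which is why the output configs that follow switch to /templates/so-*-template.json paths. A trivial sketch of that host-to-container path mapping:

```python
HOST_TEMPLATE_DIR = "/opt/so/conf/elasticsearch/templates"   # written by the elasticsearch state
CONTAINER_MOUNT = "/templates"                                # read-only bind inside so-logstash

def container_template_path(template_file):
    """Path Logstash sees for a template file shipped on the host."""
    return f"{CONTAINER_MOUNT}/{template_file}"

print(container_template_path("so-zeek-template.json"))   # /templates/so-zeek-template.json
```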

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-zeek-%{+YYYY.MM.dd}" index => "so-zeek-%{+YYYY.MM.dd}"
template_name => "so-zeek" template_name => "so-zeek"
template => "/so-zeek-template.json" template => "/templates/so-zeek-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-import-%{+YYYY.MM.dd}" index => "so-import-%{+YYYY.MM.dd}"
template_name => "so-import" template_name => "so-import"
template => "/so-import-template.json" template => "/templates/so-import-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -9,7 +9,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-flow-%{+YYYY.MM.dd}" index => "so-flow-%{+YYYY.MM.dd}"
template_name => "so-flow" template_name => "so-flow"
template => "/so-flow-template.json" template => "/templates/so-flow-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -9,7 +9,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-ids-%{+YYYY.MM.dd}" index => "so-ids-%{+YYYY.MM.dd}"
template_name => "so-ids" template_name => "so-ids"
template => "/so-ids-template.json" template => "/templates/so-ids-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-syslog-%{+YYYY.MM.dd}" index => "so-syslog-%{+YYYY.MM.dd}"
template_name => "so-syslog" template_name => "so-syslog"
template => "/so-syslog-template.json" template => "/templates/so-syslog-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-osquery-%{+YYYY.MM.dd}" index => "so-osquery-%{+YYYY.MM.dd}"
template_name => "so-osquery" template_name => "so-osquery"
template => "/so-osquery-template.json" template => "/templates/so-osquery-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -9,7 +9,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-firewall-%{+YYYY.MM.dd}" index => "so-firewall-%{+YYYY.MM.dd}"
template_name => "so-firewall" template_name => "so-firewall"
template => "/so-firewall-template.json" template => "/templates/so-firewall-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-ids-%{+YYYY.MM.dd}" index => "so-ids-%{+YYYY.MM.dd}"
template_name => "so-ids" template_name => "so-ids"
template => "/so-ids-template.json" template => "/templates/so-ids-template.json"
} }
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-beats-%{+YYYY.MM.dd}" index => "so-beats-%{+YYYY.MM.dd}"
template_name => "so-beats" template_name => "so-beats"
template => "/so-beats-template.json" template => "/templates/so-beats-template.json"
template_overwrite => true template_overwrite => true
} }
} }

View File

@@ -10,7 +10,7 @@ output {
hosts => "{{ ES }}" hosts => "{{ ES }}"
index => "so-ossec-%{+YYYY.MM.dd}" index => "so-ossec-%{+YYYY.MM.dd}"
template_name => "so-ossec" template_name => "so-ossec"
template => "/so-ossec-template.json" template => "/templates/so-ossec-template.json"
template_overwrite => true template_overwrite => true
} }
} }

Some files were not shown because too many files have changed in this diff