Merge pull request #745 from Security-Onion-Solutions/dev

1.3.0
Mike Reeves
2020-05-20 13:51:48 -04:00
committed by GitHub
166 changed files with 12951 additions and 4444 deletions

.gitignore vendored
View File

@@ -1,2 +1,59 @@
# Created by https://www.gitignore.io/api/macos,windows
# Edit at https://www.gitignore.io/?templates=macos,windows
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### Windows ###
# Windows thumbnail cache files
Thumbs.db
Thumbs.db:encryptable
ehthumbs.db
ehthumbs_vista.db
# Dump file
*.stackdump
# Folder config file
[Dd]esktop.ini
# Recycle Bin used on file shares
$RECYCLE.BIN/
# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp
# Windows shortcuts
*.lnk
# End of https://www.gitignore.io/api/macos,windows

View File

@@ -1,42 +1,34 @@
## Hybrid Hunter Beta 1.3.0 - Beta 2
### Changes:
- New Feature: Codename "Onion Hunt". Select Hunt from the menu and start hunting down your adversaries!
- Improved ECS support.
- Complete refactor of the setup to make it easier to follow.
- Improved setup script logging to better assist with any issues.
- Setup now checks for minimal requirements during install.
- Updated CyberChef to version 9.20.3.
- Updated ElastAlert to version 0.2.4 and switched to Alpine to reduce container size.
- Updated Redis to 5.0.9 and switched to Alpine to reduce container size.
- Updated Salt to 2019.2.5.
- Updated Grafana to 6.7.3.
- Zeek 3.0.6.
- Suricata 4.1.8.
- Fixed so-status to display the correct containers and status.
- local.zeek is now controlled by a pillar instead of modifying the file directly.
- Renamed so-core to so-nginx and switched to Alpine to reduce container size.
- Playbook now uses MySQL instead of SQLite.
- Sigma rules have all been updated.
- Kibana dashboard improvements for ECS.
- Fixed an issue where geoip was not properly parsed.
- ATT&CK Navigator is now its own state.
- Standalone mode is now supported.
- Mastersearch previously used the same Grafana dashboard as a search node. It now has its own dashboard that incorporates panels from the master node and search node dashboards.
### Known Issues:
- The Hunt feature is currently considered "Preview"; although very useful in its current state, not everything works. We wanted to get this out as soon as possible to get your feedback! Let us know what you want to see and what you think we should call it!
- You cannot pivot to PCAP from Suricata alerts in Kibana or Hunt.
- Updating users via the SOC UI is known to fail. To change a user, delete the user and re-add them.
- Due to the move to ECS, the current Playbook plays may not alert correctly at this time.
- The osquery macOS package does not install correctly.

## Hybrid Hunter Beta 1.2.2 - Beta 1
### Changes:
- Updated Saltstack to 2019.2.4 to address [CVE-2020-11651](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11651).
- Updated Suricata to 4.1.8 to address some possible security issues. Details [here](https://suricata-ids.org/2020/04/28/suricata-4-1-8-released/).
- Fixed an issue that was preventing Strelka from functioning properly.
- ISO installs should now use the built-in docker containers instead of re-downloading them.

## Hybrid Hunter Beta 1.2.1 - Beta 1
### Changes:
- Full support for Ubuntu 18.04. 16.04 is no longer supported for Hybrid Hunter.
- Introduction of the Security Onion Console. Once logged in, you are taken directly to the SOC.
- New authentication using Kratos.
- During install you must specify how you would like to access the SOC UI. This is for strict cookie security.
- Ability to list and delete web users from the SOC UI.
- The soremote account is now used to add nodes to the grid instead of socore.
- Community ID support for Zeek, osquery, and Suricata. You can now tie host events to connection logs!
- Elastic 7.6.1 with ECS support.
- New set of Kibana dashboards that align with ECS.
- Eval mode no longer uses Logstash for parsing (Filebeat -> ES ingest).
- Ingest node parsing for osquery-shipped logs (osquery, WEL, Sysmon).
- Fleet standalone mode with improved web UI & API access control.
- Improved Fleet integration support.
- Playbook now has the full Windows Sigma community ruleset built in.
- Automatic Sigma community rule updates.
- Playbook stability enhancements.
- Zeek health check. Zeek will now auto-restart if a worker crashes.
- zeekctl is now managed by Salt.
- Grafana dashboard improvements and cleanup.
- Moved Logstash configs to pillars.
- Salt logs moved to /opt/so/log/salt.
- Strelka integrated for file-oriented detection/analysis at scale.
### Known Issues:
- Updating users via the SOC UI is known to fail. To change a user, delete the user and re-add them.
- Due to the move to ECS, the current Playbook plays may not alert correctly at this time.
- The osquery macOS package does not install correctly.
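The Community ID mentioned in the changelog lets you join Zeek connection logs with Suricata and osquery host events on a single flow hash. As an illustration only (not code from this PR), a minimal sketch of the published Community ID v1 scheme for IPv4 5-tuples:

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, daddr, sport, dport, proto=6, seed=0):
    """Sketch of the Community ID v1 flow hash (IPv4 only)."""
    src = socket.inet_aton(saddr)
    dst = socket.inet_aton(daddr)
    # Order the endpoints so both directions of a flow hash identically.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    # seed (2 bytes) + saddr + daddr + proto + padding + sport + dport
    data = struct.pack('!H', seed) + src + dst + struct.pack('!BBHH', proto, 0, sport, dport)
    return '1:' + base64.b64encode(hashlib.sha1(data).digest()).decode('ascii')
```

Because the endpoints are canonically ordered before hashing, a Suricata alert and the matching Zeek conn.log entry carry the same ID regardless of direction.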

View File

@@ -1 +1 @@
1.3.0

View File

@@ -0,0 +1 @@
mastersearchtab:

View File

@@ -10,7 +10,7 @@
eval:
containers:
- so-nginx
- so-telegraf
{% if GRAFANA == '1' %}
- so-influxdb
@@ -54,7 +54,7 @@ eval:
{% endif %}
heavy_node:
containers:
- so-nginx
- so-telegraf
- so-redis
- so-logstash
@@ -69,7 +69,7 @@ heavy_node:
{% endif %}
helix:
containers:
- so-nginx
- so-telegraf
- so-idstools
- so-steno
@@ -79,14 +79,14 @@ helix:
- so-filebeat
hot_node:
containers:
- so-nginx
- so-telegraf
- so-logstash
- so-elasticsearch
- so-curator
master_search:
containers:
- so-nginx
- so-telegraf
- so-soc
- so-kratos
@@ -127,7 +127,7 @@ master_search:
master:
containers:
- so-dockerregistry
- so-nginx
- so-telegraf
{% if GRAFANA == '1' %}
- so-influxdb
@@ -169,12 +169,12 @@ master:
{% endif %}
parser_node:
containers:
- so-nginx
- so-telegraf
- so-logstash
search_node:
containers:
- so-nginx
- so-telegraf
- so-logstash
- so-elasticsearch
@@ -185,7 +185,7 @@ search_node:
{% endif %}
sensor:
containers:
- so-nginx
- so-telegraf
- so-steno
- so-suricata
@@ -196,7 +196,7 @@ sensor:
- so-filebeat
warm_node:
containers:
- so-nginx
- so-telegraf
- so-elasticsearch
fleet:
@@ -206,6 +206,6 @@ fleet:
- so-fleet
- so-redis
- so-filebeat
- so-nginx
- so-telegraf
{% endif %}

View File

@@ -1,3 +0,0 @@
analyst:
- 127.0.0.1

View File

@@ -1,3 +0,0 @@
beats_endpoint:
- 127.0.0.1

View File

@@ -1,3 +0,0 @@
forward_nodes:
- 127.0.0.1

View File

@@ -1,2 +0,0 @@
masterfw:
- 127.0.0.1

View File

@@ -1,3 +0,0 @@
minions:
- 127.0.0.1

View File

@@ -1,3 +0,0 @@
osquery_endpoint:
- 127.0.0.1

View File

@@ -1,2 +0,0 @@
search_nodes:
- 127.0.0.1

View File

@@ -1,2 +0,0 @@
wazuh_endpoint:
- 127.0.0.1

View File

@@ -0,0 +1,5 @@
healthcheck:
enabled: False
schedule: 300
checks:
- zeek
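The healthcheck pillar above gates a periodic check of the listed services, with `schedule` read as seconds between runs. A minimal sketch of that gating logic (a hypothetical helper for illustration, not the actual so-healthcheck implementation):

```python
def due_checks(pillar, last_run, now):
    """Return the health checks that should run now, per a healthcheck pillar.

    pillar   -- dict parsed from the pillar YAML above
    last_run -- map of check name -> epoch seconds of its last run
    now      -- current epoch seconds
    """
    hc = pillar.get('healthcheck', {})
    if not hc.get('enabled', False):
        return []  # healthcheck disabled grid-wide
    interval = int(hc.get('schedule', 300))
    return [c for c in hc.get('checks', []) if now - last_run.get(c, 0) >= interval]
```

With `enabled: False` as shipped, nothing runs until the pillar is flipped on.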

View File

@@ -1,7 +1,10 @@
base:
'*':
- patch.needs_restarting
- docker.config
'*_eval or *_helix or *_heavynode or *_sensor or *_standalone':
- match: compound
- zeek
'*_mastersearch or *_heavynode':
- match: compound
@@ -37,6 +40,18 @@ base:
- healthcheck.eval
- minions.{{ grains.id }}
'*_standalone':
- logstash
- logstash.master
- logstash.search
- firewall.*
- data.*
- brologs
- secrets
- healthcheck.standalone
- static
- minions.{{ grains.id }}
'*_node':
- static
- firewall.*
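The quoted targets in the top file above are Salt compound matchers built from glob patterns joined by `or`. A minimal sketch of how such an `or`-only target matches minion IDs (Salt's real compound matcher also supports `and`, `not`, and grain/pillar matchers):

```python
from fnmatch import fnmatch

def matches_compound(minion_id, expr):
    """Evaluate a simple 'or'-only compound target against a minion ID."""
    return any(fnmatch(minion_id, pat.strip()) for pat in expr.split(' or '))
```

So a minion named `onion_mastersearch` picks up every state list whose target glob ends in `_mastersearch`.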

55
pillar/zeek/init.sls Normal file
View File

@@ -0,0 +1,55 @@
zeek:
zeekctl:
MailTo: root@localhost
MailConnectionSummary: 1
MinDiskSpace: 5
MailHostUpDown: 1
LogRotationInterval: 3600
LogExpireInterval: 0
StatsLogEnable: 1
StatsLogExpireInterval: 0
StatusCmdShowAll: 0
CrashExpireInterval: 0
SitePolicyScripts: local.zeek
LogDir: /nsm/zeek/logs
SpoolDir: /nsm/zeek/spool
CfgDir: /opt/zeek/etc
CompressLogs: 1
local:
'@load':
- misc/loaded-scripts
- tuning/defaults
- misc/capture-loss
- misc/stats
- frameworks/software/vulnerable
- frameworks/software/version-changes
- protocols/ftp/software
- protocols/smtp/software
- protocols/ssh/software
- protocols/http/software
- protocols/dns/detect-external-names
- protocols/ftp/detect
- protocols/conn/known-hosts
- protocols/conn/known-services
- protocols/ssl/known-certs
- protocols/ssl/validate-certs
- protocols/ssl/log-hostcerts-only
- protocols/ssh/geo-data
- protocols/ssh/detect-bruteforcing
- protocols/ssh/interesting-hostnames
- protocols/http/detect-sqli
- frameworks/files/hash-all-files
- frameworks/files/detect-MHR
- policy/frameworks/notice/extend-email/hostnames
- ja3
- hassh
- intel
- cve-2020-0601
- securityonion/bpfconf
- securityonion/communityid
- securityonion/file-extraction
'@load-sigs':
- frameworks/signatures/detect-windows-shells
redef:
- LogAscii::use_json = T;
- LogAscii::json_timestamps = JSON::TS_ISO8601;
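The zeek pillar above is what now drives local.zeek instead of direct file edits. A rough sketch of how such a pillar renders into local.zeek directives (a hypothetical helper; the repo's actual Jinja template may differ):

```python
def render_local_zeek(pillar):
    """Emit local.zeek lines from the zeek:local pillar keys."""
    local = pillar['zeek']['local']
    lines = ['@load ' + s for s in local.get('@load', [])]
    lines += ['@load-sigs ' + s for s in local.get('@load-sigs', [])]
    lines += ['redef ' + r for r in local.get('redef', [])]
    return '\n'.join(lines)
```

Changing the pillar (e.g. dropping a script from the `@load` list) then changes the rendered local.zeek on the next highstate.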

View File

@@ -1,8 +1,3 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
{% set MASTER = salt['grains.get']('master') %}
{% set GRAFANA = salt['pillar.get']('master:grafana', '0') %}
{% set FLEETMASTER = salt['pillar.get']('static:fleet_master', False) %}
{% set FLEETNODE = salt['pillar.get']('static:fleet_node', False) %}
# Add socore Group
socoregroup:
group.present:
@@ -19,7 +14,6 @@ socore:
- shell: /bin/bash
# Create a state directory
statedir:
file.directory:
- name: /opt/so/state
@@ -35,17 +29,13 @@ salttmp:
- makedirs: True
# Install packages needed for the sensor
sensorpkgs:
pkg.installed:
- skip_suggestions: False
- pkgs:
- docker-ce
- wget
- jq
{% if grains['os'] != 'CentOS' %}
- python-docker
- python-m2crypto
- apache2-utils
{% else %}
- net-tools
@@ -64,7 +54,6 @@ alwaysupdated:
- skip_suggestions: True
# Set time to UTC
Etc/UTC:
timezone.system
@@ -77,339 +66,3 @@ utilsyncscripts:
- file_mode: 755
- template: jinja
- source: salt://common/tools/sbin
# Make sure Docker is running!
docker:
service.running:
- enable: True
# Drop the correct nginx config based on role
nginxconfdir:
file.directory:
- name: /opt/so/conf/nginx
- user: 939
- group: 939
- makedirs: True
nginxconf:
file.managed:
- name: /opt/so/conf/nginx/nginx.conf
- user: 939
- group: 939
- template: jinja
- source: salt://common/nginx/nginx.conf.{{ grains.role }}
nginxlogdir:
file.directory:
- name: /opt/so/log/nginx/
- user: 939
- group: 939
- makedirs: True
nginxtmp:
file.directory:
- name: /opt/so/tmp/nginx/tmp
- user: 939
- group: 939
- makedirs: True
so-core:
docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-core:{{ VERSION }}
- hostname: so-core
- user: socore
- binds:
- /opt/so:/opt/so:rw
- /opt/so/conf/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- /opt/so/log/nginx/:/var/log/nginx:rw
- /opt/so/tmp/nginx/:/var/lib/nginx:rw
- /opt/so/tmp/nginx/:/run:rw
- /etc/pki/masterssl.crt:/etc/pki/nginx/server.crt:ro
- /etc/pki/masterssl.key:/etc/pki/nginx/server.key:ro
- /opt/so/conf/fleet/packages:/opt/socore/html/packages
- cap_add: NET_BIND_SERVICE
- port_bindings:
- 80:80
- 443:443
{%- if FLEETMASTER or FLEETNODE %}
- 8090:8090
{%- endif %}
- watch:
- file: /opt/so/conf/nginx/nginx.conf
# Add Telegraf to monitor all the things.
tgraflogdir:
file.directory:
- name: /opt/so/log/telegraf
- makedirs: True
tgrafetcdir:
file.directory:
- name: /opt/so/conf/telegraf/etc
- makedirs: True
tgrafetsdir:
file.directory:
- name: /opt/so/conf/telegraf/scripts
- makedirs: True
tgrafsyncscripts:
file.recurse:
- name: /opt/so/conf/telegraf/scripts
- user: 939
- group: 939
- file_mode: 755
- template: jinja
- source: salt://common/telegraf/scripts
tgrafconf:
file.managed:
- name: /opt/so/conf/telegraf/etc/telegraf.conf
- user: 939
- group: 939
- template: jinja
- source: salt://common/telegraf/etc/telegraf.conf
so-telegraf:
docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-telegraf:{{ VERSION }}
- environment:
- HOST_PROC=/host/proc
- HOST_ETC=/host/etc
- HOST_SYS=/host/sys
- HOST_MOUNT_PREFIX=/host
- network_mode: host
- port_bindings:
- 127.0.0.1:8094:8094
- binds:
- /opt/so/log/telegraf:/var/log/telegraf:rw
- /opt/so/conf/telegraf/etc/telegraf.conf:/etc/telegraf/telegraf.conf:ro
- /var/run/utmp:/var/run/utmp:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /:/host/root:ro
- /sys:/host/sys:ro
- /proc:/host/proc:ro
- /nsm:/host/nsm:ro
- /etc:/host/etc:ro
{% if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-mastersearch' %}
- /etc/pki/ca.crt:/etc/telegraf/ca.crt:ro
{% else %}
- /etc/ssl/certs/intca.crt:/etc/telegraf/ca.crt:ro
{% endif %}
- /etc/pki/influxdb.crt:/etc/telegraf/telegraf.crt:ro
- /etc/pki/influxdb.key:/etc/telegraf/telegraf.key:ro
- /opt/so/conf/telegraf/scripts:/scripts:ro
- /opt/so/log/stenographer:/var/log/stenographer:ro
- /opt/so/log/suricata:/var/log/suricata:ro
- watch:
- /opt/so/conf/telegraf/etc/telegraf.conf
- /opt/so/conf/telegraf/scripts
# If it's a master or eval, let's install the back end for now
{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval'] and GRAFANA == 1 %}
# Influx DB
influxconfdir:
file.directory:
- name: /opt/so/conf/influxdb/etc
- makedirs: True
influxdbdir:
file.directory:
- name: /nsm/influxdb
- makedirs: True
influxdbconf:
file.managed:
- name: /opt/so/conf/influxdb/etc/influxdb.conf
- user: 939
- group: 939
- template: jinja
- source: salt://common/influxdb/etc/influxdb.conf
so-influxdb:
docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-influxdb:{{ VERSION }}
- hostname: influxdb
- environment:
- INFLUXDB_HTTP_LOG_ENABLED=false
- binds:
- /opt/so/conf/influxdb/etc/influxdb.conf:/etc/influxdb/influxdb.conf:ro
- /nsm/influxdb:/var/lib/influxdb:rw
- /etc/pki/influxdb.crt:/etc/ssl/influxdb.crt:ro
- /etc/pki/influxdb.key:/etc/ssl/influxdb.key:ro
- port_bindings:
- 0.0.0.0:8086:8086
- watch:
- file: /opt/so/conf/influxdb/etc/influxdb.conf
# Grafana all the things
grafanadir:
file.directory:
- name: /nsm/grafana
- user: 939
- group: 939
- makedirs: True
grafanaconfdir:
file.directory:
- name: /opt/so/conf/grafana/etc
- user: 939
- group: 939
- makedirs: True
grafanadashdir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards
- user: 939
- group: 939
- makedirs: True
grafanadashmdir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/master
- user: 939
- group: 939
- makedirs: True
grafanadashevaldir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/eval
- user: 939
- group: 939
- makedirs: True
grafanadashfndir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/sensor_nodes
- user: 939
- group: 939
- makedirs: True
grafanadashsndir:
file.directory:
- name: /opt/so/conf/grafana/grafana_dashboards/search_nodes
- user: 939
- group: 939
- makedirs: True
grafanaconf:
file.recurse:
- name: /opt/so/conf/grafana/etc
- user: 939
- group: 939
- template: jinja
- source: salt://common/grafana/etc
{% if salt['pillar.get']('mastertab', False) %}
{% for SN, SNDATA in salt['pillar.get']('mastertab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-master:
file.managed:
- name: /opt/so/conf/grafana/grafana_dashboards/master/{{ SN }}-Master.json
- user: 939
- group: 939
- template: jinja
- source: salt://common/grafana/grafana_dashboards/master/master.json
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}
{% if salt['pillar.get']('sensorstab', False) %}
{% for SN, SNDATA in salt['pillar.get']('sensorstab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-{{ SN }}:
file.managed:
- name: /opt/so/conf/grafana/grafana_dashboards/sensor_nodes/{{ SN }}-Sensor.json
- user: 939
- group: 939
- template: jinja
- source: salt://common/grafana/grafana_dashboards/sensor_nodes/sensor.json
- defaults:
SERVERNAME: {{ SN }}
MONINT: {{ SNDATA.monint }}
MANINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}
{% if salt['pillar.get']('nodestab', False) %}
{% for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboardsearch-{{ SN }}:
file.managed:
- name: /opt/so/conf/grafana/grafana_dashboards/search_nodes/{{ SN }}-Node.json
- user: 939
- group: 939
- template: jinja
- source: salt://common/grafana/grafana_dashboards/search_nodes/searchnode.json
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.manint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}
{% if salt['pillar.get']('evaltab', False) %}
{% for SN, SNDATA in salt['pillar.get']('evaltab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-{{ SN }}:
file.managed:
- name: /opt/so/conf/grafana/grafana_dashboards/eval/{{ SN }}-Node.json
- user: 939
- group: 939
- template: jinja
- source: salt://common/grafana/grafana_dashboards/eval/eval.json
- defaults:
SERVERNAME: {{ SN }}
MANINT: {{ SNDATA.manint }}
MONINT: {{ SNDATA.monint }}
CPUS: {{ SNDATA.totalcpus }}
UID: {{ SNDATA.guid }}
ROOTFS: {{ SNDATA.rootfs }}
NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}
so-grafana:
docker_container.running:
- image: {{ MASTER }}:5000/soshybridhunter/so-grafana:{{ VERSION }}
- hostname: grafana
- user: socore
- binds:
- /nsm/grafana:/var/lib/grafana:rw
- /opt/so/conf/grafana/etc/grafana.ini:/etc/grafana/grafana.ini:ro
- /opt/so/conf/grafana/etc/datasources:/etc/grafana/provisioning/datasources:rw
- /opt/so/conf/grafana/etc/dashboards:/etc/grafana/provisioning/dashboards:rw
- /opt/so/conf/grafana/grafana_dashboards:/etc/grafana/grafana_dashboards:rw
- environment:
- GF_SECURITY_ADMIN_PASSWORD=augusta
- port_bindings:
- 0.0.0.0:3000:3000
- watch:
- file: /opt/so/conf/grafana/*
{% endif %}

View File

@@ -0,0 +1,5 @@
{% set docker = {
'containers': [
'so-zeek'
]
} %}

View File

@@ -0,0 +1,5 @@
{% set docker = {
'containers': [
'so-domainstats'
]
} %}

View File

@@ -0,0 +1,18 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-dockerregistry',
'so-soc',
'so-kratos',
'so-idstools',
'so-elasticsearch',
'so-kibana',
'so-steno',
'so-suricata',
'so-zeek',
'so-curator',
'so-elastalert',
'so-soctopus'
]
} %}

View File

@@ -0,0 +1,10 @@
{% set docker = {
'containers': [
'so-mysql',
'so-fleet',
'so-redis',
'so-filebeat',
'so-nginx',
'so-telegraf'
]
} %}

View File

@@ -0,0 +1,7 @@
{% set docker = {
'containers': [
'so-mysql',
'so-fleet',
'so-redis'
]
} %}

View File

@@ -0,0 +1,5 @@
{% set docker = {
'containers': [
'so-freqserver'
]
} %}

View File

@@ -0,0 +1,6 @@
{% set docker = {
'containers': [
'so-influxdb',
'so-grafana'
]
} %}

View File

@@ -0,0 +1,14 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-steno',
'so-suricata',
'so-wazuh',
'so-filebeat'
]
} %}

View File

@@ -0,0 +1,12 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-idstools',
'so-steno',
'so-zeek',
'so-redis',
'so-logstash',
'so-filebeat'
]
} %}

View File

@@ -0,0 +1,9 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-logstash',
'so-elasticsearch',
'so-curator',
]
} %}

View File

@@ -0,0 +1,18 @@
{% set docker = {
'containers': [
'so-dockerregistry',
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-aptcacherng',
'so-idstools',
'so-redis',
'so-elasticsearch',
'so-logstash',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-soctopus'
]
} %}

View File

@@ -0,0 +1,18 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-aptcacherng',
'so-idstools',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-soctopus'
]
} %}

View File

@@ -0,0 +1,6 @@
{% set docker = {
'containers': [
'so-playbook',
'so-navigator'
]
} %}

View File

@@ -0,0 +1,10 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-filebeat'
]
} %}

View File

@@ -0,0 +1,8 @@
{% set docker = {
'containers': [
'so-telegraf',
'so-steno',
'so-suricata',
'so-filebeat'
]
} %}

View File

@@ -0,0 +1,45 @@
{% set role = grains.id.split('_') | last %}
{% from 'common/maps/'~ role ~'.map.jinja' import docker with context %}
# Check if the service is enabled and append its required containers
# to the list predefined by the role / minion id suffix
{% macro append_containers(pillar_name, k, compare )%}
{% if salt['pillar.get'](pillar_name~':'~k, {}) != compare %}
{% from 'common/maps/'~k~'.map.jinja' import docker as d with context %}
{% for li in d['containers'] %}
{{ docker['containers'].append(li) }}
{% endfor %}
{% endif %}
{% endmacro %}
{% set docker = salt['grains.filter_by']({
'*_'~role: {
'containers': docker['containers']
}
},grain='id', merge=salt['pillar.get']('docker')) %}
{% if role in ['eval', 'mastersearch', 'master', 'standalone'] %}
{{ append_containers('master', 'grafana', 0) }}
{{ append_containers('static', 'fleet_master', 0) }}
{{ append_containers('master', 'wazuh', 0) }}
{{ append_containers('master', 'thehive', 0) }}
{{ append_containers('master', 'playbook', 0) }}
{{ append_containers('master', 'freq', 0) }}
{{ append_containers('master', 'domainstats', 0) }}
{% endif %}
{% if role in ['eval', 'heavynode', 'sensor', 'standalone'] %}
{{ append_containers('static', 'strelka', 0) }}
{% endif %}
{% if role in ['heavynode', 'standalone'] %}
{{ append_containers('static', 'broversion', 'SURICATA') }}
{% endif %}
{% if role == 'searchnode' %}
{{ append_containers('master', 'wazuh', 0) }}
{% endif %}
{% if role == 'sensor' %}
{{ append_containers('static', 'broversion', 'SURICATA') }}
{% endif %}
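The `append_containers` macro above extends a role's base container list with the containers of each enabled service. In plain Python the merge logic looks roughly like this (illustrative only; the actual macro compares a pillar value against a per-service sentinel such as `0` or `'SURICATA'` rather than simple truthiness):

```python
def build_container_list(role_containers, services, service_map):
    """Merge a role's base container list with containers for enabled services.

    role_containers -- base list from the role's map file (e.g. eval.map.jinja)
    services        -- map of service name -> enabled flag (from pillar)
    service_map     -- map of service name -> its container list (its map file)
    """
    containers = list(role_containers)
    for svc, enabled in services.items():
        if enabled:
            # Append while preserving order and skipping duplicates.
            containers.extend(c for c in service_map.get(svc, []) if c not in containers)
    return containers
```

This is what lets so-status show exactly the containers a given minion should be running, without per-role pillar lists.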

View File

@@ -0,0 +1,21 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-soc',
'so-kratos',
'so-aptcacherng',
'so-idstools',
'so-redis',
'so-logstash',
'so-elasticsearch',
'so-curator',
'so-kibana',
'so-elastalert',
'so-filebeat',
'so-suricata',
'so-steno',
'so-dockerregistry',
'so-soctopus'
]
} %}

View File

@@ -0,0 +1,9 @@
{% set docker = {
'containers': [
'so-strelka-coordinator',
'so-strelka-gatekeeper',
'so-strelka-manager',
'so-strelka-frontend',
'so-strelka-filestream'
]
} %}

View File

@@ -0,0 +1,7 @@
{% set docker = {
'containers': [
'so-thehive',
'so-thehive-es',
'so-cortex'
]
} %}

View File

@@ -0,0 +1,7 @@
{% set docker = {
'containers': [
'so-nginx',
'so-telegraf',
'so-elasticsearch'
]
} %}

View File

@@ -0,0 +1,5 @@
{% set docker = {
'containers': [
'so-wazuh'
]
} %}

View File

@@ -2,7 +2,7 @@
MASTER=MASTER
VERSION="HH1.1.4"
TRUSTED_CONTAINERS=( \
"so-nginx:$VERSION" \
"so-thehive-cortex:$VERSION" \
"so-curator:$VERSION" \
"so-domainstats:$VERSION" \

salt/common/tools/sbin/so-kibana-config-export Normal file → Executable file
View File

@@ -1,6 +1,35 @@
#!/bin/bash
#
# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
#
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
KIBANA_HOST={{ MASTER }}
KSO_PORT=5601
OUTFILE="saved_objects.ndjson"
curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
# Clean up using PLACEHOLDER
sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE
# Clean up for Fleet, if applicable
# {% if FLEET_NODE or FLEET_MASTER %}
# Fleet IP
sed -i "s/{{ FLEET_IP }}/FLEETPLACEHOLDER/g" $OUTFILE
# {% endif %}
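The export script above POSTs to Kibana's saved-objects export API. As an illustration, a hypothetical helper that builds the same request the script's curl invocation sends (host and port are the script's values; it only constructs the request, it does not perform it):

```python
import json

def build_export_request(kibana_host, port=5601):
    """Build URL, headers, and body for Kibana's saved-objects export API."""
    url = 'http://{}:{}/api/saved_objects/_export'.format(kibana_host, port)
    headers = {'kbn-xsrf': 'true', 'Content-Type': 'application/json'}
    body = json.dumps({
        'type': ['index-pattern', 'config', 'visualization', 'dashboard', 'search'],
        'excludeExportDetails': False,
    })
    return url, headers, body
```

The response streams back NDJSON, which is why the script's output file changed from saved_objects.json to saved_objects.ndjson.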

View File

@@ -14,35 +14,8 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{%- from 'common/maps/so-status.map.jinja' import docker with context %}
{%- set container_list = docker['containers'] %}
{%- if (salt['grains.get']('role') == 'so-mastersearch') -%}
{%- set pillar_val = 'master_search' -%}
{%- elif (salt['grains.get']('role') == 'so-master') -%}
{%- set pillar_val = 'master' -%}
{%- elif (salt['grains.get']('role') == 'so-heavynode') -%}
{%- set pillar_val = 'heavy_node' -%}
{%- elif (salt['grains.get']('role') == 'so-sensor') -%}
{%- set pillar_val = 'sensor' -%}
{%- elif (salt['grains.get']('role') == 'so-eval') -%}
{%- set pillar_val = 'eval' -%}
{%- elif (salt['grains.get']('role') == 'so-fleet') -%}
{%- set pillar_val = 'fleet' -%}
{%- elif (salt['grains.get']('role') == 'so-helix') -%}
{%- set pillar_val = 'helix' -%}
{%- elif (salt['grains.get']('role') == 'so-node') -%}
{%- if (salt['pillar.get']('node:node_type') == 'parser') -%}
{%- set pillar_val = 'parser_node' -%}
{%- elif (salt['pillar.get']('node:node_type') == 'hot') -%}
{%- set pillar_val = 'hot_node' -%}
{%- elif (salt['pillar.get']('node:node_type') == 'warm') -%}
{%- set pillar_val = 'warm_node' -%}
{%- elif (salt['pillar.get']('node:node_type') == 'search') -%}
{%- set pillar_val = 'search_node' -%}
{%- endif -%}
{%- endif -%}
{%- set pillar_name = pillar_val ~ pillar_suffix -%}
{%- set container_list = salt['pillar.get'](pillar_name) %}
if ! [ "$(id -u)" = 0 ]; then
echo "This command must be run as root"

View File

@@ -1,12 +1,8 @@
{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
{%- endif -%}
---
# Remember, leave a key empty if there is no value. None will be a string,

View File

@@ -1,11 +1,7 @@
{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
{%- endif %}
---
# Remember, leave a key empty if there is no value. None will be a string,

View File

@@ -1,17 +1,13 @@
{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
{%- endif -%}
#!/bin/bash
#

View File

@@ -1,11 +1,7 @@
{% if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
{% elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
{%- endif %}
---

View File

@@ -1,6 +1,6 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set MASTER = salt['grains.get']('master') %}
{% if grains['role'] in ['so-searchnode', 'so-eval', 'so-node', 'so-mastersearch', 'so-heavynode', 'so-standalone'] %}
# Curator
# Create the group
curatorgroup:

salt/docker/init.sls Normal file
View File

@@ -0,0 +1,8 @@
+installdocker:
+  pkg.installed:
+    - name: docker-ce
+
+# Make sure Docker is running!
+docker:
+  service.running:
+    - enable: True

View File

@@ -2,7 +2,7 @@
 {% set esport = salt['pillar.get']('master:es_port', '') %}
 # This is the folder that contains the rule yaml files
 # Any .yaml file will be loaded as a rule
-rules_folder: /etc/elastalert/rules/
+rules_folder: /opt/elastalert/rules/
 # Sets whether or not ElastAlert should recursively descend
 # the rules directory - true or false

View File

@@ -1,107 +0,0 @@
# -*- coding: utf-8 -*-
# HiveAlerter modified from original at: https://raw.githubusercontent.com/Nclose-ZA/elastalert_hive_alerter/master/elastalert_hive_alerter/hive_alerter.py
import uuid

from elastalert.alerts import Alerter
from thehive4py.api import TheHiveApi
from thehive4py.models import Alert, AlertArtifact, CustomFieldHelper


class TheHiveAlerter(Alerter):
    """
    Use matched data to create alerts containing observables in an instance of TheHive
    """
    required_options = set(['hive_connection', 'hive_alert_config'])

    def get_aggregation_summary_text(self, matches):
        text = super(TheHiveAlerter, self).get_aggregation_summary_text(matches)
        if text:
            text = '```\n{0}```\n'.format(text)
        return text

    def create_artifacts(self, match):
        artifacts = []
        context = {'rule': self.rule, 'match': match}
        for mapping in self.rule.get('hive_observable_data_mapping', []):
            for observable_type, match_data_key in mapping.items():
                try:
                    artifacts.append(AlertArtifact(dataType=observable_type, data=match_data_key.format(**context)))
                except KeyError as e:
                    print(('format string {} fail cause no key {} in {}'.format(e, match_data_key, context)))
        return artifacts

    def create_alert_config(self, match):
        context = {'rule': self.rule, 'match': match}
        alert_config = {
            'artifacts': self.create_artifacts(match),
            'sourceRef': str(uuid.uuid4())[0:6],
            'title': '{rule[name]}'.format(**context)
        }
        alert_config.update(self.rule.get('hive_alert_config', {}))
        for alert_config_field, alert_config_value in alert_config.items():
            if alert_config_field == 'customFields':
                custom_fields = CustomFieldHelper()
                for cf_key, cf_value in alert_config_value.items():
                    try:
                        func = getattr(custom_fields, 'add_{}'.format(cf_value['type']))
                    except AttributeError:
                        raise Exception('unsupported custom field type {}'.format(cf_value['type']))
                    value = cf_value['value'].format(**context)
                    func(cf_key, value)
                alert_config[alert_config_field] = custom_fields.build()
            elif isinstance(alert_config_value, str):
                alert_config[alert_config_field] = alert_config_value.format(**context)
            elif isinstance(alert_config_value, (list, tuple)):
                formatted_list = []
                for element in alert_config_value:
                    try:
                        formatted_list.append(element.format(**context))
                    except (AttributeError, KeyError, IndexError):
                        formatted_list.append(element)
                alert_config[alert_config_field] = formatted_list
        return alert_config

    def send_to_thehive(self, alert_config):
        connection_details = self.rule['hive_connection']
        api = TheHiveApi(
            connection_details.get('hive_host', ''),
            connection_details.get('hive_apikey', ''),
            proxies=connection_details.get('hive_proxies', {'http': '', 'https': ''}),
            cert=connection_details.get('hive_verify', False))

        alert = Alert(**alert_config)
        response = api.create_alert(alert)
        if response.status_code != 201:
            raise Exception('alert not successfully created in TheHive\n{}'.format(response.text))

    def alert(self, matches):
        if self.rule.get('hive_alert_config_type', 'custom') != 'classic':
            for match in matches:
                alert_config = self.create_alert_config(match)
                self.send_to_thehive(alert_config)
        else:
            alert_config = self.create_alert_config(matches[0])
            artifacts = []
            for match in matches:
                artifacts += self.create_artifacts(match)
                if 'related_events' in match:
                    for related_event in match['related_events']:
                        artifacts += self.create_artifacts(related_event)
            alert_config['artifacts'] = artifacts
            alert_config['title'] = self.create_title(matches)
            alert_config['description'] = self.create_alert_body(matches)
            self.send_to_thehive(alert_config)

    def get_info(self):
        return {
            'type': 'hivealerter',
            'hive_host': self.rule.get('hive_connection', {}).get('hive_host', '')
        }
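
The alerter above resolves placeholders such as `{rule[name]}` or `{match[source][ip]}` by passing a context dict to `str.format`, so nested dictionary lookups happen inside the format string. A minimal standalone sketch of that mechanism (the sample rule/match values are made up for illustration):

```python
# Sketch of the str.format-based placeholder expansion used by the alerter.
# The sample rule and match dicts below are hypothetical, not real alert data.
rule = {'name': 'ET SCAN Suspicious inbound'}
match = {'source': {'ip': '10.0.0.5'}, 'destination': {'ip': '10.0.0.9'}}
context = {'rule': rule, 'match': match}

# '{rule[name]}' indexes the rule dict with the string key 'name'.
title = '{rule[name]}'.format(**context)
# Nested lookups chain: match -> source -> ip.
observable = '{match[source][ip]}'.format(**context)

print(title)       # ET SCAN Suspicious inbound
print(observable)  # 10.0.0.5
```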

View File

@@ -1,6 +1,8 @@
 {% set es = salt['pillar.get']('static:masterip', '') %}
 {% set hivehost = salt['pillar.get']('static:masterip', '') %}
 {% set hivekey = salt['pillar.get']('static:hivekey', '') %}
+{% set MASTER = salt['pillar.get']('master:url_base', '') %}
 # hive.yaml
 # Elastalert rule to forward IDS alerts from Security Onion to a specified TheHive instance.
 #
@@ -15,7 +17,7 @@ timeframe:
 buffer_time:
   minutes: 10
 allow_buffer_time_overlap: true
-query_key: ["rule.signature_id"]
+query_key: ["rule.uuid"]
 realert:
   days: 1
 filter:
@@ -23,10 +25,11 @@ filter:
   query_string:
     query: "event.module: suricata"
-alert: modules.so.thehive.TheHiveAlerter
+alert: hivealerter
 hive_connection:
-  hive_host: https://{{hivehost}}/thehive/
+  hive_host: http://{{hivehost}}
+  hive_port: 9000/thehive
   hive_apikey: {{hivekey}}
   hive_proxies:
@@ -37,9 +40,9 @@ hive_alert_config:
   title: '{match[rule][name]}'
   type: 'NIDS'
   source: 'SecurityOnion'
-  description: "`NIDS Dashboard:` \n\n <https://{{es}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
+  description: "`Hunting Pivot:` \n\n <https://{{MASTER}}/#/hunt?q=event.module%3A%20suricata%20AND%20rule.uuid%3A{match[rule][uuid]}%20%7C%20groupby%20source.ip%20destination.ip%20rule.name> \n\n `Kibana Dashboard:` \n\n <https://{{MASTER}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
   severity: 2
-  tags: ['{match[rule][signature_id]}','{match[source][ip]}','{match[destination][ip]}']
+  tags: ['{match[rule][uuid]}','{match[source][ip]}','{match[destination][ip]}']
   tlp: 3
   status: 'New'
   follow: True

View File

@@ -12,26 +12,15 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
-{% if grains['role'] == 'so-master' %}
-{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-{% set esip = salt['pillar.get']('master:mainip', '') %}
-{% set esport = salt['pillar.get']('master:es_port', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-{% set esip = salt['pillar.get']('master:mainip', '') %}
-{% set esport = salt['pillar.get']('master:es_port', '') %}
+{% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
+{% set esip = salt['pillar.get']('master:mainip', '') %}
+{% set esport = salt['pillar.get']('master:es_port', '') %}
 {% elif grains['role'] == 'so-node' %}
 {%- set esalert = salt['pillar.get']('node:elastalert', '0') %}
 {% endif %}

 # Elastalert
@@ -55,35 +44,35 @@ elastalogdir:
   file.directory:
     - name: /opt/so/log/elastalert
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True

 elastarules:
   file.directory:
     - name: /opt/so/rules/elastalert
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True

 elastaconfdir:
   file.directory:
     - name: /opt/so/conf/elastalert
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True

 elastasomodulesdir:
   file.directory:
     - name: /opt/so/conf/elastalert/modules/so
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True

 elastacustmodulesdir:
   file.directory:
     - name: /opt/so/conf/elastalert/modules/custom
     - user: 933
-    - group: 939
+    - group: 933
     - makedirs: True

 elastasomodulesync:
@@ -91,7 +80,7 @@ elastasomodulesync:
   - name: /opt/so/conf/elastalert/modules/so
   - source: salt://elastalert/files/modules/so
   - user: 933
-  - group: 939
+  - group: 933
   - makedirs: True

 elastarulesync:
@@ -99,7 +88,7 @@ elastarulesync:
   - name: /opt/so/rules/elastalert
   - source: salt://elastalert/files/rules/so
   - user: 933
-  - group: 939
+  - group: 933
   - template: jinja

 elastaconf:
@@ -107,7 +96,7 @@ elastaconf:
   - name: /opt/so/conf/elastalert/elastalert_config.yaml
   - source: salt://elastalert/files/elastalert_config.yaml
   - user: 933
-  - group: 939
+  - group: 933
   - template: jinja

 so-elastalert:
@@ -118,16 +107,9 @@ so-elastalert:
   - user: elastalert
   - detach: True
   - binds:
-    - /opt/so/rules/elastalert:/etc/elastalert/rules/:ro
+    - /opt/so/rules/elastalert:/opt/elastalert/rules/:ro
     - /opt/so/log/elastalert:/var/log/elastalert:rw
     - /opt/so/conf/elastalert/modules/:/opt/elastalert/modules/:ro
-    - /opt/so/conf/elastalert/elastalert_config.yaml:/etc/elastalert/conf/elastalert_config.yaml:ro
+    - /opt/so/conf/elastalert/elastalert_config.yaml:/opt/config/elastalert_config.yaml:ro
-  - environment:
-    - ELASTICSEARCH_HOST: {{ esip }}
-    - ELASTICSEARCH_PORT: {{ esport }}
-    - ELASTALERT_CONFIG: /etc/elastalert/conf/elastalert_config.yaml
-    - ELASTALERT_SUPERVISOR_CONF: /etc/elastalert/conf/elastalert_supervisord.conf
-    - RULES_DIRECTORY: /etc/elastalert/rules/
-    - LOG_DIR: /var/log/elastalert
 {% endif %}

View File

@@ -4,7 +4,7 @@
 {
   "geoip": {
     "field": "destination.ip",
-    "target_field": "geo",
+    "target_field": "destination.geo",
     "database_file": "GeoLite2-City.mmdb",
     "ignore_missing": true,
     "properties": ["ip", "country_iso_code", "country_name", "continent_name", "region_iso_code", "region_name", "city_name", "timezone", "location"]
@@ -13,7 +13,7 @@
 {
   "geoip": {
     "field": "source.ip",
-    "target_field": "geo",
+    "target_field": "source.geo",
     "database_file": "GeoLite2-City.mmdb",
     "ignore_missing": true,
     "properties": ["ip", "country_iso_code", "country_name", "continent_name", "region_iso_code", "region_name", "city_name", "timezone", "location"]
@@ -41,8 +41,8 @@
 { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
 {
   "remove": {
-    "field": [ "index_name_prefix", "message2"],
-    "ignore_failure": false
+    "field": [ "index_name_prefix", "message2", "type" ],
+    "ignore_failure": true
   }
 }
 ]

View File

@@ -24,8 +24,14 @@
 { "rename": { "field": "message3.columns.pid", "target_field": "process.pid", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.parent", "target_field": "process.ppid", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.community_id", "target_field": "network.community_id", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.local_address", "target_field": "local.ip", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.local_port", "target_field": "local.port", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.remote_address", "target_field": "remote.ip", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } },
+{ "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } },
-{ "set": { "if": "ctx.message3.columns.data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } },
+{ "set": { "if": "ctx.message3.columns.?data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
 { "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } },

View File

@@ -6,6 +6,8 @@
 { "rename":{ "field": "message2.alert", "target_field": "rule", "ignore_failure": true } },
 { "rename":{ "field": "rule.signature", "target_field": "rule.name", "ignore_failure": true } },
 { "rename":{ "field": "rule.ref", "target_field": "rule.version", "ignore_failure": true } },
+{ "rename":{ "field": "rule.signature_id", "target_field": "rule.uuid", "ignore_failure": true } },
+{ "rename":{ "field": "rule.signature_id", "target_field": "rule.signature", "ignore_failure": true } },
 { "pipeline": { "name": "suricata.common" } }
 ]
 }

View File

@@ -12,9 +12,9 @@
 { "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
 { "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
 { "set": { "field": "client.ip", "value": "{{source.ip}}" } },
-{ "set": { "if": "ctx.source.port != null", "field": "client.port", "value": "{{source.port}}" } },
+{ "set": { "if": "ctx.source?.port != null", "field": "client.port", "value": "{{source.port}}" } },
 { "set": { "field": "server.ip", "value": "{{destination.ip}}" } },
-{ "set": { "if": "ctx.destination.port != null", "field": "server.port", "value": "{{destination.port}}" } },
+{ "set": { "if": "ctx.destination?.port != null", "field": "server.port", "value": "{{destination.port}}" } },
 { "set": { "field": "observer.name", "value": "{{agent.name}}" } },
 { "date": { "field": "message2.ts", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "ignore_failure": true } },
 { "remove": { "field": ["agent"], "ignore_failure": true } },

View File

@@ -21,6 +21,20 @@
 { "rename": { "field": "message2.orig_cc", "target_field": "client.country_code","ignore_missing": true } },
 { "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } },
 { "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } },
+{ "script": { "lang": "painless", "source": "ctx.network.bytes = (ctx.client.bytes + ctx.server.bytes)", "ignore_failure": true } },
+{ "set": { "if": "ctx.connection.state == 'S0'", "field": "connection.state_description", "value": "Connection attempt seen, no reply" } },
+{ "set": { "if": "ctx.connection.state == 'S1'", "field": "connection.state_description", "value": "Connection established, not terminated" } },
+{ "set": { "if": "ctx.connection.state == 'S2'", "field": "connection.state_description", "value": "Connection established and close attempt by originator seen (but no reply from responder)" } },
+{ "set": { "if": "ctx.connection.state == 'S3'", "field": "connection.state_description", "value": "Connection established and close attempt by responder seen (but no reply from originator)" } },
+{ "set": { "if": "ctx.connection.state == 'SF'", "field": "connection.state_description", "value": "Normal SYN/FIN completion" } },
+{ "set": { "if": "ctx.connection.state == 'REJ'", "field": "connection.state_description", "value": "Connection attempt rejected" } },
+{ "set": { "if": "ctx.connection.state == 'RSTO'", "field": "connection.state_description", "value": "Connection established, originator aborted (sent a RST)" } },
+{ "set": { "if": "ctx.connection.state == 'RSTR'", "field": "connection.state_description", "value": "Established, responder aborted" } },
+{ "set": { "if": "ctx.connection.state == 'RSTOS0'","field": "connection.state_description", "value": "Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder" } },
+{ "set": { "if": "ctx.connection.state == 'RSTRH'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator" } },
+{ "set": { "if": "ctx.connection.state == 'SH'", "field": "connection.state_description", "value": "Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was 'half' open)" } },
+{ "set": { "if": "ctx.connection.state == 'SHR'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator" } },
+{ "set": { "if": "ctx.connection.state == 'OTH'", "field": "connection.state_description", "value": "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)" } },
 { "pipeline": { "name": "zeek.common" } }
 ]
 }
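
The `set` processors added above amount to a lookup table from Zeek conn.log state codes to human-readable descriptions. A Python sketch of that table (not part of the pipeline itself, just a compact reading of it):

```python
# Zeek conn.log state codes mapped to the descriptions used by the pipeline.
ZEEK_CONN_STATES = {
    'S0': 'Connection attempt seen, no reply',
    'S1': 'Connection established, not terminated',
    'S2': 'Connection established and close attempt by originator seen (but no reply from responder)',
    'S3': 'Connection established and close attempt by responder seen (but no reply from originator)',
    'SF': 'Normal SYN/FIN completion',
    'REJ': 'Connection attempt rejected',
    'RSTO': 'Connection established, originator aborted (sent a RST)',
    'RSTR': 'Established, responder aborted',
    'RSTOS0': 'Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder',
    'RSTRH': 'Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator',
    'SH': "Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was 'half' open)",
    'SHR': 'Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator',
    'OTH': "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)",
}

def describe_state(state):
    # Like the conditional set processors, unknown codes get no description.
    return ZEEK_CONN_STATES.get(state)
```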

View File

@@ -4,9 +4,9 @@
 { "remove": { "field": ["host"], "ignore_failure": true } },
 { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
 { "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } },
-{ "rename": { "field": "message2.named_pipe", "target_field": "named_pipe", "ignore_missing": true } },
+{ "rename": { "field": "message2.named_pipe", "target_field": "dce_rpc.named_pipe", "ignore_missing": true } },
-{ "rename": { "field": "message2.endpoint", "target_field": "endpoint", "ignore_missing": true } },
+{ "rename": { "field": "message2.endpoint", "target_field": "dce_rpc.endpoint", "ignore_missing": true } },
-{ "rename": { "field": "message2.operation", "target_field": "operation", "ignore_missing": true } },
+{ "rename": { "field": "message2.operation", "target_field": "dce_rpc.operation", "ignore_missing": true } },
 { "pipeline": { "name": "zeek.common" } }
 ]
 }

View File

@@ -15,7 +15,7 @@
 { "rename": { "field": "message2.domain", "target_field": "host.domain", "ignore_missing": true } },
 { "rename": { "field": "message2.host_name", "target_field": "host.hostname", "ignore_missing": true } },
 { "rename": { "field": "message2.duration", "target_field": "event.duration", "ignore_missing": true } },
-{ "rename": { "field": "message2.msg_types", "target_field": "message_types", "ignore_missing": true } },
+{ "rename": { "field": "message2.msg_types", "target_field": "dhcp.message_types", "ignore_missing": true } },
 { "pipeline": { "name": "zeek.common" } }
 ]
 }

View File

@@ -23,6 +23,7 @@
 { "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
 { "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
 { "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },
+{ "pipeline": { "if": "ctx.dns.query.name.contains('.')", "name": "zeek.dns.tld"} },
 { "pipeline": { "name": "zeek.common" } }
 ]
 }

View File

@@ -0,0 +1,13 @@
+{
+  "description" : "zeek.dns.tld",
+  "processors" : [
+    { "script": { "lang": "painless", "source": "ctx.dns.top_level_domain = ctx.dns.query.name.substring(ctx.dns.query.name.lastIndexOf('.') + 1)", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.query_without_tld = ctx.dns.query.name.substring(0, (ctx.dns.query.name.lastIndexOf('.')))", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.parent_domain = ctx.dns.query_without_tld.substring(ctx.dns.query_without_tld.lastIndexOf('.') + 1)", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.subdomain = ctx.dns.query_without_tld.substring(0, (ctx.dns.query_without_tld.lastIndexOf('.')))", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.highest_registered_domain = ctx.dns.parent_domain + '.' + ctx.dns.top_level_domain", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.subdomain_length = ctx.dns.subdomain.length()", "ignore_failure": true } },
+    { "script": { "lang": "painless", "source": "ctx.dns.parent_domain_length = ctx.dns.parent_domain.length()", "ignore_failure": true } },
+    { "remove": { "field": "dns.query_without_tld", "ignore_failure": true } }
+  ]
+}
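
The painless scripts in this new pipeline split a DNS query name on its last dots, not via a public-suffix list, so "top-level domain" here just means "everything after the final dot". A Python sketch of the same substring logic (the calling pipeline guards on the name containing a dot; the no-dot subdomain guard below is an assumption, since painless would skip that step via `ignore_failure`):

```python
# Sketch of the last-dot splitting performed by the zeek.dns.tld pipeline.
def split_dns_name(name):
    top_level_domain = name[name.rfind('.') + 1:]
    without_tld = name[:name.rfind('.')]
    parent_domain = without_tld[without_tld.rfind('.') + 1:]
    # Painless substring(0, -1) fails (and is ignored) when there is no
    # further dot; here we return '' instead for a single-label remainder.
    subdomain = without_tld[:without_tld.rfind('.')] if '.' in without_tld else ''
    return {
        'top_level_domain': top_level_domain,
        'parent_domain': parent_domain,
        'subdomain': subdomain,
        'highest_registered_domain': parent_domain + '.' + top_level_domain,
    }

parts = split_dns_name('www.example.com')
print(parts['highest_registered_domain'])  # example.com
```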

View File

@@ -12,30 +12,22 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% if FEATURES %}
 {% set FEATURES = "-features" %}
 {% else %}
 {% set FEATURES = '' %}
 {% endif %}
-{% if grains['role'] == 'so-master' %}
-{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
-{% set esheap = salt['pillar.get']('master:esheap', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
-{% set esheap = salt['pillar.get']('master:esheap', '') %}
-{% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
-{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
-{% set esheap = salt['pillar.get']('node:esheap', '') %}
+{% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
+{% set esheap = salt['pillar.get']('master:esheap', '') %}
+{% elif grains['role'] in ['so-node','so-heavynode'] %}
+{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
+{% set esheap = salt['pillar.get']('node:esheap', '') %}
 {% endif %}

 vm.max_map_count:
@@ -144,8 +136,12 @@ so-elasticsearch-pipelines-file:
 so-elasticsearch-pipelines:
   cmd.run:
     - name: /opt/so/conf/elasticsearch/so-elasticsearch-pipelines {{ esclustername }}
+    - onchanges:
+      - file: esingestconf
+      - file: esyml
+      - file: so-elasticsearch-pipelines-file

-{% if grains['role'] == 'so-master' or grains['role'] == "so-eval" or grains['role'] == "so-mastersearch" %}
+{% if grains['role'] in ['so-master', 'so-eval', 'so-mastersearch', 'so-standalone'] %}
 so-elasticsearch-templates:
   cmd.run:
     - name: /usr/sbin/so-elasticsearch-templates

View File

@@ -74,7 +74,7 @@ filebeat.modules:
 # List of prospectors to fetch data.
 filebeat.inputs:
 #------------------------------ Log prospector --------------------------------
-{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" %}
+{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" or grains['role'] == "so-standalone" %}
 {%- if BROVER != 'SURICATA' %}
 {%- for LOGNAME in salt['pillar.get']('brologs:enabled', '') %}
 - type: log

View File

@@ -11,7 +11,7 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set MASTERIP = salt['pillar.get']('static:masterip', '') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}

View File

@@ -1,15 +1,16 @@
 # Firewall Magic for the grid
-{%- if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch'] %}
-{%- set ip = salt['pillar.get']('static:masterip', '') %}
-{%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
-{%- set ip = salt['pillar.get']('node:mainip', '') %}
-{%- elif grains['role'] == 'so-sensor' %}
-{%- set ip = salt['pillar.get']('sensor:mainip', '') %}
-{%- elif grains['role'] == 'so-fleet' %}
-{%- set ip = salt['pillar.get']('node:mainip', '') %}
-{%- endif %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
-{%- set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %}
+{% if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch', 'so-standalone'] %}
+{% set ip = salt['pillar.get']('static:masterip', '') %}
+{% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
+{% set ip = salt['pillar.get']('node:mainip', '') %}
+{% elif grains['role'] == 'so-sensor' %}
+{% set ip = salt['pillar.get']('sensor:mainip', '') %}
+{% elif grains['role'] == 'so-fleet' %}
+{% set ip = salt['pillar.get']('node:mainip', '') %}
+{% endif %}
+
+{% set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
+{% set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %}
 # Quick Fix for Docker being difficult
 iptables_fix_docker:
@@ -136,7 +137,7 @@ enable_wazuh_manager_1514_udp_{{ip}}:
     - save: True
 # Rules if you are a Master
-{% if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-helix' or grains['role'] == 'so-mastersearch' %}
+{% if grains['role'] in ['so-master', 'so-eval', 'so-helix', 'so-mastersearch', 'so-standalone'] %}
 #This should be more granular
 iptables_allow_master_docker:
   iptables.insert:
@@ -364,6 +365,17 @@ enable_minion_osquery_8080_{{ip}}:
     - position: 1
     - save: True
+enable_minion_osquery_8090_{{ip}}:
+  iptables.insert:
+    - table: filter
+    - chain: DOCKER-USER
+    - jump: ACCEPT
+    - proto: tcp
+    - source: {{ ip }}
+    - dport: 8090
+    - position: 1
+    - save: True
 enable_minion_wazuh_55000_{{ip}}:
   iptables.insert:
     - table: filter
@@ -671,7 +683,14 @@ enable_cluster_ES_9300_{{ip}}:
 # Rules if you are a Sensor
 {% if grains['role'] == 'so-sensor' %}
+iptables_allow_sensor_docker:
+  iptables.insert:
+    - table: filter
+    - chain: INPUT
+    - jump: ACCEPT
+    - source: 172.17.0.0/24
+    - position: 1
+    - save: True
 {% endif %}
 # Rules if you are a Hot Node
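For reference, each `iptables.insert` state in the firewall hunks above corresponds to a raw iptables invocation. A minimal sketch of what the new sensor rule amounts to, with the command only printed rather than executed (running it for real needs root; values are copied from the state above):

```shell
#!/bin/bash
# Build the iptables command that roughly matches the
# iptables_allow_sensor_docker state: accept INPUT traffic
# from the default docker0 subnet, inserted at position 1.
table="filter"
chain="INPUT"
source_net="172.17.0.0/24"
position=1

cmd="iptables -t ${table} -I ${chain} ${position} -s ${source_net} -j ACCEPT"
echo "$cmd"
```

Salt's `- save: True` additionally persists the rule so it survives a reboot, which the one-shot command above does not do.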


@@ -31,7 +31,7 @@ docker exec so-fleet fleetctl apply -f /packs/hh/osquery.conf
 # Enable Fleet
 echo "Enabling Fleet..."
 salt-call state.apply fleet.event_enable-fleet queue=True >> /root/fleet-setup.log
-salt-call state.apply common queue=True >> /root/fleet-setup.log
+salt-call state.apply nginx queue=True >> /root/fleet-setup.log
 # Generate osquery install packages
 echo "Generating osquery install packages - this will take some time..."
@@ -41,8 +41,8 @@ sleep 120
 echo "Installing launcher via salt..."
 salt-call state.apply fleet.install_package queue=True >> /root/fleet-setup.log
 salt-call state.apply filebeat queue=True >> /root/fleet-setup.log
-docker stop so-core
-salt-call state.apply common queue=True >> /root/fleet-setup.log
+docker stop so-nginx
+salt-call state.apply nginx queue=True >> /root/fleet-setup.log
 echo "Fleet Setup Complete - Login here: https://{{ MAIN_HOSTNAME }}"
 echo "Your username is $2 and your password is $initpw"


@@ -1,7 +1,7 @@
 {%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) -%}
 {%- set FLEETPASS = salt['pillar.get']('secrets:fleet', None) -%}
 {%- set FLEETJWT = salt['pillar.get']('secrets:fleet_jwt', None) -%}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set MAINIP = salt['pillar.get']('node:mainip') %}
 {% set FLEETARCH = salt['grains.get']('role') %}
@@ -13,6 +13,9 @@
 {% set MAINIP = salt['pillar.get']('static:masterip') %}
 {% endif %}
+include:
+  - mysql
 #{% if grains.id.split('_')|last in ['master', 'eval', 'fleet'] %}
 #so/fleet:
 #  event.send:
@@ -86,6 +89,8 @@ fleetdb:
     - connection_port: 3306
     - connection_user: root
     - connection_pass: {{ MYSQLPASS }}
+    - require:
+      - sls: mysql
 fleetdbuser:
   mysql_user.present:
@@ -95,6 +100,8 @@ fleetdbuser:
     - connection_port: 3306
     - connection_user: root
     - connection_pass: {{ MYSQLPASS }}
+    - require:
+      - fleetdb
 fleetdbpriv:
   mysql_grants.present:
@@ -106,6 +113,8 @@ fleetdbpriv:
     - connection_port: 3306
     - connection_user: root
     - connection_pass: {{ MYSQLPASS }}
+    - require:
+      - fleetdb
 {% if FLEETPASS == None or FLEETJWT == None %}


@@ -1226,7 +1226,7 @@
 },
 {
   "params": [
-    " / 5"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1365,7 +1365,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1504,7 +1504,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1643,7 +1643,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
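The dashboard hunks above replace hard-coded divisors (`/ 5`, `/ 8`, `/ 16`) with the per-node `{{ CPUS }}` template value, so CPU panels normalize by the node's actual core count instead of an assumed one. The transform itself is plain division; a quick sketch of the rendered math (example numbers only):

```shell
#!/bin/bash
# Normalize an aggregate CPU metric by core count, as the
# Grafana "math" transform (" / {{ CPUS }}") does once Salt
# has rendered CPUS for the node.
total_pct=440   # e.g. summed per-core usage reported as 440%
cpus=8          # example value Salt would inject for an 8-core node
normalized=$(awk -v t="$total_pct" -v c="$cpus" 'BEGIN { printf "%.1f", t / c }')
echo "${normalized}%"
```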


@@ -290,7 +290,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -430,7 +430,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1046,7 +1046,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1186,7 +1186,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1326,7 +1326,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }

File diff suppressed because it is too large


@@ -298,7 +298,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -438,7 +438,7 @@
 },
 {
   "params": [
-    " / 16"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }


@@ -1326,7 +1326,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1465,7 +1465,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }
@@ -1604,7 +1604,7 @@
 },
 {
   "params": [
-    " / 8"
+    " / {{ CPUS }}"
   ],
   "type": "math"
 }


@@ -10,6 +10,13 @@ providers:
   editable: true
   options:
     path: /etc/grafana/grafana_dashboards/master
+- name: 'Master Search'
+  folder: 'Master Search'
+  type: file
+  disableDeletion: false
+  editable: true
+  options:
+    path: /etc/grafana/grafana_dashboards/mastersearch
 - name: 'Sensor Nodes'
   folder: 'Sensor Nodes'
   type: file

salt/grafana/init.sls (new file, 205 lines)

@@ -0,0 +1,205 @@
{% set GRAFANA = salt['pillar.get']('master:grafana', '0') %}
{% set MASTER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}

{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}

# Grafana all the things
grafanadir:
  file.directory:
    - name: /nsm/grafana
    - user: 939
    - group: 939
    - makedirs: True

grafanaconfdir:
  file.directory:
    - name: /opt/so/conf/grafana/etc
    - user: 939
    - group: 939
    - makedirs: True

grafanadashdir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards
    - user: 939
    - group: 939
    - makedirs: True

grafanadashmdir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards/master
    - user: 939
    - group: 939
    - makedirs: True

grafanadashmsdir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards/mastersearch
    - user: 939
    - group: 939
    - makedirs: True

grafanadashevaldir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards/eval
    - user: 939
    - group: 939
    - makedirs: True

grafanadashfndir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards/sensor_nodes
    - user: 939
    - group: 939
    - makedirs: True

grafanadashsndir:
  file.directory:
    - name: /opt/so/conf/grafana/grafana_dashboards/search_nodes
    - user: 939
    - group: 939
    - makedirs: True

grafanaconf:
  file.recurse:
    - name: /opt/so/conf/grafana/etc
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/etc

{% if salt['pillar.get']('mastertab', False) %}
{% for SN, SNDATA in salt['pillar.get']('mastertab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-master:
  file.managed:
    - name: /opt/so/conf/grafana/grafana_dashboards/master/{{ SN }}-Master.json
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/dashboards/master/master.json
    - defaults:
        SERVERNAME: {{ SN }}
        MANINT: {{ SNDATA.manint }}
        MONINT: {{ SNDATA.manint }}
        CPUS: {{ SNDATA.totalcpus }}
        UID: {{ SNDATA.guid }}
        ROOTFS: {{ SNDATA.rootfs }}
        NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}

{% if salt['pillar.get']('mastersearchtab', False) %}
{% for SN, SNDATA in salt['pillar.get']('mastersearchtab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-master:
  file.managed:
    - name: /opt/so/conf/grafana/grafana_dashboards/mastersearch/{{ SN }}-MasterSearch.json
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/dashboards/mastersearch/mastersearch.json
    - defaults:
        SERVERNAME: {{ SN }}
        MANINT: {{ SNDATA.manint }}
        MONINT: {{ SNDATA.manint }}
        CPUS: {{ SNDATA.totalcpus }}
        UID: {{ SNDATA.guid }}
        ROOTFS: {{ SNDATA.rootfs }}
        NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}

{% if salt['pillar.get']('sensorstab', False) %}
{% for SN, SNDATA in salt['pillar.get']('sensorstab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-{{ SN }}:
  file.managed:
    - name: /opt/so/conf/grafana/grafana_dashboards/sensor_nodes/{{ SN }}-Sensor.json
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/dashboards/sensor_nodes/sensor.json
    - defaults:
        SERVERNAME: {{ SN }}
        MONINT: {{ SNDATA.monint }}
        MANINT: {{ SNDATA.manint }}
        CPUS: {{ SNDATA.totalcpus }}
        UID: {{ SNDATA.guid }}
        ROOTFS: {{ SNDATA.rootfs }}
        NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}

{% if salt['pillar.get']('nodestab', False) %}
{% for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboardsearch-{{ SN }}:
  file.managed:
    - name: /opt/so/conf/grafana/grafana_dashboards/search_nodes/{{ SN }}-Node.json
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/dashboards/search_nodes/searchnode.json
    - defaults:
        SERVERNAME: {{ SN }}
        MANINT: {{ SNDATA.manint }}
        MONINT: {{ SNDATA.manint }}
        CPUS: {{ SNDATA.totalcpus }}
        UID: {{ SNDATA.guid }}
        ROOTFS: {{ SNDATA.rootfs }}
        NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}

{% if salt['pillar.get']('evaltab', False) %}
{% for SN, SNDATA in salt['pillar.get']('evaltab', {}).items() %}
{% set NODETYPE = SN.split('_')|last %}
{% set SN = SN | regex_replace('_' ~ NODETYPE, '') %}
dashboard-{{ SN }}:
  file.managed:
    - name: /opt/so/conf/grafana/grafana_dashboards/eval/{{ SN }}-Node.json
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://grafana/dashboards/eval/eval.json
    - defaults:
        SERVERNAME: {{ SN }}
        MANINT: {{ SNDATA.manint }}
        MONINT: {{ SNDATA.monint }}
        CPUS: {{ SNDATA.totalcpus }}
        UID: {{ SNDATA.guid }}
        ROOTFS: {{ SNDATA.rootfs }}
        NSMFS: {{ SNDATA.nsmfs }}
{% endfor %}
{% endif %}

so-grafana:
  docker_container.running:
    - image: {{ MASTER }}:5000/soshybridhunter/so-grafana:{{ VERSION }}
    - hostname: grafana
    - user: socore
    - binds:
      - /nsm/grafana:/var/lib/grafana:rw
      - /opt/so/conf/grafana/etc/grafana.ini:/etc/grafana/grafana.ini:ro
      - /opt/so/conf/grafana/etc/datasources:/etc/grafana/provisioning/datasources:rw
      - /opt/so/conf/grafana/etc/dashboards:/etc/grafana/provisioning/dashboards:rw
      - /opt/so/conf/grafana/grafana_dashboards:/etc/grafana/grafana_dashboards:rw
    - environment:
      - GF_SECURITY_ADMIN_PASSWORD=augusta
    - port_bindings:
      - 0.0.0.0:3000:3000
    - watch:
      - file: /opt/so/conf/grafana/*

{% endif %}
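The `so-grafana` state above is Salt's `docker_container.running`, which corresponds roughly to a `docker run` invocation. A hedged sketch of that equivalent, printed rather than executed (the `MASTER` hostname is a placeholder for the Jinja-rendered value, and only a subset of the binds is shown):

```shell
#!/bin/bash
# Approximate "docker run" equivalent of the so-grafana state.
# MASTER and VERSION stand in for values Salt renders at apply time.
MASTER="master.example"
VERSION="HH1.2.2"

docker_cmd="docker run -d --name so-grafana --hostname grafana --user socore \
-v /nsm/grafana:/var/lib/grafana:rw \
-v /opt/so/conf/grafana/etc/grafana.ini:/etc/grafana/grafana.ini:ro \
-p 0.0.0.0:3000:3000 \
${MASTER}:5000/soshybridhunter/so-grafana:${VERSION}"

echo "$docker_cmd"
```

Unlike a bare `docker run`, the Salt state also restarts the container when any file under `/opt/so/conf/grafana/` changes, via the `- watch:` requisite.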


@@ -1,5 +1,5 @@
 {% set MASTERIP = salt['pillar.get']('master:mainip', '') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 hiveconfdir:
   file.directory:


@@ -23,7 +23,7 @@ search {
 # Number of shards
 nbshards = 5
 # Number of replicas
-nbreplicas = 1
+nbreplicas = 0
 # Arbitrary settings
 settings {
 # Maximum number of nested fields


@@ -22,7 +22,7 @@ search {
 # Number of shards
 nbshards = 5
 # Number of replicas
-nbreplicas = 1
+nbreplicas = 0
 # Arbitrary settings
 settings {
 # Maximum number of nested fields


@@ -5,7 +5,7 @@
 {%- set HIVEKEY = salt['pillar.get']('static:hivekey', '') %}
 hive_init(){
-  sleep 60
+  sleep 120
   HIVE_IP="{{MASTERIP}}"
   HIVE_USER="{{HIVEUSER}}"
   HIVE_PASSWORD="{{HIVEPASSWORD}}"
@@ -16,7 +16,7 @@ hive_init(){
   COUNT=0
   HIVE_CONNECTED="no"
   while [[ "$COUNT" -le 240 ]]; do
-    curl --output /dev/null --silent --head --fail -k "https://$HIVE_IP:/thehive"
+    curl --output /dev/null --silent --head --fail -k "https://$HIVE_IP/thehive"
     if [ $? -eq 0 ]; then
       HIVE_CONNECTED="yes"
       echo "connected!"
@@ -52,7 +52,7 @@ if [ -f /opt/so/state/thehive.txt ]; then
   exit 0
 else
   rm -f garbage_file
-  while ! wget -O garbage_file {{MASTERIP}}:9500 2>/dev/null
+  while ! wget -O garbage_file {{MASTERIP}}:9400 2>/dev/null
   do
     echo "Waiting for Elasticsearch..."
     rm -f garbage_file
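The `hive_init` hunks above wait for TheHive and Elasticsearch with counter-bounded retry loops. The same pattern can be factored into a small reusable function; this is an illustrative sketch, not code from the repo:

```shell
#!/bin/bash
# Retry a command until it succeeds or max_tries is exhausted,
# mirroring the COUNT / HIVE_CONNECTED loop in so-thehive-init.
wait_for() {
  local max_tries=$1; shift
  local count=0
  while [ "$count" -le "$max_tries" ]; do
    if "$@"; then
      return 0          # service answered, like HIVE_CONNECTED="yes"
    fi
    count=$((count + 1))
    sleep 0.1           # the real script sleeps longer between probes
  done
  return 1              # gave up, like the script's "no" branch
}

# Example: a check that succeeds immediately, and one that never does.
wait_for 3 true && echo "connected!"
wait_for 2 false || echo "timed out"
```

In the real script the probed command is `curl --fail -k "https://$HIVE_IP/thehive"`; here `true`/`false` stand in so the sketch runs anywhere.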


@@ -12,7 +12,7 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 # IDSTools Setup
 idstoolsdir:

salt/influxdb/init.sls (new file, 43 lines)

@@ -0,0 +1,43 @@
{% set GRAFANA = salt['pillar.get']('master:grafana', '0') %}
{% set MASTER = salt['grains.get']('master') %}
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}

{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}

# Influx DB
influxconfdir:
  file.directory:
    - name: /opt/so/conf/influxdb/etc
    - makedirs: True

influxdbdir:
  file.directory:
    - name: /nsm/influxdb
    - makedirs: True

influxdbconf:
  file.managed:
    - name: /opt/so/conf/influxdb/etc/influxdb.conf
    - user: 939
    - group: 939
    - template: jinja
    - source: salt://influxdb/etc/influxdb.conf

so-influxdb:
  docker_container.running:
    - image: {{ MASTER }}:5000/soshybridhunter/so-influxdb:{{ VERSION }}
    - hostname: influxdb
    - environment:
      - INFLUXDB_HTTP_LOG_ENABLED=false
    - binds:
      - /opt/so/conf/influxdb/etc/influxdb.conf:/etc/influxdb/influxdb.conf:ro
      - /nsm/influxdb:/var/lib/influxdb:rw
      - /etc/pki/influxdb.crt:/etc/ssl/influxdb.crt:ro
      - /etc/pki/influxdb.key:/etc/ssl/influxdb.key:ro
    - port_bindings:
      - 0.0.0.0:8086:8086
    - watch:
      - file: influxdbconf

{% endif %}


@@ -1,38 +1,21 @@
 #!/bin/bash
-{%- set MASTER = salt['pillar.get']('static:masterip', '') %}
-{%- set FLEET = salt['pillar.get']('static:fleet_ip', '') %}
-{%- set KRATOS = salt['pillar.get']('kratos:redirect', '') %}
+# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
 KIBANA_VERSION="7.6.1"
-MAX_WAIT=120
-
-# Check to see if Kibana is available
-until curl "{{ MASTER }}:5601/nonexistenturl" 2>&1 |grep -q "Not Found" ; do
-  wait_step=$(( ${wait_step} + 1 ))
-  echo "Waiting on Kibana ({{ MASTER }}:5601)...Attempt #$wait_step"
-  if [ ${wait_step} -gt ${MAX_WAIT} ]; then
-    echo "ERROR: Kibana not available for more than ${MAX_WAIT} seconds."
-    exit 5
-  fi
-  sleep 1s;
-done
-
-# Sleep additional JIC server is not ready
-sleep 30s
 # Copy template file
 cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_objects.ndjson
+# {% if FLEET_NODE or FLEET_MASTER %}
+# Fleet IP
+sed -i "s/FLEETPLACEHOLDER/{{ FLEET_IP }}/g" /opt/so/conf/kibana/saved_objects.ndjson
+# {% endif %}
 # SOCtopus and Master
 sed -i "s/PLACEHOLDER/{{ MASTER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-# Fleet IP
-sed -i "s/FLEETPLACEHOLDER/{{ FLEET }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-# Kratos redirect
-sed -i "s/PCAPPLACEHOLDER/{{ KRATOS }}/g" /opt/so/conf/kibana/saved_objects.ndjson
 # Load saved objects
 curl -X POST "localhost:5601/api/saved_objects/_import" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson > /dev/null 2>&1
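The config-load script above renders `saved_objects.ndjson` by copying a template and `sed`-replacing placeholders before POSTing it to Kibana's saved-objects import API. The same copy-then-substitute flow on a scratch file (paths and the `master.example` value are temporary stand-ins, not repo values):

```shell
#!/bin/bash
# Mimic the template -> rendered-file flow used for saved_objects.ndjson.
tmpl=$(mktemp)
out=$(mktemp)
echo '{"url": "https://PLACEHOLDER/soctopus"}' > "$tmpl"

cp "$tmpl" "$out"                                 # copy template, as the script does
sed -i "s/PLACEHOLDER/master.example/g" "$out"    # substitute the master address

rendered=$(cat "$out")
echo "$rendered"
rm -f "$tmpl" "$out"
```

Keeping the placeholder substitution out of the template itself means the same `.ndjson.template` can be re-rendered whenever the master address changes.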

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% if FEATURES %}
@@ -66,13 +66,6 @@ kibanabin:
     - mode: 755
     - template: jinja
-kibanadashtemplate:
-  file.managed:
-    - name: /opt/so/conf/kibana/saved_objects.ndjson.template
-    - source: salt://kibana/files/saved_objects.ndjson
-    - user: 932
-    - group: 939
 # Start the kibana docker
 so-kibana:
   docker_container.running:
@@ -91,12 +84,27 @@ so-kibana:
     - port_bindings:
       - 0.0.0.0:5601:5601
+kibanadashtemplate:
+  file.managed:
+    - name: /opt/so/conf/kibana/saved_objects.ndjson.template
+    - source: salt://kibana/files/saved_objects.ndjson
+    - user: 932
+    - group: 939
+wait_for_kibana:
+  module.run:
+    - http.wait_for_successful_query:
+      - url: "http://{{MASTER}}:5601/api/saved_objects/_find?type=config"
+      - wait_for: 180
+    - onchanges:
+      - file: kibanadashtemplate
 so-kibana-config-load:
   cmd.run:
     - name: /usr/sbin/so-kibana-config-load
     - cwd: /opt/so
     - onchanges:
-      - file: kibanadashtemplate
+      - wait_for_kibana
 # Keep the setting correct


@@ -12,9 +12,10 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% if FEATURES %}
 {% set FEATURES = "-features" %}
 {% else %}
@@ -23,35 +24,21 @@
 # Logstash Section - Decide which pillar to use
 {% if grains['role'] == 'so-sensor' %}
 {% set lsheap = salt['pillar.get']('sensor:lsheap', '') %}
 {% set lsaccessip = salt['pillar.get']('sensor:lsaccessip', '') %}
 {% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
 {% set lsheap = salt['pillar.get']('node:lsheap', '') %}
 {% set nodetype = salt['pillar.get']('node:node_type', 'storage') %}
-{% elif grains['role'] == 'so-master' %}
+{% elif grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
 {% set lsheap = salt['pillar.get']('master:lsheap', '') %}
 {% set freq = salt['pillar.get']('master:freq', '0') %}
 {% set dstats = salt['pillar.get']('master:domainstats', '0') %}
 {% set nodetype = salt['grains.get']('role', '') %}
 {% elif grains['role'] == 'so-helix' %}
 {% set lsheap = salt['pillar.get']('master:lsheap', '') %}
 {% set freq = salt['pillar.get']('master:freq', '0') %}
 {% set dstats = salt['pillar.get']('master:domainstats', '0') %}
 {% set nodetype = salt['grains.get']('role', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set lsheap = salt['pillar.get']('master:lsheap', '') %}
-{% set freq = salt['pillar.get']('master:freq', '0') %}
-{% set dstats = salt['pillar.get']('master:domainstats', '0') %}
-{% set nodetype = salt['grains.get']('role', '') %}
 {% endif %}
 {% set PIPELINES = salt['pillar.get']('logstash:pipelines', {}) %}


@@ -5,7 +5,7 @@ input {
     ssl_certificate_authorities => ["/usr/share/filebeat/ca.crt"]
     ssl_certificate => "/usr/share/logstash/filebeat.crt"
     ssl_key => "/usr/share/logstash/filebeat.key"
-    tags => [ "beat" ]
+    #tags => [ "beat" ]
   }
 }
 filter {


@@ -1,9 +1,9 @@
 #!/bin/bash
 MASTER={{ MASTER }}
-VERSION="HH1.2.1"
+VERSION="HH1.2.2"
 TRUSTED_CONTAINERS=( \
-"so-core:$VERSION" \
+"so-nginx:$VERSION" \
 "so-cyberchef:$VERSION" \
 "so-acng:$VERSION" \
 "so-soc:$VERSION" \


@@ -12,7 +12,7 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set masterproxy = salt['pillar.get']('static:masterupdate', '0') %}


@@ -1,6 +1,6 @@
 {%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) %}
 {%- set MASTERIP = salt['pillar.get']('static:masterip', '') %}
-{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.1') %}
+{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set MAINIP = salt['pillar.get']('node:mainip') %}
 {% set FLEETARCH = salt['grains.get']('role') %}
@@ -85,4 +85,9 @@ so-mysql:
       - /opt/so/log/mysql:/var/log/mysql:rw
     - watch:
       - /opt/so/conf/mysql/etc
+  cmd.run:
+    - name: until nc -z {{ MAINIP }} 3306; do sleep 1; done
+    - timeout: 120
+    - onchanges:
+      - docker_container: so-mysql
 {% endif %}
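The new `cmd.run` on `so-mysql` blocks until port 3306 answers, using `until nc -z {{ MAINIP }} 3306; do sleep 1; done`. Where `nc` is unavailable, the same check can be written with bash's built-in `/dev/tcp`; this is an illustrative alternative, not what the state ships:

```shell
#!/bin/bash
# Dependency-free port check: return 0 once host:port accepts a
# TCP connection, polling like the "until nc -z ..." state above.
port_open() {
  local host=$1 port=$2
  # /dev/tcp is a bash feature; the subshell keeps fd 3 from leaking.
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

wait_for_port() {
  local host=$1 port=$2 tries=$3
  local i=0
  until port_open "$host" "$port"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1   # give up after N attempts
    sleep 0.2
  done
  return 0
}
```

Unlike the state's unbounded `until` loop (which relies on Salt's `- timeout: 120` to kill it), this variant bounds the attempts itself.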

salt/navigator/init.sls (new file, 22 lines)

@@ -0,0 +1,22 @@
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
{% set MASTER = salt['grains.get']('master') %}

navigatorconfig:
  file.managed:
    - name: /opt/so/conf/navigator/navigator_config.json
    - source: salt://navigator/files/navigator_config.json
    - user: 939
    - group: 939
    - makedirs: True
    - template: jinja

so-navigator:
  docker_container.running:
    - image: {{ MASTER }}:5000/soshybridhunter/so-navigator:{{ VERSION }}
    - hostname: navigator
    - name: so-navigator
    - binds:
      - /opt/so/conf/navigator/navigator_config.json:/nav-app/src/assets/config.json:ro
      - /opt/so/conf/navigator/nav_layer_playbook.json:/nav-app/src/assets/playbook.json:ro
    - port_bindings:
      - 0.0.0.0:4200:4200


@@ -146,6 +146,20 @@ http {
 }
+location /cyberchef/ {
+    auth_request /auth/sessions/whoami;
+    proxy_read_timeout 90;
+    proxy_connect_timeout 90;
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header Proxy "";
+}
+location /cyberchef {
+    rewrite ^ /cyberchef/ permanent;
+}
 location /packages/ {
     try_files $uri =206;
     auth_request /auth/sessions/whoami;
@@ -299,12 +313,12 @@ http {
 return 302 /auth/self-service/browser/flows/login;
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /usr/share/nginx/html/40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }


@@ -86,12 +86,12 @@ http {
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }


@@ -77,12 +77,12 @@ http {
 # location / {
 # }
 #
-# error_page 404 /404.html;
-# location = /40x.html {
-# }
+# #error_page 404 /404.html;
+# # location = /40x.html {
+# #}
 #
 # error_page 500 502 503 504 /50x.html;
-# location = /50x.html {
+# location = /usr/share/nginx/html/50x.html {
 # }
 # }


@@ -47,12 +47,12 @@ http {
 location / {
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }


@@ -146,6 +146,20 @@ http {
 }
+location /cyberchef/ {
+    auth_request /auth/sessions/whoami;
+    proxy_read_timeout 90;
+    proxy_connect_timeout 90;
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header Proxy "";
+}
+location /cyberchef {
+    rewrite ^ /cyberchef/ permanent;
+}
 location /packages/ {
     try_files $uri =206;
     auth_request /auth/sessions/whoami;
@@ -299,12 +313,12 @@ http {
 return 302 /auth/self-service/browser/flows/login;
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }


@@ -146,6 +146,20 @@ http {
 }
+location /cyberchef/ {
+    auth_request /auth/sessions/whoami;
+    proxy_read_timeout 90;
+    proxy_connect_timeout 90;
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header Proxy "";
+}
+location /cyberchef {
+    rewrite ^ /cyberchef/ permanent;
+}
 location /packages/ {
     try_files $uri =206;
     auth_request /auth/sessions/whoami;
@@ -299,12 +313,12 @@ http {
 return 302 /auth/self-service/browser/flows/login;
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }


@@ -47,12 +47,12 @@ http {
 location / {
 }
-error_page 404 /404.html;
-location = /40x.html {
-}
+#error_page 404 /404.html;
+# location = /40x.html {
+#}
 error_page 500 502 503 504 /50x.html;
-location = /50x.html {
+location = /usr/share/nginx/html/50x.html {
 }
 }

Some files were not shown because too many files have changed in this diff