Mirror of https://github.com/Security-Onion-Solutions/securityonion.git (synced 2025-12-08 18:22:47 +01:00)
Merge branch 'dev' of github.com:Security-Onion-Solutions/securityonion-saltstack into dev

# Conflicts:
#	salt/common/tools/sbin/so-elastic-clear
README.md (48 changed lines)
@@ -1,3 +1,49 @@
+## Security Onion 2.0.0 RC1
+
+Security Onion 2.0.0 RC1 is here! This version requires a fresh install, but there is good news: we have brought back soup! From now on, you should be able to run soup on the manager to upgrade your environment to RC2 and beyond!
+
+### Changes:
+
+- Re-branded 2.0 to give it a fresh look
+- All documentation has moved to our [docs site](https://docs.securityonion.net/en/2.0)
+- soup is alive! Note: This tool only updates Security Onion components. Please use the built-in OS update process to keep the OS and other components up to date.
+- so-import-pcap is back! See the docs [here](http://docs.securityonion.net/en/2.0/so-import-pcap).
+- Fixed an issue with so-features-enable
+- Users can now pivot to PCAP from Suricata alerts
+- ISO install now prompts users to create an admin/sudo user instead of using a default account name
+- The web email & password set during setup are now used to create the initial accounts for TheHive, Cortex, and Fleet
+- Fixed an issue with disk cleanup
+- Changed the default permissions for /opt/so to keep non-privileged users from accessing salt and related files
+- Locked down access to certain SSL keys
+- Suricata logs now compress after they roll over
+- Users can now easily customize shard counts per index
+- Improved Elastic ingest parsers, including Windows event logs and Sysmon logs shipped with WinLogbeat and Osquery (ECS)
+- Elastic nodes are now "hot" by default, making it easier to add a warm node later
+- so-allow now runs at the end of an install so users can enable access right away
+- Alert severities across Wazuh, Suricata, and Playbook (Sigma) have been standardized and copied to `event.severity`:
+  - 1-Low / 2-Medium / 3-High / 4-Critical
+- Initial implementation of alerting queues:
+  - Low & Medium alerts are accessible through Kibana & Hunt
+  - High & Critical alerts are accessible through Kibana and Hunt, and are sent to TheHive for immediate analysis
+- ATT&CK Navigator is now a statically-hosted site in the nginx container
+- Playbook
+  - All Sigma rules in the community repo (500+) are now imported and kept up to date
+  - Initial implementation of automated testing when a Play's detection logic has been edited (i.e., unit testing)
+  - Updated UI theme
+  - Once authenticated through SOC, users can now access Playbook with analyst permissions without logging in again
+- Kolide Launcher has been updated to include the ability to pass arbitrary flags - new functionality sponsored by SOS
+- Fixed an issue with the Wazuh authd registration service port not being correctly exposed
+- Added an option to so-allow to expose the Elasticsearch REST API (port 9200) for easier external querying/integration with other tools
+- Added an option to so-allow for external Strelka file uploads (e.g., via `strelka-fileshot`)
+- Added default YARA rules for Strelka -- default rules are maintained by Florian Roth and pulled from https://github.com/Neo23x0/signature-base
+- Added the ability to use custom Zeek scripts
+- Renamed "master server" to "manager node"
+- Improved unification of Zeek and Strelka file data
+
+## Hybrid Hunter Beta 1.4.1 - Beta 3
+
+- Fix install script to handle hostnames properly.
+
 ## Hybrid Hunter Beta 1.4.0 - Beta 3
 
 - Complete overhaul of the way we handle custom and default settings and data. You will now see a default and local directory under the saltstack directory. All customizations are stored in local.
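A minimal sketch of the upgrade step the changelog describes, assuming soup is installed as /usr/sbin/soup alongside the other sbin tools in this commit:

```bash
# Run on the manager node only; soup updates Security Onion components, not the OS
sudo soup
```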
@@ -57,7 +103,7 @@
 - Fixed an issue where geoip was not properly parsed.
 - ATT&CK Navigator is now it's own state.
 - Standlone mode is now supported.
-- Mastersearch previously used the same Grafana dashboard as a Search node. It now has its own dashboard that incorporates panels from the Master node and Search node dashboards.
+- Managersearch previously used the same Grafana dashboard as a Search node. It now has its own dashboard that incorporates panels from the Manager node and Search node dashboards.
 
 ### Known Issues:
 
@@ -13,8 +13,8 @@ role:
   fleet:
   heavynode:
   helixsensor:
-  master:
-  mastersearch:
+  manager:
+  managersearch:
   standalone:
   searchnode:
   sensor:

@@ -12,6 +12,10 @@ firewall:
     ips:
       delete:
       insert:
+  elasticsearch_rest:
+    ips:
+      delete:
+      insert:
   fleet:
     ips:
       delete:
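The delete/insert keys above drive the firewall state; after editing the local pillar, the rules can be re-applied with the same command so-allow itself runs later in this commit:

```bash
# Re-apply firewall rules after a pillar change (command as used by so-allow)
sudo salt-call state.apply firewall queue=True
```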
@@ -20,7 +24,7 @@ firewall:
     ips:
       delete:
       insert:
-  master:
+  manager:
     ips:
       delete:
       insert:

@@ -1,12 +1,12 @@
-{%- set FLEETMASTER = salt['pillar.get']('static:fleet_master', False) -%}
+{%- set FLEETMANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
 {%- set FLEETNODE = salt['pillar.get']('static:fleet_node', False) -%}
-{% set WAZUH = salt['pillar.get']('master:wazuh', '0') %}
-{% set THEHIVE = salt['pillar.get']('master:thehive', '0') %}
-{% set PLAYBOOK = salt['pillar.get']('master:playbook', '0') %}
-{% set FREQSERVER = salt['pillar.get']('master:freq', '0') %}
-{% set DOMAINSTATS = salt['pillar.get']('master:domainstats', '0') %}
-{% set BROVER = salt['pillar.get']('static:broversion', 'COMMUNITY') %}
-{% set GRAFANA = salt['pillar.get']('master:grafana', '0') %}
+{% set WAZUH = salt['pillar.get']('manager:wazuh', '0') %}
+{% set THEHIVE = salt['pillar.get']('manager:thehive', '0') %}
+{% set PLAYBOOK = salt['pillar.get']('manager:playbook', '0') %}
+{% set FREQSERVER = salt['pillar.get']('manager:freq', '0') %}
+{% set DOMAINSTATS = salt['pillar.get']('manager:domainstats', '0') %}
+{% set ZEEKVER = salt['pillar.get']('static:zeekversion', 'COMMUNITY') %}
+{% set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
 
 eval:
   containers:
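Because every master:* pillar key becomes manager:*, a quick post-upgrade sanity check is to query the renamed keys directly; a sketch, run on the manager:

```bash
# Each call should print the configured value (e.g. 0 or 1) rather than nothing
sudo salt-call pillar.get manager:playbook
sudo salt-call pillar.get static:fleet_manager
sudo salt-call pillar.get static:zeekversion
```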
@@ -20,7 +20,7 @@ eval:
     - so-soc
     - so-kratos
     - so-idstools
-{% if FLEETMASTER %}
+{% if FLEETMANAGER %}
     - so-mysql
     - so-fleet
     - so-redis

@@ -44,7 +44,6 @@ eval:
 {% endif %}
 {% if PLAYBOOK != '0' %}
     - so-playbook
-    - so-navigator
 {% endif %}
 {% if FREQSERVER != '0' %}
     - so-freqserver

@@ -64,7 +63,7 @@ heavy_node:
     - so-suricata
     - so-wazuh
     - so-filebeat
-{% if BROVER != 'SURICATA' %}
+{% if ZEEKVER != 'SURICATA' %}
     - so-zeek
 {% endif %}
 helix:

@@ -84,7 +83,7 @@ hot_node:
     - so-logstash
     - so-elasticsearch
     - so-curator
-master_search:
+manager_search:
   containers:
     - so-nginx
     - so-telegraf

@@ -100,7 +99,7 @@ master_search:
     - so-elastalert
     - so-filebeat
     - so-soctopus
-{% if FLEETMASTER %}
+{% if FLEETMANAGER %}
     - so-mysql
     - so-fleet
     - so-redis

@@ -116,7 +115,6 @@ master_search:
 {% endif %}
 {% if PLAYBOOK != '0' %}
     - so-playbook
-    - so-navigator
 {% endif %}
 {% if FREQSERVER != '0' %}
     - so-freqserver

@@ -124,7 +122,7 @@ master_search:
 {% if DOMAINSTATS != '0' %}
     - so-domainstats
 {% endif %}
-master:
+manager:
   containers:
     - so-dockerregistry
     - so-nginx

@@ -143,7 +141,7 @@ master:
     - so-kibana
     - so-elastalert
     - so-filebeat
-{% if FLEETMASTER %}
+{% if FLEETMANAGER %}
     - so-mysql
     - so-fleet
     - so-redis

@@ -159,7 +157,6 @@ master:
 {% endif %}
 {% if PLAYBOOK != '0' %}
     - so-playbook
-    - so-navigator
 {% endif %}
 {% if FREQSERVER != '0' %}
     - so-freqserver

@@ -189,7 +186,7 @@ sensor:
     - so-telegraf
     - so-steno
     - so-suricata
-{% if BROVER != 'SURICATA' %}
+{% if ZEEKVER != 'SURICATA' %}
     - so-zeek
 {% endif %}
     - so-wazuh
pillar/elasticsearch/eval.sls (new file, 13 lines)
@@ -0,0 +1,13 @@
+elasticsearch:
+  templates:
+    - so/so-beats-template.json.jinja
+    - so/so-common-template.json
+    - so/so-firewall-template.json.jinja
+    - so/so-flow-template.json.jinja
+    - so/so-ids-template.json.jinja
+    - so/so-import-template.json.jinja
+    - so/so-osquery-template.json.jinja
+    - so/so-ossec-template.json.jinja
+    - so/so-strelka-template.json.jinja
+    - so/so-syslog-template.json.jinja
+    - so/so-zeek-template.json.jinja
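This pillar list is what the reworked so-elasticsearch-templates script (further down in this commit) loads from the new conf directory; one hedged way to confirm the templates were applied is to ask Elasticsearch directly:

```bash
# The so-* template names from the pillar should appear once loading completes
curl -s localhost:9200/_cat/templates?v
```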
pillar/elasticsearch/search.sls (new file, 13 lines)
@@ -0,0 +1,13 @@
+elasticsearch:
+  templates:
+    - so/so-beats-template.json.jinja
+    - so/so-common-template.json
+    - so/so-firewall-template.json.jinja
+    - so/so-flow-template.json.jinja
+    - so/so-ids-template.json.jinja
+    - so/so-import-template.json.jinja
+    - so/so-osquery-template.json.jinja
+    - so/so-ossec-template.json.jinja
+    - so/so-strelka-template.json.jinja
+    - so/so-syslog-template.json.jinja
+    - so/so-zeek-template.json.jinja
@@ -17,7 +17,7 @@ firewall:
       - 5644
       - 9822
     udp:
-  master:
+  manager:
     ports:
       tcp:
         - 1514
@@ -1,21 +0,0 @@
-logstash:
-  pipelines:
-    eval:
-      config:
-        - so/0800_input_eval.conf
-        - so/1002_preprocess_json.conf
-        - so/1033_preprocess_snort.conf
-        - so/7100_osquery_wel.conf
-        - so/8999_postprocess_rename_type.conf
-        - so/9000_output_bro.conf.jinja
-        - so/9002_output_import.conf.jinja
-        - so/9033_output_snort.conf.jinja
-        - so/9100_output_osquery.conf.jinja
-        - so/9400_output_suricata.conf.jinja
-        - so/9500_output_beats.conf.jinja
-        - so/9600_output_ossec.conf.jinja
-        - so/9700_output_strelka.conf.jinja
-  templates:
-    - so/so-beats-template.json
-    - so/so-common-template.json
-    - so/so-zeek-template.json
@@ -1,6 +1,6 @@
 logstash:
   pipelines:
-    master:
+    manager:
       config:
         - so/0009_input_beats.conf
         - so/0010_input_hhbeats.conf
@@ -11,6 +11,3 @@ logstash:
         - so/9500_output_beats.conf.jinja
         - so/9600_output_ossec.conf.jinja
         - so/9700_output_strelka.conf.jinja
-  templates:
-    - so/so-common-template.json
-    - so/so-zeek-template.json
@@ -6,43 +6,46 @@ base:
     - match: compound
     - zeek
 
-  '*_mastersearch or *_heavynode':
+  '*_managersearch or *_heavynode':
     - match: compound
     - logstash
-    - logstash.master
+    - logstash.manager
     - logstash.search
+    - elasticsearch.search
 
   '*_sensor':
     - static
-    - brologs
+    - zeeklogs
     - healthcheck.sensor
     - minions.{{ grains.id }}
 
-  '*_master or *_mastersearch':
+  '*_manager or *_managersearch':
     - match: compound
     - static
     - data.*
     - secrets
     - minions.{{ grains.id }}
 
-  '*_master':
+  '*_manager':
     - logstash
-    - logstash.master
+    - logstash.manager
 
   '*_eval':
-    - static
     - data.*
-    - brologs
+    - zeeklogs
     - secrets
     - healthcheck.eval
+    - elasticsearch.eval
+    - static
     - minions.{{ grains.id }}
 
   '*_standalone':
     - logstash
-    - logstash.master
+    - logstash.manager
     - logstash.search
+    - elasticsearch.search
     - data.*
-    - brologs
+    - zeeklogs
    - secrets
     - healthcheck.standalone
     - static
@@ -54,13 +57,13 @@ base:
 
   '*_heavynode':
     - static
-    - brologs
+    - zeeklogs
     - minions.{{ grains.id }}
 
   '*_helix':
     - static
     - fireeye
-    - brologs
+    - zeeklogs
     - logstash
     - logstash.helix
     - minions.{{ grains.id }}
@@ -75,4 +78,5 @@ base:
     - static
     - logstash
     - logstash.search
+    - elasticsearch.search
     - minions.{{ grains.id }}
@@ -1,4 +1,4 @@
-brologs:
+zeeklogs:
   enabled:
     - conn
     - dce_rpc
@@ -6,7 +6,7 @@ import socket
 
 def send(data):
 
-    mainint = __salt__['pillar.get']('sensor:mainint', __salt__['pillar.get']('master:mainint'))
+    mainint = __salt__['pillar.get']('sensor:mainint', __salt__['pillar.get']('manager:mainint'))
     mainip = __salt__['grains.get']('ip_interfaces').get(mainint)[0]
     dstport = 8094
 
@@ -26,7 +26,7 @@ x509_signing_policies:
       - extendedKeyUsage: serverAuth
       - days_valid: 820
       - copypath: /etc/pki/issued_certs/
-  masterssl:
+  managerssl:
     - minions: '*'
     - signing_private_key: /etc/pki/ca.key
     - signing_cert: /etc/pki/ca.crt
@@ -1,4 +1,4 @@
-{% set master = salt['grains.get']('master') %}
+{% set manager = salt['grains.get']('master') %}
 /etc/salt/minion.d/signing_policies.conf:
   file.managed:
     - source: salt://ca/files/signing_policies.conf

@@ -20,7 +20,7 @@ pki_private_key:
 /etc/pki/ca.crt:
   x509.certificate_managed:
     - signing_private_key: /etc/pki/ca.key
-    - CN: {{ master }}
+    - CN: {{ manager }}
     - C: US
     - ST: Utah
     - L: Salt Lake City
@@ -44,3 +44,10 @@ send_x509_pem_entries_to_mine:
   - mine.send:
     - func: x509.get_pem_entries
     - glob_path: /etc/pki/ca.crt
+
+cakeyperms:
+  file.managed:
+    - replace: False
+    - name: /etc/pki/ca.key
+    - mode: 640
+    - group: 939
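A rough shell equivalent of what the new cakeyperms state converges to (group 939 matches the socore gid used elsewhere in this commit):

```bash
# Restrict the CA key to root plus the socore group (gid 939)
chgrp 939 /etc/pki/ca.key
chmod 640 /etc/pki/ca.key
```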
@@ -1,3 +1,5 @@
+{% set role = grains.id.split('_') | last %}
+
 # Add socore Group
 socoregroup:
   group.present:

@@ -13,6 +15,20 @@ socore:
     - createhome: True
     - shell: /bin/bash
 
+soconfperms:
+  file.directory:
+    - name: /opt/so/conf
+    - uid: 939
+    - gid: 939
+    - dir_mode: 770
+
+sosaltstackperms:
+  file.directory:
+    - name: /opt/so/saltstack
+    - uid: 939
+    - gid: 939
+    - dir_mode: 770
+
 # Create a state directory
 statedir:
   file.directory:
@@ -131,3 +147,15 @@ utilsyncscripts:
     - file_mode: 755
     - template: jinja
     - source: salt://common/tools/sbin
+
+{% if role in ['eval', 'standalone', 'sensor', 'heavynode'] %}
+# Add sensor cleanup
+/usr/sbin/so-sensor-clean:
+  cron.present:
+    - user: root
+    - minute: '*'
+    - hour: '*'
+    - daymonth: '*'
+    - month: '*'
+    - dayweek: '*'
+{% endif %}
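With every schedule field set to '*', cron.present renders a once-a-minute root job; a sketch of how to verify it on a sensor-class node:

```bash
# Expected entry (hypothetical rendering): * * * * * /usr/sbin/so-sensor-clean
crontab -l -u root | grep so-sensor-clean
```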
@@ -14,6 +14,7 @@
       'so-zeek',
       'so-curator',
       'so-elastalert',
-      'so-soctopus'
+      'so-soctopus',
+      'so-sensoroni'
     ]
 } %}

@@ -9,6 +9,7 @@
       'so-steno',
       'so-suricata',
       'so-wazuh',
-      'so-filebeat
+      'so-filebeat',
+      'so-sensoroni'
     ]
 } %}

@@ -1,6 +1,5 @@
 {% set docker = {
     'containers': [
-      'so-playbook',
-      'so-navigator'
+      'so-playbook'
     ]
 } %}

@@ -3,6 +3,7 @@
       'so-telegraf',
       'so-steno',
       'so-suricata',
-      'so-filebeat'
+      'so-filebeat',
+      'so-sensoroni'
     ]
 } %}
@@ -18,14 +18,14 @@
   }
 },grain='id', merge=salt['pillar.get']('docker')) %}
 
-{% if role in ['eval', 'mastersearch', 'master', 'standalone'] %}
-{{ append_containers('master', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_master', 0) }}
-{{ append_containers('master', 'wazuh', 0) }}
-{{ append_containers('master', 'thehive', 0) }}
-{{ append_containers('master', 'playbook', 0) }}
-{{ append_containers('master', 'freq', 0) }}
-{{ append_containers('master', 'domainstats', 0) }}
+{% if role in ['eval', 'managersearch', 'manager', 'standalone'] %}
+{{ append_containers('manager', 'grafana', 0) }}
+{{ append_containers('static', 'fleet_manager', 0) }}
+{{ append_containers('manager', 'wazuh', 0) }}
+{{ append_containers('manager', 'thehive', 0) }}
+{{ append_containers('manager', 'playbook', 0) }}
+{{ append_containers('manager', 'freq', 0) }}
+{{ append_containers('manager', 'domainstats', 0) }}
 {% endif %}
 
 {% if role in ['eval', 'heavynode', 'sensor', 'standalone'] %}

@@ -33,13 +33,13 @@
 {% endif %}
 
 {% if role in ['heavynode', 'standalone'] %}
-{{ append_containers('static', 'broversion', 'SURICATA') }}
+{{ append_containers('static', 'zeekversion', 'SURICATA') }}
 {% endif %}
 
 {% if role == 'searchnode' %}
-{{ append_containers('master', 'wazuh', 0) }}
+{{ append_containers('manager', 'wazuh', 0) }}
 {% endif %}
 
 {% if role == 'sensor' %}
-{{ append_containers('static', 'broversion', 'SURICATA') }}
+{{ append_containers('static', 'zeekversion', 'SURICATA') }}
 {% endif %}

@@ -16,6 +16,7 @@
       'so-suricata',
       'so-steno',
       'so-dockerregistry',
-      'so-soctopus'
+      'so-soctopus',
+      'so-sensoroni'
     ]
 } %}
@@ -17,15 +17,13 @@
 
 . /usr/sbin/so-common
 
-default_salt_dir=/opt/so/saltstack/default
 local_salt_dir=/opt/so/saltstack/local
 
 SKIP=0
 
-while getopts "abowi:" OPTION
+while getopts "ahfesprbowi:" OPTION
 do
 case $OPTION in
 
 	h)
 		usage
 		exit 0
@@ -38,11 +36,14 @@ do
 		FULLROLE="beats_endpoint"
 		SKIP=1
 		;;
+	e)
+		FULLROLE="elasticsearch_rest"
+		SKIP=1
+		;;
 	f)
 		FULLROLE="strelka_frontend"
 		SKIP=1
 		;;
 
 	i) IP=$OPTARG
 		;;
 	o)
@@ -65,7 +66,10 @@ do
 		FULLROLE="wazuh_authd"
 		SKIP=1
 		;;
+	*)
+		usage
+		exit 0
+		;;
 esac
 done
 
@@ -77,21 +81,25 @@ if [ "$SKIP" -eq 0 ]; then
 	echo ""
 	echo "[a] - Analyst - ports 80/tcp and 443/tcp"
 	echo "[b] - Logstash Beat - port 5044/tcp"
+	echo "[e] - Elasticsearch REST API - port 9200/tcp"
 	echo "[f] - Strelka frontend - port 57314/tcp"
 	echo "[o] - Osquery endpoint - port 8090/tcp"
 	echo "[s] - Syslog device - 514/tcp/udp"
 	echo "[w] - Wazuh agent - port 1514/tcp/udp"
 	echo "[p] - Wazuh API - port 55000/tcp"
 	echo "[r] - Wazuh registration service - 1515/tcp"
-	echo "Please enter your selection (a - analyst, b - beats, o - osquery, w - wazuh):"
-	read ROLE
+	echo ""
+	echo "Please enter your selection:"
+	read -r ROLE
 	echo "Enter a single ip address or range to allow (example: 10.10.10.10 or 10.10.0.0/16):"
-	read IP
+	read -r IP
 
 	if [ "$ROLE" == "a" ]; then
 		FULLROLE=analyst
 	elif [ "$ROLE" == "b" ]; then
 		FULLROLE=beats_endpoint
+	elif [ "$ROLE" == "e" ]; then
+		FULLROLE=elasticsearch_rest
 	elif [ "$ROLE" == "f" ]; then
 		FULLROLE=strelka_frontend
 	elif [ "$ROLE" == "o" ]; then
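Together, the new getopts string and menu entry let the Elasticsearch REST API be opened non-interactively; a usage sketch built from the flags above (the IP is illustrative):

```bash
# Allow 10.0.0.5 to reach Elasticsearch on 9200/tcp without the interactive menu
sudo so-allow -e -i 10.0.0.5
```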
@@ -119,16 +127,16 @@ salt-call state.apply firewall queue=True
 if grep -q -R "wazuh: 1" $local_salt_dir/pillar/*; then
 	# If analyst, add to Wazuh AR whitelist
 	if [ "$FULLROLE" == "analyst" ]; then
 		WAZUH_MGR_CFG="/opt/so/wazuh/etc/ossec.conf"
 		if ! grep -q "<white_list>$IP</white_list>" $WAZUH_MGR_CFG ; then
-			DATE=`date`
+			DATE=$(date)
 			sed -i 's/<\/ossec_config>//' $WAZUH_MGR_CFG
 			sed -i '/^$/N;/^\n$/D' $WAZUH_MGR_CFG
-			echo -e "<!--Address $IP added by /usr/sbin/so-allow on "$DATE"-->\n  <global>\n    <white_list>$IP</white_list>\n  </global>\n</ossec_config>" >> $WAZUH_MGR_CFG
+			echo -e "<!--Address $IP added by /usr/sbin/so-allow on \"$DATE\"-->\n  <global>\n    <white_list>$IP</white_list>\n  </global>\n</ossec_config>" >> $WAZUH_MGR_CFG
 			echo "Added whitelist entry for $IP in $WAZUH_MGR_CFG."
 			echo
 			echo "Restarting OSSEC Server..."
 			/usr/sbin/so-wazuh-restart
 		fi
 	fi
 fi
@@ -15,6 +15,8 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
+IMAGEREPO=securityonion
+
 # Check for prerequisites
 if [ "$(id -u)" -ne 0 ]; then
 	echo "This script must be run using sudo!"
salt/common/tools/sbin/so-docker-refresh (normal file → executable file, 34 changed lines)
@@ -14,20 +14,16 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
-got_root(){
-	if [ "$(id -u)" -ne 0 ]; then
-		echo "This script must be run using sudo!"
-		exit 1
-	fi
-}
 
-master_check() {
-	# Check to see if this is a master
-	MASTERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
-	if [ $MASTERCHECK == 'so-eval' ] || [ $MASTERCHECK == 'so-master' ] || [ $MASTERCHECK == 'so-mastersearch' ] || [ $MASTERCHECK == 'so-standalone' ] || [ $MASTERCHECK == 'so-helix' ]; then
-		echo "This is a master. We can proceed"
+. /usr/sbin/so-common
+
+manager_check() {
+	# Check to see if this is a manager
+	MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
+	if [ $MANAGERCHECK == 'so-eval' ] || [ $MANAGERCHECK == 'so-manager' ] || [ $MANAGERCHECK == 'so-managersearch' ] || [ $MANAGERCHECK == 'so-standalone' ] || [ $MANAGERCHECK == 'so-helix' ]; then
+		echo "This is a manager. We can proceed"
 	else
-		echo "Please run soup on the master. The master controls all updates."
+		echo "Please run soup on the manager. The manager controls all updates."
 		exit 1
 	fi
 }
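manager_check keys off the role grain that setup writes to /etc/salt/grains; to see what a given node will report:

```bash
# Prints e.g. "so-manager" on a manager or "so-sensor" on a sensor
grep role /etc/salt/grains | awk '{print $2}'
```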
@@ -39,10 +35,10 @@ update_docker_containers() {
 	do
 		# Pull down the trusted docker image
 		echo "Downloading $i"
-		docker pull --disable-content-trust=false docker.io/soshybridhunter/$i
+		docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
 		# Tag it with the new registry destination
-		docker tag soshybridhunter/$i $HOSTNAME:5000/soshybridhunter/$i
-		docker push $HOSTNAME:5000/soshybridhunter/$i
+		docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
+		docker push $HOSTNAME:5000/$IMAGEREPO/$i
 	done
 
 }
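With IMAGEREPO=securityonion from so-common, one iteration of the loop expands roughly as below; the image tag and hostname are illustrative:

```bash
# Hypothetical expansion for i=so-nginx:2.0.0-rc1 on a manager named "manager"
docker pull --disable-content-trust=false docker.io/securityonion/so-nginx:2.0.0-rc1
docker tag securityonion/so-nginx:2.0.0-rc1 manager:5000/securityonion/so-nginx:2.0.0-rc1
docker push manager:5000/securityonion/so-nginx:2.0.0-rc1
```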
@@ -55,14 +51,14 @@ version_check() {
 		exit 1
 	fi
 }
-got_root
-master_check
+manager_check
 version_check
 
 # Use the hostname
 HOSTNAME=$(hostname)
 # List all the containers
-if [ $MASTERCHECK != 'so-helix' ]; then
+if [ $MANAGERCHECK != 'so-helix' ]; then
 	TRUSTED_CONTAINERS=( \
 	"so-acng:$VERSION" \
 	"so-thehive-cortex:$VERSION" \

@@ -81,8 +77,8 @@ if [ $MASTERCHECK != 'so-helix' ]; then
 	"so-kratos:$VERSION" \
 	"so-logstash:$VERSION" \
 	"so-mysql:$VERSION" \
-	"so-navigator:$VERSION" \
 	"so-nginx:$VERSION" \
+	"so-pcaptools:$VERSION" \
 	"so-playbook:$VERSION" \
 	"so-redis:$VERSION" \
 	"so-soc:$VERSION" \
@@ -198,7 +198,7 @@ EOF
 read alertoption
 
 if [ $alertoption = "1" ] ; then
-	echo "Please enter the email address you want to send the alerts to. Note: Ensure the Master Server is configured for SMTP."
+	echo "Please enter the email address you want to send the alerts to. Note: Ensure the Manager Server is configured for SMTP."
 	read emailaddress
 	cat << EOF >> "$rulename.yaml"
 # (Required)
@@ -13,8 +13,8 @@
 # GNU General Public License for more details.
 #
 # You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.. /usr/sbin/so-common
-{%- set MASTERIP = salt['pillar.get']('static:masterip', '') -%}
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+{%- set MANAGERIP = salt['pillar.get']('static:managerip', '') -%}
 . /usr/sbin/so-common
 
 SKIP=0

@@ -50,7 +50,7 @@ done
 if [ $SKIP -ne 1 ]; then
 	# List indices
 	echo
-	curl {{ MASTERIP }}:9200/_cat/indices?v&pretty
+	curl {{ MANAGERIP }}:9200/_cat/indices?v
 	echo
 	# Inform user we are about to delete all data
 	echo

@@ -89,10 +89,10 @@ fi
 # Delete data
 echo "Deleting data..."
 
-INDXS=$(curl -s -XGET {{ MASTERIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert' | awk '{ print $3 }')
+INDXS=$(curl -s -XGET {{ MANAGERIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
 for INDX in ${INDXS}
 do
-	curl -XDELETE "{{ MASTERIP }}:9200/${INDX}" > /dev/null 2>&1
+	curl -XDELETE "{{ MANAGERIP }}:9200/${INDX}" > /dev/null 2>&1
 done
 
 #Start Logstash/Filebeat
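Adding so- to the egrep means the new so-* indices are cleared as well; a dry-run sketch to preview what would be deleted (substitute the manager's IP for the rendered {{ MANAGERIP }}):

```bash
# Lists the indices so-elastic-clear would delete, without deleting anything
curl -s -XGET $MANAGERIP:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }'
```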
@@ -1,44 +0,0 @@
-#!/bin/bash
-MASTER=MASTER
-VERSION="HH1.1.4"
-TRUSTED_CONTAINERS=( \
-"so-nginx:$VERSION" \
-"so-thehive-cortex:$VERSION" \
-"so-curator:$VERSION" \
-"so-domainstats:$VERSION" \
-"so-elastalert:$VERSION" \
-"so-elasticsearch:$VERSION" \
-"so-filebeat:$VERSION" \
-"so-fleet:$VERSION" \
-"so-fleet-launcher:$VERSION" \
-"so-freqserver:$VERSION" \
-"so-grafana:$VERSION" \
-"so-idstools:$VERSION" \
-"so-influxdb:$VERSION" \
-"so-kibana:$VERSION" \
-"so-logstash:$VERSION" \
-"so-mysql:$VERSION" \
-"so-navigator:$VERSION" \
-"so-playbook:$VERSION" \
-"so-redis:$VERSION" \
-"so-sensoroni:$VERSION" \
-"so-soctopus:$VERSION" \
-"so-steno:$VERSION" \
-#"so-strelka:$VERSION" \
-"so-suricata:$VERSION" \
-"so-telegraf:$VERSION" \
-"so-thehive:$VERSION" \
-"so-thehive-es:$VERSION" \
-"so-wazuh:$VERSION" \
-"so-zeek:$VERSION" )
-
-for i in "${TRUSTED_CONTAINERS[@]}"
-do
-	# Pull down the trusted docker image
-	echo "Downloading $i"
-	docker pull --disable-content-trust=false docker.io/soshybridhunter/$i
-	# Tag it with the new registry destination
-	docker tag soshybridhunter/$i $MASTER:5000/soshybridhunter/$i
-	docker push $MASTER:5000/soshybridhunter/$i
-	docker rmi soshybridhunter/$i
-done
salt/common/tools/sbin/so-elasticsearch-indices-rw (normal file → executable file, 2 changed lines)
@@ -15,7 +15,7 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
-IP={{ salt['grains.get']('ip_interfaces').get(salt['pillar.get']('sensor:mainint', salt['pillar.get']('master:mainint', salt['pillar.get']('node:mainint', salt['pillar.get']('host:mainint')))))[0] }}
+IP={{ salt['grains.get']('ip_interfaces').get(salt['pillar.get']('sensor:mainint', salt['pillar.get']('manager:mainint', salt['pillar.get']('elasticsearch:mainint', salt['pillar.get']('host:mainint')))))[0] }}
 ESPORT=9200
 THEHIVEESPORT=9400
 
@@ -1,4 +1,4 @@
-{% set MASTERIP = salt['pillar.get']('master:mainip', '') %}
+{% set MANAGERIP = salt['pillar.get']('manager:mainip', '') %}
 #!/bin/bash
 # Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC
 #

@@ -15,13 +15,13 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
-default_salt_dir=/opt/so/saltstack/default
-ELASTICSEARCH_HOST="{{ MASTERIP}}"
+default_conf_dir=/opt/so/conf
+ELASTICSEARCH_HOST="{{ MANAGERIP}}"
 ELASTICSEARCH_PORT=9200
 #ELASTICSEARCH_AUTH=""
 
 # Define a default directory to load pipelines from
-ELASTICSEARCH_TEMPLATES="$default_salt_dir/salt/logstash/pipelines/templates/so/"
+ELASTICSEARCH_TEMPLATES="$default_conf_dir/elasticsearch/templates/"
 
 # Wait for ElasticSearch to initialize
 echo -n "Waiting for ElasticSearch..."
@@ -17,6 +17,18 @@
 . /usr/sbin/so-common
 local_salt_dir=/opt/so/saltstack/local
 
+manager_check() {
+	# Check to see if this is a manager
+	MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
+	if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
+		echo "This is a manager. We can proceed"
+	else
+		echo "Please run so-features-enable on the manager."
+		exit 0
+	fi
+}
+
+manager_check
 VERSION=$(grep soversion $local_salt_dir/pillar/static.sls | cut -d':' -f2|sed 's/ //g')
 # Modify static.sls to enable Features
 sed -i 's/features: False/features: True/' $local_salt_dir/pillar/static.sls
@@ -31,13 +43,8 @@ for i in "${TRUSTED_CONTAINERS[@]}"
 do
 	# Pull down the trusted docker image
 	echo "Downloading $i"
-	docker pull --disable-content-trust=false docker.io/soshybridhunter/$i
+	docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i
 	# Tag it with the new registry destination
-	docker tag soshybridhunter/$i $HOSTNAME:5000/soshybridhunter/$i
-	docker push $HOSTNAME:5000/soshybridhunter/$i
-done
-for i in "${TRUSTED_CONTAINERS[@]}"
-do
-	echo "Removing $i locally"
-	docker rmi soshybridhunter/$i
+	docker tag $IMAGEREPO/$i $HOSTNAME:5000/$IMAGEREPO/$i
+	docker push $HOSTNAME:5000/$IMAGEREPO/$i
 done
|
|||||||
1
salt/common/tools/sbin/so-fleet-setup
Normal file → Executable file
1
salt/common/tools/sbin/so-fleet-setup
Normal file → Executable file
@@ -16,6 +16,7 @@ if [ ! "$(docker ps -q -f name=so-fleet)" ]; then
 fi
 
 docker exec so-fleet fleetctl config set --address https://localhost:8080 --tls-skip-verify --url-prefix /fleet
+docker exec -it so-fleet bash -c 'while [[ "$(curl -s -o /dev/null --insecure -w ''%{http_code}'' https://localhost:8080/fleet)" != "301" ]]; do sleep 5; done'
 docker exec so-fleet fleetctl setup --email $1 --password $2
 
 docker exec so-fleet fleetctl apply -f /packs/palantir/Fleet/Endpoints/MacOS/osquery.yaml
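The inserted docker exec line polls the Fleet endpoint until it answers with HTTP 301, so fleetctl setup no longer races the container start. Per the script, $1 and $2 are the initial admin email and password; a usage sketch with placeholder credentials:

```bash
# Run on the node hosting so-fleet
sudo so-fleet-setup admin@example.com 'ChangeMe123!'
```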
|||||||
20
salt/common/tools/sbin/so-idstools-restart
Executable file
20
salt/common/tools/sbin/so-idstools-restart
Executable file
@@ -0,0 +1,20 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
. /usr/sbin/so-common
|
||||||
|
|
||||||
|
/usr/sbin/so-restart idstools $1
|
||||||
salt/common/tools/sbin/so-idstools-start (new executable file, 20 lines)
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+. /usr/sbin/so-common
+
+/usr/sbin/so-start idstools $1
salt/common/tools/sbin/so-idstools-stop (new executable file, 20 lines)
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+. /usr/sbin/so-common
+
+/usr/sbin/so-stop idstools $1
salt/common/tools/sbin/so-import-pcap (new executable file, 220 lines)
@@ -0,0 +1,220 @@
+#!/bin/bash
+#
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+{% set MANAGER = salt['grains.get']('master') %}
+{% set VERSION = salt['pillar.get']('static:soversion') %}
+{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+{%- set MANAGERIP = salt['pillar.get']('static:managerip') -%}
+
+function usage {
+	cat << EOF
+Usage: $0 <pcap-file-1> [pcap-file-2] [pcap-file-N]
+
+Imports one or more PCAP files onto a sensor node. The PCAP traffic will be analyzed and
+made available for review in the Security Onion toolset.
+EOF
+}
+
+function pcapinfo() {
+	PCAP=$1
+	ARGS=$2
+	docker run --rm -v $PCAP:/input.pcap --entrypoint capinfos {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap $ARGS
+}
+
+function pcapfix() {
+	PCAP=$1
+	PCAP_OUT=$2
+	docker run --rm -v $PCAP:/input.pcap -v $PCAP_OUT:$PCAP_OUT --entrypoint pcapfix {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }} /input.pcap -o $PCAP_OUT > /dev/null 2>&1
+}
+
+function suricata() {
+	PCAP=$1
+	HASH=$2
+
+	NSM_PATH=/nsm/import/${HASH}/suricata
+	mkdir -p $NSM_PATH
+	chown suricata:socore $NSM_PATH
+	LOG_PATH=/opt/so/log/suricata/import/${HASH}
+	mkdir -p $LOG_PATH
+	chown suricata:suricata $LOG_PATH
+	docker run --rm \
+		-v /opt/so/conf/suricata/suricata.yaml:/etc/suricata/suricata.yaml:ro \
+		-v /opt/so/conf/suricata/threshold.conf:/etc/suricata/threshold.conf:ro \
+		-v /opt/so/conf/suricata/rules:/etc/suricata/rules:ro \
+		-v ${LOG_PATH}:/var/log/suricata/:rw \
+		-v ${NSM_PATH}/:/nsm/:rw \
+		-v $PCAP:/input.pcap:ro \
+		-v /opt/so/conf/suricata/bpf:/etc/suricata/bpf:ro \
+		{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-suricata:{{ VERSION }} \
+		--runmode single -k none -r /input.pcap > $LOG_PATH/console.log 2>&1
+}
+
+function zeek() {
+	PCAP=$1
+	HASH=$2
+
+	NSM_PATH=/nsm/import/${HASH}/zeek
+	mkdir -p $NSM_PATH/logs
+	mkdir -p $NSM_PATH/extracted
+	mkdir -p $NSM_PATH/spool
+	chown -R zeek:socore $NSM_PATH
+	docker run --rm \
+		-v $NSM_PATH/logs:/nsm/zeek/logs:rw \
+		-v $NSM_PATH/spool:/nsm/zeek/spool:rw \
+		-v $NSM_PATH/extracted:/nsm/zeek/extracted:rw \
+		-v $PCAP:/input.pcap:ro \
+		-v /opt/so/conf/zeek/local.zeek:/opt/zeek/share/zeek/site/local.zeek:ro \
+		-v /opt/so/conf/zeek/node.cfg:/opt/zeek/etc/node.cfg:ro \
+		-v /opt/so/conf/zeek/zeekctl.cfg:/opt/zeek/etc/zeekctl.cfg:ro \
+		-v /opt/so/conf/zeek/policy/securityonion:/opt/zeek/share/zeek/policy/securityonion:ro \
+		-v /opt/so/conf/zeek/policy/custom:/opt/zeek/share/zeek/policy/custom:ro \
+		-v /opt/so/conf/zeek/policy/cve-2020-0601:/opt/zeek/share/zeek/policy/cve-2020-0601:ro \
+		-v /opt/so/conf/zeek/policy/intel:/opt/zeek/share/zeek/policy/intel:rw \
+		-v /opt/so/conf/zeek/bpf:/opt/zeek/etc/bpf:ro \
+		--entrypoint /opt/zeek/bin/zeek \
+		-w /nsm/zeek/logs \
+		{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-zeek:{{ VERSION }} \
+		-C -r /input.pcap local > $NSM_PATH/logs/console.log 2>&1
+}
+
+# if no parameters supplied, display usage
+if [ $# -eq 0 ]; then
+	usage
+	exit 1
+fi
+
+# ensure this is a sensor node
+if [ ! -d /opt/so/conf/suricata ]; then
+	echo "This command must be run on a sensor node."
+	exit 3
+fi
+
+# verify that all parameters are files
+for i in "$@"; do
+	if ! [ -f "$i" ]; then
+		usage
+		echo "\"$i\" is not a valid file!"
+		exit 2
+	fi
+done
+
+# track if we have any valid or invalid pcaps
+INVALID_PCAPS="no"
+VALID_PCAPS="no"
+
+# track oldest start and newest end so that we can generate the Kibana search hyperlink at the end
+START_OLDEST="2050-12-31"
+END_NEWEST="1971-01-01"
+
+# paths must be quoted in case they include spaces
+for PCAP in "$@"; do
+	PCAP=$(/usr/bin/realpath "$PCAP")
+	echo "Processing Import: ${PCAP}"
+	echo "- verifying file"
+	if ! pcapinfo "${PCAP}" > /dev/null 2>&1; then
+		# try to fix pcap and then process the fixed pcap directly
+		PCAP_FIXED=`mktemp /tmp/so-import-pcap-XXXXXXXXXX.pcap`
+		echo "- attempting to recover corrupted PCAP file"
+		pcapfix "${PCAP}" "${PCAP_FIXED}"
+		PCAP="${PCAP_FIXED}"
+		TEMP_PCAPS+=(${PCAP_FIXED})
+	fi
+
+	# generate a unique hash to assist with dedupe checks
+	HASH=$(md5sum "${PCAP}" | awk '{ print $1 }')
+	HASH_DIR=/nsm/import/${HASH}
+	echo "- assigning unique identifier to import: $HASH"
+
+	if [ -d $HASH_DIR ]; then
+		echo "- this PCAP has already been imported; skipping"
+		INVALID_PCAPS="yes"
+	elif pcapinfo "${PCAP}" |egrep -q "Last packet time: 1970-01-01|Last packet time: n/a"; then
+		echo "- this PCAP file is invalid; skipping"
+		INVALID_PCAPS="yes"
+	else
+		VALID_PCAPS="yes"
+
+		PCAP_DIR=$HASH_DIR/pcap
+		mkdir -p $PCAP_DIR
+
+		# generate IDS alerts and write them to standard pipeline
+		echo "- analyzing traffic with Suricata"
+		suricata "${PCAP}" $HASH
+
+		# generate Zeek logs and write them to a unique subdirectory in /nsm/import/bro/
+		# since each run writes to a unique subdirectory, there is no need for a lock file
+		echo "- analyzing traffic with Zeek"
+		zeek "${PCAP}" $HASH
+
+		START=$(pcapinfo "${PCAP}" -a |grep "First packet time:" | awk '{print $4}')
+		END=$(pcapinfo "${PCAP}" -e |grep "Last packet time:" | awk '{print $4}')
+		echo "- saving PCAP data spanning dates $START through $END"
+
+		# compare $START to $START_OLDEST
+		START_COMPARE=$(date -d $START +%s)
+		START_OLDEST_COMPARE=$(date -d $START_OLDEST +%s)
+		if [ $START_COMPARE -lt $START_OLDEST_COMPARE ]; then
+			START_OLDEST=$START
+		fi
+
+		# compare $ENDNEXT to $END_NEWEST
+		ENDNEXT=`date +%Y-%m-%d --date="$END 1 day"`
+		ENDNEXT_COMPARE=$(date -d $ENDNEXT +%s)
+		END_NEWEST_COMPARE=$(date -d $END_NEWEST +%s)
+		if [ $ENDNEXT_COMPARE -gt $END_NEWEST_COMPARE ]; then
+			END_NEWEST=$ENDNEXT
+		fi
+
+		cp -f "${PCAP}" "${PCAP_DIR}"/data.pcap
+		chmod 644 "${PCAP_DIR}"/data.pcap
+
+	fi # end of valid pcap
+
+	echo
+
+done # end of for-loop processing pcap files
+
+# remove temp files
+echo "Cleaning up:"
+for TEMP_PCAP in ${TEMP_PCAPS[@]}; do
+	echo "- removing temporary pcap $TEMP_PCAP"
+	rm -f $TEMP_PCAP
+done
+
+# output final messages
+if [ "$INVALID_PCAPS" = "yes" ]; then
+	echo
+	echo "Please note! One or more pcaps was invalid! You can scroll up to see which ones were invalid."
+fi
+
+START_OLDEST_SLASH=$(echo $START_OLDEST | sed -e 's/-/%2F/g')
+END_NEWEST_SLASH=$(echo $END_NEWEST | sed -e 's/-/%2F/g')
+
+if [ "$VALID_PCAPS" = "yes" ]; then
+	cat << EOF
+
+Import complete!
+
+You can use the following hyperlink to view data in the time range of your import. You can triple-click to quickly highlight the entire hyperlink and you can then copy it into your browser:
+https://{{ MANAGERIP }}/#/hunt?q=%2a%20%7C%20groupby%20event.module%20event.dataset&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM
+
+or you can manually set your Time Range to be:
+From: $START_OLDEST To: $END_NEWEST
+
+Please note that it may take 30 seconds or more for events to appear in Onion Hunt.
+EOF
+fi
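Given the argument handling above, the simplest invocation is a list of capture files; a sketch on a sensor node with an illustrative path:

```bash
# Import one PCAP; the script prints a Hunt hyperlink scoped to the capture's time range
sudo so-import-pcap /path/to/capture.pcap
```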
salt/common/tools/sbin/so-influxdb-restart (new executable file, 20 lines)
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+. /usr/sbin/so-common
+
+/usr/sbin/so-restart influxdb $1
salt/common/tools/sbin/so-influxdb-start (new executable file, 20 lines)
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+. /usr/sbin/so-common
+
+/usr/sbin/so-start influxdb $1
20
salt/common/tools/sbin/so-influxdb-stop
Executable file
20
salt/common/tools/sbin/so-influxdb-stop
Executable file
@@ -0,0 +1,20 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
. /usr/sbin/so-common
|
||||||
|
|
||||||
|
/usr/sbin/so-stop influxdb $1
|
||||||
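These new scripts (and the nginx, soc, telegraf, and strelka families below) all follow one wrapper pattern: source `/usr/sbin/so-common` for shared helpers, then delegate to `so-start`, `so-stop`, or `so-restart` with the component name, passing the first argument through. As a sketch, a hypothetical generator for this boilerplate could look like the following (the component list is taken from this commit; the generator itself is not part of it, and it omits the GPL header the real files carry):

```bash
#!/bin/bash
# Hypothetical generator for the so-<component>-<action> wrapper scripts.
for component in influxdb nginx soc telegraf; do
  for action in start stop restart; do
    # Write a minimal wrapper that defers to the shared so-<action> tool.
    cat > "so-${component}-${action}" <<EOF
#!/bin/bash
. /usr/sbin/so-common
/usr/sbin/so-${action} ${component} \$1
EOF
    chmod +x "so-${component}-${action}"
  done
done
```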
@@ -1,9 +1,9 @@
 #!/bin/bash
 #
-# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_MANAGER = salt['pillar.get']('static:fleet_manager', False) -%}
 # {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
 # {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
-# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
+# {%- set MANAGER = salt['pillar.get']('manager:url_base', '') %}
 #
 # Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
 #
@@ -20,7 +20,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

-KIBANA_HOST={{ MASTER }}
+KIBANA_HOST={{ MANAGER }}
 KSO_PORT=5601
 OUTFILE="saved_objects.ndjson"
 curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
@@ -29,7 +29,7 @@
 sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE

 # Clean up for Fleet, if applicable
-# {% if FLEET_NODE or FLEET_MASTER %}
+# {% if FLEET_NODE or FLEET_MANAGER %}
 # Fleet IP
-sed -i "s/{{ MASTER }}/FLEETPLACEHOLDER/g" $OUTFILE
+sed -i "s/{{ MANAGER }}/FLEETPLACEHOLDER/g" $OUTFILE
 # {% endif %}
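The script above exports Kibana saved objects via the `_export` API and rewrites the Kibana host to `PLACEHOLDER` so the resulting `saved_objects.ndjson` is portable. A corresponding restore would reverse the substitution before POSTing the file back; a minimal sketch, assuming Kibana 7's `_import` endpoint and an example target host:

```bash
#!/bin/bash
# Sketch: restore a portable saved_objects.ndjson into another Kibana.
NEW_HOST=10.0.0.5            # assumed example value, not from this commit
KSO_PORT=5601
INFILE="saved_objects.ndjson"

# Reverse the PLACEHOLDER substitution done at export time.
sed -i "s/PLACEHOLDER/$NEW_HOST/g" $INFILE

# Kibana's saved-objects import endpoint expects multipart form data.
curl -s -H 'kbn-xsrf: true' \
  -XPOST "$NEW_HOST:$KSO_PORT/api/saved_objects/_import?overwrite=true" \
  --form file=@$INFILE
```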
20  salt/common/tools/sbin/so-nginx-restart  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-restart nginx $1

20  salt/common/tools/sbin/so-nginx-start  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-start nginx $1

20  salt/common/tools/sbin/so-nginx-stop  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-stop nginx $1
10  salt/common/tools/sbin/so-saltstack-update  Normal file → Executable file
@@ -21,8 +21,8 @@ clone_to_tmp() {
   # Make a temp location for the files
   mkdir /tmp/sogh
   cd /tmp/sogh
-  #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
+  #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion.git
-  git clone https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
+  git clone https://github.com/Security-Onion-Solutions/securityonion.git
   cd /tmp

 }
@@ -30,10 +30,10 @@ clone_to_tmp() {
 copy_new_files() {

   # Copy new files over to the salt dir
-  cd /tmp/sogh/securityonion-saltstack
+  cd /tmp/sogh/securityonion
   git checkout $BRANCH
-  rsync -a --exclude-from 'exclude-list.txt' salt $default_salt_dir/
+  rsync -a salt $default_salt_dir/
-  rsync -a --exclude-from 'exclude-list.txt' pillar $default_salt_dir/
+  rsync -a pillar $default_salt_dir/
   chown -R socore:socore $default_salt_dir/salt
   chown -R socore:socore $default_salt_dir/pillar
   chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh
121  salt/common/tools/sbin/so-sensor-clean  Executable file
@@ -0,0 +1,121 @@
#!/bin/bash

# Delete Zeek logs based on the defined CRIT_DISK_USAGE value

# Copyright 2014,2015,2016,2017,2018,2019 Security Onion Solutions, LLC

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

SENSOR_DIR='/nsm'
CRIT_DISK_USAGE=90
CUR_USAGE=$(df -P $SENSOR_DIR | tail -1 | awk '{print $5}' | tr -d %)
LOG="/opt/so/log/sensor_clean.log"
TODAY=$(date -u "+%Y-%m-%d")

clean () {
  ## find the oldest Zeek logs directory
  OLDEST_DIR=$(ls /nsm/zeek/logs/ | grep -v "current" | grep -v "stats" | grep -v "packetloss" | grep -v "zeek_clean" | sort | head -n 1)
  if [ -z "$OLDEST_DIR" -o "$OLDEST_DIR" == ".." -o "$OLDEST_DIR" == "." ]
  then
    echo "$(date) - No old Zeek logs available to clean up in /nsm/zeek/logs/" >> $LOG
    #exit 0
  else
    echo "$(date) - Removing directory: /nsm/zeek/logs/$OLDEST_DIR" >> $LOG
    rm -rf /nsm/zeek/logs/"$OLDEST_DIR"
  fi

  ## Commented out for now, as we are moving extracted files to /nsm/strelka/processed
  ## find oldest files in extracted directory and exclude today
  #OLDEST_EXTRACT=$(find /nsm/zeek/extracted/complete -type f -printf '%T+ %p\n' 2>/dev/null | sort | grep -v $TODAY | head -n 1)
  #if [ -z "$OLDEST_EXTRACT" -o "$OLDEST_EXTRACT" == ".." -o "$OLDEST_EXTRACT" == "." ]
  #then
  #  echo "$(date) - No old extracted files available to clean up in /nsm/zeek/extracted/complete" >> $LOG
  #else
  #  OLDEST_EXTRACT_DATE=`echo $OLDEST_EXTRACT | awk '{print $1}' | cut -d+ -f1`
  #  OLDEST_EXTRACT_FILE=`echo $OLDEST_EXTRACT | awk '{print $2}'`
  #  echo "$(date) - Removing extracted files for $OLDEST_EXTRACT_DATE" >> $LOG
  #  find /nsm/zeek/extracted/complete -type f -printf '%T+ %p\n' | grep $OLDEST_EXTRACT_DATE | awk '{print $2}' | while read FILE
  #  do
  #    echo "$(date) - Removing extracted file: $FILE" >> $LOG
  #    rm -f "$FILE"
  #  done
  #fi

  ## Clean up Zeek extracted files processed by Strelka
  STRELKA_FILES='/nsm/strelka/processed'
  OLDEST_STRELKA=$(find $STRELKA_FILES -type f -printf '%T+ %p\n' | sort -n | head -n 1)
  if [ -z "$OLDEST_STRELKA" -o "$OLDEST_STRELKA" == ".." -o "$OLDEST_STRELKA" == "." ]
  then
    echo "$(date) - No old files available to clean up in $STRELKA_FILES" >> $LOG
  else
    OLDEST_STRELKA_DATE=`echo $OLDEST_STRELKA | awk '{print $1}' | cut -d+ -f1`
    OLDEST_STRELKA_FILE=`echo $OLDEST_STRELKA | awk '{print $2}'`
    echo "$(date) - Removing extracted files for $OLDEST_STRELKA_DATE" >> $LOG
    find $STRELKA_FILES -type f -printf '%T+ %p\n' | grep $OLDEST_STRELKA_DATE | awk '{print $2}' | while read FILE
    do
      echo "$(date) - Removing file: $FILE" >> $LOG
      rm -f "$FILE"
    done
  fi

  ## Clean up Suricata log files
  SURICATA_LOGS='/nsm/suricata'
  OLDEST_SURICATA=$(find $SURICATA_LOGS -type f -printf '%T+ %p\n' | sort -n | head -n 1)
  if [ -z "$OLDEST_SURICATA" -o "$OLDEST_SURICATA" == ".." -o "$OLDEST_SURICATA" == "." ]
  then
    echo "$(date) - No old files available to clean up in $SURICATA_LOGS" >> $LOG
  else
    OLDEST_SURICATA_DATE=`echo $OLDEST_SURICATA | awk '{print $1}' | cut -d+ -f1`
    OLDEST_SURICATA_FILE=`echo $OLDEST_SURICATA | awk '{print $2}'`
    echo "$(date) - Removing logs for $OLDEST_SURICATA_DATE" >> $LOG
    find $SURICATA_LOGS -type f -printf '%T+ %p\n' | grep $OLDEST_SURICATA_DATE | awk '{print $2}' | while read FILE
    do
      echo "$(date) - Removing file: $FILE" >> $LOG
      rm -f "$FILE"
    done
  fi

  ## Clean up extracted pcaps from Steno
  PCAPS='/nsm/pcapout'
  OLDEST_PCAP=$(find $PCAPS -type f -printf '%T+ %p\n' | sort -n | head -n 1)
  if [ -z "$OLDEST_PCAP" -o "$OLDEST_PCAP" == ".." -o "$OLDEST_PCAP" == "." ]
  then
    echo "$(date) - No old files available to clean up in $PCAPS" >> $LOG
  else
    OLDEST_PCAP_DATE=`echo $OLDEST_PCAP | awk '{print $1}' | cut -d+ -f1`
    OLDEST_PCAP_FILE=`echo $OLDEST_PCAP | awk '{print $2}'`
    echo "$(date) - Removing extracted files for $OLDEST_PCAP_DATE" >> $LOG
    find $PCAPS -type f -printf '%T+ %p\n' | grep $OLDEST_PCAP_DATE | awk '{print $2}' | while read FILE
    do
      echo "$(date) - Removing file: $FILE" >> $LOG
      rm -f "$FILE"
    done
  fi
}

# Check to see if we are already running
IS_RUNNING=$(ps aux | grep "sensor_clean" | grep -v grep | wc -l)
[ "$IS_RUNNING" -gt 2 ] && echo "$(date) - $IS_RUNNING sensor clean script processes running...exiting." >> $LOG && exit 0

if [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ]; then
  while [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ]
  do
    clean
    CUR_USAGE=$(df -P $SENSOR_DIR | tail -1 | awk '{print $5}' | tr -d %)
  done
else
  echo "$(date) - Current usage value of $CUR_USAGE is not greater than the CRIT_DISK_USAGE value of $CRIT_DISK_USAGE..." >> $LOG
fi
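The whole cleanup loop is driven by one number: the percentage used on the `/nsm` partition. The pipeline below is the same one so-sensor-clean uses, shown standalone with a comment per stage:

```bash
#!/bin/bash
# The disk-usage pipeline from so-sensor-clean, stage by stage.
SENSOR_DIR='/nsm'
CUR_USAGE=$(df -P $SENSOR_DIR |  # POSIX output: header plus one data line
            tail -1 |            # keep the data line, drop the header
            awk '{print $5}' |   # 5th column is "Use%", e.g. "87%"
            tr -d %)             # strip the % so bash can compare numerically
echo "Usage of $SENSOR_DIR: ${CUR_USAGE}%"
```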
20  salt/common/tools/sbin/so-soc-restart  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-restart soc $1

20  salt/common/tools/sbin/so-soc-start  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-start soc $1

20  salt/common/tools/sbin/so-soc-stop  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-stop soc $1
26  salt/common/tools/sbin/so-strelka-restart  Executable file
@@ -0,0 +1,26 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-stop strelka-filestream $1
/usr/sbin/so-stop strelka-manager $1
/usr/sbin/so-stop strelka-frontend $1
/usr/sbin/so-stop strelka-backend $1
/usr/sbin/so-stop strelka-gatekeeper $1
/usr/sbin/so-stop strelka-coordinator $1
/usr/sbin/so-start strelka $1

20  salt/common/tools/sbin/so-strelka-start  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-start strelka $1

25  salt/common/tools/sbin/so-strelka-stop  Executable file
@@ -0,0 +1,25 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-stop strelka-filestream $1
/usr/sbin/so-stop strelka-manager $1
/usr/sbin/so-stop strelka-frontend $1
/usr/sbin/so-stop strelka-backend $1
/usr/sbin/so-stop strelka-gatekeeper $1
/usr/sbin/so-stop strelka-coordinator $1
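so-strelka-restart and so-strelka-stop repeat the same six `so-stop` calls; a restart is simply stop-all-components followed by one umbrella `so-start strelka`. Driving both scripts from a single component array would keep them in sync as Strelka's container set changes; a sketch:

```bash
#!/bin/bash
# Sketch: one array instead of six repeated so-stop lines in two scripts.
# Component names are taken from the scripts above.
STRELKA_COMPONENTS=(filestream manager frontend backend gatekeeper coordinator)

for c in "${STRELKA_COMPONENTS[@]}"; do
  /usr/sbin/so-stop "strelka-$c" "$1"
done

# so-start accepts the umbrella "strelka" name, as in so-strelka-restart above.
/usr/sbin/so-start strelka "$1"
```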
20  salt/common/tools/sbin/so-telegraf-restart  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-restart telegraf $1

20  salt/common/tools/sbin/so-telegraf-start  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-start telegraf $1

20  salt/common/tools/sbin/so-telegraf-stop  Executable file
@@ -0,0 +1,20 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

. /usr/sbin/so-common

/usr/sbin/so-stop telegraf $1
102  salt/common/tools/sbin/so-yara-update  Executable file
@@ -0,0 +1,102 @@
#!/bin/bash

# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

clone_dir="/tmp"
output_dir="/opt/so/saltstack/default/salt/strelka/rules"
#mkdir -p $output_dir
repos="$output_dir/repos.txt"
ignorefile="$output_dir/ignore.txt"

deletecounter=0
newcounter=0
updatecounter=0

gh_status=$(curl -s -o /dev/null -w "%{http_code}" http://github.com)

if [ "$gh_status" == "200" ] || [ "$gh_status" == "301" ]; then

  while IFS= read -r repo; do

    # Remove the old repo if it still exists because of a previous error condition or unexpected disruption
    repo_name=`echo $repo | awk -F '/' '{print $NF}'`
    [ -d $clone_dir/$repo_name ] && rm -rf $clone_dir/$repo_name

    # Clone the repo and make the appropriate directories for rules
    git clone $repo $clone_dir/$repo_name
    echo "Analyzing rules from $clone_dir/$repo_name..."
    mkdir -p $output_dir/$repo_name
    [ -f $clone_dir/$repo_name/LICENSE ] && cp $clone_dir/$repo_name/LICENSE $output_dir/$repo_name

    # Copy over rules
    for i in $(find $clone_dir/$repo_name -name "*.yar*"); do
      rule_name=$(echo $i | awk -F '/' '{print $NF}')
      repo_sum=$(sha256sum $i | awk '{print $1}')

      # Check rules against those in the ignore list -- don't copy if ignored.
      if ! grep -iq $rule_name $ignorefile; then
        existing_rules=$(find $output_dir/$repo_name/ -name $rule_name | wc -l)

        # For existing rules, compare checksums to see if they need to be updated
        if [ $existing_rules -gt 0 ]; then
          local_sum=$(sha256sum $output_dir/$repo_name/$rule_name | awk '{print $1}')
          if [ "$repo_sum" != "$local_sum" ]; then
            echo "Checksums do not match!"
            echo "Updating $rule_name..."
            cp $i $output_dir/$repo_name
            ((updatecounter++))
          fi
        else
          # If the rule doesn't exist already, add it
          echo "Adding new rule: $rule_name..."
          cp $i $output_dir/$repo_name
          ((newcounter++))
        fi
      fi
    done

    # Check to see if we have any old rules that need to be removed
    for i in $(find $output_dir/$repo_name -name "*.yar*" | awk -F '/' '{print $NF}'); do
      is_repo_rule=$(find $clone_dir/$repo_name -name "$i" | wc -l)
      if [ $is_repo_rule -eq 0 ]; then
        echo "Could not find $i in the source $repo_name repo...removing it from $output_dir/$repo_name..."
        rm $output_dir/$repo_name/$i
        ((deletecounter++))
      fi
    done
    rm -rf $clone_dir/$repo_name
  done < $repos

  echo "Done!"

  if [ "$newcounter" -gt 0 ]; then
    echo "$newcounter new rules added."
  fi

  if [ "$updatecounter" -gt 0 ]; then
    echo "$updatecounter rules updated."
  fi

  if [ "$deletecounter" -gt 0 ]; then
    echo "$deletecounter rules removed because they were deprecated or don't exist in the source repo."
  fi

else
  echo "Server returned $gh_status status code."
  echo "No connectivity to GitHub...exiting..."
  exit 1
fi
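so-yara-update is driven by two plain-text inputs in `$output_dir`: `repos.txt`, one git repository URL per line, and `ignore.txt`, rule filenames to skip. Their exact contents are not part of this diff; the sketch below is hypothetical except for the Neo23x0 repo, which the release notes name as the default rule source:

```text
# repos.txt -- one YARA rule repository per line
# (only the first entry is confirmed; the second is a made-up example)
https://github.com/Neo23x0/signature-base
https://github.com/example-org/example-yara-rules

# ignore.txt -- rule filenames to skip (hypothetical examples)
generic_anomalies.yar
some_false_positive.yar
```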
@@ -1,17 +1,17 @@
 #!/bin/bash
 local_salt_dir=/opt/so/saltstack/local

-bro_logs_enabled() {
+zeek_logs_enabled() {

-  echo "brologs:" > $local_salt_dir/pillar/brologs.sls
+  echo "zeeklogs:" > $local_salt_dir/pillar/zeeklogs.sls
-  echo "  enabled:" >> $local_salt_dir/pillar/brologs.sls
+  echo "  enabled:" >> $local_salt_dir/pillar/zeeklogs.sls
   for BLOG in ${BLOGS[@]}; do
-    echo "  - $BLOG" | tr -d '"' >> $local_salt_dir/pillar/brologs.sls
+    echo "  - $BLOG" | tr -d '"' >> $local_salt_dir/pillar/zeeklogs.sls
   done

 }

-whiptail_master_adv_service_brologs() {
+whiptail_manager_adv_service_zeeklogs() {

   BLOGS=$(whiptail --title "Security Onion Setup" --checklist "Please Select Logs to Send:" 24 78 12 \
   "conn" "Connection Logging" ON \
@@ -54,5 +54,5 @@
   "x509" "x.509 Logs" ON 3>&1 1>&2 2>&3 )
 }

-whiptail_master_adv_service_brologs
+whiptail_manager_adv_service_zeeklogs
-bro_logs_enabled
+zeek_logs_enabled

0  salt/common/tools/sbin/so-zeek-stats  Normal file → Executable file
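zeek_logs_enabled() writes the whiptail selections out as a pillar file. For example, if only conn and x509 were ticked in the checklist, `$local_salt_dir/pillar/zeeklogs.sls` would come out as (the `tr -d '"'` strips whiptail's quoting from each entry):

```yaml
zeeklogs:
  enabled:
  - conn
  - x509
```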
188  salt/common/tools/sbin/soup  Normal file → Executable file
@@ -15,23 +15,187 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

-clone_to_tmp() {
-  # TODO Need to add a air gap option
-  # Make a temp location for the files
-  rm -rf /tmp/soup
-  mkdir -p /tmp/soup
-  cd /tmp/soup
-  #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
-  git clone https://github.com/Security-Onion-Solutions/securityonion-saltstack.git
-  cd /tmp
-}
-
-# Prompt the user that this requires internets
-clone_to_tmp
-cd /tmp/soup/securityonion-saltstack/update
-chmod +x soup
-./soup
+. /usr/sbin/so-common
+UPDATE_DIR=/tmp/sogh/securityonion
+INSTALLEDVERSION=$(cat /etc/soversion)
+default_salt_dir=/opt/so/saltstack/default
+
+manager_check() {
+  # Check to see if this is a manager
+  MANAGERCHECK=$(cat /etc/salt/grains | grep role | awk '{print $2}')
+  if [[ "$MANAGERCHECK" =~ ^('so-eval'|'so-manager'|'so-standalone'|'so-managersearch')$ ]]; then
+    echo "This is a manager. We can proceed."
+  else
+    echo "Please run soup on the manager. The manager controls all updates."
+    exit 0
+  fi
+}
+
+clean_dockers() {
+  # Placeholder for cleaning up old docker images
+  echo ""
+}
+
+clone_to_tmp() {
+  # TODO: Need to add an air gap option
+  # Clean old files
+  rm -rf /tmp/sogh
+  # Make a temp location for the files
+  mkdir -p /tmp/sogh
+  cd /tmp/sogh
+  #git clone -b dev https://github.com/Security-Onion-Solutions/securityonion.git
+  git clone https://github.com/Security-Onion-Solutions/securityonion.git
+  cd /tmp
+  if [ ! -f $UPDATE_DIR/VERSION ]; then
+    echo "Update was unable to pull from github. Please check your internet."
+    exit 0
+  fi
+}
+
+copy_new_files() {
+  # Copy new files over to the salt dir
+  cd /tmp/sogh/securityonion
+  rsync -a salt $default_salt_dir/
+  rsync -a pillar $default_salt_dir/
+  chown -R socore:socore $default_salt_dir/
+  chmod 755 $default_salt_dir/pillar/firewall/addfirewall.sh
+  cd /tmp
+}
+
+highstate() {
+  # Run a highstate, but first cancel any running one.
+  salt-call saltutil.kill_all_jobs
+  salt-call state.highstate
+}
+
+pillar_changes() {
+  # This function is to add any new pillar items if needed.
+  echo "Checking to see if pillar changes are needed"
+}
+
+update_dockers() {
+  # List all the containers
+  if [ $MANAGERCHECK != 'so-helix' ]; then
+    TRUSTED_CONTAINERS=( \
+    "so-acng" \
+    "so-thehive-cortex" \
+    "so-curator" \
+    "so-domainstats" \
+    "so-elastalert" \
+    "so-elasticsearch" \
+    "so-filebeat" \
+    "so-fleet" \
+    "so-fleet-launcher" \
+    "so-freqserver" \
+    "so-grafana" \
+    "so-idstools" \
+    "so-influxdb" \
+    "so-kibana" \
+    "so-kratos" \
+    "so-logstash" \
+    "so-mysql" \
+    "so-nginx" \
+    "so-pcaptools" \
+    "so-playbook" \
+    "so-redis" \
+    "so-soc" \
+    "so-soctopus" \
+    "so-steno" \
+    "so-strelka" \
+    "so-suricata" \
+    "so-telegraf" \
+    "so-thehive" \
+    "so-thehive-es" \
+    "so-wazuh" \
+    "so-zeek" )
+  else
+    TRUSTED_CONTAINERS=( \
+    "so-filebeat" \
+    "so-idstools" \
+    "so-logstash" \
+    "so-nginx" \
+    "so-redis" \
+    "so-steno" \
+    "so-suricata" \
+    "so-telegraf" \
+    "so-zeek" )
+  fi
+
+  # Download the containers from the interwebs
+  for i in "${TRUSTED_CONTAINERS[@]}"
+  do
+    # Pull down the trusted docker image
+    echo "Downloading $i:$NEWVERSION"
+    docker pull --disable-content-trust=false docker.io/$IMAGEREPO/$i:$NEWVERSION
+    # Tag it with the new registry destination
+    docker tag $IMAGEREPO/$i:$NEWVERSION $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
+    docker push $HOSTNAME:5000/$IMAGEREPO/$i:$NEWVERSION
+  done
+}
+
+update_version() {
+  # Update the version to the latest
+  echo "Updating the version file."
+  echo $NEWVERSION > /etc/soversion
+  sed -i "s/$INSTALLEDVERSION/$NEWVERSION/g" /opt/so/saltstack/local/pillar/static.sls
+}
+
+upgrade_check() {
+  # Let's make sure we actually need to update.
+  NEWVERSION=$(cat $UPDATE_DIR/VERSION)
+  if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
+    echo "You are already running the latest version of Security Onion."
+    exit 0
+  else
+    echo "Performing Upgrade from $INSTALLEDVERSION to $NEWVERSION"
+  fi
+}
+
+verify_latest_update_script() {
+  # Check to see if the update scripts match. If not, run the new one.
+  CURRENTSOUP=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/soup | awk '{print $1}')
+  GITSOUP=$(md5sum /tmp/sogh/securityonion/salt/common/tools/sbin/soup | awk '{print $1}')
+  if [[ "$CURRENTSOUP" == "$GITSOUP" ]]; then
+    echo "This version of the soup script is up to date. Proceeding."
+  else
+    echo "You are not running the latest soup version. Updating soup."
+    cp $UPDATE_DIR/salt/common/tools/sbin/soup $default_salt_dir/salt/common/tools/sbin/
+    salt-call state.apply common queue=True
+    echo ""
+    echo "soup has been updated. Please run soup again."
+    exit 0
+  fi
+}
+
+echo "Checking to see if this is a manager"
+manager_check
+echo "Cloning latest code to a temporary location"
+clone_to_tmp
+echo ""
+echo "Verifying we have the latest script"
+verify_latest_update_script
+echo ""
+echo "Let's see if we need to update"
+upgrade_check
+echo ""
+echo "Making pillar changes"
+pillar_changes
+echo ""
+echo "Cleaning up old dockers"
+clean_dockers
+echo ""
+echo "Updating docker to $NEWVERSION"
+update_dockers
+echo ""
+echo "Copying new code"
+copy_new_files
+echo ""
+echo "Running a highstate to complete upgrade"
+highstate
+echo ""
+echo "Updating version"
+update_version
+echo ""
+echo "Upgrade from $INSTALLEDVERSION to $NEWVERSION complete."
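The early-exit in upgrade_check is what makes soup safe to run repeatedly: it compares the installed version (from `/etc/soversion`) against the VERSION file in the fresh clone and stops if they match. The same comparison, standalone (paths per the script above; the version strings in the comments are invented examples):

```bash
#!/bin/bash
# Standalone sketch of soup's upgrade_check logic.
INSTALLEDVERSION=$(cat /etc/soversion)            # e.g. "2.0.0-rc1" (example)
NEWVERSION=$(cat /tmp/sogh/securityonion/VERSION) # version in the fresh clone

if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
  echo "You are already running the latest version of Security Onion."
  exit 0
fi
echo "Performing Upgrade from $INSTALLEDVERSION to $NEWVERSION"
```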
@@ -1,8 +1,4 @@
-{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
-{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
-{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
-{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
-{%- endif %}
+{%- set log_size_limit = salt['pillar.get']('elasticsearch:log_size_limit', '') -%}
 ---
 # Remember, leave a key empty if there is no value.  None will be a string,
 # not a Python "NoneType"
29  salt/curator/files/action/so-beats-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-beats:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close Beats indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-beats.*|so-beats.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:
@@ -1,9 +1,4 @@
-{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
-{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
-{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
-{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
-{%- endif -%}
-
+{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-firewall:close', 30) -%}
 ---
 # Remember, leave a key empty if there is no value.  None will be a string,
 # not a Python "NoneType"
@@ -15,8 +10,7 @@ actions:
   1:
     action: close
     description: >-
-      Close indices older than {{cur_close_days}} days (based on index name), for logstash-
-      prefixed indices.
+      Close Firewall indices older than {{cur_close_days}} days.
     options:
       delete_aliases: False
       timeout_override:
@@ -25,7 +19,7 @@ actions:
     filters:
     - filtertype: pattern
      kind: regex
-      value: '^(logstash-.*|so-.*)$'
+      value: '^(logstash-firewall.*|so-firewall.*)$'
    - filtertype: age
      source: name
      direction: older
29  salt/curator/files/action/so-ids-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-ids:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close IDS indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-ids.*|so-ids.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-import-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-import:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close Import indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-import.*|so-import.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-osquery-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-osquery:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close osquery indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-osquery.*|so-osquery.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-ossec-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-ossec:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close ossec indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-ossec.*|so-ossec.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-strelka-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-strelka:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close Strelka indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-strelka.*|so-strelka.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-syslog-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-syslog:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close syslog indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-syslog.*|so-syslog.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:

29  salt/curator/files/action/so-zeek-close.yml  Normal file
@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-zeek:close', 30) -%}
---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: close
    description: >-
      Close Zeek indices older than {{cur_close_days}} days.
    options:
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(logstash-zeek.*|so-zeek.*)$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: {{cur_close_days}}
      exclude:
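All of these new so-*-close.yml actions are the same Jinja template with the index name swapped in, and each reads its close age from `elasticsearch:index_settings:<index>:close`, defaulting to 30 days. This is the per-index shard/retention customization mentioned in the release notes: a local pillar override changes one index without touching the others. With a pillar sketch like the one below (the 45 is an example value, not a shipped default), the zeek action would render with `unit_count: 45`:

```yaml
# Hypothetical local pillar override; the path mirrors the lookup in the
# actions above: elasticsearch:index_settings:<index>:close
elasticsearch:
  index_settings:
    so-zeek:
      close: 45   # close Zeek indices after 45 days instead of the default 30
```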
@@ -1,2 +1,2 @@
 #!/bin/bash
-/usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/close.yml > /dev/null 2>&1
+/usr/sbin/so-curator-closed-delete > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1; docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1
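The replacement command chains nine `docker exec` invocations, one per action file, on a single line. A functionally equivalent form that is easier to extend when the next index type is added is a loop over the action names; a sketch:

```bash
#!/bin/bash
# Equivalent loop form of the chained cron command above.
/usr/sbin/so-curator-closed-delete > /dev/null 2>&1
for action in so-zeek-close so-beats-close so-firewall-close so-ids-close \
              so-import-close so-osquery-close so-ossec-close \
              so-strelka-close so-syslog-close; do
  docker exec so-curator curator \
    --config /etc/curator/config/curator.yml \
    "/etc/curator/action/${action}.yml" > /dev/null 2>&1
done
```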
@@ -1,16 +1,16 @@
-
-{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
-{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
-{%- endif -%}
-
 #!/bin/bash
-#
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('elasticsearch:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('elasticsearch:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('elasticsearch:log_size_limit', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-managersearch', 'so-standalone'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('manager:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('manager:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('manager:log_size_limit', '') -%}
+{%- endif -%}
+
 # Copyright 2014,2015,2016,2017,2018 Security Onion Solutions, LLC
 #
 # This program is free software: you can redistribute it and/or modify
@@ -1,7 +1,7 @@
 {% if grains['role'] in ['so-node', 'so-heavynode'] %}
-{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
-{% elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
-{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
+{%- set elasticsearch = salt['pillar.get']('elasticsearch:mainip', '') -%}
+{% elif grains['role'] in ['so-eval', 'so-managersearch', 'so-standalone'] %}
+{%- set elasticsearch = salt['pillar.get']('manager:mainip', '') -%}
 {%- endif %}

 ---
@@ -1,6 +1,7 @@
|
|||||||
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
|
{% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
|
||||||
{% set MASTER = salt['grains.get']('master') %}
|
{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
|
||||||
{% if grains['role'] in ['so-eval', 'so-node', 'so-mastersearch', 'so-heavynode', 'so-standalone'] %}
|
{% set MANAGER = salt['grains.get']('master') %}
|
||||||
|
{% if grains['role'] in ['so-eval', 'so-node', 'so-managersearch', 'so-heavynode', 'so-standalone'] %}
|
||||||
# Curator
|
# Curator
|
||||||
# Create the group
|
# Create the group
|
||||||
curatorgroup:
|
curatorgroup:
|
||||||
@@ -30,18 +31,10 @@ curlogdir:
     - user: 934
     - group: 939
 
-curcloseconf:
-  file.managed:
-    - name: /opt/so/conf/curator/action/close.yml
-    - source: salt://curator/files/action/close.yml
-    - user: 934
-    - group: 939
-    - template: jinja
-
-curdelconf:
-  file.managed:
-    - name: /opt/so/conf/curator/action/delete.yml
-    - source: salt://curator/files/action/delete.yml
+actionconfs:
+  file.recurse:
+    - name: /opt/so/conf/curator/action
+    - source: salt://curator/files/action
     - user: 934
     - group: 939
     - template: jinja
@@ -119,7 +112,7 @@ so-curatordeletecron:
 
 so-curator:
   docker_container.running:
-    - image: {{ MASTER }}:5000/soshybridhunter/so-curator:{{ VERSION }}
+    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-curator:{{ VERSION }}
     - hostname: curator
     - name: so-curator
     - user: curator
@@ -1,2 +0,0 @@
-#!/bin/bash
-/usr/bin/docker exec so-bro /opt/bro/bin/broctl netstats | awk '{print $(NF-2),$(NF-1),$NF}' | awk -F '[ =]' '{RCVD += $2;DRP += $4;TTL += $6} END { print "rcvd: " RCVD, "dropped: " DRP, "total: " TTL}' >> /nsm/bro/logs/packetloss.log
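For reference, the removed cron job above summed the per-worker counters from `broctl netstats`. A minimal Python sketch of the same aggregation, assuming each netstats line ends with three `key=value` counters (the sample line below is illustrative, not captured output):

```python
# Sketch: aggregate packet counters the way the removed awk pipeline did.
# Assumes each netstats line ends with three key=value fields, e.g.
#   "worker-1: 1590000000.0 recvd=158 dropped=0 link=158"
def summarize(lines):
    rcvd = drp = ttl = 0
    for line in lines:
        fields = line.split()[-3:]                      # last three key=value fields
        vals = [int(f.split("=")[1]) for f in fields]
        rcvd += vals[0]
        drp += vals[1]
        ttl += vals[2]
    return f"rcvd: {rcvd} dropped: {drp} total: {ttl}"

print(summarize(["worker-1: 1590000000.0 recvd=158 dropped=0 link=158"]))
```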
@@ -1,64 +0,0 @@
-#!/bin/bash
-
-# Delete Zeek Logs based on defined CRIT_DISK_USAGE value
-
-# Copyright 2014,2015,2016,2017,2018, 2019 Security Onion Solutions, LLC
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-
-clean () {
-
-  SENSOR_DIR='/nsm'
-  CRIT_DISK_USAGE=90
-  CUR_USAGE=$(df -P $SENSOR_DIR | tail -1 | awk '{print $5}' | tr -d %)
-  LOG="/nsm/bro/logs/zeek_clean.log"
-
-  if [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ]; then
-    while [ "$CUR_USAGE" -gt "$CRIT_DISK_USAGE" ];
-    do
-      TODAY=$(date -u "+%Y-%m-%d")
-
-      # find the oldest Zeek logs directory and exclude today
-      OLDEST_DIR=$(ls /nsm/bro/logs/ | grep -v "current" | grep -v "stats" | grep -v "packetloss" | grep -v "zeek_clean" | sort | grep -v $TODAY | head -n 1)
-      if [ -z "$OLDEST_DIR" -o "$OLDEST_DIR" == ".." -o "$OLDEST_DIR" == "." ]
-      then
-        echo "$(date) - No old Zeek logs available to clean up in /nsm/bro/logs/" >> $LOG
-        exit 0
-      else
-        echo "$(date) - Removing directory: /nsm/bro/logs/$OLDEST_DIR" >> $LOG
-        rm -rf /nsm/bro/logs/"$OLDEST_DIR"
-      fi
-
-      # find oldest files in extracted directory and exclude today
-      OLDEST_EXTRACT=$(find /nsm/bro/extracted -type f -printf '%T+ %p\n' 2>/dev/null | sort | grep -v $TODAY | head -n 1)
-      if [ -z "$OLDEST_EXTRACT" -o "$OLDEST_EXTRACT" == ".." -o "$OLDEST_EXTRACT" == "." ]
-      then
-        echo "$(date) - No old extracted files available to clean up in /nsm/bro/extracted/" >> $LOG
-      else
-        OLDEST_EXTRACT_DATE=`echo $OLDEST_EXTRACT | awk '{print $1}' | cut -d+ -f1`
-        OLDEST_EXTRACT_FILE=`echo $OLDEST_EXTRACT | awk '{print $2}'`
-        echo "$(date) - Removing extracted files for $OLDEST_EXTRACT_DATE" >> $LOG
-        find /nsm/bro/extracted -type f -printf '%T+ %p\n' | grep $OLDEST_EXTRACT_DATE | awk '{print $2}' |while read FILE
-        do
-          echo "$(date) - Removing extracted file: $FILE" >> $LOG
-          rm -f "$FILE"
-        done
-      fi
-    done
-  else
-    echo "$(date) - CRIT_DISK_USAGE value of $CRIT_DISK_USAGE not greater than current usage of $CUR_USAGE..." >> $LOG
-  fi
-}
-
-clean
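The core of the deleted script is a loop: while the /nsm partition is above CRIT_DISK_USAGE percent full, drop the oldest dated Zeek log directory, never touching today's. A dry-run Python sketch of that loop (paths match the script; it only prints instead of deleting):

```python
# Sketch of the deleted zeek_clean loop: while /nsm is over the threshold,
# pick the oldest dated directory under /nsm/bro/logs (excluding today and
# the special entries) as the deletion candidate. Dry run only.
import os, shutil
from datetime import datetime, timezone

SENSOR_DIR = "/nsm"
LOG_ROOT = "/nsm/bro/logs"
CRIT_DISK_USAGE = 90
SKIP = {"current", "stats", "packetloss.log", "zeek_clean.log"}

def usage_pct(path):
    total, used, _ = shutil.disk_usage(path)
    return used * 100 // total

today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
while usage_pct(SENSOR_DIR) > CRIT_DISK_USAGE:
    candidates = sorted(d for d in os.listdir(LOG_ROOT)
                        if d not in SKIP and d != today)
    if not candidates:
        break
    print("would remove", os.path.join(LOG_ROOT, candidates[0]))
    break  # dry run: the real script rm -rf'd and re-checked usage
```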
@@ -1,139 +0,0 @@
-##! Local site policy. Customize as appropriate.
-##!
-##! This file will not be overwritten when upgrading or reinstalling!
-
-# This script logs which scripts were loaded during each run.
-@load misc/loaded-scripts
-
-# Apply the default tuning scripts for common tuning settings.
-@load tuning/defaults
-
-# Estimate and log capture loss.
-@load misc/capture-loss
-
-# Enable logging of memory, packet and lag statistics.
-@load misc/stats
-
-# Load the scan detection script.
-@load misc/scan
-
-# Detect traceroute being run on the network. This could possibly cause
-# performance trouble when there are a lot of traceroutes on your network.
-# Enable cautiously.
-#@load misc/detect-traceroute
-
-# Generate notices when vulnerable versions of software are discovered.
-# The default is to only monitor software found in the address space defined
-# as "local". Refer to the software framework's documentation for more
-# information.
-@load frameworks/software/vulnerable
-
-# Detect software changing (e.g. attacker installing hacked SSHD).
-@load frameworks/software/version-changes
-
-# This adds signatures to detect cleartext forward and reverse windows shells.
-@load-sigs frameworks/signatures/detect-windows-shells
-
-# Load all of the scripts that detect software in various protocols.
-@load protocols/ftp/software
-@load protocols/smtp/software
-@load protocols/ssh/software
-@load protocols/http/software
-# The detect-webapps script could possibly cause performance trouble when
-# running on live traffic. Enable it cautiously.
-#@load protocols/http/detect-webapps
-
-# This script detects DNS results pointing toward your Site::local_nets
-# where the name is not part of your local DNS zone and is being hosted
-# externally. Requires that the Site::local_zones variable is defined.
-@load protocols/dns/detect-external-names
-
-# Script to detect various activity in FTP sessions.
-@load protocols/ftp/detect
-
-# Scripts that do asset tracking.
-@load protocols/conn/known-hosts
-@load protocols/conn/known-services
-@load protocols/ssl/known-certs
-
-# This script enables SSL/TLS certificate validation.
-@load protocols/ssl/validate-certs
-
-# This script prevents the logging of SSL CA certificates in x509.log
-@load protocols/ssl/log-hostcerts-only
-
-# Uncomment the following line to check each SSL certificate hash against the ICSI
-# certificate notary service; see http://notary.icsi.berkeley.edu .
-# @load protocols/ssl/notary
-
-# If you have libGeoIP support built in, do some geographic detections and
-# logging for SSH traffic.
-@load protocols/ssh/geo-data
-# Detect hosts doing SSH bruteforce attacks.
-@load protocols/ssh/detect-bruteforcing
-# Detect logins using "interesting" hostnames.
-@load protocols/ssh/interesting-hostnames
-
-# Detect SQL injection attacks.
-@load protocols/http/detect-sqli
-
-#### Network File Handling ####
-
-# Enable MD5 and SHA1 hashing for all files.
-@load frameworks/files/hash-all-files
-
-# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
-@load frameworks/files/detect-MHR
-
-# Uncomment the following line to enable detection of the heartbleed attack. Enabling
-# this might impact performance a bit.
-# @load policy/protocols/ssl/heartbleed
-
-# Uncomment the following line to enable logging of connection VLANs. Enabling
-# this adds two VLAN fields to the conn.log file. This may not work properly
-# since we use AF_PACKET and it strips VLAN tags.
-# @load policy/protocols/conn/vlan-logging
-
-# Uncomment the following line to enable logging of link-layer addresses. Enabling
-# this adds the link-layer address for each connection endpoint to the conn.log file.
-# @load policy/protocols/conn/mac-logging
-
-# Uncomment the following line to enable the SMB analyzer. The analyzer
-# is currently considered a preview and therefore not loaded by default.
-@load base/protocols/smb
-
-# BPF Configuration
-@load securityonion/bpfconf
-
-# Add the interface to the log event
-#@load securityonion/add-interface-to-logs.bro
-
-# Add Sensor Name to the conn.log
-#@load securityonion/conn-add-sensorname.bro
-
-# File Extraction
-#@load securityonion/file-extraction
-
-# Intel from Mandiant APT1 Report
-#@load securityonion/apt1
-
-# ShellShock - detects successful exploitation of Bash vulnerability CVE-2014-6271
-#@load securityonion/shellshock
-
-# JA3 - SSL Detection Goodness
-@load policy/ja3
-
-# HASSH
-@load policy/hassh
-
-# You can load your own intel into:
-# /opt/so/saltstack/bro/policy/intel/ on the master
-@load intel
-
-# Load a custom Bro policy
-# /opt/so/saltstack/bro/policy/custom/ on the master
-#@load custom/somebropolicy.bro
-
-# Write logs in JSON
-redef LogAscii::use_json = T;
-redef LogAscii::json_timestamps = JSON::TS_ISO8601;
@@ -1,133 +0,0 @@
-##! Local site policy. Customize as appropriate.
-##!
-##! This file will not be overwritten when upgrading or reinstalling!
-
-# This script logs which scripts were loaded during each run.
-@load misc/loaded-scripts
-
-# Apply the default tuning scripts for common tuning settings.
-@load tuning/defaults
-
-# Estimate and log capture loss.
-@load misc/capture-loss
-
-# Enable logging of memory, packet and lag statistics.
-@load misc/stats
-
-# Load the scan detection script.
-@load misc/scan
-
-# Detect traceroute being run on the network. This could possibly cause
-# performance trouble when there are a lot of traceroutes on your network.
-# Enable cautiously.
-#@load misc/detect-traceroute
-
-# Generate notices when vulnerable versions of software are discovered.
-# The default is to only monitor software found in the address space defined
-# as "local". Refer to the software framework's documentation for more
-# information.
-@load frameworks/software/vulnerable
-
-# Detect software changing (e.g. attacker installing hacked SSHD).
-@load frameworks/software/version-changes
-
-# This adds signatures to detect cleartext forward and reverse windows shells.
-@load-sigs frameworks/signatures/detect-windows-shells
-
-# Load all of the scripts that detect software in various protocols.
-@load protocols/ftp/software
-@load protocols/smtp/software
-@load protocols/ssh/software
-@load protocols/http/software
-# The detect-webapps script could possibly cause performance trouble when
-# running on live traffic. Enable it cautiously.
-#@load protocols/http/detect-webapps
-
-# This script detects DNS results pointing toward your Site::local_nets
-# where the name is not part of your local DNS zone and is being hosted
-# externally. Requires that the Site::local_zones variable is defined.
-@load protocols/dns/detect-external-names
-
-# Script to detect various activity in FTP sessions.
-@load protocols/ftp/detect
-
-# Scripts that do asset tracking.
-@load protocols/conn/known-hosts
-@load protocols/conn/known-services
-@load protocols/ssl/known-certs
-
-# This script enables SSL/TLS certificate validation.
-@load protocols/ssl/validate-certs
-
-# This script prevents the logging of SSL CA certificates in x509.log
-@load protocols/ssl/log-hostcerts-only
-
-# Uncomment the following line to check each SSL certificate hash against the ICSI
-# certificate notary service; see http://notary.icsi.berkeley.edu .
-# @load protocols/ssl/notary
-
-# If you have libGeoIP support built in, do some geographic detections and
-# logging for SSH traffic.
-@load protocols/ssh/geo-data
-# Detect hosts doing SSH bruteforce attacks.
-@load protocols/ssh/detect-bruteforcing
-# Detect logins using "interesting" hostnames.
-@load protocols/ssh/interesting-hostnames
-
-# Detect SQL injection attacks.
-@load protocols/http/detect-sqli
-
-#### Network File Handling ####
-
-# Enable MD5 and SHA1 hashing for all files.
-@load frameworks/files/hash-all-files
-
-# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
-@load frameworks/files/detect-MHR
-
-# Uncomment the following line to enable detection of the heartbleed attack. Enabling
-# this might impact performance a bit.
-# @load policy/protocols/ssl/heartbleed
-
-# Uncomment the following line to enable logging of connection VLANs. Enabling
-# this adds two VLAN fields to the conn.log file. This may not work properly
-# since we use AF_PACKET and it strips VLAN tags.
-# @load policy/protocols/conn/vlan-logging
-
-# Uncomment the following line to enable logging of link-layer addresses. Enabling
-# this adds the link-layer address for each connection endpoint to the conn.log file.
-# @load policy/protocols/conn/mac-logging
-
-# Uncomment the following line to enable the SMB analyzer. The analyzer
-# is currently considered a preview and therefore not loaded by default.
-# @load policy/protocols/smb
-
-# Add the interface to the log event
-#@load securityonion/add-interface-to-logs.bro
-
-# Add Sensor Name to the conn.log
-#@load securityonion/conn-add-sensorname.bro
-
-# File Extraction
-#@load securityonion/file-extraction
-
-# Intel from Mandiant APT1 Report
-#@load securityonion/apt1
-
-# ShellShock - detects successful exploitation of Bash vulnerability CVE-2014-6271
-#@load securityonion/shellshock
-
-# JA3 - SSL Detection Goodness
-@load policy/ja3
-
-# You can load your own intel into:
-# /opt/so/saltstack/bro/policy/intel/ on the master
-@load intel
-
-# Load a custom Bro policy
-# /opt/so/saltstack/bro/policy/custom/ on the master
-#@load custom/somebropolicy.bro
-
-# Use JSON
-redef LogAscii::use_json = T;
-redef LogAscii::json_timestamps = JSON::TS_ISO8601;
@@ -1,47 +0,0 @@
-{%- set interface = salt['pillar.get']('sensor:interface', 'bond0') %}
-
-{%- if salt['pillar.get']('sensor:bro_pins') or salt['pillar.get']('sensor:bro_lbprocs') %}
-{%- if salt['pillar.get']('sensor:bro_proxies') %}
-{%- set proxies = salt['pillar.get']('sensor:bro_proxies', '1') %}
-{%- else %}
-{%- if salt['pillar.get']('sensor:bro_pins') %}
-{%- set proxies = (salt['pillar.get']('sensor:bro_pins')|length/10)|round(0, 'ceil')|int %}
-{%- else %}
-{%- set proxies = (salt['pillar.get']('sensor:bro_lbprocs')/10)|round(0, 'ceil')|int %}
-{%- endif %}
-{%- endif %}
-[manager]
-type=manager
-host=localhost
-
-[logger]
-type=logger
-host=localhost
-
-[proxy]
-type=proxy
-host=localhost
-
-[worker-1]
-type=worker
-host=localhost
-interface=af_packet::{{ interface }}
-lb_method=custom
-
-{%- if salt['pillar.get']('sensor:bro_lbprocs') %}
-lb_procs={{ salt['pillar.get']('sensor:bro_lbprocs', '1') }}
-{%- else %}
-lb_procs={{ salt['pillar.get']('sensor:bro_pins')|length }}
-{%- endif %}
-{%- if salt['pillar.get']('sensor:bro_pins') %}
-pin_cpus={{ salt['pillar.get']('sensor:bro_pins')|join(", ") }}
-{%- endif %}
-af_packet_fanout_id=23
-af_packet_fanout_mode=AF_Packet::FANOUT_HASH
-af_packet_buffer_size=128*1024*1024
-{%- else %}
-[brosa]
-type=standalone
-host=localhost
-interface={{ interface }}
-{%- endif %}
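The worker-sizing math in the deleted node.cfg template is easy to miss in the jinja: lb_procs comes from `bro_lbprocs` or the number of pinned CPUs, and the template allots one Zeek proxy per ten load-balanced processes, rounded up. A small Python sketch of that arithmetic (function and argument names are illustrative):

```python
# Sketch of the deleted node.cfg sizing logic: one proxy per ten lb_procs,
# where lb_procs defaults to the pinned-CPU count when bro_lbprocs is unset.
import math

def zeek_layout(bro_pins=None, bro_lbprocs=None, bro_proxies=None):
    lb_procs = bro_lbprocs if bro_lbprocs else len(bro_pins or [1])
    proxies = bro_proxies if bro_proxies else math.ceil(lb_procs / 10)
    return {"lb_procs": lb_procs, "proxies": proxies}

print(zeek_layout(bro_pins=[0, 1, 2, 3]))   # {'lb_procs': 4, 'proxies': 1}
print(zeek_layout(bro_lbprocs=24))          # {'lb_procs': 24, 'proxies': 3}
```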
@@ -1,206 +0,0 @@
-{% set interface = salt['pillar.get']('sensor:interface', 'bond0') %}
-{% set BPF_ZEEK = salt['pillar.get']('zeek:bpf') %}
-{% set BPF_STATUS = 0 %}
-
-# Bro Salt State
-# Add Bro group
-brogroup:
-  group.present:
-    - name: bro
-    - gid: 937
-
-# Add Bro User
-bro:
-  user.present:
-    - uid: 937
-    - gid: 937
-    - home: /home/bro
-
-# Create some directories
-bropolicydir:
-  file.directory:
-    - name: /opt/so/conf/bro/policy
-    - user: 937
-    - group: 939
-    - makedirs: True
-
-# Bro Log Directory
-brologdir:
-  file.directory:
-    - name: /nsm/bro/logs
-    - user: 937
-    - group: 939
-    - makedirs: True
-
-# Bro Spool Directory
-brospooldir:
-  file.directory:
-    - name: /nsm/bro/spool/manager
-    - user: 937
-    - makedirs: true
-
-# Bro extracted directory
-broextractdir:
-  file.directory:
-    - name: /nsm/bro/extracted
-    - user: 937
-    - group: 939
-    - makedirs: True
-
-brosfafincompletedir:
-  file.directory:
-    - name: /nsm/faf/files/incomplete
-    - user: 937
-    - makedirs: true
-
-brosfafcompletedir:
-  file.directory:
-    - name: /nsm/faf/files/complete
-    - user: 937
-    - makedirs: true
-
-# Sync the policies
-bropolicysync:
-  file.recurse:
-    - name: /opt/so/conf/bro/policy
-    - source: salt://bro/policy
-    - user: 937
-    - group: 939
-    - template: jinja
-
-# Sync node.cfg
-nodecfgsync:
-  file.managed:
-    - name: /opt/so/conf/bro/node.cfg
-    - source: salt://bro/files/node.cfg
-    - user: 937
-    - group: 939
-    - template: jinja
-
-plcronscript:
-  file.managed:
-    - name: /usr/local/bin/packetloss.sh
-    - source: salt://bro/cron/packetloss.sh
-    - mode: 755
-
-zeekcleanscript:
-  file.managed:
-    - name: /usr/local/bin/zeek_clean
-    - source: salt://bro/cron/zeek_clean
-    - mode: 755
-
-/usr/local/bin/zeek_clean:
-  cron.present:
-    - user: root
-    - minute: '*'
-    - hour: '*'
-    - daymonth: '*'
-    - month: '*'
-    - dayweek: '*'
-
-/usr/local/bin/packetloss.sh:
-  cron.present:
-    - user: root
-    - minute: '*/10'
-    - hour: '*'
-    - daymonth: '*'
-    - month: '*'
-    - dayweek: '*'
-
-# BPF compilation and configuration
-{% if BPF_ZEEK %}
-{% set BPF_CALC = salt['cmd.script']('/usr/sbin/so-bpf-compile', interface + ' ' + BPF_ZEEK|join(" ") ) %}
-{% if BPF_CALC['stderr'] == "" %}
-{% set BPF_STATUS = 1 %}
-{% else %}
-zeekbpfcompilationfailure:
-  test.configurable_test_state:
-    - changes: False
-    - result: False
-    - comment: "BPF Syntax Error - Discarding Specified BPF"
-{% endif %}
-{% endif %}
-
-zeekbpf:
-  file.managed:
-    - name: /opt/so/conf/bro/bpf
-    - user: 940
-    - group: 940
-{% if BPF_STATUS %}
-    - contents_pillar: zeek:bpf
-{% else %}
-    - contents:
-      - "ip or not ip"
-{% endif %}
-
-# Sync local.bro
-{% if salt['pillar.get']('static:broversion', '') == 'COMMUNITY' %}
-localbrosync:
-  file.managed:
-    - name: /opt/so/conf/bro/local.bro
-    - source: salt://bro/files/local.bro.community
-    - user: 937
-    - group: 939
-    - template: jinja
-
-so-communitybroimage:
-  cmd.run:
-    - name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-communitybro:HH1.0.3
-
-so-bro:
-  docker_container.running:
-    - require:
-      - so-communitybroimage
-    - image: docker.io/soshybridhunter/so-communitybro:HH1.0.3
-    - privileged: True
-    - binds:
-      - /nsm/bro/logs:/nsm/bro/logs:rw
-      - /nsm/bro/spool:/nsm/bro/spool:rw
-      - /nsm/bro/extracted:/nsm/bro/extracted:rw
-      - /opt/so/conf/bro/local.bro:/opt/bro/share/bro/site/local.bro:ro
-      - /opt/so/conf/bro/node.cfg:/opt/bro/etc/node.cfg:ro
-      - /opt/so/conf/bro/policy/securityonion:/opt/bro/share/bro/policy/securityonion:ro
-      - /opt/so/conf/bro/policy/custom:/opt/bro/share/bro/policy/custom:ro
-      - /opt/so/conf/bro/policy/intel:/opt/bro/share/bro/policy/intel:rw
-    - network_mode: host
-    - watch:
-      - file: /opt/so/conf/bro/local.bro
-      - file: /opt/so/conf/bro/node.cfg
-      - file: /opt/so/conf/bro/policy
-
-{% else %}
-localbrosync:
-  file.managed:
-    - name: /opt/so/conf/bro/local.bro
-    - source: salt://bro/files/local.bro
-    - user: 937
-    - group: 939
-    - template: jinja
-
-so-broimage:
-  cmd.run:
-    - name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-bro:HH1.1.1
-
-so-bro:
-  docker_container.running:
-    - require:
-      - so-broimage
-    - image: docker.io/soshybridhunter/so-bro:HH1.1.1
-    - privileged: True
-    - binds:
-      - /nsm/bro/logs:/nsm/bro/logs:rw
-      - /nsm/bro/spool:/nsm/bro/spool:rw
-      - /nsm/bro/extracted:/nsm/bro/extracted:rw
-      - /opt/so/conf/bro/local.bro:/opt/bro/share/bro/site/local.bro:ro
-      - /opt/so/conf/bro/node.cfg:/opt/bro/etc/node.cfg:ro
-      - /opt/so/conf/bro/bpf:/opt/bro/share/bro/site/bpf:ro
-      - /opt/so/conf/bro/policy/securityonion:/opt/bro/share/bro/policy/securityonion:ro
-      - /opt/so/conf/bro/policy/custom:/opt/bro/share/bro/policy/custom:ro
-      - /opt/so/conf/bro/policy/intel:/opt/bro/share/bro/policy/intel:rw
-    - network_mode: host
-    - watch:
-      - file: /opt/so/conf/bro/local.bro
-      - file: /opt/so/conf/bro/node.cfg
-      - file: /opt/so/conf/bro/policy
-      - file: /opt/so/conf/bro/bpf
-{% endif %}
@@ -1 +0,0 @@
-#Intel
@@ -1,20 +0,0 @@
-{%- set interface = salt['pillar.get']('sensor:interface', '0') %}
-global interface = "{{ interface }}";
-
-event bro_init()
-    {
-    if ( ! reading_live_traffic() )
-        return;
-
-    Log::remove_default_filter(HTTP::LOG);
-    Log::add_filter(HTTP::LOG, [$name = "http-interfaces",
-        $path_func(id: Log::ID, path: string, rec: HTTP::Info) =
-            {
-            local peer = get_event_peer()$descr;
-            if ( peer in Cluster::nodes && Cluster::nodes[peer]?$interface )
-                return cat("http_", Cluster::nodes[peer]$interface);
-            else
-                return "http";
-            }
-        ]);
-    }
@@ -1,9 +0,0 @@
-@load frameworks/intel/seen
-@load frameworks/intel/do_notice
-@load frameworks/files/hash-all-files
-
-redef Intel::read_files += {
-    fmt("%s/apt1-fqdn.dat", @DIR),
-    fmt("%s/apt1-md5.dat", @DIR),
-    fmt("%s/apt1-certs.dat", @DIR)
-};
@@ -1,26 +0,0 @@
-#fields indicator indicator_type meta.source meta.desc meta.do_notice
-b054e26ef827fbbf5829f84a9bdbb697a5b042fc Intel::CERT_HASH Mandiant APT1 Report ALPHA T
-7bc0cc2cf7c3a996c32dbe7e938993f7087105b4 Intel::CERT_HASH Mandiant APT1 Report AOL T
-7855c132af1390413d4e4ff4ead321f8802d8243 Intel::CERT_HASH Mandiant APT1 Report AOL T
-f3e3c590d7126bd227733e9d8313d2575c421243 Intel::CERT_HASH Mandiant APT1 Report AOL T
-d4d4e896ce7d73b573f0a0006080a246aec61fe7 Intel::CERT_HASH Mandiant APT1 Report AOL T
-bcdf4809c1886ac95478bbafde246d0603934298 Intel::CERT_HASH Mandiant APT1 Report AOL T
-6b4855df8afc8d57a671fe5ed628f6d88852a922 Intel::CERT_HASH Mandiant APT1 Report AOL T
-d50fdc82c328319ac60f256d3119b8708cd5717b Intel::CERT_HASH Mandiant APT1 Report AOL T
-70b48d5177eebe9c762e9a37ecabebfd10e1b7e9 Intel::CERT_HASH Mandiant APT1 Report AOL T
-3a6a299b764500ce1b6e58a32a257139d61a3543 Intel::CERT_HASH Mandiant APT1 Report AOL T
-bf4f90e0029b2263af1141963ddf2a0c71a6b5fb Intel::CERT_HASH Mandiant APT1 Report AOL T
-b21139583dec0dae344cca530690ec1f344acc79 Intel::CERT_HASH Mandiant APT1 Report AOL T
-21971ffef58baf6f638df2f7e2cceb4c58b173c8 Intel::CERT_HASH Mandiant APT1 Report EMAIL T
-04ecff66973c92a1c348666d5a4738557cce0cfc Intel::CERT_HASH Mandiant APT1 Report IBM T
-f97d1a703aec44d0f53a3a294e33acda43a49de1 Intel::CERT_HASH Mandiant APT1 Report IBM T
-c0d32301a7c96ecb0bc8e381ec19e6b4eaf5d2fe Intel::CERT_HASH Mandiant APT1 Report IBM T
-1b27a897cda019da2c3a6dc838761871e8bf5b5d Intel::CERT_HASH Mandiant APT1 Report LAME T
-d515996e8696612dc78fc6db39006466fc6550df Intel::CERT_HASH Mandiant APT1 Report MOON-NIGHT T
-8f79315659e59c79f1301ef4aee67b18ae2d9f1c Intel::CERT_HASH Mandiant APT1 Report NONAME T
-a57a84975e31e376e3512da7b05ad06ef6441f53 Intel::CERT_HASH Mandiant APT1 Report NS T
-b3db37a0edde97b3c3c15da5f2d81d27af82f583 Intel::CERT_HASH Mandiant APT1 Report SERVER (PEM) T
-6d8f1454f6392361fb2464b744d4fc09eee5fcfd Intel::CERT_HASH Mandiant APT1 Report SUR T
-b66e230f404b2cc1c033ccacda5d0a14b74a2752 Intel::CERT_HASH Mandiant APT1 Report VIRTUALLYTHERE T
-4acbadb86a91834493dde276736cdf8f7ef5d497 Intel::CERT_HASH Mandiant APT1 Report WEBMAIL T
-86a48093d9b577955c4c9bd19e30536aae5543d4 Intel::CERT_HASH Mandiant APT1 Report YAHOO T
Two file diffs suppressed because they are too large.
@@ -1,106 +0,0 @@
-##! This script is to support the bpf.conf file like other network monitoring tools use.
-##! Please don't try to learn from this script right now, there are a large number of
-##! hacks in it to work around bugs discovered in Bro.
-
-@load base/frameworks/notice
-
-module BPFConf;
-
-export {
-    ## The file that is watched on disk for BPF filter changes.
-    ## Two templated variables are available; "sensorname" and "interface".
-    ## They can be used by surrounding the term by doubled curly braces.
-    const filename = "/opt/bro/share/bro/site/bpf" &redef;
-
-    redef enum Notice::Type += {
-        ## Invalid filter notice.
-        InvalidFilter
-    };
-}
-
-global filter_parts: vector of string = vector();
-global current_filter_filename = "";
-
-type FilterLine: record {
-    s: string;
-};
-
-redef enum PcapFilterID += {
-    BPFConfPcapFilter,
-};
-
-event BPFConf::line(description: Input::EventDescription, tpe: Input::Event, s: string)
-    {
-    local part = sub(s, /[[:blank:]]*#.*$/, "");
-
-    # We don't want any blank parts.
-    if ( part != "" )
-        filter_parts[|filter_parts|] = part;
-    }
-
-event Input::end_of_data(name: string, source:string)
-    {
-    if ( name == "bpfconf" )
-        {
-        local filter = join_string_vec(filter_parts, " ");
-        capture_filters["bpf.conf"] = filter;
-        if ( Pcap::precompile_pcap_filter(BPFConfPcapFilter, filter) )
-            {
-            PacketFilter::install();
-            }
-        else
-            {
-            NOTICE([$note=InvalidFilter,
-                    $msg=fmt("Compiling packet filter from %s failed", filename),
-                    $sub=filter]);
-            }
-
-        filter_parts=vector();
-        }
-    }
-
-function add_filter_file()
-    {
-    local real_filter_filename = BPFConf::filename;
-
-    # Support the interface template value.
-    #if ( SecurityOnion::sensorname != "" )
-    #    real_filter_filename = gsub(real_filter_filename, /\{\{sensorname\}\}/, SecurityOnion::sensorname);
-
-    # Support the interface template value.
-    #if ( SecurityOnion::interface != "" )
-    #    real_filter_filename = gsub(real_filter_filename, /\{\{interface\}\}/, SecurityOnion::interface);
-
-    #if ( /\{\{/ in real_filter_filename )
-    #    {
-    #    return;
-    #    }
-    #else
-    #    Reporter::info(fmt("BPFConf filename set: %s (%s)", real_filter_filename, Cluster::node));
-
-    if ( real_filter_filename != current_filter_filename )
-        {
-        current_filter_filename = real_filter_filename;
-        Input::add_event([$source=real_filter_filename,
-                          $name="bpfconf",
-                          $reader=Input::READER_RAW,
-                          $mode=Input::REREAD,
-                          $want_record=F,
-                          $fields=FilterLine,
-                          $ev=BPFConf::line]);
-        }
-    }
-
-#event SecurityOnion::found_sensorname(name: string)
-#    {
-#    add_filter_file();
-#    }
-
-event bro_init() &priority=5
-    {
-    if ( BPFConf::filename != "" )
-        add_filter_file();
-    }
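The filter-assembly step in the deleted bpfconf.bro is worth calling out: each line of the on-disk bpf file has trailing comments stripped, blank parts are dropped, and the survivors are joined into one BPF expression before compilation. A Python sketch of that same normalization (the sample input is illustrative):

```python
# Sketch of the deleted bpfconf.bro normalization: strip trailing "#"
# comments, drop blank parts, join the rest into a single BPF expression.
import re

def assemble_bpf(lines):
    parts = []
    for line in lines:
        part = re.sub(r"[ \t]*#.*$", "", line).strip()
        if part:
            parts.append(part)
    return " ".join(parts)

print(assemble_bpf(["ip or not ip  # catch-all", "# comment only", ""]))
# -> "ip or not ip"
```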
@@ -1,10 +0,0 @@
-global sensorname = "{{ grains.host }}";
-
-redef record Conn::Info += {
-    sensorname: string &log &optional;
-};
-
-event connection_state_remove(c: connection)
-    {
-    c$conn$sensorname = sensorname;
-    }
@@ -1 +0,0 @@
-@load ./extract
@@ -1,21 +0,0 @@
-global ext_map: table[string] of string = {
-    ["application/x-dosexec"] = "exe",
-    ["text/plain"] = "txt",
-    ["image/jpeg"] = "jpg",
-    ["image/png"] = "png",
-    ["text/html"] = "html",
-} &default ="";
-
-event file_sniff(f: fa_file, meta: fa_metadata)
-    {
-    if ( ! meta?$mime_type || meta$mime_type != "application/x-dosexec" )
-        return;
-
-    local ext = "";
-
-    if ( meta?$mime_type )
-        ext = ext_map[meta$mime_type];
-
-    local fname = fmt("/nsm/bro/extracted/%s-%s.%s", f$source, f$id, ext);
-    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
-    }
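The naming scheme in that deleted extraction script is simple: map the sniffed MIME type to an extension (empty string by default) and build the output path from the file's source and id. A Python sketch of the same mapping (inputs below are illustrative):

```python
# Sketch of the deleted extract.bro naming scheme: MIME type -> extension
# (empty default), then /nsm/bro/extracted/<source>-<id>.<ext>.
ext_map = {
    "application/x-dosexec": "exe",
    "text/plain": "txt",
    "image/jpeg": "jpg",
    "image/png": "png",
    "text/html": "html",
}

def extract_path(source, file_id, mime_type):
    ext = ext_map.get(mime_type, "")
    return f"/nsm/bro/extracted/{source}-{file_id}.{ext}"

print(extract_path("HTTP", "F1abcd", "application/x-dosexec"))
# -> /nsm/bro/extracted/HTTP-F1abcd.exe
```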
@@ -1,3 +0,0 @@
-@load tuning/json-logs
-redef LogAscii::json_timestamps = JSON::TS_ISO8601;
-redef LogAscii::use_json = T;
@@ -13,6 +13,8 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
+{% set IMAGEREPO = salt['pillar.get']('static:imagerepo') %}
+
 # Create the group
 dstatsgroup:
   group.present:
@@ -37,13 +39,13 @@ dstatslogdir:
 
 so-domainstatsimage:
   cmd.run:
-    - name: docker pull --disable-content-trust=false docker.io/soshybridhunter/so-domainstats:HH1.0.3
+    - name: docker pull --disable-content-trust=false docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
 
 so-domainstats:
   docker_container.running:
     - require:
       - so-domainstatsimage
-    - image: docker.io/soshybridhunter/so-domainstats:HH1.0.3
+    - image: docker.io/{{ IMAGEREPO }}/so-domainstats:HH1.0.3
     - hostname: domainstats
     - name: so-domainstats
    - user: domainstats
@@ -1,5 +1,5 @@
-{% set esip = salt['pillar.get']('master:mainip', '') %}
-{% set esport = salt['pillar.get']('master:es_port', '') %}
+{% set esip = salt['pillar.get']('manager:mainip', '') %}
+{% set esport = salt['pillar.get']('manager:es_port', '') %}
 # This is the folder that contains the rule yaml files
 # Any .yaml file will be loaded as a rule
 rules_folder: /opt/elastalert/rules/
@@ -86,3 +86,25 @@ alert_time_limit:
 index_settings:
   shards: 1
   replicas: 0
+
+logging:
+  version: 1
+  incremental: false
+  disable_existing_loggers: false
+  formatters:
+    logline:
+      format: '%(asctime)s %(levelname)+8s %(name)+20s %(message)s'
+
+  handlers:
+    file:
+      class : logging.FileHandler
+      formatter: logline
+      level: INFO
+      filename: /var/log/elastalert/elastalert.log
+
+  loggers:
+    '':
+      level: INFO
+      handlers:
+        - file
+      propagate: false
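The `logging:` block added above appears to follow the Python stdlib `logging.config.dictConfig` schema (version 1, formatters, handlers, loggers). A sketch of the equivalent configuration loaded directly in Python, assuming those same keys; point `filename` somewhere writable to try it outside the container:

```python
# Load the same dictConfig-style configuration the YAML describes:
# a FileHandler at INFO attached to the root logger, no propagation.
import logging
import logging.config

config = {
    "version": 1,
    "incremental": False,
    "disable_existing_loggers": False,
    "formatters": {
        "logline": {"format": "%(asctime)s %(levelname)+8s %(name)+20s %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "formatter": "logline",
            "level": "INFO",
            "filename": "/var/log/elastalert/elastalert.log",  # as in the YAML
        },
    },
    "loggers": {
        "": {"level": "INFO", "handlers": ["file"], "propagate": False},
    },
}

logging.config.dictConfig(config)
logging.getLogger("elastalert").info("logging configured")
```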
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 
-from datetime import date
+from time import gmtime, strftime
 import requests,json
 from elastalert.alerts import Alerter
 
@@ -13,11 +13,12 @@ class PlaybookESAlerter(Alerter):
 
     def alert(self, matches):
         for match in matches:
+            today = strftime("%Y.%m.%d", gmtime())
+            timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S", gmtime())
             headers = {"Content-Type": "application/json"}
-            payload = {"play_title": self.rule['play_title'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"data": match}
-            today = str(date.today())
-            url = f"http://{self.rule['elasticsearch_host']}/playbook-alerts-{today}/_doc/"
+            payload = {"rule.name": self.rule['play_title'],"event.severity": self.rule['event.severity'],"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"event.module": self.rule['event.module'],"event.dataset": self.rule['event.dataset'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"rule.category": self.rule['rule.category'],"data": match, "@timestamp": timestamp}
+            url = f"http://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
             requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
 
     def get_info(self):
         return {'type': 'PlaybookESAlerter'}
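The change above switches the alerter from `date.today()` to UTC `gmtime()` and writes into a daily `so-playbook-alerts-YYYY.MM.DD` index with an explicit `@timestamp`. A minimal standalone sketch of that write path; the host and payload fields here are placeholders, not live values:

```python
# Sketch of the updated write path: gmtime-based daily index name plus an
# explicit @timestamp, posted to Elasticsearch as a new document.
import json
from time import gmtime, strftime

import requests

es_host = "127.0.0.1:9200"                       # placeholder host
today = strftime("%Y.%m.%d", gmtime())
timestamp = strftime("%Y-%m-%d" "T" "%H:%M:%S", gmtime())
payload = {"rule.name": "Example Play", "@timestamp": timestamp}

url = f"http://{es_host}/so-playbook-alerts-{today}/_doc/"
requests.post(url, data=json.dumps(payload),
              headers={"Content-Type": "application/json"}, verify=False)
```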
@@ -1,52 +0,0 @@
-{% set es = salt['pillar.get']('static:masterip', '') %}
-{% set hivehost = salt['pillar.get']('static:masterip', '') %}
-{% set hivekey = salt['pillar.get']('static:hivekey', '') %}
-{% set MASTER = salt['pillar.get']('master:url_base', '') %}
-
-# hive.yaml
-# Elastalert rule to forward IDS alerts from Security Onion to a specified TheHive instance.
-#
-es_host: {{es}}
-es_port: 9200
-name: NIDS-Alert
-type: frequency
-index: "so-ids-*"
-num_events: 1
-timeframe:
-  minutes: 10
-buffer_time:
-  minutes: 10
-allow_buffer_time_overlap: true
-query_key: ["rule.uuid"]
-realert:
-  days: 1
-filter:
-- query:
-    query_string:
-      query: "event.module: suricata"
-
-alert: hivealerter
-
-hive_connection:
-  hive_host: http://{{hivehost}}
-  hive_port: 9000/thehive
-  hive_apikey: {{hivekey}}
-
-hive_proxies:
-  http: ''
-  https: ''
-
-hive_alert_config:
-  title: '{match[rule][name]}'
-  type: 'NIDS'
-  source: 'SecurityOnion'
-  description: "`Hunting Pivot:` \n\n <https://{{MASTER}}/#/hunt?q=event.module%3A%20suricata%20AND%20rule.uuid%3A{match[rule][uuid]}%20%7C%20groupby%20source.ip%20destination.ip%20rule.name> \n\n `Kibana Dashboard - Signature Drilldown:` \n\n <https://{{MASTER}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))> \n\n `Kibana Dashboard - Community_ID:` \n\n <https://{{MASTER}}/kibana/app/kibana#/dashboard/30d0ac90-729f-11ea-8dd2-9d8795a1200b?_g=(filters:!(('$state':(store:globalState),meta:(alias:!n,disabled:!f,index:'*:so-*',key:network.community_id,negate:!f,params:(query:'{match[network][community_id]}'),type:phrase),query:(match_phrase:(network.community_id:'{match[network][community_id]}')))),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
-  severity: 2
-  tags: ['{match[rule][uuid]}','{match[source][ip]}','{match[destination][ip]}']
-  tlp: 3
-  status: 'New'
-  follow: True
-
-hive_observable_data_mapping:
-  - ip: '{match[source][ip]}'
-  - ip: '{match[destination][ip]}'
salt/elastalert/files/rules/so/suricata_thehive.yaml (new file, 51 lines)
@@ -0,0 +1,51 @@
+{% set es = salt['pillar.get']('static:managerip', '') %}
+{% set hivehost = salt['pillar.get']('static:managerip', '') %}
+{% set hivekey = salt['pillar.get']('static:hivekey', '') %}
+{% set MANAGER = salt['pillar.get']('manager:url_base', '') %}
+
+# Elastalert rule to forward Suricata alerts from Security Onion to a specified TheHive instance.
+#
+es_host: {{es}}
+es_port: 9200
+name: Suricata-Alert
+type: frequency
+index: "so-ids-*"
+num_events: 1
+timeframe:
+  minutes: 10
+buffer_time:
+  minutes: 10
+allow_buffer_time_overlap: true
+query_key: ["rule.uuid","source.ip","destination.ip"]
+realert:
+  days: 1
+filter:
+- query:
+    query_string:
+      query: "event.module: suricata AND rule.severity:(1 OR 2)"
+
+alert: hivealerter
+
+hive_connection:
+  hive_host: http://{{hivehost}}
+  hive_port: 9000/thehive
+  hive_apikey: {{hivekey}}
+
+hive_proxies:
+  http: ''
+  https: ''
+
+hive_alert_config:
+  title: '{match[rule][name]}'
+  type: 'NIDS'
+  source: 'SecurityOnion'
+  description: "`SOC Hunt Pivot:` \n\n <https://{{MANAGER}}/#/hunt?q=network.community_id%3A%20%20%22{match[network][community_id]}%22%20%7C%20groupby%20source.ip%20destination.ip,event.module,%20event.dataset> \n\n `Kibana Dashboard Pivot:` \n\n <https://{{MANAGER}}/kibana/app/kibana#/dashboard/30d0ac90-729f-11ea-8dd2-9d8795a1200b?_g=(filters:!(('$state':(store:globalState),meta:(alias:!n,disabled:!f,index:'*:so-*',key:network.community_id,negate:!f,params:(query:'{match[network][community_id]}'),type:phrase),query:(match_phrase:(network.community_id:'{match[network][community_id]}')))),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))> \n\n `IPs: `{match[source][ip]}:{match[source][port]} --> {match[destination][ip]}:{match[destination][port]} \n\n `Signature:`{match[rule][rule]}"
+  severity: 2
+  tags: ['{match[rule][uuid]}','{match[source][ip]}','{match[destination][ip]}']
+  tlp: 3
+  status: 'New'
+  follow: True
+
+hive_observable_data_mapping:
+  - ip: '{match[source][ip]}'
+  - ip: '{match[destination][ip]}'
salt/elastalert/files/rules/so/wazuh_thehive.yaml (new file, 49 lines)
@@ -0,0 +1,49 @@
+{% set es = salt['pillar.get']('static:managerip', '') %}
+{% set hivehost = salt['pillar.get']('static:managerip', '') %}
+{% set hivekey = salt['pillar.get']('static:hivekey', '') %}
+{% set MANAGER = salt['pillar.get']('manager:url_base', '') %}
+
+# Elastalert rule to forward high level Wazuh alerts from Security Onion to a specified TheHive instance.
+#
+es_host: {{es}}
+es_port: 9200
+name: Wazuh-Alert
+type: frequency
+index: "so-ossec-*"
+num_events: 1
+timeframe:
+  minutes: 10
+buffer_time:
+  minutes: 10
+allow_buffer_time_overlap: true
+realert:
+  days: 1
+filter:
+- query:
+    query_string:
+      query: "event.module: ossec AND rule.level>=8"
+
+alert: hivealerter
+
+hive_connection:
+  hive_host: http://{{hivehost}}
+  hive_port: 9000/thehive
+  hive_apikey: {{hivekey}}
+
+hive_proxies:
+  http: ''
+  https: ''
+
+hive_alert_config:
+  title: '{match[rule][name]}'
+  type: 'wazuh'
+  source: 'SecurityOnion'
+  description: "`SOC Hunt Pivot:` \n\n <https://{{MANAGER}}/#/hunt?q=event.module%3A%20ossec%20AND%20rule.id%3A{match[rule][id]}%20%7C%20groupby%20host.name%20rule.name> \n\n `Kibana Dashboard Pivot:` \n\n <https://{{MANAGER}}/kibana/app/kibana#/dashboard/ed6f7e20-e060-11e9-8f0c-2ddbf5ed9290?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-24h,mode:quick,to:now))&_a=(columns:!(_source),index:'*:logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'sid:')),sort:!('@timestamp',desc))>"
+  severity: 2
+  tags: ['{match[rule][id]}','{match[host][name]}']
+  tlp: 3
+  status: 'New'
+  follow: True
+
+hive_observable_data_mapping:
+  - other: '{match[host][name]}'
Some files were not shown because too many files have changed in this diff.