m0duspwnens
2020-05-18 10:20:14 -04:00
42 changed files with 1421 additions and 985 deletions

View File

@@ -1,32 +1,34 @@
-## Hybrid Hunter Beta 1.2.1 - Beta 1
+## Hybrid Hunter Beta 1.3.0 - Beta 2
 ### Changes:
-- Full support for Ubuntu 18.04. 16.04 is no longer supported for Hybrid Hunter.
-- Introduction of the Security Onion Console. Once logged in you are directly taken to the SOC.
-- New authentication using Kratos.
-- During install you must specify how you would like to access the SOC ui. This is for strict cookie security.
-- Ability to list and delete web users from the SOC ui.
-- The soremote account is now used to add nodes to the grid vs using socore.
-- Community ID support for Zeek, osquery, and Suricata. You can now tie host events to connection logs!
-- Elastic 7.6.1 with ECS support.
-- New set of Kibana dashboards that align with ECS.
-- Eval mode no longer uses Logstash for parsing (Filebeat -> ES Ingest)
-- Ingest node parsing for osquery-shipped logs (osquery, WEL, Sysmon).
-- Fleet standalone mode with improved Web UI & API access control.
-- Improved Fleet integration support.
-- Playbook now has full Windows Sigma community ruleset builtin.
-- Automatic Sigma community rule updates.
-- Playbook stability enhancements.
-- Zeek health check. Zeek will now auto restart if a worker crashes.
-- zeekctl is now managed by salt.
-- Grafana dashboard improvements and cleanup.
-- Moved logstash configs to pillars.
-- Salt logs moved to /opt/so/log/salt.
-- Strelka integrated for file-oriented detection/analysis at scale
+- New Feature: Codename: "Onion Hunt". Select Hunt from the menu and start hunting down your adversaries!
+- Improved ECS support.
+- Complete refactor of the setup to make it easier to follow.
+- Improved setup script logging to better assist with any issues.
+- Setup now checks for minimal requirements during install.
+- Updated Cyberchef to version 9.20.3.
+- Updated Elastalert to version 0.2.4 and switched to Alpine to reduce container size.
+- Updated Redis to 5.0.9 and switched to Alpine to reduce container size.
+- Updated Salt to 2019.2.5.
+- Updated Grafana to 6.7.3.
+- Updated Zeek to 3.0.6.
+- Updated Suricata to 4.1.8.
+- Fixed so-status to display the correct containers and status.
+- local.zeek is now controlled by a pillar instead of modifying the file directly.
+- Renamed so-core to so-nginx and switched to Alpine to reduce container size.
+- Playbook now uses MySQL instead of SQLite.
+- Sigma rules have all been updated.
+- Kibana dashboard improvements for ECS.
+- Fixed an issue where geoip was not properly parsed.
+- ATT&CK Navigator is now its own state.
+- Standalone mode is now supported.
+- Mastersearch previously used the same Grafana dashboard as a Search node; it now has its own dashboard that incorporates panels from the Master node and Search node dashboards.
-### Known issues:
+### Known Issues:
+- The Hunt feature is currently considered "Preview"; although it is very useful in its current state, not everything works. We wanted to get this out as soon as possible to get your feedback. Let us know what you want to see, and what you think we should call it!
+- You cannot pivot to PCAP from Suricata alerts in Kibana or Hunt.
 - Updating users via the SOC ui is known to fail. To change a user, delete the user and re-add them.
 - Due to the move to ECS, the current Playbook plays may not alert correctly at this time.
 - The osquery MacOS package does not install correctly.

View File

@@ -0,0 +1,5 @@
healthcheck:
  enabled: False
  schedule: 300
  checks:
    - zeek
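A quick way to confirm this new pillar renders for a given minion (a sketch; run on the minion itself):

sudo salt-call pillar.get healthcheck
# Expected output mirrors the YAML above: enabled: False, schedule: 300, checks: [zeek]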

View File

@@ -2,7 +2,7 @@ base:
   '*':
     - patch.needs_restarting
-  '*_eval or *_helix or *_heavynode or *_sensor':
+  '*_eval or *_helix or *_heavynode or *_sensor or *_standalone':
     - match: compound
     - zeek
@@ -40,6 +40,18 @@ base:
     - healthcheck.eval
     - minions.{{ grains.id }}
+  '*_standalone':
+    - logstash
+    - logstash.master
+    - logstash.search
+    - firewall.*
+    - data.*
+    - brologs
+    - secrets
+    - healthcheck.standalone
+    - static
+    - minions.{{ grains.id }}
   '*_node':
     - static
     - firewall.*
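To confirm a standalone minion actually picks up the new block, Salt can print the top-file match for that minion (a sketch; run on a *_standalone minion):

sudo salt-call state.show_top
# The returned base: list should include logstash, firewall.*, brologs,
# healthcheck.standalone, and the minion-specific minions.<id> entry.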

View File

@@ -18,7 +18,7 @@
 }
 },grain='id', merge=salt['pillar.get']('docker')) %}
-{% if role == 'eval' %}
+{% if role in ['eval', 'mastersearch', 'master', 'standalone'] %}
 {{ append_containers('master', 'grafana', 0) }}
 {{ append_containers('static', 'fleet_master', 0) }}
 {{ append_containers('master', 'wazuh', 0) }}
@@ -28,30 +28,10 @@
 {{ append_containers('master', 'domainstats', 0) }}
 {% endif %}
-{% if role == 'heavynode' %}
+{% if role in ['heavynode', 'standalone'] %}
 {{ append_containers('static', 'broversion', 'SURICATA') }}
 {% endif %}
-{% if role == 'mastersearch' %}
-{{ append_containers('master', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_master', 0) }}
-{{ append_containers('master', 'wazuh', 0) }}
-{{ append_containers('master', 'thehive', 0) }}
-{{ append_containers('master', 'playbook', 0) }}
-{{ append_containers('master', 'freq', 0) }}
-{{ append_containers('master', 'domainstats', 0) }}
-{% endif %}
-{% if role == 'master' %}
-{{ append_containers('master', 'grafana', 0) }}
-{{ append_containers('static', 'fleet_master', 0) }}
-{{ append_containers('master', 'wazuh', 0) }}
-{{ append_containers('master', 'thehive', 0) }}
-{{ append_containers('master', 'playbook', 0) }}
-{{ append_containers('master', 'freq', 0) }}
-{{ append_containers('master', 'domainstats', 0) }}
-{% endif %}
 {% if role == 'searchnode' %}
 {{ append_containers('master', 'wazuh', 0) }}
 {% endif %}

View File

@@ -0,0 +1,21 @@
{% set docker = {
  'containers': [
    'so-nginx',
    'so-telegraf',
    'so-soc',
    'so-kratos',
    'so-aptcacherng',
    'so-idstools',
    'so-redis',
    'so-logstash',
    'so-elasticsearch',
    'so-curator',
    'so-kibana',
    'so-elastalert',
    'so-filebeat',
    'so-suricata',
    'so-steno',
    'so-dockerregistry',
    'so-soctopus'
  ]
} %}
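Since so-status was fixed in this release to display the correct containers, the running set on a standalone box can be compared against this list with (a sketch):

sudo docker ps --format '{{.Names}}' | sort
# Each name printed should appear in the standalone 'containers' list above.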

salt/common/tools/sbin/so-kibana-config-export Normal file → Executable file
View File

@@ -1,6 +1,35 @@
 #!/bin/bash
-KIBANA_HOST=10.66.166.141
+#
+# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
+#
+# Copyright 2014,2015,2016,2017,2018,2019,2020 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+KIBANA_HOST={{ MASTER }}
 KSO_PORT=5601
-OUTFILE="saved_objects.json"
+OUTFILE="saved_objects.ndjson"
-curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": "index-pattern", "type": "config", "type": "dashboard", "type": "query", "type": "search", "type": "url", "type": "visualization" }' -o $OUTFILE
+curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST $KIBANA_HOST:$KSO_PORT/api/saved_objects/_export -d '{ "type": [ "index-pattern", "config", "visualization", "dashboard", "search" ], "excludeExportDetails": false }' > $OUTFILE
+# Clean up using PLACEHOLDER
+sed -i "s/$KIBANA_HOST/PLACEHOLDER/g" $OUTFILE
+# Clean up for Fleet, if applicable
+# {% if FLEET_NODE or FLEET_MASTER %}
+# Fleet IP
+sed -i "s/{{ FLEET_IP }}/FLEETPLACEHOLDER/g" $OUTFILE
+# {% endif %}
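The Kibana 7.x `_export` endpoint streams NDJSON, one saved object per line, which is why OUTFILE's extension changed. A quick sanity check of an export (a sketch, assuming jq is installed and Kibana listens on localhost:5601):

curl -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -XPOST 'localhost:5601/api/saved_objects/_export' \
  -d '{ "type": [ "dashboard" ], "excludeExportDetails": true }' \
  | jq -r '.type' | sort | uniq -c
# Counts each exported object type; every line of the NDJSON is standalone JSON.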

View File

@@ -1,12 +1,8 @@
-{% if grains['role'] == 'so-node' %}
-{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
-{% elif grains['role'] == 'so-eval' %}
-{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
-{%- endif %}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set cur_close_days = salt['pillar.get']('node:cur_close_days', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set cur_close_days = salt['pillar.get']('master:cur_close_days', '') -%}
+{%- endif -%}
 ---
 # Remember, leave a key empty if there is no value. None will be a string,

View File

@@ -1,11 +1,7 @@
-{% if grains['role'] == 'so-node' %}
-{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
-{% elif grains['role'] == 'so-eval' %}
-{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set log_size_limit = salt['pillar.get']('node:log_size_limit', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set log_size_limit = salt['pillar.get']('master:log_size_limit', '') -%}
 {%- endif %}
 ---
 # Remember, leave a key empty if there is no value. None will be a string,

View File

@@ -1,17 +1,13 @@
-{% if grains['role'] == 'so-node' %}
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
-{% elif grains['role'] == 'so-eval' %}
-{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
-{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
-{%- endif %}
+{%- if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('node:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('node:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('node:log_size_limit', '') -%}
+{%- elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set ELASTICSEARCH_HOST = salt['pillar.get']('master:mainip', '') -%}
+{%- set ELASTICSEARCH_PORT = salt['pillar.get']('master:es_port', '') -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('master:log_size_limit', '') -%}
+{%- endif -%}
 #!/bin/bash
 #

View File

@@ -1,11 +1,7 @@
-{% if grains['role'] == 'so-node' %}
-{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
-{% elif grains['role'] == 'so-eval' %}
-{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
+{% if grains['role'] in ['so-node', 'so-searchnode', 'so-heavynode'] %}
+{%- set elasticsearch = salt['pillar.get']('node:mainip', '') -%}
+{% elif grains['role'] in ['so-eval', 'so-mastersearch', 'so-standalone'] %}
+{%- set elasticsearch = salt['pillar.get']('master:mainip', '') -%}
 {%- endif %}
 ---

View File

@@ -1,6 +1,6 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
-{% if grains['role'] == 'so-node' or grains['role'] == 'so-eval' %}
+{% if grains['role'] in ['so-searchnode', 'so-eval', 'so-node', 'so-mastersearch', 'so-heavynode', 'so-standalone'] %}
 # Curator
 # Create the group
 curatorgroup:

View File

@@ -14,24 +14,13 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
-{% if grains['role'] == 'so-master' %}
-{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-{% set esip = salt['pillar.get']('master:mainip', '') %}
-{% set esport = salt['pillar.get']('master:es_port', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
-{% set esip = salt['pillar.get']('master:mainip', '') %}
-{% set esport = salt['pillar.get']('master:es_port', '') %}
+{% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+{% set esalert = salt['pillar.get']('master:elastalert', '1') %}
+{% set esip = salt['pillar.get']('master:mainip', '') %}
+{% set esport = salt['pillar.get']('master:es_port', '') %}
 {% elif grains['role'] == 'so-node' %}
 {% set esalert = salt['pillar.get']('node:elastalert', '0') %}
 {% endif %}
 # Elastalert

View File

@@ -40,10 +40,10 @@
{ "rename": { "field": "category", "target_field": "event.category", "ignore_missing": true } }, { "rename": { "field": "category", "target_field": "event.category", "ignore_missing": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } }, { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ {
"remove": { "remove": {
"field": [ "index_name_prefix", "message2"], "field": [ "index_name_prefix", "message2", "type" ],
"ignore_failure": false "ignore_failure": true
} }
} }
] ]
} }
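Processor changes like this can be exercised without touching live data via Elasticsearch's pipeline simulate API. A minimal sketch (assuming Elasticsearch answers on localhost:9200):

curl -s -XPOST 'localhost:9200/_ingest/pipeline/_simulate?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "pipeline": {
    "processors": [
      { "remove": { "field": [ "index_name_prefix", "message2", "type" ], "ignore_failure": true } }
    ]
  },
  "docs": [
    { "_source": { "index_name_prefix": "so-zeek-", "message2": { "ts": 0 }, "kept": "yes" } }
  ]
}'
# With "ignore_failure": true the pipeline no longer aborts when "type" is absent from a document.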

View File

@@ -24,8 +24,14 @@
{ "rename": { "field": "message3.columns.pid", "target_field": "process.pid", "ignore_missing": true } }, { "rename": { "field": "message3.columns.pid", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.parent", "target_field": "process.ppid", "ignore_missing": true } }, { "rename": { "field": "message3.columns.parent", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } }, { "rename": { "field": "message3.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.local_address", "target_field": "local.ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.local_port", "target_field": "local.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.remote_address", "target_field": "remote.ip", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.remote_port", "target_field": "remote.port", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.process_name", "target_field": "process.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } }, { "rename": { "field": "message3.columns.eventid", "target_field": "event.code", "ignore_missing": true } },
{ "set": { "if": "ctx.message3.columns.data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } }, { "set": { "if": "ctx.message3.columns.?data != null", "field": "dataset", "value": "wel-{{message3.columns.source}}", "override": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.SubjectUserName", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.destinationHostname", "target_field": "destination.hostname", "ignore_missing": true } },
{ "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } }, { "rename": { "field": "message3.columns.winlog.EventData.destinationIp", "target_field": "destination.ip", "ignore_missing": true } },

View File

@@ -12,9 +12,9 @@
{ "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } }, { "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } }, { "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
{ "set": { "field": "client.ip", "value": "{{source.ip}}" } }, { "set": { "field": "client.ip", "value": "{{source.ip}}" } },
{ "set": { "if": "ctx.source.port != null", "field": "client.port", "value": "{{source.port}}" } }, { "set": { "if": "ctx.source?.port != null", "field": "client.port", "value": "{{source.port}}" } },
{ "set": { "field": "server.ip", "value": "{{destination.ip}}" } }, { "set": { "field": "server.ip", "value": "{{destination.ip}}" } },
{ "set": { "if": "ctx.destination.port != null", "field": "server.port", "value": "{{destination.port}}" } }, { "set": { "if": "ctx.destination?.port != null", "field": "server.port", "value": "{{destination.port}}" } },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" } }, { "set": { "field": "observer.name", "value": "{{agent.name}}" } },
{ "date": { "field": "message2.ts", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "ignore_failure": true } }, { "date": { "field": "message2.ts", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "ignore_failure": true } },
{ "remove": { "field": ["agent"], "ignore_failure": true } }, { "remove": { "field": ["agent"], "ignore_failure": true } },

View File

@@ -21,6 +21,20 @@
{ "rename": { "field": "message2.orig_cc", "target_field": "client.country_code","ignore_missing": true } }, { "rename": { "field": "message2.orig_cc", "target_field": "client.country_code","ignore_missing": true } },
{ "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } }, { "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } },
{ "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } }, { "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.network.bytes = (ctx.client.bytes + ctx.server.bytes)", "ignore_failure": true } },
{ "set": { "if": "ctx.connection.state == 'S0'", "field": "connection.state_description", "value": "Connection attempt seen, no reply" } },
{ "set": { "if": "ctx.connection.state == 'S1'", "field": "connection.state_description", "value": "Connection established, not terminated" } },
{ "set": { "if": "ctx.connection.state == 'S2'", "field": "connection.state_description", "value": "Connection established and close attempt by originator seen (but no reply from responder)" } },
{ "set": { "if": "ctx.connection.state == 'S3'", "field": "connection.state_description", "value": "Connection established and close attempt by responder seen (but no reply from originator)" } },
{ "set": { "if": "ctx.connection.state == 'SF'", "field": "connection.state_description", "value": "Normal SYN/FIN completion" } },
{ "set": { "if": "ctx.connection.state == 'REJ'", "field": "connection.state_description", "value": "Connection attempt rejected" } },
{ "set": { "if": "ctx.connection.state == 'RSTO'", "field": "connection.state_description", "value": "Connection established, originator aborted (sent a RST)" } },
{ "set": { "if": "ctx.connection.state == 'RSTR'", "field": "connection.state_description", "value": "Established, responder aborted" } },
{ "set": { "if": "ctx.connection.state == 'RSTOS0'","field": "connection.state_description", "value": "Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder" } },
{ "set": { "if": "ctx.connection.state == 'RSTRH'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator" } },
{ "set": { "if": "ctx.connection.state == 'SH'", "field": "connection.state_description", "value": "Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was 'half' open)" } },
{ "set": { "if": "ctx.connection.state == 'SHR'", "field": "connection.state_description", "value": "Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator" } },
{ "set": { "if": "ctx.connection.state == 'OTH'", "field": "connection.state_description", "value": "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)" } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }

View File

@@ -4,9 +4,9 @@
{ "remove": { "field": ["host"], "ignore_failure": true } }, { "remove": { "field": ["host"], "ignore_failure": true } },
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } }, { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } }, { "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } },
{ "rename": { "field": "message2.named_pipe", "target_field": "named_pipe", "ignore_missing": true } }, { "rename": { "field": "message2.named_pipe", "target_field": "dce_rpc.named_pipe", "ignore_missing": true } },
{ "rename": { "field": "message2.endpoint", "target_field": "endpoint", "ignore_missing": true } }, { "rename": { "field": "message2.endpoint", "target_field": "dce_rpc.endpoint", "ignore_missing": true } },
{ "rename": { "field": "message2.operation", "target_field": "operation", "ignore_missing": true } }, { "rename": { "field": "message2.operation", "target_field": "dce_rpc.operation", "ignore_missing": true } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }

View File

@@ -15,7 +15,7 @@
{ "rename": { "field": "message2.domain", "target_field": "host.domain", "ignore_missing": true } }, { "rename": { "field": "message2.domain", "target_field": "host.domain", "ignore_missing": true } },
{ "rename": { "field": "message2.host_name", "target_field": "host.hostname", "ignore_missing": true } }, { "rename": { "field": "message2.host_name", "target_field": "host.hostname", "ignore_missing": true } },
{ "rename": { "field": "message2.duration", "target_field": "event.duration", "ignore_missing": true } }, { "rename": { "field": "message2.duration", "target_field": "event.duration", "ignore_missing": true } },
{ "rename": { "field": "message2.msg_types", "target_field": "message_types", "ignore_missing": true } }, { "rename": { "field": "message2.msg_types", "target_field": "dhcp.message_types", "ignore_missing": true } },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }

View File

@@ -23,6 +23,7 @@
{ "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } }, { "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
{ "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } }, { "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } }, { "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },
{ "pipeline": { "if": "ctx.dns.query.type_name != 'NB' && ctx.dns.query.type_name != 'TKEY' && ctx.dns.query.type_name != 'NBSTAT' && ctx.dns.query.type_name != 'PTR'", "name": "zeek.dns.tld"} },
{ "pipeline": { "name": "zeek.common" } } { "pipeline": { "name": "zeek.common" } }
] ]
} }

View File

@@ -0,0 +1,13 @@
{
  "description" : "zeek.dns.tld",
  "processors" : [
    { "script": { "lang": "painless", "source": "ctx.dns.top_level_domain = ctx.dns.query.name.substring(ctx.dns.query.name.lastIndexOf('.') + 1)", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.query_without_tld = ctx.dns.query.name.substring(0, (ctx.dns.query.name.lastIndexOf('.')))", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.parent_domain = ctx.dns.query_without_tld.substring(ctx.dns.query_without_tld.lastIndexOf('.') + 1)", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.subdomain = ctx.dns.query_without_tld.substring(0, (ctx.dns.query_without_tld.lastIndexOf('.')))", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.highest_registered_domain = ctx.dns.parent_domain + '.' + ctx.dns.top_level_domain", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.subdomain_length = ctx.dns.subdomain.length()", "ignore_failure": true } },
    { "script": { "lang": "painless", "source": "ctx.dns.parent_domain_length = ctx.dns.parent_domain.length()", "ignore_failure": true } },
    { "remove": { "field": "dns.query_without_tld", "ignore_failure": true } }
  ]
}
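These scripts split on the last dot only, so for mail.google.com they yield top_level_domain=com, parent_domain=google, subdomain=mail, and highest_registered_domain=google.com (no public-suffix handling, so multi-part suffixes such as co.uk are split naively). A sketch of verifying the loaded pipeline by name, assuming it is registered as zeek.dns.tld on localhost:9200:

curl -s -XPOST 'localhost:9200/_ingest/pipeline/zeek.dns.tld/_simulate?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "docs": [
    { "_source": { "dns": { "query": { "name": "mail.google.com" } } } }
  ]
}'
# Expect dns.top_level_domain=com, dns.parent_domain=google, dns.subdomain=mail,
# and dns.highest_registered_domain=google.com in the simulated _source.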

View File

@@ -15,27 +15,19 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% if FEATURES %}
 {% set FEATURES = "-features" %}
 {% else %}
 {% set FEATURES = '' %}
 {% endif %}
-{% if grains['role'] == 'so-master' %}
-{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
-{% set esheap = salt['pillar.get']('master:esheap', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
-{% set esheap = salt['pillar.get']('master:esheap', '') %}
-{% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
-{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
-{% set esheap = salt['pillar.get']('node:esheap', '') %}
+{% if grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+{% set esclustername = salt['pillar.get']('master:esclustername', '') %}
+{% set esheap = salt['pillar.get']('master:esheap', '') %}
+{% elif grains['role'] in ['so-node','so-heavynode'] %}
+{% set esclustername = salt['pillar.get']('node:esclustername', '') %}
+{% set esheap = salt['pillar.get']('node:esheap', '') %}
 {% endif %}
 vm.max_map_count:
@@ -149,7 +141,7 @@ so-elasticsearch-pipelines:
   - file: esyml
   - file: so-elasticsearch-pipelines-file
-{% if grains['role'] == 'so-master' or grains['role'] == "so-eval" or grains['role'] == "so-mastersearch" %}
+{% if grains['role'] in ['so-master', 'so-eval', 'so-mastersearch', 'so-standalone'] %}
 so-elasticsearch-templates:
   cmd.run:
     - name: /usr/sbin/so-elasticsearch-templates

View File

@@ -74,7 +74,7 @@ filebeat.modules:
 # List of prospectors to fetch data.
 filebeat.inputs:
 #------------------------------ Log prospector --------------------------------
-{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" %}
+{%- if grains['role'] == 'so-sensor' or grains['role'] == "so-eval" or grains['role'] == "so-helix" or grains['role'] == "so-heavynode" or grains['role'] == "so-standalone" %}
 {%- if BROVER != 'SURICATA' %}
 {%- for LOGNAME in salt['pillar.get']('brologs:enabled', '') %}
 - type: log

View File

@@ -1,15 +1,16 @@
 # Firewall Magic for the grid
-{%- if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch'] %}
-{%- set ip = salt['pillar.get']('static:masterip', '') %}
-{%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
-{%- set ip = salt['pillar.get']('node:mainip', '') %}
-{%- elif grains['role'] == 'so-sensor' %}
-{%- set ip = salt['pillar.get']('sensor:mainip', '') %}
-{%- elif grains['role'] == 'so-fleet' %}
-{%- set ip = salt['pillar.get']('node:mainip', '') %}
-{%- endif %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
-{%- set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %}
+{% if grains['role'] in ['so-eval','so-master','so-helix','so-mastersearch', 'so-standalone'] %}
+{% set ip = salt['pillar.get']('static:masterip', '') %}
+{% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
+{% set ip = salt['pillar.get']('node:mainip', '') %}
+{% elif grains['role'] == 'so-sensor' %}
+{% set ip = salt['pillar.get']('sensor:mainip', '') %}
+{% elif grains['role'] == 'so-fleet' %}
+{% set ip = salt['pillar.get']('node:mainip', '') %}
+{% endif %}
+
+{% set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
+{% set FLEET_NODE_IP = salt['pillar.get']('static:fleet_ip') %}
 # Quick Fix for Docker being difficult
 iptables_fix_docker:
@@ -136,7 +137,7 @@ enable_wazuh_manager_1514_udp_{{ip}}:
     - save: True
 # Rules if you are a Master
-{% if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-helix' or grains['role'] == 'so-mastersearch' %}
+{% if grains['role'] in ['so-master', 'so-eval', 'so-helix', 'so-mastersearch', 'so-standalone'] %}
 #This should be more granular
 iptables_allow_master_docker:
   iptables.insert:
@@ -364,6 +365,17 @@ enable_minion_osquery_8080_{{ip}}:
     - position: 1
     - save: True
+enable_minion_osquery_8090_{{ip}}:
+  iptables.insert:
+    - table: filter
+    - chain: DOCKER-USER
+    - jump: ACCEPT
+    - proto: tcp
+    - source: {{ ip }}
+    - dport: 8090
+    - position: 1
+    - save: True
 enable_minion_wazuh_55000_{{ip}}:
   iptables.insert:
     - table: filter
@@ -827,4 +839,4 @@ enable_fleetnode_8090_{{ip}}:
 {% endfor %}
 {% endif %}
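For reference, the new enable_minion_osquery_8090 state renders to roughly the following iptables invocation (a sketch; 10.1.2.3 stands in for a minion's {{ ip }}):

iptables -t filter -I DOCKER-USER 1 -p tcp -s 10.1.2.3 --dport 8090 -j ACCEPT
# 'save: True' additionally persists the ruleset so the rule survives a reboot.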

View File

@@ -31,7 +31,7 @@ docker exec so-fleet fleetctl apply -f /packs/hh/osquery.conf
 # Enable Fleet
 echo "Enabling Fleet..."
 salt-call state.apply fleet.event_enable-fleet queue=True >> /root/fleet-setup.log
-salt-call state.apply common queue=True >> /root/fleet-setup.log
+salt-call state.apply nginx queue=True >> /root/fleet-setup.log
 # Generate osquery install packages
 echo "Generating osquery install packages - this will take some time..."
@@ -42,7 +42,7 @@ echo "Installing launcher via salt..."
 salt-call state.apply fleet.install_package queue=True >> /root/fleet-setup.log
 salt-call state.apply filebeat queue=True >> /root/fleet-setup.log
 docker stop so-nginx
-salt-call state.apply common queue=True >> /root/fleet-setup.log
+salt-call state.apply nginx queue=True >> /root/fleet-setup.log
 echo "Fleet Setup Complete - Login here: https://{{ MAIN_HOSTNAME }}"
 echo "Your username is $2 and your password is $initpw"

View File

@@ -2,7 +2,7 @@
 {% set MASTER = salt['grains.get']('master') %}
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval'] and GRAFANA == 1 %}
+{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
 # Grafana all the things
 grafanadir:

View File

@@ -3,7 +3,7 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
-{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval'] and GRAFANA == 1 %}
+{% if grains['role'] in ['so-master', 'so-mastersearch', 'so-eval', 'so-standalone'] and GRAFANA == 1 %}
 # Influx DB
 influxconfdir:

View File

@@ -1,25 +1,21 @@
 #!/bin/bash
-{%- set MASTER = salt['pillar.get']('static:masterip', '') %}
-{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
-{%- set FLEET = salt['pillar.get']('static:fleet_ip', '') %}
-{%- set KRATOS = salt['pillar.get']('kratos:redirect', '') %}
+# {%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master', False) -%}
+# {%- set FLEET_NODE = salt['pillar.get']('static:fleet_node', False) -%}
+# {%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', '') %}
+# {%- set MASTER = salt['pillar.get']('master:url_base', '') %}
 KIBANA_VERSION="7.6.1"
 # Copy template file
 cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_objects.ndjson
+# {% if FLEET_NODE or FLEET_MASTER %}
+# Fleet IP
+sed -i "s/FLEETPLACEHOLDER/{{ FLEET_IP }}/g" /opt/so/conf/kibana/saved_objects.ndjson
+# {% endif %}
 # SOCtopus and Master
 sed -i "s/PLACEHOLDER/{{ MASTER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-{% if FLEET_NODE %}
-# Fleet IP
-sed -i "s/FLEETPLACEHOLDER/{{ FLEET }}/g" /opt/so/conf/kibana/saved_objects.ndjson
-{% endif %}
-# Kratos redirect
-sed -i "s/PCAPPLACEHOLDER/{{ KRATOS }}/g" /opt/so/conf/kibana/saved_objects.ndjson
 # Load saved objects
 curl -X POST "localhost:5601/api/saved_objects/_import" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson > /dev/null 2>&1

File diff suppressed because one or more lines are too long

View File

@@ -15,6 +15,7 @@
 {% set VERSION = salt['pillar.get']('static:soversion', 'HH1.2.2') %}
 {% set MASTER = salt['grains.get']('master') %}
 {% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% if FEATURES %}
 {% set FEATURES = "-features" %}
 {% else %}
@@ -23,35 +24,21 @@
 # Logstash Section - Decide which pillar to use
 {% if grains['role'] == 'so-sensor' %}
 {% set lsheap = salt['pillar.get']('sensor:lsheap', '') %}
 {% set lsaccessip = salt['pillar.get']('sensor:lsaccessip', '') %}
 {% elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
 {% set lsheap = salt['pillar.get']('node:lsheap', '') %}
 {% set nodetype = salt['pillar.get']('node:node_type', 'storage') %}
-{% elif grains['role'] == 'so-master' %}
-{% set lsheap = salt['pillar.get']('master:lsheap', '') %}
-{% set freq = salt['pillar.get']('master:freq', '0') %}
-{% set dstats = salt['pillar.get']('master:domainstats', '0') %}
-{% set nodetype = salt['grains.get']('role', '') %}
+{% elif grains['role'] in ['so-eval','so-mastersearch', 'so-master', 'so-standalone'] %}
+{% set lsheap = salt['pillar.get']('master:lsheap', '') %}
+{% set freq = salt['pillar.get']('master:freq', '0') %}
+{% set dstats = salt['pillar.get']('master:domainstats', '0') %}
+{% set nodetype = salt['grains.get']('role', '') %}
 {% elif grains['role'] == 'so-helix' %}
 {% set lsheap = salt['pillar.get']('master:lsheap', '') %}
 {% set freq = salt['pillar.get']('master:freq', '0') %}
 {% set dstats = salt['pillar.get']('master:domainstats', '0') %}
 {% set nodetype = salt['grains.get']('role', '') %}
-{% elif grains['role'] in ['so-eval','so-mastersearch'] %}
-{% set lsheap = salt['pillar.get']('master:lsheap', '') %}
-{% set freq = salt['pillar.get']('master:freq', '0') %}
-{% set dstats = salt['pillar.get']('master:domainstats', '0') %}
-{% set nodetype = salt['grains.get']('role', '') %}
 {% endif %}
 {% set PIPELINES = salt['pillar.get']('logstash:pipelines', {}) %}

View File

@@ -5,7 +5,7 @@ input {
     ssl_certificate_authorities => ["/usr/share/filebeat/ca.crt"]
     ssl_certificate => "/usr/share/logstash/filebeat.crt"
     ssl_key => "/usr/share/logstash/filebeat.key"
-    tags => [ "beat" ]
+    #tags => [ "beat" ]
   }
 }
 filter {

View File

@@ -0,0 +1,325 @@
{%- set masterip = salt['pillar.get']('master:mainip', '') %}
{%- set FLEET_MASTER = salt['pillar.get']('static:fleet_master') %}
{%- set FLEET_NODE = salt['pillar.get']('static:fleet_node') %}
{%- set FLEET_IP = salt['pillar.get']('static:fleet_ip', None) %}
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 1024M;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    #server {
    #    listen 80 default_server;
    #    listen [::]:80 default_server;
    #    server_name _;
    #    root /opt/socore/html;
    #    index index.html;
    #    # Load configuration files for the default server block.
    #    #include /etc/nginx/default.d/*.conf;
    #    location / {
    #    }
    #    error_page 404 /404.html;
    #    location = /40x.html {
    #    }
    #    error_page 500 502 503 504 /50x.html;
    #    location = /50x.html {
    #    }
    #}

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

{% if FLEET_MASTER %}
    server {
        listen 8090 ssl http2 default_server;
        server_name _;
        root /opt/socore/html;
        index blank.html;
        ssl_certificate "/etc/pki/nginx/server.crt";
        ssl_certificate_key "/etc/pki/nginx/server.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
        location ~ ^/kolide.agent.Api/(RequestEnrollment|RequestConfig|RequestQueries|PublishLogs|PublishResults|CheckHealth)$ {
            grpc_pass grpcs://{{ masterip }}:8080;
            grpc_set_header Host $host;
            grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering off;
        }
    }
{% endif %}

    # Settings for a TLS enabled server.
    server {
        listen 443 ssl http2 default_server;
        #listen [::]:443 ssl http2 default_server;
        server_name _;
        root /opt/socore/html;
        index index.html;
        ssl_certificate "/etc/pki/nginx/server.crt";
        ssl_certificate_key "/etc/pki/nginx/server.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        location ~* (^/login/|^/js/.*|^/css/.*|^/images/.*) {
            proxy_pass http://{{ masterip }}:9822;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
        location / {
            auth_request /auth/sessions/whoami;
            proxy_pass http://{{ masterip }}:9822/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
        location ~ ^/auth/.*?(whoami|login|logout) {
            rewrite /auth/(.*) /$1 break;
            proxy_pass http://{{ masterip }}:4433;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /cyberchef/ {
            auth_request /auth/sessions/whoami;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /cyberchef {
            rewrite ^ /cyberchef/ permanent;
        }
        location /packages/ {
            try_files $uri =206;
            auth_request /auth/sessions/whoami;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /grafana/ {
            rewrite /grafana/(.*) /$1 break;
            proxy_pass http://{{ masterip }}:3000/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /kibana/ {
            auth_request /auth/sessions/whoami;
            rewrite /kibana/(.*) /$1 break;
            proxy_pass http://{{ masterip }}:5601/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /nodered/ {
            proxy_pass http://{{ masterip }}:1880/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Proxy "";
        }
        location /playbook/ {
            proxy_pass http://{{ masterip }}:3200/playbook/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /navigator/ {
            auth_request /auth/sessions/whoami;
            proxy_pass http://{{ masterip }}:4200/navigator/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
{%- if FLEET_NODE %}
        location /fleet/ {
            return 301 https://{{ FLEET_IP }}/fleet;
        }
{%- else %}
        location /fleet/ {
            proxy_pass https://{{ masterip }}:8080;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
{%- endif %}
        location /thehive/ {
            proxy_pass http://{{ masterip }}:9000/thehive/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_http_version 1.1; # this is essential for chunked responses to work
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /cortex/ {
            proxy_pass http://{{ masterip }}:9001/cortex/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_http_version 1.1; # this is essential for chunked responses to work
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /soctopus/ {
            proxy_pass http://{{ masterip }}:7000/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        location /kibana/app/soc/ {
            rewrite ^/kibana/app/soc/(.*) /soc/$1 permanent;
        }
        location /kibana/app/fleet/ {
            rewrite ^/kibana/app/fleet/(.*) /fleet/$1 permanent;
        }
        location /kibana/app/soctopus/ {
            rewrite ^/kibana/app/soctopus/(.*) /soctopus/$1 permanent;
        }
        location /sensoroniagents/ {
            proxy_pass http://{{ masterip }}:9822/;
            proxy_read_timeout 90;
            proxy_connect_timeout 90;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Proxy "";
        }
        error_page 401 = @error401;
        location @error401 {
            add_header Set-Cookie "AUTH_REDIRECT=$request_uri;Path=/;Max-Age=14400";
            return 302 /auth/self-service/browser/flows/login;
        }
        #error_page 404 /404.html;
        #location = /40x.html {
        #}
        error_page 500 502 503 504 /50x.html;
        location = /usr/share/nginx/html/50x.html {
        }
    }
}
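The auth_request /auth/sessions/whoami directives gate the protected locations on a Kratos session. The effective behavior can be probed from a client (a sketch; so-master.example is a hypothetical SOC hostname, and ory_kratos_session is Kratos's default session cookie name):

# Without a session cookie the whoami subrequest returns 401, so nginx answers
# with the 302 redirect defined in the @error401 handler:
curl -skI 'https://so-master.example/kibana/' | grep -i '^location'
# With a valid session cookie the subrequest succeeds and the request is
# proxied through to Kibana on port 5601:
curl -sk -b "ory_kratos_session=$SESSION" 'https://so-master.example/kibana/'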

View File

@@ -1,4 +1,4 @@
-{%- set WEBACCESS = salt['pillar.get']('kratos:redirect', '') -%}
+{%- set WEBACCESS = salt['pillar.get']('master:url_base', '') -%}
 {%- set KRATOSKEY = salt['pillar.get']('kratos:kratoskey', '') -%}
 selfservice:

View File

@@ -1,29 +1,31 @@
 {
-  "title": "Introducing Hybrid Hunter 1.2.1 Beta 1",
+  "title": "Introducing Hybrid Hunter 1.3.0 Beta 2",
   "changes": [
-    { "summary": "Full support for Ubuntu 18.04. 16.04 is no longer supported for Hybrid Hunter." },
-    { "summary": "Introduction of the Security Onion Console. Once logged in you are directly taken to the SOC." },
-    { "summary": "New authentication using Kratos." },
-    { "summary": "During install you must specify how you would like to access the SOC ui. This is for strict cookie security." },
-    { "summary": "Ability to list and delete web users from the SOC ui." },
-    { "summary": "The soremote account is now used to add nodes to the grid vs using socore." },
-    { "summary": "Community ID support for Zeek, osquery, and Suricata. You can now tie host events to connection logs!" },
-    { "summary": "Elastic 7.6.1 with ECS support." },
-    { "summary": "New set of Kibana dashboards that align with ECS." },
-    { "summary": "Eval mode no longer uses Logstash for parsing (Filebeat -> ES Ingest)" },
-    { "summary": "Ingest node parsing for osquery-shipped logs (osquery, WEL, Sysmon)." },
-    { "summary": "Fleet standalone mode with improved Web UI & API access control." },
-    { "summary": "Improved Fleet integration support." },
-    { "summary": "Playbook now has full Windows Sigma community ruleset builtin." },
-    { "summary": "Automatic Sigma community rule updates." },
-    { "summary": "Playbook stability enhancements." },
-    { "summary": "Zeek health check. Zeek will now auto restart if a worker crashes." },
-    { "summary": "zeekctl is now managed by salt." },
-    { "summary": "Grafana dashboard improvements and cleanup." },
-    { "summary": "Moved logstash configs to pillars." },
-    { "summary": "Salt logs moved to /opt/so/log/salt." },
-    { "summary": "Strelka integrated for file-oriented detection/analysis at scale" },
-    { "summary": "KNOWN ISSUE: Updating users via the SOC ui is known to fail. To change a user, delete the user and re-add them." },
+    { "summary": "New Feature: Codename: \"Onion Hunt\". Select Hunt from the menu and start hunting down your adversaries!" },
+    { "summary": "Improved ECS support." },
+    { "summary": "Complete refactor of the setup to make it easier to follow." },
+    { "summary": "Improved setup script logging to better assist with any issues." },
+    { "summary": "Setup now checks for minimal requirements during install." },
+    { "summary": "Updated Cyberchef to version 9.20.3." },
+    { "summary": "Updated Elastalert to version 0.2.4 and switched to Alpine to reduce container size." },
+    { "summary": "Updated Redis to 5.0.9 and switched to Alpine to reduce container size." },
+    { "summary": "Updated Salt to 2019.2.5." },
+    { "summary": "Updated Grafana to 6.7.3." },
+    { "summary": "Updated Zeek to 3.0.6." },
+    { "summary": "Updated Suricata to 4.1.8." },
+    { "summary": "Fixed so-status to display the correct containers and status." },
+    { "summary": "local.zeek is now controlled by a pillar instead of modifying the file directly." },
+    { "summary": "Renamed so-core to so-nginx and switched to Alpine to reduce container size." },
+    { "summary": "Playbook now uses MySQL instead of SQLite." },
+    { "summary": "Sigma rules have all been updated." },
+    { "summary": "Kibana dashboard improvements for ECS." },
+    { "summary": "Fixed an issue where geoip was not properly parsed." },
+    { "summary": "ATT&CK Navigator is now its own state." },
+    { "summary": "Standalone mode is now supported." },
+    { "summary": "Mastersearch previously used the same Grafana dashboard as a Search node; it now has its own dashboard that incorporates panels from the Master node and Search node dashboards." },
+    { "summary": "KNOWN ISSUE: The Hunt feature is currently considered \"Preview\"; although it is very useful in its current state, not everything works. We wanted to get this out as soon as possible to get your feedback. Let us know what you want to see, and what you think we should call it!" },
+    { "summary": "KNOWN ISSUE: You cannot pivot to PCAP from Suricata alerts in Kibana or Hunt." },
+    { "summary": "KNOWN ISSUE: Updating users via the SOC ui is known to fail. To change a user, delete the user and re-add them." },
     { "summary": "KNOWN ISSUE: Due to the move to ECS, the current Playbook plays may not alert correctly at this time." },
     { "summary": "KNOWN ISSUE: The osquery MacOS package does not install correctly." }
   ]

View File

@@ -88,7 +88,7 @@
{ "name": "Alerts", "description": "Show all alerts grouped by alert source", "query": "event.dataset: alert | groupby event.module"}, { "name": "Alerts", "description": "Show all alerts grouped by alert source", "query": "event.dataset: alert | groupby event.module"},
{ "name": "NIDS Alerts", "description": "Show all NIDS alerts grouped by alert name", "query": "event.category: network AND event.dataset: alert | groupby rule.name"}, { "name": "NIDS Alerts", "description": "Show all NIDS alerts grouped by alert name", "query": "event.category: network AND event.dataset: alert | groupby rule.name"},
{ "name": "Wazuh/OSSEC Alerts", "description": "Show all Wazuh alerts grouped by category", "query": "event.module:ossec AND event.dataset:alert | groupby rule.category"}, { "name": "Wazuh/OSSEC Alerts", "description": "Show all Wazuh alerts grouped by category", "query": "event.module:ossec AND event.dataset:alert | groupby rule.category"},
{ "name": "Wazuh/OSSEC Commands", "description": "Show all Wazuh alerts grouped by command line", "query": "eevent.module:ossec AND event.dataset:alert | groupby process.command_line"}, { "name": "Wazuh/OSSEC Commands", "description": "Show all Wazuh alerts grouped by command line", "query": "event.module:ossec AND event.dataset:alert | groupby process.command_line"},
{ "name": "Wazuh/OSSEC Processes", "description": "Show all Wazuh alerts grouped by process name", "query": "event.module:ossec AND event.dataset:alert | groupby process.name"}, { "name": "Wazuh/OSSEC Processes", "description": "Show all Wazuh alerts grouped by process name", "query": "event.module:ossec AND event.dataset:alert | groupby process.name"},
{ "name": "Wazuh/OSSEC Users", "description": "Show all Wazuh alerts grouped by username", "query": "event.module:ossec AND event.dataset:alert | groupby user.name"}, { "name": "Wazuh/OSSEC Users", "description": "Show all Wazuh alerts grouped by username", "query": "event.module:ossec AND event.dataset:alert | groupby user.name"},
{ "name": "Sysmon Events", "description": "Show all Sysmon logs grouped by event_id", "query": "event_type:sysmon | groupby event_id"}, { "name": "Sysmon Events", "description": "Show all Sysmon logs grouped by event_id", "query": "event_type:sysmon | groupby event_id"},
@@ -99,15 +99,15 @@
{ "name": "Connections", "description": "Connections grouped by destination Geo", "query": "event.module:zeek AND event.dataset:conn | groupby destination.geo.country_name"}, { "name": "Connections", "description": "Connections grouped by destination Geo", "query": "event.module:zeek AND event.dataset:conn | groupby destination.geo.country_name"},
{ "name": "Connections", "description": "Connections grouped by source Geo", "query": "event.module:zeek AND event.dataset:conn | groupby source.geo.country_name"}, { "name": "Connections", "description": "Connections grouped by source Geo", "query": "event.module:zeek AND event.dataset:conn | groupby source.geo.country_name"},
{ "name": "DCE_RPC", "description": "DCE_RPC grouped by operation", "query": "event.module:zeek AND event.dataset:dce_rpc | groupby operation"}, { "name": "DCE_RPC", "description": "DCE_RPC grouped by operation", "query": "event.module:zeek AND event.dataset:dce_rpc | groupby operation"},
{ "name": "DHCP", "description": "DHCP leases", "query": "event.module:zeek AND event.dataset:dhcp | groupby host.hostname,host.domain,destination.ip"}, { "name": "DHCP", "description": "DHCP leases", "query": "event.module:zeek AND event.dataset:dhcp | groupby host.hostname host.domain dhcp.requested_address"},
{ "name": "DHCP", "description": "DHCP grouped by message type", "query": "event.module:zeek AND event.dataset:dhcp | groupby message_types"}, { "name": "DHCP", "description": "DHCP grouped by message type", "query": "event.module:zeek AND event.dataset:dhcp | groupby message_types"},
{ "name": "DNP3", "description": "DNP3 grouped by reply", "query": "event.module:zeek AND event.dataset:dnp3 | groupby dnp3.fc_reply"}, { "name": "DNP3", "description": "DNP3 grouped by reply", "query": "event.module:zeek AND event.dataset:dnp3 | groupby dnp3.fc_reply"},
{ "name": "DNS", "description": "DNS queries grouped by port ", "query": "event.module:zeek AND event.dataset:dns | groupby dns.query.name,destination.port"}, { "name": "DNS", "description": "DNS queries grouped by port ", "query": "event.module:zeek AND event.dataset:dns | groupby dns.query.name,destination.port"},
{ "name": "DNS", "description": "DNS queries grouped by type", "query": "event.module:zeek AND event.dataset:dns | groupby dns.query.type_name,destination.port"}, { "name": "DNS", "description": "DNS queries grouped by type", "query": "event.module:zeek AND event.dataset:dns | groupby dns.query.type_name,destination.port"},
{ "name": "DNS", "description": "DNS highest registered domain", "query": "event.module:zeek AND event.dataset:dns | groupby highest_registered_domain"}, { "name": "DNS", "description": "DNS highest registered domain", "query": "event.module:zeek AND event.dataset:dns | groupby dns.highest_registered_domain.keyword"},
{ "name": "DNS", "description": "DNS grouped by parent domain", "query": "event.module:zeek AND event.dataset:dns | groupby parent_domain"}, { "name": "DNS", "description": "DNS grouped by parent domain", "query": "event.module:zeek AND event.dataset:dns | groupby dns.parent_domain.keyword"},
{ "name": "Files", "description": "Files grouped by mimetype", "query": "event.module:zeek AND event.dataset:files | groupby file.mime_type source.ip"}, { "name": "Files", "description": "Files grouped by mimetype", "query": "event.module:zeek AND event.dataset:files | groupby file.mime_type source.ip"},
{ "name": "FTP", "description": "FTP grouped by argument", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp_argument"}, { "name": "FTP", "description": "FTP grouped by argument", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp.argument"},
{ "name": "FTP", "description": "FTP grouped by command", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp.command"}, { "name": "FTP", "description": "FTP grouped by command", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp.command"},
{ "name": "FTP", "description": "FTP grouped by username", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp.user"}, { "name": "FTP", "description": "FTP grouped by username", "query": "event.module:zeek AND event.dataset:ftp | groupby ftp.user"},
{ "name": "HTTP", "description": "HTTP grouped by destination port", "query": "event.module:zeek AND event.dataset:http | groupby destination.port"}, { "name": "HTTP", "description": "HTTP grouped by destination port", "query": "event.module:zeek AND event.dataset:http | groupby destination.port"},

View File

@@ -5,7 +5,7 @@
{% set global_ca_text = [] %} {% set global_ca_text = [] %}
{% set global_ca_server = [] %} {% set global_ca_server = [] %}
{% if 'master' in grains.id.split('_')|last or 'eval' in grains.id.split('_')|last %} {% if grains.id.split('_')|last in ['master', 'eval', 'standalone'] %}
{% set trusttheca_text = salt['mine.get'](grains.id, 'x509.get_pem_entries')[grains.id]['/etc/pki/ca.crt']|replace('\n', '') %} {% set trusttheca_text = salt['mine.get'](grains.id, 'x509.get_pem_entries')[grains.id]['/etc/pki/ca.crt']|replace('\n', '') %}
{% set ca_server = grains.id %} {% set ca_server = grains.id %}
{% else %} {% else %}
@@ -50,7 +50,7 @@ m2cryptopkgs:
bits: 4096 bits: 4096
backup: True backup: True
{% if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-helix' or grains['role'] == 'so-mastersearch' %} {% if grains['role'] in ['so-master', 'so-eval', 'so-helix', 'so-mastersearch', 'so-standalone'] %}
# Request a cert and drop it where it needs to go to be distributed # Request a cert and drop it where it needs to go to be distributed
/etc/pki/filebeat.crt: /etc/pki/filebeat.crt:
@@ -142,7 +142,7 @@ fbcrtlink:
backup: True backup: True
{% endif %} {% endif %}
{% if grains['role'] == 'so-sensor' or grains['role'] == 'so-master' or grains['role'] == 'so-node' or grains['role'] == 'so-eval' or grains['role'] == 'so-helix' or grains['role'] == 'so-mastersearch' or grains['role'] == 'so-heavynode' or grains['role'] == 'so-fleet' %} {% if grains['role'] in ['so-sensor', 'so-master', 'so-node', 'so-eval', 'so-helix', 'so-mastersearch', 'so-heavynode', 'so-fleet', 'so-standalone'] %}
fbcertdir: fbcertdir:
file.directory: file.directory:
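
Both hunks above replace chains of `or` comparisons with a single membership test against a role list, which is also where the new standalone role is registered. The suffix logic that Jinja performs with `grains.id.split('_')|last` can be mirrored in shell for a quick sanity check on a minion (the minion id value below is illustrative):

    # Take the substring after the last underscore and test it against
    # the roles that should act as the CA server.
    minion_id="securityonion_standalone"   # illustrative id
    suffix="${minion_id##*_}"
    case "$suffix" in
      master|eval|standalone) echo "acts as CA server" ;;
      *) echo "requests the CA from the master" ;;
    esac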

View File

@@ -156,6 +156,63 @@ base:
- domainstats - domainstats
{%- endif %} {%- endif %}
'*_standalone':
- ca
- ssl
- registry
- master
- common
- nginx
- telegraf
- influxdb
- grafana
- soc
- firewall
- idstools
- healthcheck
- redis
{%- if FLEETMASTER or FLEETNODE or PLAYBOOK != 0 %}
- mysql
{%- endif %}
{%- if WAZUH != 0 %}
- wazuh
{%- endif %}
- elasticsearch
- logstash
- kibana
- pcap
- suricata
- zeek
{%- if STRELKA %}
- strelka
{%- endif %}
- filebeat
- curator
- elastalert
{%- if FLEETMASTER or FLEETNODE %}
- fleet
- redis
- fleet.install_package
{%- endif %}
- utility
- schedule
- soctopus
{%- if THEHIVE != 0 %}
- hive
{%- endif %}
{%- if PLAYBOOK != 0 %}
- playbook
{%- endif %}
{%- if NAVIGATOR != 0 %}
- navigator
{%- endif %}
{%- if FREQSERVER != 0 %}
- freqserver
{%- endif %}
{%- if DOMAINSTATS != 0 %}
- domainstats
{%- endif %}
# Search node logic # Search node logic
'*_node and I@node:node_type:parser': '*_node and I@node:node_type:parser':

View File

@@ -38,9 +38,3 @@ echo "Applying cross cluster search config..."
curl -XPUT http://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SNDATA.ip }}:9300"]}}}}}' curl -XPUT http://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SNDATA.ip }}:9300"]}}}}}'
{%- endfor %} {%- endfor %}
{%- endif %} {%- endif %}
{%- if salt['pillar.get']('mastersearchtab', {}) %}
{%- for SN, SNDATA in salt['pillar.get']('mastersearchtab', {}).items() %}
curl -XPUT http://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SNDATA.ip }}:9300"]}}}}}'
{%- endfor %}
{%- endif %}
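
With the duplicated `mastersearchtab` loop removed, the remaining loop still registers each search node as a remote cluster seed. Whether the persistent setting took effect can be verified against the `_remote/info` endpoint (substitute the same host the template renders for `{{ ES }}`; localhost here is illustrative):

    # List configured remote clusters and their connection state.
    curl -s http://localhost:9200/_remote/info?pretty
    # The persistent setting itself is visible here as well:
    curl -s http://localhost:9200/_cluster/settings?pretty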

View File

@@ -1,9 +1,9 @@
{%- if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-mastersearch' %} {%- if grains['role'] in ['so-master', 'so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set ip = salt['pillar.get']('static:masterip', '') %} {%- set ip = salt['pillar.get']('static:masterip', '') %}
{%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %} {%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
{%- set ip = salt['pillar.get']('node:mainip', '') %} {%- set ip = salt['pillar.get']('node:mainip', '') %}
{%- elif grains['role'] == 'so-sensor' %} {%- elif grains['role'] == 'so-sensor' %}
{%- set ip = salt['pillar.get']('sensor:mainip', '') %} {%- set ip = salt['pillar.get']('sensor:mainip', '') %}
{%- endif %} {%- endif %}
<!-- <!--
Wazuh - Agent - Default configuration for ubuntu 16.04 Wazuh - Agent - Default configuration for ubuntu 16.04

View File

@@ -1,9 +1,9 @@
{%- if grains['role'] == 'so-master' or grains['role'] == 'so-eval' or grains['role'] == 'so-mastersearch' %} {%- if grains['role'] in ['so-master', 'so-eval', 'so-mastersearch', 'so-standalone'] %}
{%- set ip = salt['pillar.get']('static:masterip', '') %} {%- set ip = salt['pillar.get']('static:masterip', '') %}
{%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %} {%- elif grains['role'] == 'so-node' or grains['role'] == 'so-heavynode' %}
{%- set ip = salt['pillar.get']('node:mainip', '') %} {%- set ip = salt['pillar.get']('node:mainip', '') %}
{%- elif grains['role'] == 'so-sensor' %} {%- elif grains['role'] == 'so-sensor' %}
{%- set ip = salt['pillar.get']('sensor:mainip', '') %} {%- set ip = salt['pillar.get']('sensor:mainip', '') %}
{%- endif %} {%- endif %}
#!/bin/bash #!/bin/bash
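
Both templates above select the bind IP from a different pillar key per role, with the new standalone role reusing the master's key. The resolved values can be inspected on any minion using the key names shown in the Jinja:

    # Key a master/eval/mastersearch/standalone minion would render:
    salt-call pillar.get static:masterip
    # Sensors and nodes read their own keys instead:
    salt-call pillar.get sensor:mainip
    salt-call pillar.get node:mainip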

View File

@@ -321,7 +321,7 @@ configure_minion() {
'helix') 'helix')
echo "master: $HOSTNAME" >> "$minion_config" echo "master: $HOSTNAME" >> "$minion_config"
;; ;;
'master' | 'eval' | 'mastersearch') 'master' | 'eval' | 'mastersearch' | 'standalone')
printf '%s\n'\ printf '%s\n'\
"master: $HOSTNAME"\ "master: $HOSTNAME"\
"mysql.host: '$MAINIP'"\ "mysql.host: '$MAINIP'"\
@@ -408,7 +408,7 @@ copy_master_config() {
copy_minion_tmp_files() { copy_minion_tmp_files() {
case "$install_type" in case "$install_type" in
'MASTER' | 'EVAL' | 'HELIXSENSOR' | 'MASTERSEARCH') 'MASTER' | 'EVAL' | 'HELIXSENSOR' | 'MASTERSEARCH' | 'STANDALONE')
echo "Copying pillar and salt files in $temp_install_dir to /opt/so/saltstack" echo "Copying pillar and salt files in $temp_install_dir to /opt/so/saltstack"
cp -Rv "$temp_install_dir"/pillar/ /opt/so/saltstack/ >> "$setup_log" 2>&1 cp -Rv "$temp_install_dir"/pillar/ /opt/so/saltstack/ >> "$setup_log" 2>&1
if [ -d "$temp_install_dir"/salt ] ; then if [ -d "$temp_install_dir"/salt ] ; then
@@ -686,8 +686,7 @@ docker_seed_registry() {
} >> "$setup_log" 2>&1 } >> "$setup_log" 2>&1
done done
else else
cd /nsm/docker-registry/docker tar xvf /nsm/docker-registry/docker/registry.tar -C /nsm/docker-registry/docker >> "$setup_log" 2>&1
tar xvf /nsm/docker-registry/docker/registry.tar >> "$setup_log" 2>&1
rm /nsm/docker-registry/docker/registry.tar >> "$setup_log" 2>&1 rm /nsm/docker-registry/docker/registry.tar >> "$setup_log" 2>&1
fi fi
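
The registry-seeding change above swaps a `cd` followed by a relative extraction for a single `tar ... -C` call, so the script's working directory no longer changes as a side effect. A minimal sketch of the difference:

    # Before: extraction lands wherever the current directory happens to be.
    cd /nsm/docker-registry/docker && tar xvf registry.tar

    # After: -C tells tar where to extract, leaving $PWD untouched.
    tar xvf /nsm/docker-registry/docker/registry.tar -C /nsm/docker-registry/docker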
@@ -767,7 +766,7 @@ got_root() {
get_minion_type() { get_minion_type() {
local minion_type local minion_type
case "$install_type" in case "$install_type" in
'EVAL' | 'MASTERSEARCH' | 'MASTER' | 'SENSOR' | 'HEAVYNODE' | 'FLEET') 'EVAL' | 'MASTERSEARCH' | 'MASTER' | 'SENSOR' | 'HEAVYNODE' | 'FLEET' | 'STANDALONE')
minion_type=$(echo "$install_type" | tr '[:upper:]' '[:lower:]') minion_type=$(echo "$install_type" | tr '[:upper:]' '[:lower:]')
;; ;;
'HELIXSENSOR') 'HELIXSENSOR')
@@ -803,7 +802,7 @@ master_pillar() {
" freq: 0"\ " freq: 0"\
" domainstats: 0" >> "$pillar_file" " domainstats: 0" >> "$pillar_file"
if [ "$install_type" = 'EVAL' ] || [ "$install_type" = 'HELIXSENSOR' ] || [ "$install_type" = 'MASTERSEARCH' ]; then if [ "$install_type" = 'EVAL' ] || [ "$install_type" = 'HELIXSENSOR' ] || [ "$install_type" = 'MASTERSEARCH' ] || [ "$install_type" = 'STANDALONE' ]; then
printf '%s\n'\ printf '%s\n'\
" ls_pipeline_batch_size: 125"\ " ls_pipeline_batch_size: 125"\
" ls_input_threads: 1"\ " ls_input_threads: 1"\
@@ -811,6 +810,18 @@ master_pillar() {
" mtu: $MTU" >> "$pillar_file" " mtu: $MTU" >> "$pillar_file"
fi fi
case $REDIRECTINFO in
'IP')
REDIRECTIT="$MAINIP"
;;
'HOSTNAME')
REDIRECTIT=$HOSTNAME
;;
*)
REDIRECTIT="$REDIRECTHOST"
;;
esac
printf '%s\n'\ printf '%s\n'\
" lsheap: $LS_HEAP_SIZE"\ " lsheap: $LS_HEAP_SIZE"\
" lsaccessip: 127.0.0.1"\ " lsaccessip: 127.0.0.1"\
@@ -826,24 +837,12 @@ master_pillar() {
" thehive: $THEHIVE"\ " thehive: $THEHIVE"\
" playbook: $PLAYBOOK"\ " playbook: $PLAYBOOK"\
" navigator: $NAVIGATOR"\ " navigator: $NAVIGATOR"\
" url_base: $REDIRECTIT"\
""\ ""\
"kratos:" >> "$pillar_file" "kratos:" >> "$pillar_file"
case $REDIRECTINFO in
'IP')
REDIRECTIT="$MAINIP"
;;
'HOSTNAME')
REDIRECTIT=$HOSTNAME
;;
*)
REDIRECTIT="$REDIRECTHOST"
;;
esac
printf '%s\n'\ printf '%s\n'\
" kratoskey: $KRATOSKEY"\ " kratoskey: $KRATOSKEY"\
" redirect: $REDIRECTIT"\
"" >> "$pillar_file" "" >> "$pillar_file"
printf '%s\n' '----' >> "$setup_log" 2>&1 printf '%s\n' '----' >> "$setup_log" 2>&1
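
With `url_base: $REDIRECTIT` now written in the main pillar printf (and `redirect` dropped from the kratos section), the `case $REDIRECTINFO` assignment has to run before that printf: the shell expands `$REDIRECTIT` at the moment printf executes, not when the pillar is later read. A stripped-down sketch of why the ordering matters:

    # Expansion happens at use time, so ordering is everything:
    printf 'url_base: %s\n' "$REDIRECTIT"   # would print an empty value
    REDIRECTIT="so-master.example.com"      # illustrative hostname
    printf 'url_base: %s\n' "$REDIRECTIT"   # prints the hostname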
@@ -1022,7 +1021,7 @@ saltify() {
set_progress_str 6 'Installing various dependencies' set_progress_str 6 'Installing various dependencies'
yum -y install wget nmap-ncat >> "$setup_log" 2>&1 yum -y install wget nmap-ncat >> "$setup_log" 2>&1
case "$install_type" in case "$install_type" in
'MASTER' | 'EVAL' | 'MASTERSEARCH' | 'FLEET' | 'HELIXSENSOR') 'MASTER' | 'EVAL' | 'MASTERSEARCH' | 'FLEET' | 'HELIXSENSOR' | 'STANDALONE')
reserve_group_ids >> "$setup_log" 2>&1 reserve_group_ids >> "$setup_log" 2>&1
yum -y install epel-release >> "$setup_log" 2>&1 yum -y install epel-release >> "$setup_log" 2>&1
yum -y install sqlite argon2 curl mariadb-devel >> "$setup_log" 2>&1 yum -y install sqlite argon2 curl mariadb-devel >> "$setup_log" 2>&1
@@ -1093,7 +1092,7 @@ saltify() {
'FLEET') 'FLEET')
if [ "$OSVER" != 'xenial' ]; then apt-get -y install python3-mysqldb >> "$setup_log" 2>&1; else apt-get -y install python-mysqldb >> "$setup_log" 2>&1; fi if [ "$OSVER" != 'xenial' ]; then apt-get -y install python3-mysqldb >> "$setup_log" 2>&1; else apt-get -y install python-mysqldb >> "$setup_log" 2>&1; fi
;; ;;
'MASTER' | 'EVAL' | 'MASTERSEARCH') # TODO: should this also be HELIXSENSOR? 'MASTER' | 'EVAL' | 'MASTERSEARCH' | 'STANDALONE') # TODO: should this also be HELIXSENSOR?
if [ "$OSVER" != "xenial" ]; then local py_ver_url_path="/py3"; else local py_ver_url_path="/apt"; fi if [ "$OSVER" != "xenial" ]; then local py_ver_url_path="/py3"; else local py_ver_url_path="/apt"; fi
# Add saltstack repo(s) # Add saltstack repo(s)
@@ -1151,7 +1150,7 @@ saltify() {
salt_checkin() { salt_checkin() {
case "$install_type" in case "$install_type" in
'MASTER' | 'EVAL' | 'HELIXSENSOR' | 'MASTERSEARCH') # Fix Mine usage 'MASTER' | 'EVAL' | 'HELIXSENSOR' | 'MASTERSEARCH' | 'STANDALONE') # Fix Mine usage
{ {
echo "Building Certificate Authority"; echo "Building Certificate Authority";
salt-call state.apply ca; salt-call state.apply ca;
@@ -1282,7 +1281,7 @@ set_hostname() {
set_hostname_iso set_hostname_iso
if [[ ! $install_type =~ ^(MASTER|EVAL|HELIXSENSOR|MASTERSEARCH)$ ]]; then if [[ ! $install_type =~ ^(MASTER|EVAL|HELIXSENSOR|MASTERSEARCH|STANDALONE)$ ]]; then
if ! getent hosts "$MSRV"; then if ! getent hosts "$MSRV"; then
echo "$MSRVIP $MSRV" >> /etc/hosts echo "$MSRVIP $MSRV" >> /etc/hosts
fi fi
@@ -1384,7 +1383,7 @@ set_management_interface() {
set_node_type() { set_node_type() {
case "$install_type" in case "$install_type" in
'SEARCHNODE' | 'EVAL' | 'MASTERSEARCH' | 'HEAVYNODE') 'SEARCHNODE' | 'EVAL' | 'MASTERSEARCH' | 'HEAVYNODE' | 'STANDALONE')
NODETYPE='search' NODETYPE='search'
;; ;;
'PARSINGNODE') 'PARSINGNODE')
@@ -1450,7 +1449,7 @@ ls_heapsize() {
fi fi
case "$install_type" in case "$install_type" in
'MASTERSEARCH' | 'HEAVYNODE' | 'HELIXSENSOR') 'MASTERSEARCH' | 'HEAVYNODE' | 'HELIXSENSOR' | 'STANDALONE')
LS_HEAP_SIZE='1000m' LS_HEAP_SIZE='1000m'
;; ;;
'EVAL') 'EVAL')
@@ -1462,7 +1461,7 @@ ls_heapsize() {
esac esac
export LS_HEAP_SIZE export LS_HEAP_SIZE
if [[ "$install_type" =~ ^(EVAL|MASTERSEARCH)$ ]]; then if [[ "$install_type" =~ ^(EVAL|MASTERSEARCH|STANDALONE)$ ]]; then
NODE_LS_HEAP_SIZE=LS_HEAP_SIZE NODE_LS_HEAP_SIZE=LS_HEAP_SIZE
export NODE_LS_HEAP_SIZE export NODE_LS_HEAP_SIZE
fi fi
@@ -1484,7 +1483,7 @@ es_heapsize() {
fi fi
export ES_HEAP_SIZE export ES_HEAP_SIZE
if [[ "$install_type" =~ ^(EVAL|MASTERSEARCH)$ ]]; then if [[ "$install_type" =~ ^(EVAL|MASTERSEARCH|STANDALONE)$ ]]; then
NODE_ES_HEAP_SIZE=ES_HEAP_SIZE NODE_ES_HEAP_SIZE=ES_HEAP_SIZE
export NODE_ES_HEAP_SIZE export NODE_ES_HEAP_SIZE
fi fi

View File

@@ -116,13 +116,7 @@ case "$setup_type" in
whiptail_management_interface_dns_search whiptail_management_interface_dns_search
fi fi
# Init networking so rest of install works
set_hostname_iso
set_management_interface
collect_adminuser_inputs collect_adminuser_inputs
add_admin_user
disable_onion_user
;; ;;
'network') 'network')
whiptail_network_notice whiptail_network_notice
@@ -247,6 +241,15 @@ fi
whiptail_make_changes whiptail_make_changes
if [[ "$setup_type" == 'iso' ]]; then
# Init networking so rest of install works
set_hostname_iso
set_management_interface
add_admin_user
disable_onion_user
fi
set_hostname 2>> "$setup_log" set_hostname 2>> "$setup_log"
set_version 2>> "$setup_log" set_version 2>> "$setup_log"
clear_master 2>> "$setup_log" clear_master 2>> "$setup_log"
@@ -316,7 +319,6 @@ export percentage=0
master_pillar 2>> "$setup_log" master_pillar 2>> "$setup_log"
fi fi
set_progress_str 16 'Running first Salt checkin' set_progress_str 16 'Running first Salt checkin'
salt_firstcheckin 2>> "$setup_log" salt_firstcheckin 2>> "$setup_log"
@@ -355,7 +357,12 @@ export percentage=0
set_progress_str 25 'Configuring firewall' set_progress_str 25 'Configuring firewall'
set_initial_firewall_policy 2>> "$setup_log" set_initial_firewall_policy 2>> "$setup_log"
set_progress_str 26 'Downloading containers from the internet' if [[ "$setup_type" == 'iso' ]]; then
set_progress_str 26 'Copying containers from iso'
else
set_progress_str 26 'Downloading containers from the internet'
fi
salt-call state.apply -l info registry >> "$setup_log" 2>&1 salt-call state.apply -l info registry >> "$setup_log" 2>&1
docker_seed_registry 2>> "$setup_log" # ~ 60% when finished docker_seed_registry 2>> "$setup_log" # ~ 60% when finished
@@ -461,8 +468,10 @@ export percentage=0
set_progress_str 86 'Updating packages' set_progress_str 86 'Updating packages'
update_packages 2>> "$setup_log" update_packages 2>> "$setup_log"
set_progress_str 87 'Adding user to SOC' if [[ $is_master ]]; then
add_web_user 2>> "$setup_log" set_progress_str 87 'Adding user to SOC'
add_web_user 2>> "$setup_log"
fi
set_progress_str 90 'Enabling checkin at boot' set_progress_str 90 'Enabling checkin at boot'
checkin_at_boot 2>> "$setup_log" checkin_at_boot 2>> "$setup_log"
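
The reordering above defers the ISO's network initialization and local user changes until after whiptail_make_changes, likely so that backing out at the confirmation prompt no longer leaves a partially configured host. The guards are plain bash; note that `[[ $is_master ]]` tests only whether the variable is non-empty (values below are illustrative):

    setup_type='iso'; is_master='true'   # illustrative values
    if [[ "$setup_type" == 'iso' ]]; then
      echo "would init hostname/interface and admin user here"
    fi
    # [[ $var ]] is true for any non-empty string, including "false",
    # so is_master acts as a set/unset flag rather than a boolean.
    [[ $is_master ]] && echo "would add the SOC web user"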

View File

@@ -391,7 +391,7 @@ whiptail_install_type() {
"SEARCHNODE" "Add a Search Node with parsing" OFF \ "SEARCHNODE" "Add a Search Node with parsing" OFF \
"MASTER" "Start a new grid" OFF \ "MASTER" "Start a new grid" OFF \
"EVAL" "Evaluate all the things" OFF \ "EVAL" "Evaluate all the things" OFF \
"PROD" "Standalone full install of everything" OFF \ "STANDALONE" "Standalone full install of everything" OFF \
"MASTERSEARCH" "Master + Search Node" OFF \ "MASTERSEARCH" "Master + Search Node" OFF \
"HEAVYNODE" "Sensor + Search Node" OFF \ "HEAVYNODE" "Sensor + Search Node" OFF \
"HELIXSENSOR" "Connect this sensor to FireEye Helix" OFF \ "HELIXSENSOR" "Connect this sensor to FireEye Helix" OFF \
@@ -429,7 +429,7 @@ whiptail_management_interface_dns() {
[ -n "$TESTING" ] && return [ -n "$TESTING" ] && return
MDNS=$(whiptail --title "Security Onion Setup" --inputbox \ MDNS=$(whiptail --title "Security Onion Setup" --inputbox \
"Enter your DNS server using space between multiple" 10 60 8.8.8.8 8.8.4.4 3>&1 1>&2 2>&3) "Enter your DNS servers separated by a space" 10 60 8.8.8.8 8.8.4.4 3>&1 1>&2 2>&3)
} }
@@ -958,7 +958,7 @@ whiptail_setup_complete() {
[ -n "$TESTING" ] && return [ -n "$TESTING" ] && return
whiptail --title "Security Onion Setup" --msgbox "Finished installing this as an $install_type. Press Enter to reboot." 8 75 whiptail --title "Security Onion Setup" --msgbox "Finished $install_type install. Press ENTER to reboot." 8 75
install_cleanup >> $setup_log 2>&1 install_cleanup >> $setup_log 2>&1
} }
@@ -967,7 +967,7 @@ whiptail_setup_failed() {
[ -n "$TESTING" ] && return [ -n "$TESTING" ] && return
whiptail --title "Security Onion Setup" --msgbox "Install had a problem. Please see $setup_log for details. Press Enter to reboot." 8 75 whiptail --title "Security Onion Setup" --msgbox "Install had a problem. Please see $setup_log for details. Press ENTER to reboot." 8 75
install_cleanup >> $setup_log 2>&1 install_cleanup >> $setup_log 2>&1
} }
@@ -1012,9 +1012,9 @@ whiptail_master_updates() {
local update_string local update_string
update_string=$(whiptail --title "Security Onion Setup" --radiolist \ update_string=$(whiptail --title "Security Onion Setup" --radiolist \
"How would you like to download updates for your grid?:" 20 75 4 \ "How would you like to download OS package updates for your grid?:" 20 75 4 \
"MASTER" "Master node is proxy for OS/Docker updates." ON \ "MASTER" "Master node is proxy for updates." ON \
"OPEN" "Each node connect to the Internet for updates" OFF 3>&1 1>&2 2>&3 ) "OPEN" "Each node connects to the Internet for updates" OFF 3>&1 1>&2 2>&3 )
local exitstatus=$? local exitstatus=$?
whiptail_check_exitstatus $exitstatus whiptail_check_exitstatus $exitstatus
@@ -1035,9 +1035,9 @@ whiptail_node_updates() {
[ -n "$TESTING" ] && return [ -n "$TESTING" ] && return
NODEUPDATES=$(whiptail --title "Security Onion Setup" --radiolist \ NODEUPDATES=$(whiptail --title "Security Onion Setup" --radiolist \
"How would you like to download updates for this node?:" 20 75 4 \ "How would you like to download OS package updates for your grid?:" 20 75 4 \
"MASTER" "Download OS/Docker updates from the Master." ON \ "MASTER" "Master node is proxy for updates." ON \
"OPEN" "Download updates directly from the Internet" OFF 3>&1 1>&2 2>&3 ) "OPEN" "Each node connects to the Internet for updates" OFF 3>&1 1>&2 2>&3 )
local exitstatus=$? local exitstatus=$?
whiptail_check_exitstatus $exitstatus whiptail_check_exitstatus $exitstatus
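
Both update prompts now share the same wording and the same MASTER/OPEN radiolist shape. For reference, a minimal standalone radiolist with the matching `3>&1 1>&2 2>&3` file-descriptor swap, which is needed because whiptail writes the selected tag to stderr while drawing its UI:

    # Minimal radiolist sketch; strings mirror the prompt above.
    choice=$(whiptail --title "Security Onion Setup" --radiolist \
      "How would you like to download OS package updates for your grid?:" 20 75 4 \
      "MASTER" "Master node is proxy for updates." ON \
      "OPEN"   "Each node connects to the Internet for updates" OFF \
      3>&1 1>&2 2>&3)
    echo "selected: $choice"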