Compare commits

...

57 Commits

Author SHA1 Message Date
Jorge Reyes
30d8cf5a6c Merge pull request #15412 from Security-Onion-Solutions/reyesj2-patch-9
missing updates to variables
2026-01-22 17:01:53 -06:00
Jorge Reyes
07dbdb9f8f Merge pull request #15411 from Security-Onion-Solutions/reyesj2-patch-10
add retries to so-resources repo pull
2026-01-22 17:01:35 -06:00
reyesj2
b4c8f7924a missing updates to variables 2026-01-22 16:49:20 -06:00
reyesj2
809422c517 add retries to so-resources repo pull 2026-01-22 16:39:19 -06:00
Jorge Reyes
bb7593a53a Merge pull request #15410 from Security-Onion-Solutions/reyesj2-patch-9
fix auto soup - check for compatible versions and fallback to a known…
2026-01-22 16:36:40 -06:00
reyesj2
8e3ba8900f fix auto soup - check for compatible versions and fallback to a known good value as needed 2026-01-22 16:12:21 -06:00
Jorge Reyes
005ec87248 Merge pull request #15408 from Security-Onion-Solutions/reyesj2-patch-7
fix kafka state
2026-01-21 12:58:58 -06:00
reyesj2
4c6ff0641b fix kafka state 2026-01-21 12:47:58 -06:00
Jorge Reyes
3e242913e9 Merge pull request #15407 from Security-Onion-Solutions/reyesj2-patch-6
more better
2026-01-20 15:31:44 -06:00
reyesj2
ba68e3c9bd more better 2026-01-20 15:30:19 -06:00
Josh Patterson
e1199a91b9 Merge pull request #15406 from Security-Onion-Solutions/bravo
fix include
2026-01-20 16:29:49 -05:00
Josh Patterson
d381248e30 fix include 2026-01-20 16:27:37 -05:00
Jorge Reyes
f4f0218cae Merge pull request #15404 from Security-Onion-Solutions/reyesj2-patch-6
reinstall agent on grid nodes when service wasn't cleanly removed. eg…
2026-01-20 13:34:55 -06:00
Josh Patterson
7a38e52b01 Merge pull request #15405 from Security-Onion-Solutions/bravo
create dir if nonexistent
2026-01-20 14:34:16 -05:00
Josh Patterson
959fd55e32 create dir if nonexistent 2026-01-20 14:30:11 -05:00
reyesj2
a8e218a9ff reinstall agent on grid nodes when service wasn't cleanly removed. eg. manually deleting /opt/Elastic/Agent/ 2026-01-20 12:37:06 -06:00
Josh Patterson
3f5cd46d7d Merge pull request #15402 from Security-Onion-Solutions/bravo
allow logstash.ssl for eval and import. fix soup create_ca_pillar
2026-01-20 12:08:45 -05:00
Josh Patterson
627f0c2bcc allow logstash.ssl state for so-import 2026-01-20 11:58:31 -05:00
Josh Patterson
f6bde3eb04 remove double logging 2026-01-20 11:56:31 -05:00
Josh Patterson
f6e95c17a0 need to create_ca_pillar for 210 not 220 2026-01-20 11:55:57 -05:00
Josh Patterson
1234cbd04b allow logstash.ssl on so-eval 2026-01-20 09:30:32 -05:00
Josh Patterson
fd5b93542e Merge pull request #15400 from Security-Onion-Solutions/bravo
break out ssl state
2026-01-19 17:21:07 -05:00
Josh Patterson
a192455fae Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-19 17:17:58 -05:00
Josh Patterson
66f17e95aa Merge pull request #15397 from Security-Onion-Solutions/fstes
Fstes
2026-01-16 18:38:06 -05:00
Jorge Reyes
6eda7932e8 Merge pull request #15394 from Security-Onion-Solutions/reyesj2/elastic9-filestream
remove usage of deprecated 'logs' integration in favor of 'filestream'
2026-01-16 13:19:15 -06:00
Jorge Reyes
399b7567dd Merge pull request #15393 from Security-Onion-Solutions/reyesj2/esretries
add additional retries within scripts before salt re-runs the entire …
2026-01-16 13:11:47 -06:00
reyesj2
2133ada3a1 add additional retries within scripts before salt re-runs the entire script 2026-01-16 13:09:08 -06:00
Jorge Reyes
4f6d4738c4 Merge pull request #15391 from Security-Onion-Solutions/reyesj2-patch-3
follow symlinks for docker cp
2026-01-15 15:26:48 -06:00
reyesj2
d430ed6727 false positive 2026-01-15 15:25:28 -06:00
reyesj2
596bc178df ensure docker cp command follows container symlinks 2026-01-15 15:18:18 -06:00
reyesj2
0cd3d7b5a8 deprecated kibana config 2026-01-15 15:17:22 -06:00
reyesj2
349d77ffdf exclude kafka restart error 2026-01-15 14:43:57 -06:00
Josh Patterson
00fbc1c259 add back individual signing policies 2026-01-12 09:25:15 -05:00
Josh Patterson
3bc552ef38 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-08 17:15:48 -05:00
Josh Patterson
ee70d94e15 remove old key/crt used for telegraf on non managers 2026-01-08 17:15:35 -05:00
Josh Patterson
1887d2c0e9 update heavynode pattern 2026-01-08 17:15:00 -05:00
Josh Patterson
693494024d block redirected to setup_log already, prevent double logging on these lines 2026-01-07 16:58:44 -05:00
Josh Patterson
4ab20c2454 dont remove ca in ssl.remove 2026-01-07 14:14:57 -05:00
Josh Patterson
6c3f9f149d create ca pillar during soup 2026-01-07 10:17:06 -05:00
Josh Patterson
152f2e03f1 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-06 15:15:30 -05:00
Josh Patterson
f2370043a8 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-06 09:12:00 -05:00
reyesj2
e9341ee8d3 remove usage of deprecated 'logs' integration in favor of 'filestream' 2025-12-24 10:40:23 -06:00
Josh Patterson
702ba2e0a4 only allow ca.remove state to run if so-setup is running 2025-12-17 10:08:00 -05:00
Josh Patterson
c0845e1612 restart docker if ca changes. cleanup dirs at key/crt location 2025-12-12 22:19:59 -05:00
Josh Patterson
9878d9d37e handle steno ca certs directory properly 2025-12-12 19:07:00 -05:00
Josh Patterson
a2196085d5 import allowed_states 2025-12-12 18:50:37 -05:00
Josh Patterson
ba62a8c10c need to restart docker service if ca changes 2025-12-12 18:50:22 -05:00
Josh Patterson
38f38e2789 fix allowed states for ca 2025-12-12 18:23:29 -05:00
Josh Patterson
1475f0fc2f timestamp logging for wait_for_salt_minion 2025-12-12 16:30:42 -05:00
Josh Patterson
a3396b77a3 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-12 15:25:09 -05:00
Josh Patterson
8158fee8fc change how we determine if the salt-minion is ready 2025-12-12 15:24:47 -05:00
Josh Patterson
c6fac8c36b need makedirs 2025-12-11 18:37:01 -05:00
Josh Patterson
17b5b81696 dont have py3 yaml module installed yet so do it like this 2025-12-11 18:04:02 -05:00
Josh Patterson
9960db200c Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-11 17:30:43 -05:00
Josh Patterson
b9ff1704b0 the great ssl refactor 2025-12-11 17:30:06 -05:00
Josh Patterson
545060103a Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-03 16:33:27 -05:00
Josh Patterson
36a6a59d55 renew certs 7 days before expire 2025-12-01 11:54:10 -05:00
85 changed files with 1740 additions and 1282 deletions

pillar/ca/init.sls (new file)

@@ -0,0 +1,2 @@
+ca:
+  server:
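The new `ca` pillar exposes an empty `server` key, which `salt/ca/map.jinja` (added below) reads as `pillar.ca.server`. A quick sanity check from a minion, as a sketch assuming a standard salt-call setup:

# Confirm the ca pillar renders after the change (an empty value is
# expected until setup populates ca:server):
salt-call pillar.get ca:server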

@@ -1,5 +1,6 @@
 base:
   '*':
+    - ca
     - global.soc_global
     - global.adv_global
     - docker.soc_docker
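Since the `ca` pillar is now assigned to every minion via `'*'`, existing minions need a pillar refresh before they see the new key; a minimal sketch, run on the manager:

# Push the updated pillar top file out to all minions:
salt '*' saltutil.refresh_pillar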

@@ -15,11 +15,7 @@
     'salt.minion-check',
     'sensoroni',
     'salt.lasthighstate',
-    'salt.minion'
-] %}
-
-{% set ssl_states = [
-    'ssl',
+    'salt.minion',
     'telegraf',
     'firewall',
     'schedule',
@@ -28,7 +24,7 @@
 {% set manager_states = [
     'salt.master',
-    'ca',
+    'ca.server',
     'registry',
     'manager',
     'nginx',
@@ -75,28 +71,24 @@
 {# Map role-specific states #}
 {% set role_states = {
     'so-eval': (
-        ssl_states +
         manager_states +
         sensor_states +
-        elastic_stack_states | reject('equalto', 'logstash') | list
+        elastic_stack_states | reject('equalto', 'logstash') | list +
+        ['logstash.ssl']
     ),
     'so-heavynode': (
-        ssl_states +
         sensor_states +
         ['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
     ),
     'so-idh': (
-        ssl_states +
         ['idh']
     ),
     'so-import': (
-        ssl_states +
         manager_states +
         sensor_states | reject('equalto', 'strelka') | reject('equalto', 'healthcheck') | list +
-        ['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'strelka.manager']
+        ['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'logstash.ssl', 'strelka.manager']
     ),
     'so-manager': (
-        ssl_states +
         manager_states +
         ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
         stig_states +
@@ -104,7 +96,6 @@
         elastic_stack_states
     ),
     'so-managerhype': (
-        ssl_states +
         manager_states +
         ['salt.cloud', 'strelka.manager', 'hypervisor', 'libvirt'] +
         stig_states +
@@ -112,7 +103,6 @@
         elastic_stack_states
     ),
     'so-managersearch': (
-        ssl_states +
         manager_states +
         ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
         stig_states +
@@ -120,12 +110,10 @@
         elastic_stack_states
     ),
     'so-searchnode': (
-        ssl_states +
         ['kafka.ca', 'kafka.ssl', 'elasticsearch', 'logstash', 'nginx'] +
         stig_states
     ),
     'so-standalone': (
-        ssl_states +
         manager_states +
         ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users'] +
         sensor_states +
@@ -134,29 +122,24 @@
         elastic_stack_states
     ),
     'so-sensor': (
-        ssl_states +
         sensor_states +
         ['nginx'] +
         stig_states
     ),
     'so-fleet': (
-        ssl_states +
         stig_states +
         ['logstash', 'nginx', 'healthcheck', 'elasticfleet']
     ),
     'so-receiver': (
-        ssl_states +
         kafka_states +
         stig_states +
         ['logstash', 'redis']
     ),
     'so-hypervisor': (
-        ssl_states +
         stig_states +
         ['hypervisor', 'libvirt']
     ),
     'so-desktop': (
-        ['ssl', 'docker_clean', 'telegraf'] +
         stig_states
     )
 } %}
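Net effect of the map change: the separate `ssl_states` group is gone (its members were folded into the common list above), and the old `ca` entry became `ca.server`, which only manager-style roles may apply. An illustrative sketch of the fail-closed behavior on a grid node shell:

# On a manager role, ca.server is in allowed_states and renders normally:
salt-call state.apply ca.server
# On e.g. a so-sensor, the same call is expected to fail with
# "ca.server_state_not_allowed" (see the test.fail_without_changes guard
# in salt/ca/server.sls below).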

@@ -1,4 +0,0 @@
-pki_issued_certs:
-  file.directory:
-    - name: /etc/pki/issued_certs
-    - makedirs: True

@@ -3,70 +3,10 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.
 
-{% from 'allowed_states.map.jinja' import allowed_states %}
-{% if sls in allowed_states %}
-
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 
 include:
-  - ca.dirs
-
-/etc/salt/minion.d/signing_policies.conf:
-  file.managed:
-    - source: salt://ca/files/signing_policies.conf
-
-pki_private_key:
-  x509.private_key_managed:
-    - name: /etc/pki/ca.key
-    - keysize: 4096
-    - passphrase:
-    - backup: True
-    {% if salt['file.file_exists']('/etc/pki/ca.key') -%}
-    - prereq:
-      - x509: /etc/pki/ca.crt
-    {%- endif %}
-
-pki_public_ca_crt:
-  x509.certificate_managed:
-    - name: /etc/pki/ca.crt
-    - signing_private_key: /etc/pki/ca.key
-    - CN: {{ GLOBALS.manager }}
-    - C: US
-    - ST: Utah
-    - L: Salt Lake City
-    - basicConstraints: "critical CA:true"
-    - keyUsage: "critical cRLSign, keyCertSign"
-    - extendedkeyUsage: "serverAuth, clientAuth"
-    - subjectKeyIdentifier: hash
-    - authorityKeyIdentifier: keyid:always, issuer
-    - days_valid: 3650
-    - days_remaining: 0
-    - backup: True
-    - replace: False
-    - require:
-      - sls: ca.dirs
-    - timeout: 30
-    - retry:
-        attempts: 5
-        interval: 30
-
-mine_update_ca_crt:
-  module.run:
-    - mine.update: []
-    - onchanges:
-      - x509: pki_public_ca_crt
-
-cakeyperms:
-  file.managed:
-    - replace: False
-    - name: /etc/pki/ca.key
-    - mode: 640
-    - group: 939
-
-{% else %}
-
-{{sls}}_state_not_allowed:
-  test.fail_without_changes:
-    - name: {{sls}}_state_not_allowed
-
+{% if GLOBALS.is_manager %}
+  - ca.server
 {% endif %}
+  - ca.trustca

salt/ca/map.jinja (new file)

@@ -0,0 +1,3 @@
+{% set CA = {
+    'server': pillar.ca.server
+}%}

@@ -1,7 +1,35 @@
-pki_private_key:
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% set setup_running = salt['cmd.retcode']('pgrep -x so-setup') == 0 %}
+
+{% if setup_running%}
+include:
+  - ssl.remove
+
+remove_pki_private_key:
   file.absent:
     - name: /etc/pki/ca.key
 
-pki_public_ca_crt:
+remove_pki_public_ca_crt:
   file.absent:
     - name: /etc/pki/ca.crt
+
+remove_trusttheca:
+  file.absent:
+    - name: /etc/pki/tls/certs/intca.crt
+
+remove_pki_public_ca_crt_symlink:
+  file.absent:
+    - name: /opt/so/saltstack/local/salt/ca/files/ca.crt
+
+{% else %}
+
+so-setup_not_running:
+  test.show_notification:
+    - text: "This state is reserved for usage during so-setup."
+
+{% endif %}
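The Jinja guard keys off a live `so-setup` process, so these removal states are inert during normal operation. The same test can be run by hand; a sketch:

# Mirror the state's guard: cmd.retcode('pgrep -x so-setup') == 0
if pgrep -x so-setup >/dev/null; then
  echo "so-setup is running: ca.remove would delete the old CA material"
else
  echo "outside of setup: ca.remove only emits a notification"
fi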

salt/ca/server.sls (new file)

@@ -0,0 +1,63 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls in allowed_states %}
+
+{% from 'vars/globals.map.jinja' import GLOBALS %}
+
+pki_private_key:
+  x509.private_key_managed:
+    - name: /etc/pki/ca.key
+    - keysize: 4096
+    - passphrase:
+    - backup: True
+    {% if salt['file.file_exists']('/etc/pki/ca.key') -%}
+    - prereq:
+      - x509: /etc/pki/ca.crt
+    {%- endif %}
+
+pki_public_ca_crt:
+  x509.certificate_managed:
+    - name: /etc/pki/ca.crt
+    - signing_private_key: /etc/pki/ca.key
+    - CN: {{ GLOBALS.manager }}
+    - C: US
+    - ST: Utah
+    - L: Salt Lake City
+    - basicConstraints: "critical CA:true"
+    - keyUsage: "critical cRLSign, keyCertSign"
+    - extendedkeyUsage: "serverAuth, clientAuth"
+    - subjectKeyIdentifier: hash
+    - authorityKeyIdentifier: keyid:always, issuer
+    - days_valid: 3650
+    - days_remaining: 7
+    - backup: True
+    - replace: False
+    - timeout: 30
+    - retry:
+        attempts: 5
+        interval: 30
+
+pki_public_ca_crt_symlink:
+  file.symlink:
+    - name: /opt/so/saltstack/local/salt/ca/files/ca.crt
+    - target: /etc/pki/ca.crt
+    - require:
+      - x509: pki_public_ca_crt
+
+cakeyperms:
+  file.managed:
+    - replace: False
+    - name: /etc/pki/ca.key
+    - mode: 640
+    - group: 939
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
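Note `days_remaining` moved from 0 to 7, matching the "renew certs 7 days before expire" commit: the CA is re-issued once fewer than 7 days of validity remain. A quick way to inspect the managed CA on the manager, as a sketch:

# Check subject and expiry of the CA the state manages:
openssl x509 -in /etc/pki/ca.crt -noout -subject -enddate
# The copy published to the salt fileserver should track the same cert:
readlink /opt/so/saltstack/local/salt/ca/files/ca.crt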

@@ -0,0 +1,15 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+# when the salt-minion signs the cert, a copy is stored here
+issued_certs_copypath:
+  file.directory:
+    - name: /etc/pki/issued_certs
+    - makedirs: True
+
+signing_policy:
+  file.managed:
+    - name: /etc/salt/minion.d/signing_policies.conf
+    - source: salt://ca/files/signing_policies.conf

salt/ca/trustca.sls (new file)

@@ -0,0 +1,26 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'vars/globals.map.jinja' import GLOBALS %}
+
+include:
+  - docker
+
+# Trust the CA
+trusttheca:
+  file.managed:
+    - name: /etc/pki/tls/certs/intca.crt
+    - source: salt://ca/files/ca.crt
+    - watch_in:
+      - service: docker_running
+    - show_changes: False
+    - makedirs: True
+
+{% if GLOBALS.os_family == 'Debian' %}
+symlinkca:
+  file.symlink:
+    - target: /etc/pki/tls/certs/intca.crt
+    - name: /etc/ssl/certs/intca.crt
+{% endif %}
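With the CA now distributed as a plain file (`salt://ca/files/ca.crt`) instead of x509 mine data, any minion that applied `ca.trustca` can validate a service certificate against its local copy; a sketch, where the target path is a placeholder:

# /path/to/service.crt is hypothetical; substitute a cert issued by the grid CA.
openssl verify -CAfile /etc/pki/tls/certs/intca.crt /path/to/service.crt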

@@ -177,7 +177,7 @@ so-status_script:
     - source: salt://common/tools/sbin/so-status
     - mode: 755
 
-{% if GLOBALS.role in GLOBALS.sensor_roles %}
+{% if GLOBALS.is_sensor %}
 # Add sensor cleanup
 so-sensor-clean:
   cron.present:

@@ -554,21 +554,39 @@ run_check_net_err() {
 }
 
 wait_for_salt_minion() {
   local minion="$1"
-  local timeout="${2:-5}"
-  local logfile="${3:-'/dev/stdout'}"
-  retry 60 5 "journalctl -u salt-minion.service | grep 'Minion is ready to receive requests'" >> "$logfile" 2>&1 || fail
-  local attempt=0
-  # each attempts would take about 15 seconds
-  local maxAttempts=20
-  until check_salt_minion_status "$minion" "$timeout" "$logfile"; do
-    attempt=$((attempt+1))
-    if [[ $attempt -eq $maxAttempts ]]; then
-      return 1
-    fi
-    sleep 10
-  done
-  return 0
+  local max_wait="${2:-30}"
+  local interval="${3:-2}"
+  local logfile="${4:-'/dev/stdout'}"
+  local elapsed=0
+
+  echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting for salt-minion '$minion' to be ready..."
+
+  while [ $elapsed -lt $max_wait ]; do
+    # Check if service is running
+    echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Check if salt-minion service is running"
+    if ! systemctl is-active --quiet salt-minion; then
+      echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion service not running (elapsed: ${elapsed}s)"
+      sleep $interval
+      elapsed=$((elapsed + interval))
+      continue
+    fi
+    echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion service is running"
+
+    # Check if minion responds to ping
+    echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Check if $minion responds to ping"
+    if salt "$minion" test.ping --timeout=3 --out=json 2>> "$logfile" | grep -q "true"; then
+      echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion '$minion' is connected and ready!"
+      return 0
+    fi
+
+    echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting... (${elapsed}s / ${max_wait}s)"
+    sleep $interval
+    elapsed=$((elapsed + interval))
+  done
+
+  echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - ERROR: salt-minion '$minion' not ready after $max_wait seconds"
+  return 1
 }
 
 salt_minion_count() {
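Callers now pass a total wait budget and a poll interval instead of a per-check timeout. A usage sketch (argument order taken from the diff; the sourcing path and minion id are assumptions):

# so-common is assumed to sit next to the calling setup script.
source ./so-common
wait_for_salt_minion "$MINION_ID" 60 5 /root/setup.log \
  || echo "salt-minion '$MINION_ID' never became ready"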

@@ -130,6 +130,7 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process_cluster_event_timeout_exception" # logstash waiting for elasticsearch to start
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|not configured for GeoIP" # SO does not bundle the maxminddb with Zeek
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|HTTP 404: Not Found" # Salt loops until Kratos returns 200, during startup Kratos may not be ready
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Cancelling deferred write event maybeFenceReplicas because the event queue is now closed" # Kafka controller log during shutdown/restart
 fi
 
 if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
@@ -160,6 +161,7 @@ if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|adding ingest pipeline" # false positive (elasticsearch ingest pipeline names contain 'error')
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating index template" # false positive (elasticsearch index or template names contain 'error')
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating component template" # false positive (elasticsearch index or template names contain 'error')
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|upgrading component template" # false positive (elasticsearch index or template names contain 'error')
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|upgrading composable template" # false positive (elasticsearch composable template names contain 'error')
 fi
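EXCLUDED_ERRORS accumulates an extended-regex alternation, so each added pattern suppresses one known-benign error line. Roughly how such a filter applies (a simplified sketch, not the script's exact pipeline; the log path is illustrative):

# Drop known-benign matches from an error scan:
grep -iE 'error' /opt/so/log/kafka/server.log | grep -vE "$EXCLUDED_ERRORS"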

@@ -3,29 +3,16 @@
 {# we only want this state to run it is CentOS #}
 {% if GLOBALS.os == 'OEL' %}
 
-{% set global_ca_text = [] %}
-{% set global_ca_server = [] %}
-{% set manager = GLOBALS.manager %}
-{% set x509dict = salt['mine.get'](manager | lower~'*', 'x509.get_pem_entries') %}
-{% for host in x509dict %}
-{% if host.split('_')|last in ['manager', 'managersearch', 'standalone', 'import', 'eval'] %}
-{% do global_ca_text.append(x509dict[host].get('/etc/pki/ca.crt')|replace('\n', '')) %}
-{% do global_ca_server.append(host) %}
-{% endif %}
-{% endfor %}
-{% set trusttheca_text = global_ca_text[0] %}
-{% set ca_server = global_ca_server[0] %}
-
 trusted_ca:
-  x509.pem_managed:
+  file.managed:
     - name: /etc/pki/ca-trust/source/anchors/ca.crt
-    - text: {{ trusttheca_text }}
+    - source: salt://ca/files/ca.crt
 
 update_ca_certs:
   cmd.run:
     - name: update-ca-trust
     - onchanges:
-      - x509: trusted_ca
+      - file: trusted_ca
 
 {% else %}
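On OEL hosts the state now simply drops the CA file into the system anchors directory and rebuilds the trust store, rather than rendering PEM text out of mine data. The manual equivalent, as a sketch:

# Same effect as the trusted_ca + update_ca_certs states, done by hand:
cp /etc/pki/tls/certs/intca.crt /etc/pki/ca-trust/source/anchors/ca.crt
update-ca-trust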

@@ -6,9 +6,9 @@
 {% from 'docker/docker.map.jinja' import DOCKER %}
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 
-# include ssl since docker service requires the intca
+# docker service requires the ca.crt
 include:
-  - ssl
+  - ca
 
 dockergroup:
   group.present:
@@ -89,10 +89,9 @@ docker_running:
     - enable: True
     - watch:
       - file: docker_daemon
-      - x509: trusttheca
     - require:
       - file: docker_daemon
-      - x509: trusttheca
+      - file: trusttheca
 
 # Reserve OS ports for Docker proxy in case boot settings are not already applied/present

@@ -9,6 +9,7 @@
 {% from 'docker/docker.map.jinja' import DOCKER %}
 
 include:
+  - ca
   - elasticagent.config
   - elasticagent.sostatus
 
@@ -55,8 +56,10 @@ so-elastic-agent:
     {% endif %}
     - require:
       - file: create-elastic-agent-config
+      - file: trusttheca
     - watch:
       - file: create-elastic-agent-config
+      - file: trusttheca
 
 delete_so-elastic-agent_so-status.disabled:
   file.uncomment:

@@ -95,6 +95,9 @@ soresourcesrepoclone:
     - rev: 'main'
     - depth: 1
     - force_reset: True
+    - retry:
+        attempts: 3
+        interval: 10
 {% endif %}
 
 elasticdefendconfdir:
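The `retry` block makes Salt re-run the `git.latest` state up to 3 times at 10-second intervals before reporting failure, which absorbs transient network errors during the so-resources pull. The shell-level intuition (illustrative only; `$REPO_DIR` is a placeholder for the clone target):

for attempt in 1 2 3; do
  git -C "$REPO_DIR" pull && break  # succeed and stop retrying
  sleep 10                          # matches the state's 10s interval
done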

@@ -13,9 +13,11 @@
 {% set SERVICETOKEN = salt['pillar.get']('elasticfleet:config:server:es_token','') %}
 
 include:
+  - ca
+  - logstash.ssl
+  - elasticfleet.ssl
   - elasticfleet.config
   - elasticfleet.sostatus
-  - ssl
 
 {% if grains.role not in ['so-fleet'] %}
 # Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
@@ -133,6 +135,11 @@ so-elastic-fleet:
     {% endfor %}
     {% endif %}
     - watch:
+      - file: trusttheca
+      - x509: etc_elasticfleet_key
+      - x509: etc_elasticfleet_crt
+    - require:
+      - file: trusttheca
       - x509: etc_elasticfleet_key
       - x509: etc_elasticfleet_crt
 {% endif %}

@@ -2,7 +2,7 @@
 {%- raw -%}
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "import-zeek-logs",
@@ -10,19 +10,31 @@
   "description": "Zeek Import logs",
   "policy_id": "so-grid-nodes_general",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
              "/nsm/import/*/zeek/logs/*.log"
            ],
            "data_stream.dataset": "import",
-            "tags": [],
+            "pipeline": "",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}).log$"],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/zeek/logs/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"import.file\").slice(0,-4);\n event.Put(\"@metadata.pipeline\", \"zeek.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: zeek\n imported: true\n- add_tags:\n tags: \"ics\"\n when:\n regexp:\n import.file: \"^bacnet*|^bsap*|^cip*|^cotp*|^dnp3*|^ecat*|^enip*|^modbus*|^opcua*|^profinet*|^s7comm*\"",
-            "custom": "exclude_files: [\"{%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}.log$\"]\n"
+            "tags": [],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
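All of the following integration policies swap the deprecated `log` input package for `filestream`, promoting the old free-form `custom:` settings into first-class vars (`pipeline`, `exclude_files`, `fingerprint`, and so on). One way to spot-check the migration after soup, via the Kibana Fleet API (host, port, and credentials are placeholders):

# Count package policies per input package; expect "filestream" rather than "log".
curl -sk -u "$USER:$PASS" "https://$KIBANA_HOST:5601/api/fleet/package_policies" \
  | grep -o '"name":"\(log\|filestream\)","version"' | sort | uniq -c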

@@ -11,36 +11,51 @@
 {%- endif -%}
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "kratos-logs",
+  "namespace": "so",
   "description": "Kratos logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/opt/so/log/kratos/kratos.log"
             ],
             "data_stream.dataset": "kratos",
-            "tags": ["so-kratos"],
+            "pipeline": "kratos",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             {%- if valid_identities -%}
             "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos\n- if:\n has_fields:\n - identity_id\n then:{% for id, email in identities %}\n - if:\n equals:\n identity_id: \"{{ id }}\"\n then:\n - add_fields:\n target: ''\n fields:\n user.name: \"{{ email }}\"{% endfor %}",
             {%- else -%}
             "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos",
             {%- endif -%}
-            "custom": "pipeline: kratos"
+            "tags": [
+              "so-kratos"
+            ],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
     }
   },
   "force": true
 }

@@ -2,28 +2,38 @@
 {%- raw -%}
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
-  "id": "zeek-logs",
   "name": "zeek-logs",
   "namespace": "so",
   "description": "Zeek logs",
   "policy_id": "so-grid-nodes_general",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/nsm/zeek/logs/current/*.log"
             ],
             "data_stream.dataset": "zeek",
-            "tags": [],
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}).log$"],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"/nsm/zeek/logs/current/%{pipeline}.log\"\n field: \"log.file.path\"\n trim_chars: \".log\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"pipeline\");\n event.Put(\"@metadata.pipeline\", \"zeek.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: zeek\n- add_tags:\n tags: \"ics\"\n when:\n regexp:\n pipeline: \"^bacnet*|^bsap*|^cip*|^cotp*|^dnp3*|^ecat*|^enip*|^modbus*|^opcua*|^profinet*|^s7comm*\"",
-            "custom": "exclude_files: [\"{%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}.log$\"]\n"
+            "tags": [],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
@@ -31,4 +41,4 @@
   },
   "force": true
 }
 {%- endraw -%}

@@ -1,26 +1,43 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "hydra-logs",
+  "namespace": "so",
   "description": "Hydra logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/opt/so/log/hydra/hydra.log"
             ],
             "data_stream.dataset": "hydra",
-            "tags": ["so-hydra"],
-            "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: iam\n module: hydra",
-            "custom": "pipeline: hydra"
+            "pipeline": "hydra",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
+            "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: hydra",
+            "tags": [
+              "so-hydra"
+            ],
+            "recursive_glob": true,
+            "ignore_older": "72h",
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
@@ -28,3 +45,5 @@
   },
   "force": true
 }

@@ -1,30 +1,44 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "idh-logs",
+  "namespace": "so",
   "description": "IDH integration",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/nsm/idh/opencanary.log"
             ],
             "data_stream.dataset": "idh",
-            "tags": [],
+            "pipeline": "common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n- drop_fields:\n when:\n equals:\n event.code: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
-            "custom": "pipeline: common"
+            "tags": [],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
     }
   },
   "force": true
 }

@@ -1,33 +1,46 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "import-evtx-logs",
-  "namespace": "so",
   "description": "Import Windows EVTX logs",
   "policy_id": "so-grid-nodes_general",
-  "vars": {},
+  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/nsm/import/*/evtx/*.json"
             ],
             "data_stream.dataset": "import",
-            "custom": "",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.6.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.6.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.6.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
             "tags": [
               "import"
-            ]
+            ],
+            "recursive_glob": true,
+            "ignore_older": "72h",
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
     }
   },
   "force": true
 }

@@ -1,30 +1,45 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "import-suricata-logs",
+  "namespace": "so",
   "description": "Import Suricata logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/nsm/import/*/suricata/eve*.json"
             ],
             "data_stream.dataset": "import",
+            "pipeline": "suricata.common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
+            "processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata\n imported: true\n- dissect:\n tokenizer: \"/nsm/import/%{import.id}/suricata/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n",
             "tags": [],
-            "processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata\n imported: true\n- dissect:\n tokenizer: \"/nsm/import/%{import.id}/suricata/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"",
-            "custom": "pipeline: suricata.common"
+            "recursive_glob": true,
+            "ignore_older": "72h",
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
     }
   },
   "force": true
 }

@@ -1,18 +1,17 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "rita-logs",
-  "namespace": "so",
   "description": "RITA Logs",
   "policy_id": "so-grid-nodes_general",
-  "vars": {},
+  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
@@ -20,15 +19,28 @@
               "/nsm/rita/exploded-dns.csv",
               "/nsm/rita/long-connections.csv"
             ],
-            "exclude_files": [],
-            "ignore_older": "72h",
             "data_stream.dataset": "rita",
-            "tags": [],
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"/nsm/rita/%{pipeline}.csv\"\n field: \"log.file.path\"\n trim_chars: \".csv\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"pipeline\").split(\"-\");\n if (pl.length > 1) {\n pl = pl[1];\n }\n else {\n pl = pl[0];\n }\n event.Put(\"@metadata.pipeline\", \"rita.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: rita",
-            "custom": "exclude_lines: ['^Score', '^Source', '^Domain', '^No results']"
+            "tags": [],
+            "recursive_glob": true,
+            "ignore_older": "72h",
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
           }
         }
       }
     }
-  }
+  },
+  "force": true
 }

@@ -1,29 +1,41 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "so-ip-mappings",
-  "namespace": "so",
   "description": "IP Description mappings",
   "policy_id": "so-grid-nodes_general",
-  "vars": {},
+  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/nsm/custom-mappings/ip-descriptions.csv"
             ],
             "data_stream.dataset": "hostnamemappings",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
+            "processors": "- decode_csv_fields:\n fields:\n message: decoded.csv\n separator: \",\"\n ignore_missing: false\n overwrite_keys: true\n trim_leading_space: true\n fail_on_error: true\n\n- extract_array:\n field: decoded.csv\n mappings:\n so.ip_address: '0'\n so.description: '1'\n\n- script:\n lang: javascript\n source: >\n function process(event) {\n var ip = event.Get('so.ip_address');\n var validIpRegex = /^((25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)$/\n if (!validIpRegex.test(ip)) {\n event.Cancel();\n }\n }\n- fingerprint:\n fields: [\"so.ip_address\"]\n target_field: \"@metadata._id\"\n",
             "tags": [
               "so-ip-mappings"
             ],
-            "processors": "- decode_csv_fields:\n fields:\n message: decoded.csv\n separator: \",\"\n ignore_missing: false\n overwrite_keys: true\n trim_leading_space: true\n fail_on_error: true\n\n- extract_array:\n field: decoded.csv\n mappings:\n so.ip_address: '0'\n so.description: '1'\n\n- script:\n lang: javascript\n source: >\n function process(event) {\n var ip = event.Get('so.ip_address');\n var validIpRegex = /^((25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)$/\n if (!validIpRegex.test(ip)) {\n event.Cancel();\n }\n }\n- fingerprint:\n fields: [\"so.ip_address\"]\n target_field: \"@metadata._id\"\n",
-            "custom": ""
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
@@ -31,5 +43,3 @@
   },
   "force": true
 }

@@ -1,30 +1,44 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "soc-auth-sync-logs",
+  "namespace": "so",
   "description": "Security Onion - Elastic Auth Sync - Logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
          "enabled": true,
          "vars": {
            "paths": [
              "/opt/so/log/soc/sync.log"
            ],
            "data_stream.dataset": "soc",
-            "tags": ["so-soc"],
+            "pipeline": "common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"%{event.action}\"\n field: \"message\"\n target_prefix: \"\"\n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: auth_sync",
-            "custom": "pipeline: common"
+            "tags": [],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
    }
   },
   "force": true
 }

@@ -1,35 +1,48 @@
 {
-  "policy_id": "so-grid-nodes_general",
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "soc-detections-logs",
   "description": "Security Onion Console - Detections Logs",
+  "policy_id": "so-grid-nodes_general",
   "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
           "enabled": true,
           "vars": {
             "paths": [
               "/opt/so/log/soc/detections_runtime-status_sigma.log",
               "/opt/so/log/soc/detections_runtime-status_yara.log"
             ],
-            "exclude_files": [],
-            "ignore_older": "72h",
             "data_stream.dataset": "soc",
+            "pipeline": "common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
+            "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: detections\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
             "tags": [
               "so-soc"
             ],
-            "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: detections\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
-            "custom": "pipeline: common"
+            "recursive_glob": true,
+            "ignore_older": "72h",
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
    }
   },
   "force": true
 }

@@ -1,30 +1,46 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "soc-salt-relay-logs",
+  "namespace": "so",
   "description": "Security Onion - Salt Relay - Logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
          "enabled": true,
          "vars": {
            "paths": [
              "/opt/so/log/soc/salt-relay.log"
            ],
            "data_stream.dataset": "soc",
-            "tags": ["so-soc"],
+            "pipeline": "common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "- dissect:\n tokenizer: \"%{soc.ts} | %{event.action}\"\n field: \"message\"\n target_prefix: \"\"\n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: salt_relay",
-            "custom": "pipeline: common"
+            "tags": [
+              "so-soc"
+            ],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
    }
   },
   "force": true
 }

@@ -1,30 +1,44 @@
 {
   "package": {
-    "name": "log",
+    "name": "filestream",
     "version": ""
   },
   "name": "soc-sensoroni-logs",
+  "namespace": "so",
   "description": "Security Onion - Sensoroni - Logs",
   "policy_id": "so-grid-nodes_general",
-  "namespace": "so",
   "inputs": {
-    "logs-logfile": {
+    "filestream-filestream": {
       "enabled": true,
       "streams": {
-        "log.logs": {
+        "filestream.generic": {
          "enabled": true,
          "vars": {
            "paths": [
              "/opt/so/log/sensoroni/sensoroni.log"
            ],
            "data_stream.dataset": "soc",
-            "tags": [],
+            "pipeline": "common",
+            "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
+            "exclude_files": [
+              "\\.gz$"
+            ],
+            "include_files": [],
             "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"sensoroni\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: sensoroni\n- rename:\n fields:\n - from: \"sensoroni.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"sensoroni.fields.status\"\n to: \"http.response.status_code\"\n - from: \"sensoroni.fields.method\"\n to: \"http.request.method\"\n - from: \"sensoroni.fields.path\"\n to: \"url.path\"\n - from: \"sensoroni.message\"\n to: \"event.action\"\n - from: \"sensoroni.level\"\n to: \"log.level\"\n ignore_missing: true",
-            "custom": "pipeline: common"
+            "tags": [],
+            "recursive_glob": true,
+            "clean_inactive": -1,
+            "harvester_limit": 0,
+            "fingerprint": false,
+            "fingerprint_offset": 0,
+            "fingerprint_length": "64",
+            "file_identity_native": true,
+            "exclude_lines": [],
+            "include_lines": []
          }
        }
      }
    }
   },
   "force": true
 }


@@ -1,30 +1,46 @@
{ {
"package": { "package": {
"name": "log", "name": "filestream",
"version": "" "version": ""
}, },
"name": "soc-server-logs", "name": "soc-server-logs",
"namespace": "so",
"description": "Security Onion Console Logs", "description": "Security Onion Console Logs",
"policy_id": "so-grid-nodes_general", "policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": { "inputs": {
"logs-logfile": { "filestream-filestream": {
"enabled": true, "enabled": true,
"streams": { "streams": {
"log.logs": { "filestream.generic": {
"enabled": true, "enabled": true,
"vars": { "vars": {
"paths": [ "paths": [
"/opt/so/log/soc/sensoroni-server.log" "/opt/so/log/soc/sensoroni-server.log"
], ],
"data_stream.dataset": "soc", "data_stream.dataset": "soc",
"tags": ["so-soc"], "pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: server\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true", "processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: server\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
"custom": "pipeline: common" "tags": [
"so-soc"
],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
} }
} }
} }
} }
}, },
"force": true "force": true
} }


@@ -1,30 +1,44 @@
{ {
"package": { "package": {
"name": "log", "name": "filestream",
"version": "" "version": ""
}, },
"name": "strelka-logs", "name": "strelka-logs",
"namespace": "so", "description": "Strelka Logs",
"description": "Strelka logs",
"policy_id": "so-grid-nodes_general", "policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": { "inputs": {
"logs-logfile": { "filestream-filestream": {
"enabled": true, "enabled": true,
"streams": { "streams": {
"log.logs": { "filestream.generic": {
"enabled": true, "enabled": true,
"vars": { "vars": {
"paths": [ "paths": [
"/nsm/strelka/log/strelka.log" "/nsm/strelka/log/strelka.log"
], ],
"data_stream.dataset": "strelka", "data_stream.dataset": "strelka",
"tags": [], "pipeline": "strelka.file",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- add_fields:\n target: event\n fields:\n category: file\n module: strelka", "processors": "- add_fields:\n target: event\n fields:\n category: file\n module: strelka",
"custom": "pipeline: strelka.file" "tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
} }
} }
} }
} }
}, },
"force": true "force": true
} }


@@ -1,30 +1,44 @@
{ {
"package": { "package": {
"name": "log", "name": "filestream",
"version": "" "version": ""
}, },
"name": "suricata-logs", "name": "suricata-logs",
"namespace": "so",
"description": "Suricata integration", "description": "Suricata integration",
"policy_id": "so-grid-nodes_general", "policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": { "inputs": {
"logs-logfile": { "filestream-filestream": {
"enabled": true, "enabled": true,
"streams": { "streams": {
"log.logs": { "filestream.generic": {
"enabled": true, "enabled": true,
"vars": { "vars": {
"paths": [ "paths": [
"/nsm/suricata/eve*.json" "/nsm/suricata/eve*.json"
], ],
"data_stream.dataset": "suricata", "data_stream.dataset": "filestream.generic",
"tags": [], "pipeline": "suricata.common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata", "processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata",
"custom": "pipeline: suricata.common" "tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
} }
} }
} }
} }
}, },
"force": true "force": true
} }
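Each of these integration policies swaps the deprecated "log" package for "filestream": the input becomes filestream-filestream, the stream becomes filestream.generic, the ingest pipeline moves out of the freeform "custom" field into a dedicated "pipeline" var, and the new filestream knobs (parsers, exclude_files, fingerprint, file_identity_native, and so on) are spelled out explicitly. A quick way to spot any policies still on the deprecated package — a sketch assuming the Kibana Fleet API on localhost:5601 and the same curl.config auth file the other scripts in this changeset use:

# List Fleet package policies still built on the deprecated "log" package
curl -K /opt/so/conf/elasticsearch/curl.config -L 'http://localhost:5601/api/fleet/package_policies?perPage=100' \
  | jq -r '.items[] | select(.package.name == "log") | .name'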


@@ -8,7 +8,9 @@
{% endif %} {% endif %}
{% set AGENT_STATUS = salt['service.available']('elastic-agent') %} {% set AGENT_STATUS = salt['service.available']('elastic-agent') %}
{% if not AGENT_STATUS %} {% set AGENT_EXISTS = salt['file.file_exists']('/opt/Elastic/Agent/elastic-agent') %}
{% if not AGENT_STATUS or not AGENT_EXISTS %}
pull_agent_installer: pull_agent_installer:
file.managed: file.managed:
@@ -19,7 +21,7 @@ pull_agent_installer:
run_installer: run_installer:
cmd.run: cmd.run:
- name: ./so-elastic-agent_linux_amd64 -token={{ GRIDNODETOKEN }} - name: ./so-elastic-agent_linux_amd64 -token={{ GRIDNODETOKEN }} -force
- cwd: /opt/so - cwd: /opt/so
- retry: - retry:
attempts: 3 attempts: 3
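This state now reinstalls the agent when the service is still registered but the binary is gone (e.g. someone manually deleted /opt/Elastic/Agent/), and passes -force so the installer overwrites the half-removed install. A rough shell equivalent of the new guard, assuming systemd and the paths used above (GRIDNODETOKEN stands in for the enrollment token the state templates in):

# Reinstall when the elastic-agent service or binary is missing
if ! systemctl list-unit-files | grep -q '^elastic-agent\.service' || [ ! -f /opt/Elastic/Agent/elastic-agent ]; then
    /opt/so/so-elastic-agent_linux_amd64 -token="$GRIDNODETOKEN" -force
fi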

salt/elasticfleet/ssl.sls Normal file

@@ -0,0 +1,186 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% from 'ca/map.jinja' import CA %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
{% if grains['role'] not in [ 'so-heavynode', 'so-receiver'] %}
# Start -- Elastic Fleet Host Cert
etc_elasticfleet_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-server.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-server.key') -%}
- prereq:
- x509: etc_elasticfleet_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-server.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-server.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
efperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- group: 939
chownelasticfleetcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Host Cert
{% endif %} # endif is for not including HeavyNodes & Receivers
# Start -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
etc_elasticfleet_agent_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-agent.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-agent.key') -%}
- prereq:
- x509: etc_elasticfleet_agent_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_agent_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-agent.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-agent.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-agent.key -topk8 -out /etc/pki/elasticfleet-agent.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleet_agent_key
efagentperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- group: 939
chownelasticfleetagentcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetagentkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
{% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone'] %}
elasticfleet_kafka_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-kafka.key') -%}
- prereq:
- x509: elasticfleet_kafka_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-kafka.crt
- ca_server: {{ CA.server }}
- signing_policy: kafka
- private_key: /etc/pki/elasticfleet-kafka.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_cert_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.crt
- mode: 640
- user: 947
- group: 939
elasticfleet_kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.key
- mode: 640
- user: 947
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
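This new state follows the usual Salt x509 rotation pattern: new: True regenerates the private key, but only as a prereq of the certificate state, so the key rolls exactly when the cert is about to be reissued, and days_remaining: 7 triggers reissue a week before expiry. A quick dry-run and expiry check on a manager — a sketch; salt-call must run as root:

# Preview what the state would change, then confirm the cert is good for at least 7 more days
salt-call state.apply elasticfleet.ssl test=True
openssl x509 -in /etc/pki/elasticfleet-server.crt -noout -checkend $((7*86400)) && echo "cert not due for renewal"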


@@ -14,7 +14,7 @@ if ! is_manager_node; then
fi fi
# Get current list of Grid Node Agents that need to be upgraded # Get current list of Grid Node Agents that need to be upgraded
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%20:%20%22{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%22%20and%20policy_id%20:%20%22so-grid-nodes_general%22&showInactive=false&getStatusSummary=true") RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%20:%20%22{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%22%20and%20policy_id%20:%20%22so-grid-nodes_general%22&showInactive=false&getStatusSummary=true" --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script # Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON")
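Here and in the following output/fleet-host scripts, curl gains --retry 3 --retry-delay 30 --fail, so a transient Kibana error no longer hands malformed JSON to the checksum check: with --fail a non-2xx response produces empty output and the guard below bails. A minimal sketch of the pattern, assuming the same curl.config auth file:

RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config --retry 3 --retry-delay 30 --fail \
    'http://localhost:5601/api/fleet/outputs/so-manager_elasticsearch' 2>/dev/null)
# Bail quietly unless the response parses and carries the expected id
jq -e '.item.id' <<< "$RAW_JSON" >/dev/null 2>&1 || exit 0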


@@ -26,7 +26,7 @@ function update_es_urls() {
} }
# Get current list of Fleet Elasticsearch URLs # Get current list of Fleet Elasticsearch URLs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_elasticsearch') RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_elasticsearch' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script # Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")


@@ -142,7 +142,7 @@ function update_kafka_outputs() {
{% if GLOBALS.pipeline == "KAFKA" %} {% if GLOBALS.pipeline == "KAFKA" %}
# Get current list of Kafka Outputs # Get current list of Kafka Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka') RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script # Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
@@ -168,7 +168,7 @@ function update_kafka_outputs() {
{# If global pipeline isn't set to KAFKA then assume default of REDIS / logstash #} {# If global pipeline isn't set to KAFKA then assume default of REDIS / logstash #}
{% else %} {% else %}
# Get current list of Logstash Outputs # Get current list of Logstash Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash') RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script # Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")


@@ -23,7 +23,7 @@ function update_fleet_urls() {
} }
# Get current list of Fleet Server URLs # Get current list of Fleet Server URLs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default') RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script # Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")


@@ -26,14 +26,14 @@ catrustscript:
GLOBALS: {{ GLOBALS }} GLOBALS: {{ GLOBALS }}
{% endif %} {% endif %}
cacertz: elasticsearch_cacerts:
file.managed: file.managed:
- name: /opt/so/conf/ca/cacerts - name: /opt/so/conf/ca/cacerts
- source: salt://elasticsearch/cacerts - source: salt://elasticsearch/cacerts
- user: 939 - user: 939
- group: 939 - group: 939
capemz: elasticsearch_capems:
file.managed: file.managed:
- name: /opt/so/conf/ca/tls-ca-bundle.pem - name: /opt/so/conf/ca/tls-ca-bundle.pem
- source: salt://elasticsearch/tls-ca-bundle.pem - source: salt://elasticsearch/tls-ca-bundle.pem


@@ -5,11 +5,6 @@
{% from 'allowed_states.map.jinja' import allowed_states %} {% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %} {% if sls.split('.')[0] in allowed_states %}
include:
- ssl
- elasticsearch.ca
{% from 'vars/globals.map.jinja' import GLOBALS %} {% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %} {% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}


@@ -14,6 +14,9 @@
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %} {% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
include: include:
- ca
- elasticsearch.ca
- elasticsearch.ssl
- elasticsearch.config - elasticsearch.config
- elasticsearch.sostatus - elasticsearch.sostatus
@@ -61,11 +64,7 @@ so-elasticsearch:
- /nsm/elasticsearch:/usr/share/elasticsearch/data:rw - /nsm/elasticsearch:/usr/share/elasticsearch/data:rw
- /opt/so/log/elasticsearch:/var/log/elasticsearch:rw - /opt/so/log/elasticsearch:/var/log/elasticsearch:rw
- /opt/so/conf/ca/cacerts:/usr/share/elasticsearch/jdk/lib/security/cacerts:ro - /opt/so/conf/ca/cacerts:/usr/share/elasticsearch/jdk/lib/security/cacerts:ro
{% if GLOBALS.is_manager %}
- /etc/pki/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro
{% else %}
- /etc/pki/tls/certs/intca.crt:/usr/share/elasticsearch/config/ca.crt:ro - /etc/pki/tls/certs/intca.crt:/usr/share/elasticsearch/config/ca.crt:ro
{% endif %}
- /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro - /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro
- /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro - /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro - /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
@@ -82,22 +81,21 @@ so-elasticsearch:
{% endfor %} {% endfor %}
{% endif %} {% endif %}
- watch: - watch:
- file: cacertz - file: trusttheca
- x509: elasticsearch_crt
- x509: elasticsearch_key
- file: elasticsearch_cacerts
- file: esyml - file: esyml
- require: - require:
- file: trusttheca
- x509: elasticsearch_crt
- x509: elasticsearch_key
- file: elasticsearch_cacerts
- file: esyml - file: esyml
- file: eslog4jfile - file: eslog4jfile
- file: nsmesdir - file: nsmesdir
- file: eslogdir - file: eslogdir
- file: cacertz
- x509: /etc/pki/elasticsearch.crt
- x509: /etc/pki/elasticsearch.key
- file: elasticp12perms - file: elasticp12perms
{% if GLOBALS.is_manager %}
- x509: pki_public_ca_crt
{% else %}
- x509: trusttheca
{% endif %}
- cmd: auth_users_roles_inode - cmd: auth_users_roles_inode
- cmd: auth_users_inode - cmd: auth_users_inode
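The container's watch/require lists now reference the x509-managed cert and key states directly instead of the old cacertz/pki_public_ca_crt IDs, and managers mount the same intca.crt as every other role rather than special-casing /etc/pki/ca.crt. To confirm the unified mount on a running node — a sketch using docker inspect:

# Show host-to-container bind mounts for the Elasticsearch container
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' so-elasticsearch | grep ca.crt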


@@ -0,0 +1,66 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
# Create a cert for elasticsearch
elasticsearch_key:
x509.private_key_managed:
- name: /etc/pki/elasticsearch.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticsearch.key') -%}
- prereq:
- x509: /etc/pki/elasticsearch.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticsearch_crt:
x509.certificate_managed:
- name: /etc/pki/elasticsearch.crt
- ca_server: {{ CA.server }}
- signing_policy: registry
- private_key: /etc/pki/elasticsearch.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/elasticsearch.key -in /etc/pki/elasticsearch.crt -export -out /etc/pki/elasticsearch.p12 -nodes -passout pass:"
- onchanges:
- x509: /etc/pki/elasticsearch.key
elastickeyperms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.key
- mode: 640
- group: 930
elasticp12perms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.p12
- mode: 640
- group: 930
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
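The pkcs12 export runs only onchanges of the key, so the bundle keeps pace with key rotation. Verifying the resulting keystore — a sketch; the state exports it with an empty passphrase:

# Check the PKCS#12 bundle is readable with the empty password set above
openssl pkcs12 -in /etc/pki/elasticsearch.p12 -passin pass: -noout && echo "p12 OK"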


@@ -14,8 +14,9 @@ set -e
# Check to see if we have extracted the ca cert. # Check to see if we have extracted the ca cert.
if [ ! -f /opt/so/saltstack/local/salt/elasticsearch/cacerts ]; then if [ ! -f /opt/so/saltstack/local/salt/elasticsearch/cacerts ]; then
docker run -v /etc/pki/ca.crt:/etc/ssl/ca.crt --name so-elasticsearchca --user root --entrypoint jdk/bin/keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elasticsearch:$ELASTIC_AGENT_TARBALL_VERSION -keystore /usr/share/elasticsearch/jdk/lib/security/cacerts -alias SOSCA -import -file /etc/ssl/ca.crt -storepass changeit -noprompt docker run -v /etc/pki/ca.crt:/etc/ssl/ca.crt --name so-elasticsearchca --user root --entrypoint jdk/bin/keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elasticsearch:$ELASTIC_AGENT_TARBALL_VERSION -keystore /usr/share/elasticsearch/jdk/lib/security/cacerts -alias SOSCA -import -file /etc/ssl/ca.crt -storepass changeit -noprompt
docker cp so-elasticsearchca:/usr/share/elasticsearch/jdk/lib/security/cacerts /opt/so/saltstack/local/salt/elasticsearch/cacerts # Make sure symbolic links are followed when copying from container
docker cp so-elasticsearchca:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem docker cp -L so-elasticsearchca:/usr/share/elasticsearch/jdk/lib/security/cacerts /opt/so/saltstack/local/salt/elasticsearch/cacerts
docker cp -L so-elasticsearchca:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
docker rm so-elasticsearchca docker rm so-elasticsearchca
echo "" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem echo "" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
echo "sosca" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem echo "sosca" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem


@@ -121,7 +121,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
echo "Loading Security Onion index templates..." echo "Loading Security Onion index templates..."
shopt -s extglob shopt -s extglob
{% if GLOBALS.role == 'so-heavynode' %} {% if GLOBALS.role == 'so-heavynode' %}
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*)" pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*|*endpoint*|*elasticsearch*|*generic*|*fleet_server*|*soc*)"
{% else %} {% else %}
pattern="*" pattern="*"
{% endif %} {% endif %}
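With extglob enabled, !(A|B|...) matches every name that does not match any alternative, so the heavynode branch now also skips the endpoint, elasticsearch, generic, fleet_server, and soc templates. A tiny illustration:

shopt -s extglob
# Lists everything except names containing "system" or "windows"
ls !(*system*|*windows*)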


@@ -9,7 +9,6 @@
include: include:
- salt.minion - salt.minion
- ssl
# Influx DB # Influx DB
influxconfdir: influxconfdir:


@@ -11,6 +11,7 @@
{% set TOKEN = salt['pillar.get']('influxdb:token') %} {% set TOKEN = salt['pillar.get']('influxdb:token') %}
include: include:
- influxdb.ssl
- influxdb.config - influxdb.config
- influxdb.sostatus - influxdb.sostatus
@@ -59,6 +60,8 @@ so-influxdb:
{% endif %} {% endif %}
- watch: - watch:
- file: influxdbconf - file: influxdbconf
- x509: influxdb_key
- x509: influxdb_crt
- require: - require:
- file: influxdbconf - file: influxdbconf
- x509: influxdb_key - x509: influxdb_key

salt/influxdb/ssl.sls Normal file

@@ -0,0 +1,55 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
influxdb_key:
x509.private_key_managed:
- name: /etc/pki/influxdb.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/influxdb.key') -%}
- prereq:
- x509: /etc/pki/influxdb.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Create a cert for talking to influxdb
influxdb_crt:
x509.certificate_managed:
- name: /etc/pki/influxdb.crt
- ca_server: {{ CA.server }}
- signing_policy: influxdb
- private_key: /etc/pki/influxdb.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
influxkeyperms:
file.managed:
- replace: False
- name: /etc/pki/influxdb.key
- mode: 640
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
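Same key/cert rotation pattern as the other new ssl.sls states, signed by the grid CA under the influxdb policy. A quick trust check — a sketch; on non-manager nodes the distributed CA lives at the intca.crt path mounted elsewhere in this changeset, while on a manager it is /etc/pki/ca.crt:

openssl verify -CAfile /etc/pki/tls/certs/intca.crt /etc/pki/influxdb.crt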


@@ -68,6 +68,8 @@ so-kafka:
- file: kafka_server_jaas_properties - file: kafka_server_jaas_properties
{% endif %} {% endif %}
- file: kafkacertz - file: kafkacertz
- x509: kafka_crt
- file: kafka_pkcs12_perms
- require: - require:
- file: kafkacertz - file: kafkacertz
@@ -95,4 +97,4 @@ include:
test.fail_without_changes: test.fail_without_changes:
- name: {{sls}}_state_not_allowed - name: {{sls}}_state_not_allowed
{% endif %} {% endif %}


@@ -6,22 +6,13 @@
{% from 'allowed_states.map.jinja' import allowed_states %} {% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %} {% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %} {% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
{% set kafka_password = salt['pillar.get']('kafka:config:password') %} {% set kafka_password = salt['pillar.get']('kafka:config:password') %}
include: include:
- ca.dirs - ca
{% set global_ca_server = [] %}
{% set x509dict = salt['mine.get'](GLOBALS.manager | lower~'*', 'x509.get_pem_entries') %}
{% for host in x509dict %}
{% if 'manager' in host.split('_')|last or host.split('_')|last == 'standalone' %}
{% do global_ca_server.append(host) %}
{% endif %}
{% endfor %}
{% set ca_server = global_ca_server[0] %}
{% if GLOBALS.pipeline == "KAFKA" %} {% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
kafka_client_key: kafka_client_key:
x509.private_key_managed: x509.private_key_managed:
- name: /etc/pki/kafka-client.key - name: /etc/pki/kafka-client.key
@@ -39,12 +30,12 @@ kafka_client_key:
kafka_client_crt: kafka_client_crt:
x509.certificate_managed: x509.certificate_managed:
- name: /etc/pki/kafka-client.crt - name: /etc/pki/kafka-client.crt
- ca_server: {{ ca_server }} - ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }} - subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka - signing_policy: kafka
- private_key: /etc/pki/kafka-client.key - private_key: /etc/pki/kafka-client.key
- CN: {{ GLOBALS.hostname }} - CN: {{ GLOBALS.hostname }}
- days_remaining: 0 - days_remaining: 7
- days_valid: 820 - days_valid: 820
- backup: True - backup: True
- timeout: 30 - timeout: 30
@@ -67,9 +58,9 @@ kafka_client_crt_perms:
- mode: 640 - mode: 640
- user: 960 - user: 960
- group: 939 - group: 939
{% endif %} {% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch','so-receiver', 'so-standalone'] %} {% if GLOBALS.role in ['so-manager', 'so-managersearch','so-receiver', 'so-standalone'] %}
kafka_key: kafka_key:
x509.private_key_managed: x509.private_key_managed:
- name: /etc/pki/kafka.key - name: /etc/pki/kafka.key
@@ -87,12 +78,12 @@ kafka_key:
kafka_crt: kafka_crt:
x509.certificate_managed: x509.certificate_managed:
- name: /etc/pki/kafka.crt - name: /etc/pki/kafka.crt
- ca_server: {{ ca_server }} - ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }} - subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka - signing_policy: kafka
- private_key: /etc/pki/kafka.key - private_key: /etc/pki/kafka.key
- CN: {{ GLOBALS.hostname }} - CN: {{ GLOBALS.hostname }}
- days_remaining: 0 - days_remaining: 7
- days_valid: 820 - days_valid: 820
- backup: True - backup: True
- timeout: 30 - timeout: 30
@@ -103,6 +94,7 @@ kafka_crt:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka.key -in /etc/pki/kafka.crt -export -out /etc/pki/kafka.p12 -nodes -passout pass:{{ kafka_password }}" - name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka.key -in /etc/pki/kafka.crt -export -out /etc/pki/kafka.p12 -nodes -passout pass:{{ kafka_password }}"
- onchanges: - onchanges:
- x509: /etc/pki/kafka.key - x509: /etc/pki/kafka.key
kafka_key_perms: kafka_key_perms:
file.managed: file.managed:
- replace: False - replace: False
@@ -126,11 +118,11 @@ kafka_pkcs12_perms:
- mode: 640 - mode: 640
- user: 960 - user: 960
- group: 939 - group: 939
{% endif %} {% endif %}
# Standalone needs kafka-logstash for automated testing. Searchnode/manager search need it for logstash to consume from Kafka. # Standalone needs kafka-logstash for automated testing. Searchnode/manager search need it for logstash to consume from Kafka.
# Manager will have cert, but be unused until a pipeline is created and logstash enabled. # Manager will have cert, but be unused until a pipeline is created and logstash enabled.
{% if GLOBALS.role in ['so-standalone', 'so-managersearch', 'so-searchnode', 'so-manager'] %} {% if GLOBALS.role in ['so-standalone', 'so-managersearch', 'so-searchnode', 'so-manager'] %}
kafka_logstash_key: kafka_logstash_key:
x509.private_key_managed: x509.private_key_managed:
- name: /etc/pki/kafka-logstash.key - name: /etc/pki/kafka-logstash.key
@@ -148,12 +140,12 @@ kafka_logstash_key:
kafka_logstash_crt: kafka_logstash_crt:
x509.certificate_managed: x509.certificate_managed:
- name: /etc/pki/kafka-logstash.crt - name: /etc/pki/kafka-logstash.crt
- ca_server: {{ ca_server }} - ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }} - subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka - signing_policy: kafka
- private_key: /etc/pki/kafka-logstash.key - private_key: /etc/pki/kafka-logstash.key
- CN: {{ GLOBALS.hostname }} - CN: {{ GLOBALS.hostname }}
- days_remaining: 0 - days_remaining: 7
- days_valid: 820 - days_valid: 820
- backup: True - backup: True
- timeout: 30 - timeout: 30
@@ -189,7 +181,6 @@ kafka_logstash_pkcs12_perms:
- user: 931 - user: 931
- group: 939 - group: 939
{% endif %}
{% endif %} {% endif %}
{% else %} {% else %}
@@ -198,4 +189,4 @@ kafka_logstash_pkcs12_perms:
test.fail_without_changes: test.fail_without_changes:
- name: {{sls}}_state_not_allowed - name: {{sls}}_state_not_allowed
{% endif %} {% endif %}
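Two real fixes ride along with the cleanup here: days_remaining moves from 0 (never renew) to 7 (renew within a week of expiry), and the hand-rolled mine.get loop for locating the CA host is replaced by the shared ca/map.jinja import. Inspecting the regenerated keystore — a sketch; kafka_password comes from the kafka:config:password pillar:

keytool -list -keystore /etc/pki/kafka.p12 -storetype PKCS12 -storepass "$kafka_password"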


@@ -25,11 +25,10 @@ kibana:
discardCorruptObjects: "8.18.8" discardCorruptObjects: "8.18.8"
telemetry: telemetry:
enabled: False enabled: False
security:
showInsecureClusterWarning: False
xpack: xpack:
security: security:
secureCookies: true secureCookies: true
showInsecureClusterWarning: false
reporting: reporting:
kibanaServer: kibanaServer:
hostname: localhost hostname: localhost


@@ -10,11 +10,10 @@
{% from 'logstash/map.jinja' import LOGSTASH_MERGED %} {% from 'logstash/map.jinja' import LOGSTASH_MERGED %}
{% set ASSIGNED_PIPELINES = LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %} {% set ASSIGNED_PIPELINES = LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %}
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
include: include:
- ssl
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
- elasticsearch - elasticsearch
{% endif %} {% endif %}
# Create the logstash group # Create the logstash group
logstashgroup: logstashgroup:


@@ -12,6 +12,7 @@
{% set lsheap = LOGSTASH_MERGED.settings.lsheap %} {% set lsheap = LOGSTASH_MERGED.settings.lsheap %}
include: include:
- ca
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %} {% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
- elasticsearch.ca - elasticsearch.ca
{% endif %} {% endif %}
@@ -20,9 +21,9 @@ include:
- kafka.ca - kafka.ca
- kafka.ssl - kafka.ssl
{% endif %} {% endif %}
- logstash.ssl
- logstash.config - logstash.config
- logstash.sostatus - logstash.sostatus
- ssl
so-logstash: so-logstash:
docker_container.running: docker_container.running:
@@ -65,22 +66,18 @@ so-logstash:
- /opt/so/log/logstash:/var/log/logstash:rw - /opt/so/log/logstash:/var/log/logstash:rw
- /sys/fs/cgroup:/sys/fs/cgroup:ro - /sys/fs/cgroup:/sys/fs/cgroup:ro
- /opt/so/conf/logstash/etc/certs:/usr/share/logstash/certs:ro - /opt/so/conf/logstash/etc/certs:/usr/share/logstash/certs:ro
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-receiver'] %} - /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
- /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
- /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %} {% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
- /etc/pki/elasticfleet-logstash.crt:/usr/share/logstash/elasticfleet-logstash.crt:ro - /etc/pki/elasticfleet-logstash.crt:/usr/share/logstash/elasticfleet-logstash.crt:ro
- /etc/pki/elasticfleet-logstash.key:/usr/share/logstash/elasticfleet-logstash.key:ro - /etc/pki/elasticfleet-logstash.key:/usr/share/logstash/elasticfleet-logstash.key:ro
- /etc/pki/elasticfleet-lumberjack.crt:/usr/share/logstash/elasticfleet-lumberjack.crt:ro - /etc/pki/elasticfleet-lumberjack.crt:/usr/share/logstash/elasticfleet-lumberjack.crt:ro
- /etc/pki/elasticfleet-lumberjack.key:/usr/share/logstash/elasticfleet-lumberjack.key:ro - /etc/pki/elasticfleet-lumberjack.key:/usr/share/logstash/elasticfleet-lumberjack.key:ro
{% if GLOBALS.role != 'so-fleet' %}
- /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
- /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
{% endif %}
{% endif %} {% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %} {% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
- /etc/pki/ca.crt:/usr/share/filebeat/ca.crt:ro
{% else %}
- /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
{% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode' ] %}
- /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro - /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
- /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro - /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
{% endif %} {% endif %}
@@ -100,11 +97,22 @@ so-logstash:
{% endfor %} {% endfor %}
{% endif %} {% endif %}
- watch: - watch:
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-receiver'] %}
- x509: etc_elasticfleet_logstash_key
- x509: etc_elasticfleet_logstash_crt
{% endif %}
- file: lsetcsync - file: lsetcsync
- file: trusttheca
{% if GLOBALS.is_manager %}
- file: elasticsearch_cacerts
- file: elasticsearch_capems
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
- x509: etc_elasticfleet_logstash_crt
- x509: etc_elasticfleet_logstash_key
- x509: etc_elasticfleetlumberjack_crt
- x509: etc_elasticfleetlumberjack_key
{% if GLOBALS.role != 'so-fleet' %}
- x509: etc_filebeat_crt
- file: logstash_filebeat_p8
{% endif %}
{% endif %}
{% for assigned_pipeline in LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %} {% for assigned_pipeline in LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %}
- file: ls_pipeline_{{assigned_pipeline}} - file: ls_pipeline_{{assigned_pipeline}}
{% for CONFIGFILE in LOGSTASH_MERGED.defined_pipelines[assigned_pipeline] %} {% for CONFIGFILE in LOGSTASH_MERGED.defined_pipelines[assigned_pipeline] %}
@@ -115,17 +123,20 @@ so-logstash:
- file: kafkacertz - file: kafkacertz
{% endif %} {% endif %}
- require: - require:
{% if grains['role'] in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-receiver'] %} - file: trusttheca
{% if GLOBALS.is_manager %}
- file: elasticsearch_cacerts
- file: elasticsearch_capems
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
- x509: etc_elasticfleet_logstash_crt
- x509: etc_elasticfleet_logstash_key
- x509: etc_elasticfleetlumberjack_crt
- x509: etc_elasticfleetlumberjack_key
{% if GLOBALS.role != 'so-fleet' %}
- x509: etc_filebeat_crt - x509: etc_filebeat_crt
{% endif %} - file: logstash_filebeat_p8
{% if grains['role'] in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %} {% endif %}
- x509: pki_public_ca_crt
{% else %}
- x509: trusttheca
{% endif %}
{% if grains.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %}
- file: cacertz
- file: capemz
{% endif %} {% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-searchnode'] %} {% if GLOBALS.pipeline == 'KAFKA' and GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-searchnode'] %}
- file: kafkacertz - file: kafkacertz
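The mount logic collapses to capability-based checks (GLOBALS.is_manager plus role lists), and the watch/require lists now reference the x509 state IDs from the new logstash.ssl state, so a cert renewal restarts the container. Spot-checking the filebeat client cert and key inside a running container — a sketch for non-fleet roles:

docker exec so-logstash ls -l /usr/share/logstash/filebeat.crt /usr/share/logstash/filebeat.key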

salt/logstash/ssl.sls Normal file

@@ -0,0 +1,287 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states or sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% from 'ca/map.jinja' import CA %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
{% if grains['role'] not in [ 'so-heavynode'] %}
# Start -- Elastic Fleet Logstash Input Cert
etc_elasticfleet_logstash_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-logstash.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-logstash.key') -%}
- prereq:
- x509: etc_elasticfleet_logstash_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_logstash_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-logstash.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-logstash.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-logstash.key -topk8 -out /etc/pki/elasticfleet-logstash.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleet_logstash_key
eflogstashperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.key
- mode: 640
- group: 939
chownelasticfleetlogstashcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.crt
- mode: 640
- user: 931
- group: 939
chownelasticfleetlogstashkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.key
- mode: 640
- user: 931
- group: 939
# End -- Elastic Fleet Logstash Input Cert
{% endif %} # endif is for not including HeavyNodes
# Start -- Elastic Fleet Node - Logstash Lumberjack Input / Output
# Cert needed on: Managers, Receivers
etc_elasticfleetlumberjack_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-lumberjack.key
- bits: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-lumberjack.key') -%}
- prereq:
- x509: etc_elasticfleetlumberjack_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleetlumberjack_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-lumberjack.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-lumberjack.key
- CN: {{ GLOBALS.node_ip }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-lumberjack.key -topk8 -out /etc/pki/elasticfleet-lumberjack.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleetlumberjack_key
eflogstashlumberjackperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.key
- mode: 640
- group: 939
chownilogstashelasticfleetlumberjackp8:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.p8
- mode: 640
- user: 931
- group: 939
chownilogstashelasticfleetlogstashlumberjackcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.crt
- mode: 640
- user: 931
- group: 939
chownilogstashelasticfleetlogstashlumberjackkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.key
- mode: 640
- user: 931
- group: 939
# End -- Elastic Fleet Node - Logstash Lumberjack Input / Output
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-receiver'] %}
etc_filebeat_key:
x509.private_key_managed:
- name: /etc/pki/filebeat.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/filebeat.key') -%}
- prereq:
- x509: etc_filebeat_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Request a cert and drop it where it needs to go to be distributed
etc_filebeat_crt:
x509.certificate_managed:
- name: /etc/pki/filebeat.crt
- ca_server: {{ CA.server }}
- signing_policy: filebeat
- private_key: /etc/pki/filebeat.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/filebeat.key -topk8 -out /etc/pki/filebeat.p8 -nocrypt"
- onchanges:
- x509: etc_filebeat_key
fbperms:
file.managed:
- replace: False
- name: /etc/pki/filebeat.key
- mode: 640
- group: 939
logstash_filebeat_p8:
file.managed:
- replace: False
- name: /etc/pki/filebeat.p8
- mode: 640
- user: 931
- group: 939
{% if grains.role not in ['so-heavynode', 'so-receiver'] %}
# Create symlinks to the keys so they can be distributed to all the things
filebeatdir:
file.directory:
- name: /opt/so/saltstack/local/salt/filebeat/files
- makedirs: True
fbkeylink:
file.symlink:
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.p8
- target: /etc/pki/filebeat.p8
- user: socore
- group: socore
fbcrtlink:
file.symlink:
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.crt
- target: /etc/pki/filebeat.crt
- user: socore
- group: socore
{% endif %}
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-sensor', 'so-searchnode', 'so-heavynode', 'so-fleet', 'so-idh', 'so-receiver'] %}
fbcertdir:
file.directory:
- name: /opt/so/conf/filebeat/etc/pki
- makedirs: True
conf_filebeat_key:
x509.private_key_managed:
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/opt/so/conf/filebeat/etc/pki/filebeat.key') -%}
- prereq:
- x509: conf_filebeat_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Request a cert and drop it where it needs to go to be distributed
conf_filebeat_crt:
x509.certificate_managed:
- name: /opt/so/conf/filebeat/etc/pki/filebeat.crt
- ca_server: {{ CA.server }}
- signing_policy: filebeat
- private_key: /opt/so/conf/filebeat/etc/pki/filebeat.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
# Convert the key to pkcs#8 so logstash will work correctly.
filebeatpkcs:
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /opt/so/conf/filebeat/etc/pki/filebeat.key -topk8 -out /opt/so/conf/filebeat/etc/pki/filebeat.p8 -passout pass:"
- onchanges:
- x509: conf_filebeat_key
filebeatkeyperms:
file.managed:
- replace: False
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
- mode: 640
- group: 939
chownfilebeatp8:
file.managed:
- replace: False
- name: /opt/so/conf/filebeat/etc/pki/filebeat.p8
- mode: 640
- user: 931
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
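Logstash needs its TLS keys in PKCS#8, hence the openssl pkcs8 -topk8 conversions that run whenever a key is (re)generated. Confirming a converted key parses — a sketch:

# PKCS#8 conversion as run by the state, then a parse check
openssl pkcs8 -topk8 -nocrypt -in /etc/pki/filebeat.key -out /etc/pki/filebeat.p8
openssl pkey -in /etc/pki/filebeat.p8 -noout && echo "PKCS#8 key OK"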


@@ -1,3 +1,8 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
elastic_curl_config_distributed: elastic_curl_config_distributed:
file.managed: file.managed:
- name: /opt/so/saltstack/local/salt/elasticsearch/curl.config - name: /opt/so/saltstack/local/salt/elasticsearch/curl.config


@@ -1,3 +1,8 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
kibana_curl_config_distributed: kibana_curl_config_distributed:
file.managed: file.managed:
- name: /opt/so/conf/kibana/curl.config - name: /opt/so/conf/kibana/curl.config
@@ -5,4 +10,4 @@ kibana_curl_config_distributed:
- template: jinja - template: jinja
- mode: 600 - mode: 600
- show_changes: False - show_changes: False
- makedirs: True - makedirs: True


@@ -1,3 +1,8 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
include: include:
- elasticsearch.auth - elasticsearch.auth
- kratos - kratos


@@ -716,6 +716,18 @@ function checkMine() {
} }
} }
function create_ca_pillar() {
local capillar=/opt/so/saltstack/local/pillar/ca/init.sls
printf '%s\n'\
"ca:"\
" server: $MINION_ID"\
" " > $capillar
if [ $? -ne 0 ]; then
log "ERROR" "Failed to add $MINION_ID to $capillar"
return 1
fi
}
function createEVAL() { function createEVAL() {
log "INFO" "Creating EVAL configuration for minion $MINION_ID" log "INFO" "Creating EVAL configuration for minion $MINION_ID"
is_pcaplimit=true is_pcaplimit=true
@@ -1013,6 +1025,7 @@ function setupMinionFiles() {
managers=("EVAL" "STANDALONE" "IMPORT" "MANAGER" "MANAGERSEARCH") managers=("EVAL" "STANDALONE" "IMPORT" "MANAGER" "MANAGERSEARCH")
if echo "${managers[@]}" | grep -qw "$NODETYPE"; then if echo "${managers[@]}" | grep -qw "$NODETYPE"; then
add_sensoroni_with_analyze_to_minion || return 1 add_sensoroni_with_analyze_to_minion || return 1
create_ca_pillar || return 1
else else
add_sensoroni_to_minion || return 1 add_sensoroni_to_minion || return 1
fi fi
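On manager-class nodes, setup now records which minion acts as the grid CA. The function boils down to writing a two-line pillar — roughly equivalent to the following, with MINION_ID standing in for the manager's minion id:

cat > /opt/so/saltstack/local/pillar/ca/init.sls <<EOF
ca:
  server: $MINION_ID
EOF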


@@ -325,6 +325,19 @@ clone_to_tmp() {
fi fi
} }
# there is a function like this in so-minion, but we cannot source it since so-minion requires arguments
create_ca_pillar() {
local ca_pillar_dir="/opt/so/saltstack/local/pillar/ca"
local ca_pillar_file="${ca_pillar_dir}/init.sls"
echo "Updating CA pillar configuration"
mkdir -p "$ca_pillar_dir"
echo "ca: {}" > "$ca_pillar_file"
so-yaml.py add "$ca_pillar_file" ca.server "$MINIONID"
chown -R socore:socore "$ca_pillar_dir"
}
disable_logstash_heavynodes() { disable_logstash_heavynodes() {
c=0 c=0
printf "\nChecking for heavynodes and disabling Logstash if they exist\n" printf "\nChecking for heavynodes and disabling Logstash if they exist\n"
@@ -368,7 +381,6 @@ masterlock() {
echo "base:" > $TOPFILE echo "base:" > $TOPFILE
echo " $MINIONID:" >> $TOPFILE echo " $MINIONID:" >> $TOPFILE
echo " - ca" >> $TOPFILE echo " - ca" >> $TOPFILE
echo " - ssl" >> $TOPFILE
echo " - elasticsearch" >> $TOPFILE echo " - elasticsearch" >> $TOPFILE
} }
@@ -951,6 +963,7 @@ up_to_2.4.201() {
up_to_2.4.210() { up_to_2.4.210() {
# Elastic Update for this release, so download Elastic Agent files # Elastic Update for this release, so download Elastic Agent files
determine_elastic_agent_upgrade determine_elastic_agent_upgrade
create_ca_pillar
INSTALLEDVERSION=2.4.210 INSTALLEDVERSION=2.4.210
} }
@@ -1715,11 +1728,20 @@ verify_es_version_compatibility() {
return 0 return 0
else else
compatible_versions=${es_upgrade_map[$es_version]} compatible_versions=${es_upgrade_map[$es_version]}
next_step_so_version=${es_to_so_version[${compatible_versions##* }]} if [[ -z "$compatible_versions" ]]; then
echo -e "\n##############################################################################################################################\n" # If current ES version is not explicitly defined in the upgrade map, we know they have an intermediate upgrade to do.
echo -e "You are currently running Security Onion $INSTALLEDVERSION. You will need to update to version $next_step_so_version before updating to $(cat $UPDATE_DIR/VERSION).\n" # We default to the lowest ES version defined in es_to_so_version as $first_es_required_version
local first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
next_step_so_version=${es_to_so_version[$first_es_required_version]}
required_es_upgrade_version="$first_es_required_version"
else
next_step_so_version=${es_to_so_version[${compatible_versions##* }]}
required_es_upgrade_version="${compatible_versions##* }"
fi
echo -e "\n##############################################################################################################################\n"
echo -e "You are currently running Security Onion $INSTALLEDVERSION. You will need to update to version $next_step_so_version before updating to $(cat $UPDATE_DIR/VERSION).\n"
echo "${compatible_versions##* }" > "$es_required_version_statefile" echo "$required_es_upgrade_version" > "$es_required_version_statefile"
# We expect to upgrade to the latest compatible minor version of ES # We expect to upgrade to the latest compatible minor version of ES
create_intermediate_upgrade_verification_script $es_verification_script create_intermediate_upgrade_verification_script $es_verification_script
@@ -1742,8 +1764,8 @@ verify_es_version_compatibility() {
echo -e "\n##############################################################################################################################\n" echo -e "\n##############################################################################################################################\n"
exec bash -c "BRANCH=$next_step_so_version soup -y && BRANCH=$next_step_so_version soup -y && \ exec bash -c "BRANCH=$next_step_so_version soup -y && BRANCH=$next_step_so_version soup -y && \
echo -e \"\n##############################################################################################################################\n\" && \ echo -e \"\n##############################################################################################################################\n\" && \
echo -e \"Verifying Elasticsearch was successfully upgraded to ${compatible_versions##* } across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n\" \ echo -e \"Verifying Elasticsearch was successfully upgraded to $required_es_upgrade_version across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n\" \
&& timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh ${compatible_versions##* } $es_required_version_statefile && \ && timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh $required_es_upgrade_version $es_required_version_statefile && \
echo -e \"\n##############################################################################################################################\n\" \ echo -e \"\n##############################################################################################################################\n\" \
&& BRANCH=$originally_requested_so_version soup -y && BRANCH=$originally_requested_so_version soup -y" && BRANCH=$originally_requested_so_version soup -y && BRANCH=$originally_requested_so_version soup -y"
fi fi
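Previously, an ES version missing from es_upgrade_map crashed the lookup; now the script falls back to the lowest ES version tracked in es_to_so_version and routes the grid through that intermediate upgrade first. The fallback in isolation — a sketch with made-up version entries, since the real maps live in soup:

declare -A es_to_so_version=( ["8.14.3"]="2.4.100" ["8.18.8"]="2.4.160" )   # illustrative entries only
first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
echo "fallback: upgrade ES to $first_es_required_version via SO ${es_to_so_version[$first_es_required_version]}"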
@@ -1909,7 +1931,7 @@ apply_hotfix() {
mv /etc/pki/managerssl.crt /etc/pki/managerssl.crt.old mv /etc/pki/managerssl.crt /etc/pki/managerssl.crt.old
mv /etc/pki/managerssl.key /etc/pki/managerssl.key.old mv /etc/pki/managerssl.key /etc/pki/managerssl.key.old
systemctl_func "start" "salt-minion" systemctl_func "start" "salt-minion"
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG" (wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
fi fi
else else
echo "No actions required. ($INSTALLEDVERSION/$HOTFIXVERSION)" echo "No actions required. ($INSTALLEDVERSION/$HOTFIXVERSION)"
@@ -2108,7 +2130,7 @@ main() {
echo "" echo ""
echo "Running a highstate. This could take several minutes." echo "Running a highstate. This could take several minutes."
set +e set +e
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG" (wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
highstate highstate
set -e set -e
@@ -2121,7 +2143,7 @@ main() {
check_saltmaster_status check_saltmaster_status
echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes." echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG" (wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
# Stop long-running scripts to allow potentially updated scripts to load on the next execution. # Stop long-running scripts to allow potentially updated scripts to load on the next execution.
killall salt-relay.sh killall salt-relay.sh


@@ -6,9 +6,6 @@
{% from 'allowed_states.map.jinja' import allowed_states %} {% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %} {% if sls.split('.')[0] in allowed_states %}
include:
- ssl
# Drop the correct nginx config based on role # Drop the correct nginx config based on role
nginxconfdir: nginxconfdir:
file.directory: file.directory:


@@ -8,81 +8,14 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 {% from 'docker/docker.map.jinja' import DOCKER %}
 {% from 'nginx/map.jinja' import NGINXMERGED %}
-{% set ca_server = GLOBALS.minion_id %}
 
 include:
+  - nginx.ssl
   - nginx.config
   - nginx.sostatus
 
-{% if grains.role not in ['so-fleet'] %}
-{# if the user has selected to replace the crt and key in the ui #}
-{% if NGINXMERGED.ssl.replace_cert %}
-managerssl_key:
-  file.managed:
-    - name: /etc/pki/managerssl.key
-    - source: salt://nginx/ssl/ssl.key
-    - mode: 640
-    - group: 939
-    - watch_in:
-      - docker_container: so-nginx
-
-managerssl_crt:
-  file.managed:
-    - name: /etc/pki/managerssl.crt
-    - source: salt://nginx/ssl/ssl.crt
-    - mode: 644
-    - watch_in:
-      - docker_container: so-nginx
-{% else %}
-managerssl_key:
-  x509.private_key_managed:
-    - name: /etc/pki/managerssl.key
-    - keysize: 4096
-    - backup: True
-    - new: True
-    {% if salt['file.file_exists']('/etc/pki/managerssl.key') -%}
-    - prereq:
-      - x509: /etc/pki/managerssl.crt
-    {%- endif %}
-    - retry:
-        attempts: 5
-        interval: 30
-    - watch_in:
-      - docker_container: so-nginx
-
-# Create a cert for the reverse proxy
-managerssl_crt:
-  x509.certificate_managed:
-    - name: /etc/pki/managerssl.crt
-    - ca_server: {{ ca_server }}
-    - signing_policy: managerssl
-    - private_key: /etc/pki/managerssl.key
-    - CN: {{ GLOBALS.hostname }}
-    - subjectAltName: "DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}, DNS:{{ GLOBALS.url_base }}"
-    - days_remaining: 0
-    - days_valid: 820
-    - backup: True
-    - timeout: 30
-    - retry:
-        attempts: 5
-        interval: 30
-    - watch_in:
-      - docker_container: so-nginx
-{% endif %}
-
-msslkeyperms:
-  file.managed:
-    - replace: False
-    - name: /etc/pki/managerssl.key
-    - mode: 640
-    - group: 939
+{% if GLOBALS.role != 'so-fleet' %}
+{% set container_config = 'so-nginx' %}
 
 make-rule-dir-nginx:
   file.directory:
     - name: /nsm/rules
@@ -92,15 +25,11 @@ make-rule-dir-nginx:
       - user
      - group
     - show_changes: False
-{% endif %}
-
-{# if this is an so-fleet node then we want to use the port bindings, custom bind mounts defined for fleet #}
-{% if GLOBALS.role == 'so-fleet' %}
-{% set container_config = 'so-nginx-fleet-node' %}
-{% else %}
-{% set container_config = 'so-nginx' %}
-{% endif %}
+{% else %}
+{# if this is an so-fleet node then we want to use the port bindings, custom bind mounts defined for fleet #}
+{% set container_config = 'so-nginx-fleet-node' %}
+{% endif %}
 
 so-nginx:
   docker_container.running:
@@ -154,18 +83,27 @@ so-nginx:
     - watch:
       - file: nginxconf
       - file: nginxconfdir
-    - require:
-      - file: nginxconf
 {% if GLOBALS.is_manager %}
 {% if NGINXMERGED.ssl.replace_cert %}
       - file: managerssl_key
       - file: managerssl_crt
 {% else %}
       - x509: managerssl_key
       - x509: managerssl_crt
 {% endif%}
+{% endif %}
+    - require:
+      - file: nginxconf
+{% if GLOBALS.is_manager %}
+{% if NGINXMERGED.ssl.replace_cert %}
+      - file: managerssl_key
+      - file: managerssl_crt
+{% else %}
+      - x509: managerssl_key
+      - x509: managerssl_crt
+{% endif%}
       - file: navigatorconfig
 {% endif %}
 
 delete_so-nginx_so-status.disabled:
   file.uncomment:

salt/nginx/ssl.sls (new file, 87 lines)

@@ -0,0 +1,87 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'nginx/map.jinja' import NGINXMERGED %}
{% from 'ca/map.jinja' import CA %}
{% if GLOBALS.role != 'so-fleet' %}
{# if the user has selected to replace the crt and key in the ui #}
{% if NGINXMERGED.ssl.replace_cert %}
managerssl_key:
file.managed:
- name: /etc/pki/managerssl.key
- source: salt://nginx/ssl/ssl.key
- mode: 640
- group: 939
- watch_in:
- docker_container: so-nginx
managerssl_crt:
file.managed:
- name: /etc/pki/managerssl.crt
- source: salt://nginx/ssl/ssl.crt
- mode: 644
- watch_in:
- docker_container: so-nginx
{% else %}
managerssl_key:
x509.private_key_managed:
- name: /etc/pki/managerssl.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/managerssl.key') -%}
- prereq:
- x509: /etc/pki/managerssl.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx
# Create a cert for the reverse proxy
managerssl_crt:
x509.certificate_managed:
- name: /etc/pki/managerssl.crt
- ca_server: {{ CA.server }}
- signing_policy: managerssl
- private_key: /etc/pki/managerssl.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: "DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}, DNS:{{ GLOBALS.url_base }}"
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx
{% endif %}
msslkeyperms:
file.managed:
- replace: False
- name: /etc/pki/managerssl.key
- mode: 640
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
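After this state runs on a manager, the resulting key/cert pair can be sanity-checked with stock openssl; these commands are illustrative and not part of the changeset:

  # Cert should verify against the internal CA that signed it
  openssl verify -CAfile /etc/pki/tls/certs/intca.crt /etc/pki/managerssl.crt
  # Key and cert moduli should hash identically if they are a matched pair
  openssl x509 -noout -modulus -in /etc/pki/managerssl.crt | openssl md5
  openssl rsa -noout -modulus -in /etc/pki/managerssl.key | openssl md5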

salt/pcap/ca.sls (new file, 22 lines)

@@ -0,0 +1,22 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states%}
stenoca:
file.directory:
- name: /opt/so/conf/steno/certs
- user: 941
- group: 939
- makedirs: True
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}


@@ -57,12 +57,6 @@ stenoconf:
       PCAPMERGED: {{ PCAPMERGED }}
       STENO_BPF_COMPILED: "{{ STENO_BPF_COMPILED }}"
 
-stenoca:
-  file.directory:
-    - name: /opt/so/conf/steno/certs
-    - user: 941
-    - group: 939
 
 pcaptmpdir:
   file.directory:
     - name: /nsm/pcaptmp


@@ -10,6 +10,7 @@
 include:
+  - pcap.ca
   - pcap.config
   - pcap.sostatus


@@ -7,9 +7,6 @@
 {% if sls.split('.')[0] in allowed_states %}
 {% from 'redis/map.jinja' import REDISMERGED %}
-include:
-  - ssl
 
 # Redis Setup
 redisconfdir:
   file.directory:


@@ -9,6 +9,8 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 
 include:
+  - ca
+  - redis.ssl
   - redis.config
   - redis.sostatus
@@ -31,11 +33,7 @@ so-redis:
       - /nsm/redis/data:/data:rw
       - /etc/pki/redis.crt:/certs/redis.crt:ro
       - /etc/pki/redis.key:/certs/redis.key:ro
-{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import'] %}
-      - /etc/pki/ca.crt:/certs/ca.crt:ro
-{% else %}
       - /etc/pki/tls/certs/intca.crt:/certs/ca.crt:ro
-{% endif %}
 {% if DOCKER.containers['so-redis'].custom_bind_mounts %}
 {% for BIND in DOCKER.containers['so-redis'].custom_bind_mounts %}
       - {{ BIND }}
@@ -55,16 +53,14 @@ so-redis:
 {% endif %}
     - entrypoint: "redis-server /usr/local/etc/redis/redis.conf"
     - watch:
-      - file: /opt/so/conf/redis/etc
-    - require:
-      - file: redisconf
+      - file: trusttheca
+      - x509: redis_crt
+      - x509: redis_key
+      - file: /opt/so/conf/redis/etc
+    - require:
+      - file: trusttheca
       - x509: redis_crt
       - x509: redis_key
-{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import'] %}
-      - x509: pki_public_ca_crt
-{% else %}
-      - x509: trusttheca
-{% endif %}
 
 delete_so-redis_so-status.disabled:
   file.uncomment:

salt/redis/ssl.sls (new file, 54 lines)

@@ -0,0 +1,54 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
redis_key:
x509.private_key_managed:
- name: /etc/pki/redis.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/redis.key') -%}
- prereq:
- x509: /etc/pki/redis.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
redis_crt:
x509.certificate_managed:
- name: /etc/pki/redis.crt
- ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: registry
- private_key: /etc/pki/redis.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
rediskeyperms:
file.managed:
- replace: False
- name: /etc/pki/redis.key
- mode: 640
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
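A quick way to exercise the new state in isolation and confirm the SAN entries it requests; standard salt-call/openssl usage, shown for illustration:

  # Preview, then apply, just the redis certificate state
  salt-call state.apply redis.ssl test=True
  salt-call state.apply redis.ssl
  # The cert should carry the hostname and node IP requested above
  openssl x509 -in /etc/pki/redis.crt -noout -text | grep -A1 'Subject Alternative Name'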


@@ -6,9 +6,6 @@
 {% from 'allowed_states.map.jinja' import allowed_states %}
 {% if sls.split('.')[0] in allowed_states %}
-include:
-  - ssl
 
 # Create the config directory for the docker registry
 dockerregistryconfdir:
   file.directory:


@@ -9,6 +9,7 @@
 {% from 'docker/docker.map.jinja' import DOCKER %}
 
 include:
+  - registry.ssl
   - registry.config
   - registry.sostatus
@@ -53,6 +54,9 @@ so-dockerregistry:
     - retry:
         attempts: 5
         interval: 30
+    - watch:
+      - x509: registry_crt
+      - x509: registry_key
     - require:
       - file: dockerregistryconf
       - x509: registry_crt

salt/registry/ssl.sls (new file, 77 lines)

@@ -0,0 +1,77 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
include:
- ca
# Delete directory if it exists at the key path
registry_key_cleanup:
file.absent:
- name: /etc/pki/registry.key
- onlyif:
- test -d /etc/pki/registry.key
registry_key:
x509.private_key_managed:
- name: /etc/pki/registry.key
- keysize: 4096
- backup: True
- new: True
- require:
- file: registry_key_cleanup
{% if salt['file.file_exists']('/etc/pki/registry.key') -%}
- prereq:
- x509: /etc/pki/registry.crt
{%- endif %}
- retry:
attempts: 15
interval: 10
# Delete directory if it exists at the crt path
registry_crt_cleanup:
file.absent:
- name: /etc/pki/registry.crt
- onlyif:
- test -d /etc/pki/registry.crt
# Create a cert for the docker registry
registry_crt:
x509.certificate_managed:
- name: /etc/pki/registry.crt
- ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.manager }}, IP:{{ GLOBALS.manager_ip }}
- signing_policy: registry
- private_key: /etc/pki/registry.key
- CN: {{ GLOBALS.manager }}
- days_remaining: 7
- days_valid: 820
- backup: True
- require:
- file: registry_crt_cleanup
- timeout: 30
- retry:
attempts: 15
interval: 10
regkeyperms:
file.managed:
- replace: False
- name: /etc/pki/registry.key
- mode: 640
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
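The *_cleanup states guard against a directory occupying the key or cert path, which would make the x509 managed states fail; the onlyif clause behaves like this shell check:

  # Only remove the path when a directory, not a file, sits where the key belongs
  if test -d /etc/pki/registry.key; then
    rm -rf /etc/pki/registry.key
  fi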


@@ -46,33 +46,6 @@ def start(interval=60):
                 mine_update(minion)
                 continue
 
-        # if a manager check that the ca in in the mine and it is correct
-        if minion.split('_')[-1] in ['manager', 'managersearch', 'eval', 'standalone', 'import']:
-            x509 = __salt__['saltutil.runner']('mine.get', tgt=minion, fun='x509.get_pem_entries')
-            try:
-                ca_crt = x509[minion]['/etc/pki/ca.crt']
-                log.debug('checkmine engine: found minion %s has ca_crt: %s' % (minion, ca_crt))
-                # since the cert is defined, make sure it is valid
-                import salt.modules.x509_v2 as x509_v2
-                if not x509_v2.verify_private_key('/etc/pki/ca.key', '/etc/pki/ca.crt'):
-                    log.error('checkmine engine: found minion %s does\'t have a valid ca_crt in the mine' % (minion))
-                    log.error('checkmine engine: %s: ca_crt: %s' % (minion, ca_crt))
-                    mine_delete(minion, 'x509.get_pem_entries')
-                    mine_update(minion)
-                    continue
-                else:
-                    log.debug('checkmine engine: found minion %s has a valid ca_crt in the mine' % (minion))
-            except IndexError:
-                log.error('checkmine engine: found minion %s does\'t have a ca_crt in the mine' % (minion))
-                mine_delete(minion, 'x509.get_pem_entries')
-                mine_update(minion)
-                continue
-            except KeyError:
-                log.error('checkmine engine: found minion %s is not in the mine' % (minion))
-                mine_flush(minion)
-                mine_update(minion)
-                continue
-
         # Update the mine if the ip in the mine doesn't match returned from manage.alived
         network_ip_addrs = __salt__['saltutil.runner']('mine.get', tgt=minion, fun='network.ip_addrs')
         try:


@@ -18,10 +18,6 @@ mine_functions:
 mine_functions:
   network.ip_addrs:
     - interface: {{ interface }}
-{%- if role in ['so-eval','so-import','so-manager','so-managerhype','so-managersearch','so-standalone'] %}
-  x509.get_pem_entries:
-    - glob_path: '/etc/pki/ca.crt'
-{% endif %}
 
 mine_update_mine_functions:
   module.run:
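With x509.get_pem_entries dropped from mine_functions, the mine should only carry IP data going forward; this can be confirmed from the manager with standard salt runner calls, shown for illustration:

  # IP data is still published per minion
  salt-run mine.get '*' network.ip_addrs
  # Expected to come back empty once minions re-run salt.mine_functions
  salt-run mine.get '*' x509.get_pem_entries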


@@ -17,8 +17,8 @@ include:
   - repo.client
   - salt.mine_functions
   - salt.minion.service_file
-{% if GLOBALS.role in GLOBALS.manager_roles %}
-  - ca
+{% if GLOBALS.is_manager %}
+  - ca.signing_policy
 {% endif %}
 
 {% if INSTALLEDSALTVERSION|string != SALTVERSION|string %}
@@ -111,7 +111,7 @@ salt_minion_service:
 {% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
       - file: set_log_levels
 {% endif %}
-{% if GLOBALS.role in GLOBALS.manager_roles %}
-      - file: /etc/salt/minion.d/signing_policies.conf
+{% if GLOBALS.is_manager %}
+      - file: signing_policy
 {% endif %}
     - order: last


@@ -8,6 +8,9 @@
 include:
+{% if GLOBALS.is_sensor or GLOBALS.role == 'so-import' %}
+  - pcap.ca
+{% endif %}
   - sensoroni.config
   - sensoroni.sostatus
@@ -16,7 +19,9 @@ so-sensoroni:
     - image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-soc:{{ GLOBALS.so_version }}
     - network_mode: host
     - binds:
+{% if GLOBALS.is_sensor or GLOBALS.role == 'so-import' %}
       - /opt/so/conf/steno/certs:/etc/stenographer/certs:rw
+{% endif %}
       - /nsm/pcap:/nsm/pcap:rw
       - /nsm/import:/nsm/import:rw
       - /nsm/pcapout:/nsm/pcapout:rw


@@ -11,6 +11,7 @@
 {% from 'soc/merged.map.jinja' import SOCMERGED %}
 
 include:
+  - ca
   - soc.config
   - soc.sostatus
@@ -55,7 +56,7 @@ so-soc:
       - /opt/so/conf/soc/migrations:/opt/so/conf/soc/migrations:rw
       - /nsm/backup/detections-migration:/nsm/backup/detections-migration:ro
       - /opt/so/state:/opt/so/state:rw
-      - /etc/pki/ca.crt:/opt/sensoroni/html/so-ca.crt:ro
+      - /etc/pki/tls/certs/intca.crt:/opt/sensoroni/html/so-ca.crt:ro
     - extra_hosts:
 {% for node in DOCKER_EXTRA_HOSTS %}
 {% for hostname, ip in node.items() %}
@@ -78,8 +79,10 @@ so-soc:
 {% endfor %}
 {% endif %}
     - watch:
+      - file: trusttheca
       - file: /opt/so/conf/soc/*
     - require:
+      - file: trusttheca
       - file: socdatadir
       - file: soclogdir
       - file: socconfig
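Assuming the mount target implies SOC serves this file at /so-ca.crt, the switch to the intermediate CA copy can be spot-checked after a highstate; illustrative commands only:

  # The container should now expose the intca copy rather than /etc/pki/ca.crt
  docker exec so-soc ls -l /opt/sensoroni/html/so-ca.crt
  curl -sk https://localhost/so-ca.crt | openssl x509 -noout -subject -issuer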


@@ -1,720 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% set global_ca_text = [] %}
{% set global_ca_server = [] %}
{% if grains.role in ['so-heavynode'] %}
{% set COMMONNAME = GLOBALS.hostname %}
{% else %}
{% set COMMONNAME = GLOBALS.manager %}
{% endif %}
{% if GLOBALS.is_manager %}
include:
- ca
{% set trusttheca_text = salt['cp.get_file_str']('/etc/pki/ca.crt')|replace('\n', '') %}
{% set ca_server = grains.id %}
{% else %}
include:
- ca.dirs
{% set x509dict = salt['mine.get'](GLOBALS.manager | lower~'*', 'x509.get_pem_entries') %}
{% for host in x509dict %}
{% if 'manager' in host.split('_')|last or host.split('_')|last == 'standalone' %}
{% do global_ca_text.append(x509dict[host].get('/etc/pki/ca.crt')|replace('\n', '')) %}
{% do global_ca_server.append(host) %}
{% endif %}
{% endfor %}
{% set trusttheca_text = global_ca_text[0] %}
{% set ca_server = global_ca_server[0] %}
{% endif %}
cacertdir:
file.directory:
- name: /etc/pki/tls/certs
- makedirs: True
# Trust the CA
trusttheca:
x509.pem_managed:
- name: /etc/pki/tls/certs/intca.crt
- text: {{ trusttheca_text }}
{% if GLOBALS.os_family == 'Debian' %}
symlinkca:
file.symlink:
- target: /etc/pki/tls/certs/intca.crt
- name: /etc/ssl/certs/intca.crt
{% endif %}
# Install packages needed for the sensor
m2cryptopkgs:
pkg.installed:
- skip_suggestions: False
- pkgs:
- python3-m2crypto
influxdb_key:
x509.private_key_managed:
- name: /etc/pki/influxdb.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/influxdb.key') -%}
- prereq:
- x509: /etc/pki/influxdb.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Create a cert for the talking to influxdb
influxdb_crt:
x509.certificate_managed:
- name: /etc/pki/influxdb.crt
- ca_server: {{ ca_server }}
- signing_policy: influxdb
- private_key: /etc/pki/influxdb.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
influxkeyperms:
file.managed:
- replace: False
- name: /etc/pki/influxdb.key
- mode: 640
- group: 939
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
# Create a cert for Redis encryption
redis_key:
x509.private_key_managed:
- name: /etc/pki/redis.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/redis.key') -%}
- prereq:
- x509: /etc/pki/redis.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
redis_crt:
x509.certificate_managed:
- name: /etc/pki/redis.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: registry
- private_key: /etc/pki/redis.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
rediskeyperms:
file.managed:
- replace: False
- name: /etc/pki/redis.key
- mode: 640
- group: 939
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
{% if grains['role'] not in [ 'so-heavynode', 'so-receiver'] %}
# Start -- Elastic Fleet Host Cert
etc_elasticfleet_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-server.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-server.key') -%}
- prereq:
- x509: etc_elasticfleet_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-server.crt
- ca_server: {{ ca_server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-server.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
efperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- group: 939
chownelasticfleetcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Host Cert
{% endif %} # endif is for not including HeavyNodes & Receivers
{% if grains['role'] not in [ 'so-heavynode'] %}
# Start -- Elastic Fleet Logstash Input Cert
etc_elasticfleet_logstash_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-logstash.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-logstash.key') -%}
- prereq:
- x509: etc_elasticfleet_logstash_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_logstash_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-logstash.crt
- ca_server: {{ ca_server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-logstash.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-logstash.key -topk8 -out /etc/pki/elasticfleet-logstash.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleet_logstash_key
eflogstashperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.key
- mode: 640
- group: 939
chownelasticfleetlogstashcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.crt
- mode: 640
- user: 931
- group: 939
chownelasticfleetlogstashkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-logstash.key
- mode: 640
- user: 931
- group: 939
# End -- Elastic Fleet Logstash Input Cert
{% endif %} # endif is for not including HeavyNodes
# Start -- Elastic Fleet Node - Logstash Lumberjack Input / Output
# Cert needed on: Managers, Receivers
etc_elasticfleetlumberjack_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-lumberjack.key
- bits: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-lumberjack.key') -%}
- prereq:
- x509: etc_elasticfleetlumberjack_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleetlumberjack_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-lumberjack.crt
- ca_server: {{ ca_server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-lumberjack.key
- CN: {{ GLOBALS.node_ip }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-lumberjack.key -topk8 -out /etc/pki/elasticfleet-lumberjack.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleetlumberjack_key
eflogstashlumberjackperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.key
- mode: 640
- group: 939
chownilogstashelasticfleetlumberjackp8:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.p8
- mode: 640
- user: 931
- group: 939
chownilogstashelasticfleetlogstashlumberjackcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.crt
- mode: 640
- user: 931
- group: 939
chownilogstashelasticfleetlogstashlumberjackkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-lumberjack.key
- mode: 640
- user: 931
- group: 939
# End -- Elastic Fleet Node - Logstash Lumberjack Input / Output
# Start -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
etc_elasticfleet_agent_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-agent.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-agent.key') -%}
- prereq:
- x509: etc_elasticfleet_agent_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_agent_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-agent.crt
- ca_server: {{ ca_server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-agent.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-agent.key -topk8 -out /etc/pki/elasticfleet-agent.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleet_agent_key
efagentperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- group: 939
chownelasticfleetagentcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetagentkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-receiver'] %}
etc_filebeat_key:
x509.private_key_managed:
- name: /etc/pki/filebeat.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/filebeat.key') -%}
- prereq:
- x509: etc_filebeat_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Request a cert and drop it where it needs to go to be distributed
etc_filebeat_crt:
x509.certificate_managed:
- name: /etc/pki/filebeat.crt
- ca_server: {{ ca_server }}
- signing_policy: filebeat
- private_key: /etc/pki/filebeat.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/filebeat.key -topk8 -out /etc/pki/filebeat.p8 -nocrypt"
- onchanges:
- x509: etc_filebeat_key
fbperms:
file.managed:
- replace: False
- name: /etc/pki/filebeat.key
- mode: 640
- group: 939
chownilogstashfilebeatp8:
file.managed:
- replace: False
- name: /etc/pki/filebeat.p8
- mode: 640
- user: 931
- group: 939
{% if grains.role not in ['so-heavynode', 'so-receiver'] %}
# Create Symlinks to the keys so I can distribute it to all the things
filebeatdir:
file.directory:
- name: /opt/so/saltstack/local/salt/filebeat/files
- makedirs: True
fbkeylink:
file.symlink:
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.p8
- target: /etc/pki/filebeat.p8
- user: socore
- group: socore
fbcrtlink:
file.symlink:
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.crt
- target: /etc/pki/filebeat.crt
- user: socore
- group: socore
registry_key:
x509.private_key_managed:
- name: /etc/pki/registry.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/registry.key') -%}
- prereq:
- x509: /etc/pki/registry.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Create a cert for the docker registry
registry_crt:
x509.certificate_managed:
- name: /etc/pki/registry.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.manager }}, IP:{{ GLOBALS.manager_ip }}
- signing_policy: registry
- private_key: /etc/pki/registry.key
- CN: {{ GLOBALS.manager }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
regkeyperms:
file.managed:
- replace: False
- name: /etc/pki/registry.key
- mode: 640
- group: 939
{% endif %}
{% if grains.role not in ['so-receiver'] %}
# Create a cert for elasticsearch
/etc/pki/elasticsearch.key:
x509.private_key_managed:
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticsearch.key') -%}
- prereq:
- x509: /etc/pki/elasticsearch.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
/etc/pki/elasticsearch.crt:
x509.certificate_managed:
- ca_server: {{ ca_server }}
- signing_policy: registry
- private_key: /etc/pki/elasticsearch.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/elasticsearch.key -in /etc/pki/elasticsearch.crt -export -out /etc/pki/elasticsearch.p12 -nodes -passout pass:"
- onchanges:
- x509: /etc/pki/elasticsearch.key
elastickeyperms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.key
- mode: 640
- group: 930
elasticp12perms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.p12
- mode: 640
- group: 930
{% endif %}
{% endif %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-sensor', 'so-searchnode', 'so-heavynode', 'so-fleet', 'so-idh', 'so-receiver'] %}
fbcertdir:
file.directory:
- name: /opt/so/conf/filebeat/etc/pki
- makedirs: True
conf_filebeat_key:
x509.private_key_managed:
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/opt/so/conf/filebeat/etc/pki/filebeat.key') -%}
- prereq:
- x509: conf_filebeat_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Request a cert and drop it where it needs to go to be distributed
conf_filebeat_crt:
x509.certificate_managed:
- name: /opt/so/conf/filebeat/etc/pki/filebeat.crt
- ca_server: {{ ca_server }}
- signing_policy: filebeat
- private_key: /opt/so/conf/filebeat/etc/pki/filebeat.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
# Convert the key to pkcs#8 so logstash will work correctly.
filebeatpkcs:
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /opt/so/conf/filebeat/etc/pki/filebeat.key -topk8 -out /opt/so/conf/filebeat/etc/pki/filebeat.p8 -passout pass:"
- onchanges:
- x509: conf_filebeat_key
filebeatkeyperms:
file.managed:
- replace: False
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
- mode: 640
- group: 939
chownfilebeatp8:
file.managed:
- replace: False
- name: /opt/so/conf/filebeat/etc/pki/filebeat.p8
- mode: 640
- user: 931
- group: 939
{% endif %}
{% if grains['role'] == 'so-searchnode' %}
# Create a cert for elasticsearch
/etc/pki/elasticsearch.key:
x509.private_key_managed:
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticsearch.key') -%}
- prereq:
- x509: /etc/pki/elasticsearch.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
/etc/pki/elasticsearch.crt:
x509.certificate_managed:
- ca_server: {{ ca_server }}
- signing_policy: registry
- private_key: /etc/pki/elasticsearch.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/elasticsearch.key -in /etc/pki/elasticsearch.crt -export -out /etc/pki/elasticsearch.p12 -nodes -passout pass:"
- onchanges:
- x509: /etc/pki/elasticsearch.key
elasticp12perms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.p12
- mode: 640
- group: 930
elastickeyperms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.key
- mode: 640
- group: 930
{%- endif %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone'] %}
elasticfleet_kafka_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-kafka.key') -%}
- prereq:
- x509: elasticfleet_kafka_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-kafka.crt
- ca_server: {{ ca_server }}
- signing_policy: kafka
- private_key: /etc/pki/elasticfleet-kafka.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_cert_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.crt
- mode: 640
- user: 947
- group: 939
elasticfleet_kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.key
- mode: 640
- user: 947
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
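With the monolithic ssl state deleted, each service now owns its certificate state. A usage sketch against the per-service states added in this compare (state names taken from the new files above):

  # Certificates are now applied per service instead of via 'state.apply ssl'
  salt-call state.apply nginx.ssl
  salt-call state.apply redis.ssl
  salt-call state.apply registry.ssl
  salt-call state.apply telegraf.ssl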


@@ -1,10 +1,7 @@
-trusttheca:
-  file.absent:
-    - name: /etc/pki/tls/certs/intca.crt
-
-symlinkca:
-  file.absent:
-    - name: /etc/ssl/certs/intca.crt
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
 
 influxdb_key:
   file.absent:
file.absent: file.absent:
@@ -14,6 +11,14 @@ influxdb_crt:
   file.absent:
     - name: /etc/pki/influxdb.crt
 
+telegraf_key:
+  file.absent:
+    - name: /etc/pki/telegraf.key
+
+telegraf_crt:
+  file.absent:
+    - name: /etc/pki/telegraf.crt
+
 redis_key:
   file.absent:
     - name: /etc/pki/redis.key
@@ -30,6 +35,7 @@ etc_filebeat_crt:
   file.absent:
     - name: /etc/pki/filebeat.crt
 
+# manager has symlink to /etc/pki/filebeat.crt and /etc/pki/filebeat.p8
 filebeatdir:
   file.absent:
     - name: /opt/so/saltstack/local/salt/filebeat/files
@@ -42,11 +48,13 @@ registry_crt:
   file.absent:
     - name: /etc/pki/registry.crt
 
-/etc/pki/elasticsearch.key:
-  file.absent: []
+elasticsearch_key:
+  file.absent:
+    - name: /etc/pki/elasticsearch.key
 
-/etc/pki/elasticsearch.crt:
-  file.absent: []
+elasticsearch_crt:
+  file.absent:
+    - name: /etc/pki/elasticsearch.crt
 
 remove_elasticsearch.p12:
   file.absent:
@@ -75,6 +83,7 @@ fbcertdir:
 kafka_crt:
   file.absent:
     - name: /etc/pki/kafka.crt
 
 kafka_key:
   file.absent:
     - name: /etc/pki/kafka.key
@@ -82,9 +91,67 @@ kafka_key:
 kafka_logstash_crt:
   file.absent:
     - name: /etc/pki/kafka-logstash.crt
 
 kafka_logstash_key:
   file.absent:
     - name: /etc/pki/kafka-logstash.key
 
 kafka_logstash_keystore:
   file.absent:
     - name: /etc/pki/kafka-logstash.p12
+
+elasticfleet_agent_crt:
+  file.absent:
+    - name: /etc/pki/elasticfleet-agent.crt
+
+elasticfleet_agent_key:
+  file.absent:
+    - name: /etc/pki/elasticfleet-agent.key
+
+elasticfleet_agent_p8:
+  file.absent:
+    - name: /etc/pki/elasticfleet-agent.p8
+
+elasticfleet_kafka_crt:
+  file.absent:
+    - name: /etc/pki/elasticfleet-kafka.crt
+
+elasticfleet_kafka_key:
+  file.absent:
+    - name: /etc/pki/elasticfleet-kafka.key
+
+elasticfleet_logstash_crt:
+  file.absent:
+    - name: /etc/pki/elasticfleet-logstash.crt
+
+elasticfleet_logstash_key:
+  file.absent:
+    - name: /etc/pki/elasticfleet-logstash.key
+
+elasticfleet_logstash_p8:
+  file.absent:
+    - name: /etc/pki/elasticfleet-logstash.p8
+
+elasticfleet_lumberjack_crt:
+  file.absent:
+    - name: /etc/pki/elasticfleet-lumberjack.crt
+
+elasticfleet_lumberjack_key:
+  file.absent:
+    - name: /etc/pki/elasticfleet-lumberjack.key
+
+elasticfleet_lumberjack_p8:
+  file.absent:
+    - name: /etc/pki/elasticfleet-lumberjack.p8
+
+elasticfleet_server_crt:
+  file.absent:
+    - name: /etc/pki/elasticfleet-server.crt
+
+elasticfleet_server_key:
+  file.absent:
+    - name: /etc/pki/elasticfleet-server.key
+
+filebeat_p8:
+  file.absent:
+    - name: /etc/pki/filebeat.p8
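The expanded remove state can still be run by hand during cleanup, mirroring the ca.remove invocation in the setup diff further down (note that setup itself drops its ssl.remove call in this compare):

  # Clear all per-service certificate material using the local file roots
  salt-call state.apply ssl.remove -linfo --local --file-root=../salt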


@@ -8,9 +8,6 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 {% from 'telegraf/map.jinja' import TELEGRAFMERGED %}
-include:
-  - ssl
 
 # add Telegraf to monitor all the things
 tgraflogdir:
   file.directory:


@@ -9,8 +9,9 @@
 {% from 'docker/docker.map.jinja' import DOCKER %}
 {% from 'telegraf/map.jinja' import TELEGRAFMERGED %}
 
 include:
+  - ca
+  - telegraf.ssl
   - telegraf.config
   - telegraf.sostatus
@@ -42,13 +43,9 @@ so-telegraf:
       - /proc:/host/proc:ro
       - /nsm:/host/nsm:ro
       - /etc:/host/etc:ro
-{% if GLOBALS.role in ['so-manager', 'so-eval', 'so-managersearch' ] %}
-      - /etc/pki/ca.crt:/etc/telegraf/ca.crt:ro
-{% else %}
       - /etc/pki/tls/certs/intca.crt:/etc/telegraf/ca.crt:ro
-{% endif %}
-      - /etc/pki/influxdb.crt:/etc/telegraf/telegraf.crt:ro
-      - /etc/pki/influxdb.key:/etc/telegraf/telegraf.key:ro
+      - /etc/pki/telegraf.crt:/etc/telegraf/telegraf.crt:ro
+      - /etc/pki/telegraf.key:/etc/telegraf/telegraf.key:ro
       - /opt/so/conf/telegraf/scripts:/scripts:ro
       - /opt/so/log/stenographer:/var/log/stenographer:ro
       - /opt/so/log/suricata:/var/log/suricata:ro
@@ -71,21 +68,20 @@ so-telegraf:
 {% endfor %}
 {% endif %}
     - watch:
+      - file: trusttheca
+      - x509: telegraf_crt
+      - x509: telegraf_key
       - file: tgrafconf
       - file: node_config
 {% for script in TELEGRAFMERGED.scripts[GLOBALS.role.split('-')[1]] %}
       - file: tgraf_sync_script_{{script}}
 {% endfor %}
     - require:
+      - file: trusttheca
+      - x509: telegraf_crt
+      - x509: telegraf_key
       - file: tgrafconf
       - file: node_config
-{% if GLOBALS.role in ['so-manager', 'so-eval', 'so-managersearch' ] %}
-      - x509: pki_public_ca_crt
-{% else %}
-      - x509: trusttheca
-{% endif %}
-      - x509: influxdb_crt
-      - x509: influxdb_key
 
 delete_so-telegraf_so-status.disabled:
   file.uncomment:

salt/telegraf/ssl.sls (new file, 66 lines)

@@ -0,0 +1,66 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
telegraf_key:
x509.private_key_managed:
- name: /etc/pki/telegraf.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/telegraf.key') -%}
- prereq:
- x509: /etc/pki/telegraf.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
# Create a cert for the talking to telegraf
telegraf_crt:
x509.certificate_managed:
- name: /etc/pki/telegraf.crt
- ca_server: {{ CA.server }}
- signing_policy: influxdb
- private_key: /etc/pki/telegraf.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
telegraf_key_perms:
file.managed:
- replace: False
- name: /etc/pki/telegraf.key
- mode: 640
- group: 939
{% if not GLOBALS.is_manager %}
{# Prior to 2.4.220, minions used influxdb.crt and key for telegraf #}
remove_influxdb.crt:
file.absent:
- name: /etc/pki/influxdb.crt
remove_influxdb.key:
file.absent:
- name: /etc/pki/influxdb.key
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
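On non-managers the old influxdb.* pair is removed in favor of the dedicated telegraf cert; the swap can be verified with illustrative commands like these:

  # New cert should exist with the node's hostname as CN
  openssl x509 -in /etc/pki/telegraf.crt -noout -subject -dates
  # The legacy pair should be gone on non-manager nodes after this state runs
  ls /etc/pki/influxdb.crt /etc/pki/influxdb.key 2>/dev/null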


@@ -37,6 +37,7 @@ base:
 'not ( *_manager* or *_eval or *_import or *_standalone ) and G@saltversion:{{saltversion}}':
   - match: compound
   - salt.minion
+  - ca
   - patch.os.schedule
   - motd
   - salt.minion-check
@@ -49,6 +50,7 @@ base:
 '( *_manager* or *_eval or *_import or *_standalone ) and G@saltversion:{{saltversion}} and not I@node_data:False':
   - match: compound
   - salt.minion
+  - ca
   - patch.os.schedule
   - motd
   - salt.minion-check
@@ -61,8 +63,6 @@ base:
   - match: compound
   - salt.master
   - sensor
-  - ca
-  - ssl
   - registry
   - manager
   - backup.config_backup
@@ -91,8 +91,6 @@ base:
   - match: compound
   - salt.master
   - sensor
-  - ca
-  - ssl
   - registry
   - manager
   - backup.config_backup
@@ -124,8 +122,6 @@ base:
 '*_manager or *_managerhype and G@saltversion:{{saltversion}} and not I@node_data:False':
   - match: compound
   - salt.master
-  - ca
-  - ssl
   - registry
   - nginx
   - influxdb
@@ -157,8 +153,6 @@ base:
 '*_managersearch and G@saltversion:{{saltversion}} and not I@node_data:False':
   - match: compound
   - salt.master
-  - ca
-  - ssl
   - registry
   - nginx
   - influxdb
@@ -187,8 +181,6 @@ base:
   - match: compound
   - salt.master
   - sensor
-  - ca
-  - ssl
   - registry
   - manager
   - nginx
@@ -212,7 +204,6 @@ base:
 '*_searchnode and G@saltversion:{{saltversion}}':
   - match: compound
   - firewall
-  - ssl
   - elasticsearch
   - logstash
   - sensoroni
@@ -225,7 +216,6 @@ base:
 '*_sensor and G@saltversion:{{saltversion}}':
   - match: compound
   - sensor
-  - ssl
   - sensoroni
   - telegraf
   - firewall
@@ -241,7 +231,6 @@ base:
 '*_heavynode and G@saltversion:{{saltversion}}':
   - match: compound
   - sensor
-  - ssl
   - sensoroni
   - telegraf
   - nginx
@@ -259,7 +248,6 @@ base:
 '*_receiver and G@saltversion:{{saltversion}}':
   - match: compound
-  - ssl
   - sensoroni
   - telegraf
   - firewall
@@ -271,7 +259,6 @@ base:
 '*_idh and G@saltversion:{{saltversion}}':
   - match: compound
-  - ssl
   - sensoroni
   - telegraf
   - firewall
@@ -280,7 +267,6 @@ base:
 '*_fleet and G@saltversion:{{saltversion}}':
   - match: compound
-  - ssl
   - sensoroni
   - telegraf
   - firewall
@@ -293,7 +279,6 @@ base:
 '*_hypervisor and I@features:vrt and G@saltversion:{{saltversion}}':
   - match: compound
-  - ssl
   - sensoroni
   - telegraf
   - firewall
@@ -304,7 +289,6 @@ base:
   - stig
 '*_desktop and G@saltversion:{{saltversion}}':
-  - ssl
   - sensoroni
   - telegraf
   - elasticfleet.install_agent_grid
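The revised top file assignments can be checked per minion, and the compound matches exercised directly from the manager; standard salt usage, and the saltversion value below is a placeholder:

  # Show which states the top file now assigns to this minion
  salt-call state.show_top
  # Exercise a compound match like the top file's, e.g. all sensors on the pinned salt version
  salt -C '*_sensor and G@saltversion:3006.9' test.ping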


@@ -1121,16 +1121,6 @@ generate_ca() {
logCmd "openssl x509 -in /etc/pki/ca.crt -noout -subject -issuer -dates" logCmd "openssl x509 -in /etc/pki/ca.crt -noout -subject -issuer -dates"
} }
generate_ssl() {
# if the install type is a manager then we need to wait for the minion to be ready before trying
# to run the ssl state since we need the minion to sign the certs
if [[ $waitforstate ]]; then
(wait_for_salt_minion "$MINION_ID" "5" '/dev/stdout' || fail_setup) 2>&1 | tee -a "$setup_log"
fi
info "Applying SSL state"
logCmd "salt-call state.apply ssl -l info"
}
generate_passwords(){ generate_passwords(){
title "Generate Random Passwords" title "Generate Random Passwords"
INFLUXPASS=$(get_random_value) INFLUXPASS=$(get_random_value)
@@ -1644,7 +1634,7 @@ reinstall_init() {
{ {
# remove all of root's cronjobs # remove all of root's cronjobs
logCmd "crontab -r -u root" crontab -r -u root
if command -v salt-call &> /dev/null && grep -q "master:" /etc/salt/minion 2> /dev/null; then if command -v salt-call &> /dev/null && grep -q "master:" /etc/salt/minion 2> /dev/null; then
# Disable schedule so highstate doesn't start running during the install # Disable schedule so highstate doesn't start running during the install
@@ -1654,8 +1644,7 @@ reinstall_init() {
salt-call -l info saltutil.kill_all_jobs --local salt-call -l info saltutil.kill_all_jobs --local
fi fi
logCmd "salt-call state.apply ca.remove -linfo --local --file-root=../salt" salt-call state.apply ca.remove -linfo --local --file-root=../salt
logCmd "salt-call state.apply ssl.remove -linfo --local --file-root=../salt"
# Kill any salt processes (safely) # Kill any salt processes (safely)
for service in "${salt_services[@]}"; do for service in "${salt_services[@]}"; do
@@ -1668,7 +1657,7 @@ reinstall_init() {
local count=0 local count=0
while check_service_status "$service"; do while check_service_status "$service"; do
if [[ $count -gt $service_retry_count ]]; then if [[ $count -gt $service_retry_count ]]; then
info "Could not stop $service after 1 minute, exiting setup." echo "Could not stop $service after 1 minute, exiting setup."
# Stop the systemctl process trying to kill the service, show user a message, then exit setup # Stop the systemctl process trying to kill the service, show user a message, then exit setup
kill -9 $pid kill -9 $pid
@@ -1706,10 +1695,10 @@ reinstall_init() {
backup_dir /nsm/influxdb "$date_string" backup_dir /nsm/influxdb "$date_string"
# Uninstall local Elastic Agent, if installed # Uninstall local Elastic Agent, if installed
logCmd "elastic-agent uninstall -f" elastic-agent uninstall -f
if [[ $is_deb ]]; then if [[ $is_deb ]]; then
info "Unholding previously held packages." echo "Unholding previously held packages."
apt-mark unhold $(apt-mark showhold) apt-mark unhold $(apt-mark showhold)
fi fi


@@ -773,12 +773,9 @@ if ! [[ -f $install_opt_file ]]; then
 # wait here until we get a response from the salt-master since it may have just restarted
 # exit setup after 5-6 minutes of trying
 check_salt_master_status || fail "Can't access salt master or it is not ready"
-# apply the ca state to create the ca and put it in the mine early in the install
+# apply the ca state to create the ca and symlink to local/salt/ca/files/ca.crt
 # the minion ip will already be in the mine from configure_minion function in so-functions
 generate_ca
-# this will also call the ssl state since docker requires the intca
-# the salt-minion service will need to be up on the manager to sign requests
-generate_ssl
 logCmd "salt-call state.apply docker"
 firewall_generate_templates
 set_initial_firewall_policy