Mirror of https://github.com/Security-Onion-Solutions/securityonion.git (synced 2026-01-12 03:03:09 +01:00)

Compare commits: 8d2701e143...bravo (155 commits)
1  .github/DISCUSSION_TEMPLATE/2-4.yml  vendored
@@ -33,6 +33,7 @@ body:
    - 2.4.180
    - 2.4.190
    - 2.4.200
+   - 2.4.210
    - Other (please provide detail below)
  validations:
    required: true
@@ -1,17 +1,17 @@
- ### 2.4.190-20251024 ISO image released on 2025/10/24
+ ### 2.4.200-20251216 ISO image released on 2025/12/16

  ### Download and Verify

- 2.4.190-20251024 ISO image:
- https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
+ 2.4.200-20251216 ISO image:
+ https://download.securityonion.net/file/securityonion/securityonion-2.4.200-20251216.iso

- MD5: 25358481FB876226499C011FC0710358
- SHA1: 0B26173C0CE136F2CA40A15046D1DFB78BCA1165
- SHA256: 4FD9F62EDA672408828B3C0C446FE5EA9FF3C4EE8488A7AB1101544A3C487872
+ MD5: 07B38499952D1F2FD7B5AF10096D0043
+ SHA1: 7F3A26839CA3CAEC2D90BB73D229D55E04C7D370
+ SHA256: 8D3AC735873A2EA8527E16A6A08C34BD5018CBC0925AC4096E15A0C99F591D5F

  Signature for ISO image:
- https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+ https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.200-20251216.iso.sig

  Signing key:
  https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS

@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.

  Download the signature file for the ISO:
  ```
- wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+ wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.200-20251216.iso.sig
  ```

  Download the ISO image:
  ```
- wget https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
+ wget https://download.securityonion.net/file/securityonion/securityonion-2.4.200-20251216.iso
  ```

  Verify the downloaded ISO image using the signature file:
  ```
- gpg --verify securityonion-2.4.190-20251024.iso.sig securityonion-2.4.190-20251024.iso
+ gpg --verify securityonion-2.4.200-20251216.iso.sig securityonion-2.4.200-20251216.iso
  ```

  The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
  ```
- gpg: Signature made Thu 23 Oct 2025 07:21:46 AM EDT using RSA key ID FE507013
+ gpg: Signature made Mon 15 Dec 2025 05:24:11 PM EST using RSA key ID FE507013
  gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
  gpg: WARNING: This key is not certified with a trusted signature!
  gpg: There is no indication that the signature belongs to the owner.
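The verification steps above assume the Security Onion signing key is already in the local GPG keyring; without it, gpg --verify reports "No public key". A minimal sketch using the KEYS file referenced above (the import and fingerprint commands are standard gpg usage, not taken verbatim from these release notes):
```
# One-time step: download and import the signing key before running gpg --verify
wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
gpg --import KEYS

# Optional: inspect the imported key so its fingerprint can be compared against the published one
gpg --fingerprint "Security Onion Solutions"
```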
2  pillar/ca/init.sls  Normal file
@@ -0,0 +1,2 @@
+ ca:
+   server:

@@ -1,5 +1,6 @@
  base:
    '*':
+     - ca
      - global.soc_global
      - global.adv_global
      - docker.soc_docker
@@ -43,8 +44,6 @@ base:
      - secrets
      - manager.soc_manager
      - manager.adv_manager
-     - idstools.soc_idstools
-     - idstools.adv_idstools
      - logstash.nodes
      - logstash.soc_logstash
      - logstash.adv_logstash
@@ -117,8 +116,6 @@ base:
      - elastalert.adv_elastalert
      - manager.soc_manager
      - manager.adv_manager
-     - idstools.soc_idstools
-     - idstools.adv_idstools
      - soc.soc_soc
      - soc.adv_soc
      - kibana.soc_kibana
@@ -158,8 +155,6 @@ base:
  {% endif %}
      - secrets
      - healthcheck.standalone
-     - idstools.soc_idstools
-     - idstools.adv_idstools
      - kratos.soc_kratos
      - kratos.adv_kratos
      - hydra.soc_hydra
@@ -15,11 +15,7 @@
|
||||
'salt.minion-check',
|
||||
'sensoroni',
|
||||
'salt.lasthighstate',
|
||||
'salt.minion'
|
||||
] %}
|
||||
|
||||
{% set ssl_states = [
|
||||
'ssl',
|
||||
'salt.minion',
|
||||
'telegraf',
|
||||
'firewall',
|
||||
'schedule',
|
||||
@@ -28,7 +24,7 @@
|
||||
|
||||
{% set manager_states = [
|
||||
'salt.master',
|
||||
'ca',
|
||||
'ca.server',
|
||||
'registry',
|
||||
'manager',
|
||||
'nginx',
|
||||
@@ -38,8 +34,6 @@
|
||||
'hydra',
|
||||
'elasticfleet',
|
||||
'elastic-fleet-package-registry',
|
||||
'idstools',
|
||||
'suricata.manager',
|
||||
'utility'
|
||||
] %}
|
||||
|
||||
@@ -77,28 +71,23 @@
|
||||
{# Map role-specific states #}
|
||||
{% set role_states = {
|
||||
'so-eval': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
sensor_states +
|
||||
elastic_stack_states | reject('equalto', 'logstash') | list
|
||||
),
|
||||
'so-heavynode': (
|
||||
ssl_states +
|
||||
sensor_states +
|
||||
['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
|
||||
),
|
||||
'so-idh': (
|
||||
ssl_states +
|
||||
['idh']
|
||||
),
|
||||
'so-import': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
sensor_states | reject('equalto', 'strelka') | reject('equalto', 'healthcheck') | list +
|
||||
['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'strelka.manager']
|
||||
),
|
||||
'so-manager': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
|
||||
stig_states +
|
||||
@@ -106,7 +95,6 @@
|
||||
elastic_stack_states
|
||||
),
|
||||
'so-managerhype': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
['salt.cloud', 'strelka.manager', 'hypervisor', 'libvirt'] +
|
||||
stig_states +
|
||||
@@ -114,7 +102,6 @@
|
||||
elastic_stack_states
|
||||
),
|
||||
'so-managersearch': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
|
||||
stig_states +
|
||||
@@ -122,12 +109,10 @@
|
||||
elastic_stack_states
|
||||
),
|
||||
'so-searchnode': (
|
||||
ssl_states +
|
||||
['kafka.ca', 'kafka.ssl', 'elasticsearch', 'logstash', 'nginx'] +
|
||||
stig_states
|
||||
),
|
||||
'so-standalone': (
|
||||
ssl_states +
|
||||
manager_states +
|
||||
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users'] +
|
||||
sensor_states +
|
||||
@@ -136,29 +121,24 @@
|
||||
elastic_stack_states
|
||||
),
|
||||
'so-sensor': (
|
||||
ssl_states +
|
||||
sensor_states +
|
||||
['nginx'] +
|
||||
stig_states
|
||||
),
|
||||
'so-fleet': (
|
||||
ssl_states +
|
||||
stig_states +
|
||||
['logstash', 'nginx', 'healthcheck', 'elasticfleet']
|
||||
),
|
||||
'so-receiver': (
|
||||
ssl_states +
|
||||
kafka_states +
|
||||
stig_states +
|
||||
['logstash', 'redis']
|
||||
),
|
||||
'so-hypervisor': (
|
||||
ssl_states +
|
||||
stig_states +
|
||||
['hypervisor', 'libvirt']
|
||||
),
|
||||
'so-desktop': (
|
||||
['ssl', 'docker_clean', 'telegraf'] +
|
||||
stig_states
|
||||
)
|
||||
} %}
|
||||
|
||||
@@ -1,4 +0,0 @@
|
||||
pki_issued_certs:
|
||||
file.directory:
|
||||
- name: /etc/pki/issued_certs
|
||||
- makedirs: True
|
||||
@@ -1,5 +1,5 @@
|
||||
x509_signing_policies:
|
||||
filebeat:
|
||||
general:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
@@ -12,72 +12,3 @@ x509_signing_policies:
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
registry:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:false"
|
||||
- keyUsage: "critical keyEncipherment"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- extendedKeyUsage: serverAuth
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
managerssl:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:false"
|
||||
- keyUsage: "critical keyEncipherment digitalSignature"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- extendedKeyUsage: serverAuth
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
influxdb:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:false"
|
||||
- keyUsage: "critical keyEncipherment"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- extendedKeyUsage: serverAuth
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
elasticfleet:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:false"
|
||||
- keyUsage: "digitalSignature, nonRepudiation"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
kafka:
|
||||
- minions: '*'
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- signing_cert: /etc/pki/ca.crt
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:false"
|
||||
- keyUsage: "digitalSignature, keyEncipherment"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid,issuer:always
|
||||
- extendedKeyUsage: "serverAuth, clientAuth"
|
||||
- days_valid: 820
|
||||
- copypath: /etc/pki/issued_certs/
|
||||
|
||||
@@ -3,70 +3,10 @@
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
|
||||
|
||||
include:
|
||||
- ca.dirs
|
||||
|
||||
/etc/salt/minion.d/signing_policies.conf:
|
||||
file.managed:
|
||||
- source: salt://ca/files/signing_policies.conf
|
||||
|
||||
pki_private_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/ca.key
|
||||
- keysize: 4096
|
||||
- passphrase:
|
||||
- backup: True
|
||||
{% if salt['file.file_exists']('/etc/pki/ca.key') -%}
|
||||
- prereq:
|
||||
- x509: /etc/pki/ca.crt
|
||||
{%- endif %}
|
||||
|
||||
pki_public_ca_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/ca.crt
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- CN: {{ GLOBALS.manager }}
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:true"
|
||||
- keyUsage: "critical cRLSign, keyCertSign"
|
||||
- extendedkeyUsage: "serverAuth, clientAuth"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid:always, issuer
|
||||
- days_valid: 3650
|
||||
- days_remaining: 0
|
||||
- backup: True
|
||||
- replace: False
|
||||
- require:
|
||||
- sls: ca.dirs
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
mine_update_ca_crt:
|
||||
module.run:
|
||||
- mine.update: []
|
||||
- onchanges:
|
||||
- x509: pki_public_ca_crt
|
||||
|
||||
cakeyperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/ca.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% if GLOBALS.is_manager %}
|
||||
- ca.server
|
||||
{% endif %}
|
||||
- ca.trustca
|
||||
|
||||
3
salt/ca/map.jinja
Normal file
@@ -0,0 +1,3 @@
|
||||
{% set CA = {
|
||||
'server': pillar.ca.server
|
||||
}%}
|
||||
@@ -1,7 +1,35 @@
|
||||
pki_private_key:
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% set setup_running = salt['cmd.retcode']('pgrep -x so-setup') == 0 %}
|
||||
|
||||
{% if setup_running%}
|
||||
|
||||
include:
|
||||
- ssl.remove
|
||||
|
||||
remove_pki_private_key:
|
||||
file.absent:
|
||||
- name: /etc/pki/ca.key
|
||||
|
||||
pki_public_ca_crt:
|
||||
remove_pki_public_ca_crt:
|
||||
file.absent:
|
||||
- name: /etc/pki/ca.crt
|
||||
|
||||
remove_trusttheca:
|
||||
file.absent:
|
||||
- name: /etc/pki/tls/certs/intca.crt
|
||||
|
||||
remove_pki_public_ca_crt_symlink:
|
||||
file.absent:
|
||||
- name: /opt/so/saltstack/local/salt/ca/files/ca.crt
|
||||
|
||||
{% else %}
|
||||
|
||||
so-setup_not_running:
|
||||
test.show_notification:
|
||||
- text: "This state is reserved for usage during so-setup."
|
||||
|
||||
{% endif %}
|
||||
|
||||
63
salt/ca/server.sls
Normal file
@@ -0,0 +1,63 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
|
||||
pki_private_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/ca.key
|
||||
- keysize: 4096
|
||||
- passphrase:
|
||||
- backup: True
|
||||
{% if salt['file.file_exists']('/etc/pki/ca.key') -%}
|
||||
- prereq:
|
||||
- x509: /etc/pki/ca.crt
|
||||
{%- endif %}
|
||||
|
||||
pki_public_ca_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/ca.crt
|
||||
- signing_private_key: /etc/pki/ca.key
|
||||
- CN: {{ GLOBALS.manager }}
|
||||
- C: US
|
||||
- ST: Utah
|
||||
- L: Salt Lake City
|
||||
- basicConstraints: "critical CA:true"
|
||||
- keyUsage: "critical cRLSign, keyCertSign"
|
||||
- extendedkeyUsage: "serverAuth, clientAuth"
|
||||
- subjectKeyIdentifier: hash
|
||||
- authorityKeyIdentifier: keyid:always, issuer
|
||||
- days_valid: 3650
|
||||
- days_remaining: 7
|
||||
- backup: True
|
||||
- replace: False
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
pki_public_ca_crt_symlink:
|
||||
file.symlink:
|
||||
- name: /opt/so/saltstack/local/salt/ca/files/ca.crt
|
||||
- target: /etc/pki/ca.crt
|
||||
- require:
|
||||
- x509: pki_public_ca_crt
|
||||
|
||||
cakeyperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/ca.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -3,11 +3,13 @@
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'idstools/map.jinja' import IDSTOOLSMERGED %}
|
||||
# when the salt-minion signs the cert, a copy is stored here
|
||||
issued_certs_copypath:
|
||||
file.directory:
|
||||
- name: /etc/pki/issued_certs
|
||||
- makedirs: True
|
||||
|
||||
include:
|
||||
{% if IDSTOOLSMERGED.enabled %}
|
||||
- idstools.enabled
|
||||
{% else %}
|
||||
- idstools.disabled
|
||||
{% endif %}
|
||||
signing_policy:
|
||||
file.managed:
|
||||
- name: /etc/salt/minion.d/signing_policies.conf
|
||||
- source: salt://ca/files/signing_policies.conf
|
||||
30
salt/ca/trustca.sls
Normal file
@@ -0,0 +1,30 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
|
||||
include:
|
||||
- docker
|
||||
|
||||
cacertdir:
|
||||
file.directory:
|
||||
- name: /etc/pki/tls/certs
|
||||
- makedirs: True
|
||||
|
||||
# Trust the CA
|
||||
trusttheca:
|
||||
file.managed:
|
||||
- name: /etc/pki/tls/certs/intca.crt
|
||||
- source: salt://ca/files/ca.crt
|
||||
- watch_in:
|
||||
- service: docker_running
|
||||
- show_changes: False
|
||||
|
||||
{% if GLOBALS.os_family == 'Debian' %}
|
||||
symlinkca:
|
||||
file.symlink:
|
||||
- target: /etc/pki/tls/certs/intca.crt
|
||||
- name: /etc/ssl/certs/intca.crt
|
||||
{% endif %}
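Once a node trusts the distributed CA via intca.crt, a quick manual check can confirm that a service certificate chains to it. A minimal sketch (the elasticsearch.crt path is borrowed from elsewhere in this diff; any certificate signed by the grid CA works the same way):
```
# Prints "... OK" when the certificate was issued by the CA installed at intca.crt
openssl verify -CAfile /etc/pki/tls/certs/intca.crt /etc/pki/elasticsearch.crt
```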
|
||||
@@ -177,7 +177,7 @@ so-status_script:
      - source: salt://common/tools/sbin/so-status
      - mode: 755

- {% if GLOBALS.role in GLOBALS.sensor_roles %}
+ {% if GLOBALS.is_sensor %}
  # Add sensor cleanup
  so-sensor-clean:
    cron.present:
@@ -554,21 +554,36 @@ run_check_net_err() {
  }

  wait_for_salt_minion() {
-   local minion="$1"
-   local timeout="${2:-5}"
-   local logfile="${3:-'/dev/stdout'}"
-   retry 60 5 "journalctl -u salt-minion.service | grep 'Minion is ready to receive requests'" >> "$logfile" 2>&1 || fail
-   local attempt=0
-   # each attempts would take about 15 seconds
-   local maxAttempts=20
-   until check_salt_minion_status "$minion" "$timeout" "$logfile"; do
-     attempt=$((attempt+1))
-     if [[ $attempt -eq $maxAttempts ]]; then
-       return 1
-     fi
-     sleep 10
-   done
-   return 0
+   local minion="$1"
+   local max_wait="${2:-30}"
+   local interval="${3:-2}"
+   local logfile="${4:-'/dev/stdout'}"
+   local elapsed=0
+
+   echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting for salt-minion '$minion' to be ready..." | tee -a "$logfile"
+
+   while [ $elapsed -lt $max_wait ]; do
+     # Check if service is running
+     if ! systemctl is-active --quiet salt-minion; then
+       echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion service not running (elapsed: ${elapsed}s)" | tee -a "$logfile"
+       sleep $interval
+       elapsed=$((elapsed + interval))
+       continue
+     fi
+
+     # Check if minion responds to ping
+     if salt "$minion" test.ping --timeout=3 --out=json 2>> "$logfile" | grep -q "true"; then
+       echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion '$minion' is connected and ready!" | tee -a "$logfile"
+       return 0
+     fi
+
+     echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting... (${elapsed}s / ${max_wait}s)" | tee -a "$logfile"
+     sleep $interval
+     elapsed=$((elapsed + interval))
+   done
+
+   echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - ERROR: salt-minion '$minion' not ready after $max_wait seconds" | tee -a "$logfile"
+   return 1
  }

  salt_minion_count() {
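A hedged usage sketch of the rewritten wait_for_salt_minion helper above; the minion ID, timeout values, and log path are illustrative and not taken from the installer:
```
# Illustrative call site: poll every 5 seconds for up to 120 seconds before giving up
MINION_ID=$(hostname)        # hypothetical way to pick a minion ID
SETUP_LOG=/root/setup.log    # hypothetical log file

if ! wait_for_salt_minion "$MINION_ID" 120 5 "$SETUP_LOG"; then
  echo "salt-minion never became ready, aborting" | tee -a "$SETUP_LOG"
  exit 1
fi
```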
@@ -25,7 +25,6 @@ container_list() {
|
||||
if [ $MANAGERCHECK == 'so-import' ]; then
|
||||
TRUSTED_CONTAINERS=(
|
||||
"so-elasticsearch"
|
||||
"so-idstools"
|
||||
"so-influxdb"
|
||||
"so-kibana"
|
||||
"so-kratos"
|
||||
@@ -49,7 +48,6 @@ container_list() {
|
||||
"so-elastic-fleet-package-registry"
|
||||
"so-elasticsearch"
|
||||
"so-idh"
|
||||
"so-idstools"
|
||||
"so-influxdb"
|
||||
"so-kafka"
|
||||
"so-kibana"
|
||||
@@ -69,7 +67,6 @@ container_list() {
|
||||
)
|
||||
else
|
||||
TRUSTED_CONTAINERS=(
|
||||
"so-idstools"
|
||||
"so-elasticsearch"
|
||||
"so-logstash"
|
||||
"so-nginx"
|
||||
|
||||
@@ -129,6 +129,7 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
|
||||
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|responded with status-code 503" # telegraf getting 503 from ES during startup
|
||||
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process_cluster_event_timeout_exception" # logstash waiting for elasticsearch to start
|
||||
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|not configured for GeoIP" # SO does not bundle the maxminddb with Zeek
|
||||
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|HTTP 404: Not Found" # Salt loops until Kratos returns 200, during startup Kratos may not be ready
|
||||
fi
|
||||
|
||||
if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
|
||||
|
||||
@@ -85,7 +85,7 @@ function suricata() {
|
||||
docker run --rm \
|
||||
-v /opt/so/conf/suricata/suricata.yaml:/etc/suricata/suricata.yaml:ro \
|
||||
-v /opt/so/conf/suricata/threshold.conf:/etc/suricata/threshold.conf:ro \
|
||||
-v /opt/so/conf/suricata/rules:/etc/suricata/rules:ro \
|
||||
-v /opt/so/rules/suricata/:/etc/suricata/rules:ro \
|
||||
-v ${LOG_PATH}:/var/log/suricata/:rw \
|
||||
-v ${NSM_PATH}/:/nsm/:rw \
|
||||
-v "$PCAP:/input.pcap:ro" \
|
||||
|
||||
@@ -3,29 +3,16 @@
|
||||
{# we only want this state to run if the OS is OEL #}
|
||||
{% if GLOBALS.os == 'OEL' %}
|
||||
|
||||
{% set global_ca_text = [] %}
|
||||
{% set global_ca_server = [] %}
|
||||
{% set manager = GLOBALS.manager %}
|
||||
{% set x509dict = salt['mine.get'](manager | lower~'*', 'x509.get_pem_entries') %}
|
||||
{% for host in x509dict %}
|
||||
{% if host.split('_')|last in ['manager', 'managersearch', 'standalone', 'import', 'eval'] %}
|
||||
{% do global_ca_text.append(x509dict[host].get('/etc/pki/ca.crt')|replace('\n', '')) %}
|
||||
{% do global_ca_server.append(host) %}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
{% set trusttheca_text = global_ca_text[0] %}
|
||||
{% set ca_server = global_ca_server[0] %}
|
||||
|
||||
trusted_ca:
|
||||
x509.pem_managed:
|
||||
file.managed:
|
||||
- name: /etc/pki/ca-trust/source/anchors/ca.crt
|
||||
- text: {{ trusttheca_text }}
|
||||
- source: salt://ca/files/ca.crt
|
||||
|
||||
update_ca_certs:
|
||||
cmd.run:
|
||||
- name: update-ca-trust
|
||||
- onchanges:
|
||||
- x509: trusted_ca
|
||||
- file: trusted_ca
|
||||
|
||||
{% else %}
|
||||
|
||||
|
||||
@@ -24,11 +24,6 @@ docker:
|
||||
custom_bind_mounts: []
|
||||
extra_hosts: []
|
||||
extra_env: []
|
||||
'so-idstools':
|
||||
final_octet: 25
|
||||
custom_bind_mounts: []
|
||||
extra_hosts: []
|
||||
extra_env: []
|
||||
'so-influxdb':
|
||||
final_octet: 26
|
||||
port_bindings:
|
||||
|
||||
@@ -6,9 +6,9 @@
|
||||
{% from 'docker/docker.map.jinja' import DOCKER %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
|
||||
# include ssl since docker service requires the intca
|
||||
# docker service requires the ca.crt
|
||||
include:
|
||||
- ssl
|
||||
- ca
|
||||
|
||||
dockergroup:
|
||||
group.present:
|
||||
@@ -89,10 +89,9 @@ docker_running:
|
||||
- enable: True
|
||||
- watch:
|
||||
- file: docker_daemon
|
||||
- x509: trusttheca
|
||||
- require:
|
||||
- file: docker_daemon
|
||||
- x509: trusttheca
|
||||
- file: trusttheca
|
||||
|
||||
|
||||
# Reserve OS ports for Docker proxy in case boot settings are not already applied/present
|
||||
|
||||
@@ -41,7 +41,6 @@ docker:
|
||||
forcedType: "[]string"
|
||||
so-elastic-fleet: *dockerOptions
|
||||
so-elasticsearch: *dockerOptions
|
||||
so-idstools: *dockerOptions
|
||||
so-influxdb: *dockerOptions
|
||||
so-kibana: *dockerOptions
|
||||
so-kratos: *dockerOptions
|
||||
@@ -102,4 +101,4 @@ docker:
|
||||
multiline: True
|
||||
forcedType: "[]string"
|
||||
so-zeek: *dockerOptions
|
||||
so-kafka: *dockerOptions
|
||||
so-kafka: *dockerOptions
|
||||
|
||||
@@ -60,7 +60,7 @@ so-elastalert:
|
||||
- watch:
|
||||
- file: elastaconf
|
||||
- onlyif:
|
||||
- "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 8" {# only run this state if elasticsearch is version 8 #}
|
||||
- "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 9" {# only run this state if elasticsearch is version 9 #}
|
||||
|
||||
delete_so-elastalert_so-status.disabled:
|
||||
file.uncomment:
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
{% from 'docker/docker.map.jinja' import DOCKER %}
|
||||
|
||||
include:
|
||||
- ca
|
||||
- elasticagent.config
|
||||
- elasticagent.sostatus
|
||||
|
||||
@@ -55,8 +56,10 @@ so-elastic-agent:
|
||||
{% endif %}
|
||||
- require:
|
||||
- file: create-elastic-agent-config
|
||||
- file: trusttheca
|
||||
- watch:
|
||||
- file: create-elastic-agent-config
|
||||
- file: trusttheca
|
||||
|
||||
delete_so-elastic-agent_so-status.disabled:
|
||||
file.uncomment:
|
||||
|
||||
@@ -13,9 +13,11 @@
|
||||
{% set SERVICETOKEN = salt['pillar.get']('elasticfleet:config:server:es_token','') %}
|
||||
|
||||
include:
|
||||
- ca
|
||||
- logstash.ssl
|
||||
- elasticfleet.ssl
|
||||
- elasticfleet.config
|
||||
- elasticfleet.sostatus
|
||||
- ssl
|
||||
|
||||
{% if grains.role not in ['so-fleet'] %}
|
||||
# Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
|
||||
@@ -133,6 +135,11 @@ so-elastic-fleet:
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- watch:
|
||||
- file: trusttheca
|
||||
- x509: etc_elasticfleet_key
|
||||
- x509: etc_elasticfleet_crt
|
||||
- require:
|
||||
- file: trusttheca
|
||||
- x509: etc_elasticfleet_key
|
||||
- x509: etc_elasticfleet_crt
|
||||
{% endif %}
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
"package": {
|
||||
"name": "endpoint",
|
||||
"title": "Elastic Defend",
|
||||
"version": "8.18.1",
|
||||
"version": "9.0.2",
|
||||
"requires_root": true
|
||||
},
|
||||
"enabled": true,
|
||||
|
||||
@@ -21,6 +21,7 @@
|
||||
'azure_application_insights.app_state': 'azure.app_state',
|
||||
'azure_billing.billing': 'azure.billing',
|
||||
'azure_functions.metrics': 'azure.function',
|
||||
'azure_ai_foundry.metrics': 'azure.ai_foundry',
|
||||
'azure_metrics.compute_vm_scaleset': 'azure.compute_vm_scaleset',
|
||||
'azure_metrics.compute_vm': 'azure.compute_vm',
|
||||
'azure_metrics.container_instance': 'azure.container_instance',
|
||||
|
||||
186
salt/elasticfleet/ssl.sls
Normal file
@@ -0,0 +1,186 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
|
||||
{% from 'ca/map.jinja' import CA %}
|
||||
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
|
||||
|
||||
{% if grains['role'] not in [ 'so-heavynode', 'so-receiver'] %}
|
||||
# Start -- Elastic Fleet Host Cert
|
||||
etc_elasticfleet_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticfleet-server.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticfleet-server.key') -%}
|
||||
- prereq:
|
||||
- x509: etc_elasticfleet_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
etc_elasticfleet_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticfleet-server.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticfleet-server.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
efperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-server.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetcrt:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-server.crt
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetkey:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-server.key
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
# End -- Elastic Fleet Host Cert
|
||||
{% endif %} # endif is for not including HeavyNodes & Receivers
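As a reading aid for the subjectAltName expression above: it joins the node hostname, the grid URL base, the node IP, and any configured custom FQDNs into one SAN list. With hypothetical values (hostname sensor1, url_base so.example.com, node IP 10.0.0.5, and a custom_fqdn entry of fleet.example.com) it would render as DNS:sensor1,DNS:so.example.com,IP:10.0.0.5,DNS:fleet.example.com; an empty custom_fqdn list simply drops the trailing DNS entries.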
|
||||
|
||||
|
||||
# Start -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
|
||||
etc_elasticfleet_agent_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticfleet-agent.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticfleet-agent.key') -%}
|
||||
- prereq:
|
||||
- x509: etc_elasticfleet_agent_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
etc_elasticfleet_agent_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticfleet-agent.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticfleet-agent.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-agent.key -topk8 -out /etc/pki/elasticfleet-agent.p8 -nocrypt"
|
||||
- onchanges:
|
||||
- x509: etc_elasticfleet_agent_key
|
||||
|
||||
efagentperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-agent.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetagentcrt:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-agent.crt
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetagentkey:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-agent.key
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
# End -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
|
||||
|
||||
{% endif %}
|
||||
|
||||
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone'] %}
|
||||
elasticfleet_kafka_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticfleet-kafka.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticfleet-kafka.key') -%}
|
||||
- prereq:
|
||||
- x509: elasticfleet_kafka_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
elasticfleet_kafka_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticfleet-kafka.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticfleet-kafka.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
elasticfleet_kafka_cert_perms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-kafka.crt
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
|
||||
elasticfleet_kafka_key_perms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-kafka.key
|
||||
- mode: 640
|
||||
- user: 947
|
||||
- group: 939
|
||||
{% endif %}
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -86,7 +86,7 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
|
||||
latest_package_list=$(/usr/sbin/so-elastic-fleet-package-list)
|
||||
echo '{ "packages" : []}' > $BULK_INSTALL_PACKAGE_LIST
|
||||
rm -f $INSTALLED_PACKAGE_LIST
|
||||
echo $latest_package_list | jq '{packages: [.items[] | {name: .name, latest_version: .version, installed_version: .savedObject.attributes.install_version, subscription: .conditions.elastic.subscription }]}' >> $INSTALLED_PACKAGE_LIST
|
||||
echo $latest_package_list | jq '{packages: [.items[] | {name: .name, latest_version: .version, installed_version: .installationInfo.version, subscription: .conditions.elastic.subscription }]}' >> $INSTALLED_PACKAGE_LIST
|
||||
|
||||
while read -r package; do
|
||||
# get package details
|
||||
|
||||
@@ -11,6 +11,8 @@
|
||||
|
||||
FORCE_UPDATE=false
|
||||
UPDATE_CERTS=false
|
||||
LOGSTASH_PILLAR_CONFIG_YAML="{{ LOGSTASH_CONFIG_YAML }}"
|
||||
LOGSTASH_PILLAR_STATE_FILE="/opt/so/state/esfleet_logstash_config_pillar"
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
@@ -43,38 +45,45 @@ function update_logstash_outputs() {
|
||||
LOGSTASHKEY=$(openssl rsa -in /etc/pki/elasticfleet-logstash.key)
|
||||
LOGSTASHCRT=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt)
|
||||
LOGSTASHCA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
|
||||
# Revert escaped \\n to \n for jq
|
||||
LOGSTASH_PILLAR_CONFIG_YAML=$(printf '%b' "$LOGSTASH_PILLAR_CONFIG_YAML")
|
||||
|
||||
if SECRETS=$(echo "$logstash_policy" | jq -er '.item.secrets' 2>/dev/null); then
|
||||
if [[ "$UPDATE_CERTS" != "true" ]]; then
|
||||
# Reuse existing secret
|
||||
JSON_STRING=$(jq -n \
|
||||
--arg UPDATEDLIST "$NEW_LIST_JSON" \
|
||||
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
|
||||
--argjson SECRETS "$SECRETS" \
|
||||
--argjson SSL_CONFIG "$SSL_CONFIG" \
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"{{ LOGSTASH_CONFIG_YAML }}","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": $SSL_CONFIG,"secrets": $SECRETS}')
|
||||
else
|
||||
# Update certs, creating new secret
|
||||
JSON_STRING=$(jq -n \
|
||||
--arg UPDATEDLIST "$NEW_LIST_JSON" \
|
||||
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
|
||||
--arg LOGSTASHKEY "$LOGSTASHKEY" \
|
||||
--arg LOGSTASHCRT "$LOGSTASHCRT" \
|
||||
--arg LOGSTASHCA "$LOGSTASHCA" \
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"{{ LOGSTASH_CONFIG_YAML }}","ssl": {"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets": {"ssl":{"key": $LOGSTASHKEY }}}')
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": {"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets": {"ssl":{"key": $LOGSTASHKEY }}}')
|
||||
fi
|
||||
else
|
||||
if [[ "$UPDATE_CERTS" != "true" ]]; then
|
||||
# Reuse existing ssl config
|
||||
JSON_STRING=$(jq -n \
|
||||
--arg UPDATEDLIST "$NEW_LIST_JSON" \
|
||||
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
|
||||
--argjson SSL_CONFIG "$SSL_CONFIG" \
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"{{ LOGSTASH_CONFIG_YAML }}","ssl": $SSL_CONFIG}')
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": $SSL_CONFIG}')
|
||||
else
|
||||
# Update ssl config
|
||||
JSON_STRING=$(jq -n \
|
||||
--arg UPDATEDLIST "$NEW_LIST_JSON" \
|
||||
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
|
||||
--arg LOGSTASHKEY "$LOGSTASHKEY" \
|
||||
--arg LOGSTASHCRT "$LOGSTASHCRT" \
|
||||
--arg LOGSTASHCA "$LOGSTASHCA" \
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"{{ LOGSTASH_CONFIG_YAML }}","ssl": {"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]}}')
|
||||
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": {"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]}}')
|
||||
fi
|
||||
fi
|
||||
fi
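The change above moves the pillar-provided config_yaml out of the inlined "{{ LOGSTASH_CONFIG_YAML }}" template text and into a jq --arg variable, which keeps newlines and quotes in the pillar value from breaking the jq program. A small self-contained sketch of the pattern (the sample YAML string is illustrative):
```
# Illustrative multi-line value; --arg passes it as a properly escaped JSON string
CONFIG_YAML=$'pipeline.workers: 4\nqueue.type: persisted'

jq -n --arg CONFIG_YAML "$CONFIG_YAML" \
  '{"name":"grid-logstash","type":"logstash","config_yaml":$CONFIG_YAML}'
```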
|
||||
@@ -167,14 +176,14 @@ function update_kafka_outputs() {
|
||||
printf "Failed to query for current Logstash Outputs..."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CURRENT_LOGSTASH_ADV_CONFIG=$(jq -r '.item.config_yaml // ""' <<< "$RAW_JSON")
|
||||
CURRENT_LOGSTASH_ADV_CONFIG_HASH=$(sha256sum <<< "$CURRENT_LOGSTASH_ADV_CONFIG" | awk '{print $1}')
|
||||
NEW_LOGSTASH_ADV_CONFIG=$'{{ LOGSTASH_CONFIG_YAML }}'
|
||||
NEW_LOGSTASH_ADV_CONFIG_HASH=$(sha256sum <<< "$NEW_LOGSTASH_ADV_CONFIG" | awk '{print $1}')
|
||||
|
||||
if [ "$CURRENT_LOGSTASH_ADV_CONFIG_HASH" != "$NEW_LOGSTASH_ADV_CONFIG_HASH" ]; then
|
||||
FORCE_UPDATE=true
|
||||
# logstash adv config - compare pillar to last state file value
|
||||
if [[ -f "$LOGSTASH_PILLAR_STATE_FILE" ]]; then
|
||||
PREVIOUS_LOGSTASH_PILLAR_CONFIG_YAML=$(cat "$LOGSTASH_PILLAR_STATE_FILE")
|
||||
if [[ "$LOGSTASH_PILLAR_CONFIG_YAML" != "$PREVIOUS_LOGSTASH_PILLAR_CONFIG_YAML" ]]; then
|
||||
echo "Logstash pillar config has changed - forcing update"
|
||||
FORCE_UPDATE=true
|
||||
fi
|
||||
echo "$LOGSTASH_PILLAR_CONFIG_YAML" > "$LOGSTASH_PILLAR_STATE_FILE"
|
||||
fi
|
||||
|
||||
# Get the current list of Logstash outputs & hash them
|
||||
|
||||
@@ -47,7 +47,7 @@ if ! kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://l
|
||||
--arg KAFKACA "$KAFKACA" \
|
||||
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
|
||||
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
|
||||
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
|
||||
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topic":"default-securityonion","headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
|
||||
)
|
||||
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
|
||||
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
|
||||
@@ -67,7 +67,7 @@ elif kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://l
|
||||
--arg ENABLED_DISABLED "$ENABLED_DISABLED"\
|
||||
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
|
||||
--argjson HOSTS "$HOSTS" \
|
||||
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
|
||||
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topic":"default-securityonion","headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
|
||||
)
|
||||
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
|
||||
echo -e "\nFailed to force update to Elastic Fleet output policy for Kafka...\n"
|
||||
|
||||
@@ -26,14 +26,14 @@ catrustscript:
|
||||
GLOBALS: {{ GLOBALS }}
|
||||
{% endif %}
|
||||
|
||||
cacertz:
|
||||
elasticsearch_cacerts:
|
||||
file.managed:
|
||||
- name: /opt/so/conf/ca/cacerts
|
||||
- source: salt://elasticsearch/cacerts
|
||||
- user: 939
|
||||
- group: 939
|
||||
|
||||
capemz:
|
||||
elasticsearch_capems:
|
||||
file.managed:
|
||||
- name: /opt/so/conf/ca/tls-ca-bundle.pem
|
||||
- source: salt://elasticsearch/tls-ca-bundle.pem
|
||||
|
||||
@@ -5,11 +5,6 @@
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
|
||||
include:
|
||||
- ssl
|
||||
- elasticsearch.ca
|
||||
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
elasticsearch:
|
||||
enabled: false
|
||||
version: 8.18.8
|
||||
version: 9.0.8
|
||||
index_clean: true
|
||||
config:
|
||||
action:
|
||||
@@ -299,6 +299,19 @@ elasticsearch:
|
||||
hot:
|
||||
actions: {}
|
||||
min_age: 0ms
|
||||
sos-backup:
|
||||
index_sorting: false
|
||||
index_template:
|
||||
composed_of: []
|
||||
ignore_missing_component_templates: []
|
||||
index_patterns:
|
||||
- sos-backup-*
|
||||
priority: 501
|
||||
template:
|
||||
settings:
|
||||
index:
|
||||
number_of_replicas: 0
|
||||
number_of_shards: 1
|
||||
so-assistant-chat:
|
||||
index_sorting: false
|
||||
index_template:
|
||||
@@ -844,53 +857,11 @@ elasticsearch:
|
||||
composed_of:
|
||||
- agent-mappings
|
||||
- dtc-agent-mappings
|
||||
- base-mappings
|
||||
- dtc-base-mappings
|
||||
- client-mappings
|
||||
- dtc-client-mappings
|
||||
- container-mappings
|
||||
- destination-mappings
|
||||
- dtc-destination-mappings
|
||||
- pb-override-destination-mappings
|
||||
- dll-mappings
|
||||
- dns-mappings
|
||||
- dtc-dns-mappings
|
||||
- ecs-mappings
|
||||
- dtc-ecs-mappings
|
||||
- error-mappings
|
||||
- event-mappings
|
||||
- dtc-event-mappings
|
||||
- file-mappings
|
||||
- dtc-file-mappings
|
||||
- group-mappings
|
||||
- host-mappings
|
||||
- dtc-host-mappings
|
||||
- http-mappings
|
||||
- dtc-http-mappings
|
||||
- log-mappings
|
||||
- metadata-mappings
|
||||
- network-mappings
|
||||
- dtc-network-mappings
|
||||
- observer-mappings
|
||||
- dtc-observer-mappings
|
||||
- organization-mappings
|
||||
- package-mappings
|
||||
- process-mappings
|
||||
- dtc-process-mappings
|
||||
- related-mappings
|
||||
- rule-mappings
|
||||
- dtc-rule-mappings
|
||||
- server-mappings
|
||||
- service-mappings
|
||||
- dtc-service-mappings
|
||||
- source-mappings
|
||||
- dtc-source-mappings
|
||||
- pb-override-source-mappings
|
||||
- threat-mappings
|
||||
- tls-mappings
|
||||
- url-mappings
|
||||
- user_agent-mappings
|
||||
- dtc-user_agent-mappings
|
||||
- common-settings
|
||||
- common-dynamic-mappings
|
||||
data_stream:
|
||||
|
||||
@@ -14,6 +14,9 @@
|
||||
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
|
||||
|
||||
include:
|
||||
- ca
|
||||
- elasticsearch.ca
|
||||
- elasticsearch.ssl
|
||||
- elasticsearch.config
|
||||
- elasticsearch.sostatus
|
||||
|
||||
@@ -61,11 +64,7 @@ so-elasticsearch:
|
||||
- /nsm/elasticsearch:/usr/share/elasticsearch/data:rw
|
||||
- /opt/so/log/elasticsearch:/var/log/elasticsearch:rw
|
||||
- /opt/so/conf/ca/cacerts:/usr/share/elasticsearch/jdk/lib/security/cacerts:ro
|
||||
{% if GLOBALS.is_manager %}
|
||||
- /etc/pki/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro
|
||||
{% else %}
|
||||
- /etc/pki/tls/certs/intca.crt:/usr/share/elasticsearch/config/ca.crt:ro
|
||||
{% endif %}
|
||||
- /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro
|
||||
- /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro
|
||||
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
|
||||
@@ -82,22 +81,21 @@ so-elasticsearch:
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- watch:
|
||||
- file: cacertz
|
||||
- file: trusttheca
|
||||
- x509: elasticsearch_crt
|
||||
- x509: elasticsearch_key
|
||||
- file: elasticsearch_cacerts
|
||||
- file: esyml
|
||||
- require:
|
||||
- file: trusttheca
|
||||
- x509: elasticsearch_crt
|
||||
- x509: elasticsearch_key
|
||||
- file: elasticsearch_cacerts
|
||||
- file: esyml
|
||||
- file: eslog4jfile
|
||||
- file: nsmesdir
|
||||
- file: eslogdir
|
||||
- file: cacertz
|
||||
- x509: /etc/pki/elasticsearch.crt
|
||||
- x509: /etc/pki/elasticsearch.key
|
||||
- file: elasticp12perms
|
||||
{% if GLOBALS.is_manager %}
|
||||
- x509: pki_public_ca_crt
|
||||
{% else %}
|
||||
- x509: trusttheca
|
||||
{% endif %}
|
||||
- cmd: auth_users_roles_inode
|
||||
- cmd: auth_users_inode
|
||||
|
||||
|
||||
@@ -1,9 +1,90 @@
|
||||
{
|
||||
"description" : "kratos",
|
||||
"processors" : [
|
||||
{"set":{"field":"audience","value":"access","override":false,"ignore_failure":true}},
|
||||
{"set":{"field":"event.dataset","ignore_empty_value":true,"ignore_failure":true,"value":"kratos.{{{audience}}}","media_type":"text/plain"}},
|
||||
{"set":{"field":"event.action","ignore_failure":true,"copy_from":"msg" }},
|
||||
{ "pipeline": { "name": "common" } }
|
||||
]
|
||||
"description": "kratos",
|
||||
"processors": [
|
||||
{
|
||||
"set": {
|
||||
"field": "audience",
|
||||
"value": "access",
|
||||
"override": false,
|
||||
"ignore_failure": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"set": {
|
||||
"field": "event.dataset",
|
||||
"ignore_empty_value": true,
|
||||
"ignore_failure": true,
|
||||
"value": "kratos.{{{audience}}}",
|
||||
"media_type": "text/plain"
|
||||
}
|
||||
},
|
||||
{
|
||||
"set": {
|
||||
"field": "event.action",
|
||||
"ignore_failure": true,
|
||||
"copy_from": "msg"
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http_request",
|
||||
"target_field": "http.request",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http_response",
|
||||
"target_field": "http.response",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http.request.path",
|
||||
"target_field": "http.uri",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http.request.method",
|
||||
"target_field": "http.method",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http.request.method",
|
||||
"target_field": "http.method",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http.request.query",
|
||||
"target_field": "http.query",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"rename": {
|
||||
"field": "http.request.headers.user-agent",
|
||||
"target_field": "http.useragent",
|
||||
"ignore_failure": true,
|
||||
"ignore_missing": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"pipeline": {
|
||||
"name": "common"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
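One way to spot-check rename processors like the ones added above is Elasticsearch's pipeline simulate API. A hedged sketch: the endpoint, credentials, pipeline id ("kratos"), and sample document are assumptions for illustration, not taken from this repository, and the final "pipeline": "common" processor must also exist for the simulation to succeed:
```
# Feed a fabricated kratos-style document through the pipeline and inspect the renamed fields
curl -sk -u "user:pass" -X POST "https://localhost:9200/_ingest/pipeline/kratos/_simulate" \
  -H 'Content-Type: application/json' \
  -d '{"docs":[{"_source":{"msg":"login ok","http_request":{"method":"POST","path":"/self-service/login"}}}]}'
# The response should show http.method and http.uri populated by the rename processors
```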
|
||||
66
salt/elasticsearch/ssl.sls
Normal file
@@ -0,0 +1,66 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'ca/map.jinja' import CA %}
|
||||
|
||||
# Create a cert for elasticsearch
|
||||
elasticsearch_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticsearch.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticsearch.key') -%}
|
||||
- prereq:
|
||||
- x509: /etc/pki/elasticsearch.crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
elasticsearch_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticsearch.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticsearch.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/elasticsearch.key -in /etc/pki/elasticsearch.crt -export -out /etc/pki/elasticsearch.p12 -nodes -passout pass:"
|
||||
- onchanges:
|
||||
- x509: /etc/pki/elasticsearch.key
|
||||
|
||||
elastickeyperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticsearch.key
|
||||
- mode: 640
|
||||
- group: 930
|
||||
|
||||
elasticp12perms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticsearch.p12
|
||||
- mode: 640
|
||||
- group: 930
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -15,7 +15,7 @@ set -e
|
||||
if [ ! -f /opt/so/saltstack/local/salt/elasticsearch/cacerts ]; then
|
||||
docker run -v /etc/pki/ca.crt:/etc/ssl/ca.crt --name so-elasticsearchca --user root --entrypoint jdk/bin/keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elasticsearch:$ELASTIC_AGENT_TARBALL_VERSION -keystore /usr/share/elasticsearch/jdk/lib/security/cacerts -alias SOSCA -import -file /etc/ssl/ca.crt -storepass changeit -noprompt
|
||||
docker cp so-elasticsearchca:/usr/share/elasticsearch/jdk/lib/security/cacerts /opt/so/saltstack/local/salt/elasticsearch/cacerts
|
||||
docker cp so-elasticsearchca:/etc/ssl/certs/ca-certificates.crt /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
|
||||
docker cp so-elasticsearchca:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
|
||||
docker rm so-elasticsearchca
|
||||
echo "" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
|
||||
echo "sosca" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
|
||||
|
||||
@@ -121,7 +121,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
|
||||
echo "Loading Security Onion index templates..."
|
||||
shopt -s extglob
|
||||
{% if GLOBALS.role == 'so-heavynode' %}
|
||||
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*)"
|
||||
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*|*endpoint*|*elasticsearch*|*generic*|*fleet_server*|*soc*)"
|
||||
{% else %}
|
||||
pattern="*"
|
||||
{% endif %}
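The exclusion pattern above relies on bash extglob negation: with shopt -s extglob enabled, !(A|B) matches any name that does not match A or B. A small illustrative sketch with made-up template names:
```
#!/bin/bash
shopt -s extglob

# Made-up template names, just to exercise the pattern
templates=(so-common-template elastic_agent-template windows-template suricata-template)

pattern="!(*elastic_agent*|*windows*)"
for t in "${templates[@]}"; do
  # an unquoted $pattern on the right side of == enables glob/extglob matching
  [[ $t == $pattern ]] && echo "would load: $t"
done
# Prints so-common-template and suricata-template; the excluded integrations are skipped
```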
|
||||
|
||||
@@ -1,65 +0,0 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
|
||||
include:
|
||||
- idstools.sync_files
|
||||
|
||||
idstoolslogdir:
|
||||
file.directory:
|
||||
- name: /opt/so/log/idstools
|
||||
- user: 939
|
||||
- group: 939
|
||||
- makedirs: True
|
||||
|
||||
idstools_sbin:
|
||||
file.recurse:
|
||||
- name: /usr/sbin
|
||||
- source: salt://idstools/tools/sbin
|
||||
- user: 939
|
||||
- group: 939
|
||||
- file_mode: 755
|
||||
|
||||
# If this is used, exclude so-rule-update
|
||||
#idstools_sbin_jinja:
|
||||
# file.recurse:
|
||||
# - name: /usr/sbin
|
||||
# - source: salt://idstools/tools/sbin_jinja
|
||||
# - user: 939
|
||||
# - group: 939
|
||||
# - file_mode: 755
|
||||
# - template: jinja
|
||||
|
||||
idstools_so-rule-update:
|
||||
file.managed:
|
||||
- name: /usr/sbin/so-rule-update
|
||||
- source: salt://idstools/tools/sbin_jinja/so-rule-update
|
||||
- user: 939
|
||||
- group: 939
|
||||
- mode: 755
|
||||
- template: jinja
|
||||
|
||||
suricatacustomdirsfile:
|
||||
file.directory:
|
||||
- name: /nsm/rules/detect-suricata/custom_file
|
||||
- user: 939
|
||||
- group: 939
|
||||
- makedirs: True
|
||||
|
||||
suricatacustomdirsurl:
|
||||
file.directory:
|
||||
- name: /nsm/rules/detect-suricata/custom_temp
|
||||
- user: 939
|
||||
- group: 939
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -1,10 +0,0 @@
|
||||
idstools:
|
||||
enabled: False
|
||||
config:
|
||||
urls: []
|
||||
ruleset: ETOPEN
|
||||
oinkcode: ""
|
||||
sids:
|
||||
enabled: []
|
||||
disabled: []
|
||||
modify: []
|
||||
@@ -1,31 +0,0 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
|
||||
include:
|
||||
- idstools.sostatus
|
||||
|
||||
so-idstools:
|
||||
docker_container.absent:
|
||||
- force: True
|
||||
|
||||
so-idstools_so-status.disabled:
|
||||
file.comment:
|
||||
- name: /opt/so/conf/so-status/so-status.conf
|
||||
- regex: ^so-idstools$
|
||||
|
||||
so-rule-update:
|
||||
cron.absent:
|
||||
- identifier: so-rule-update
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -1,91 +0,0 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
{% from 'docker/docker.map.jinja' import DOCKER %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% set proxy = salt['pillar.get']('manager:proxy') %}
|
||||
|
||||
include:
|
||||
- idstools.config
|
||||
- idstools.sostatus
|
||||
|
||||
so-idstools:
|
||||
docker_container.running:
|
||||
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-idstools:{{ GLOBALS.so_version }}
|
||||
- hostname: so-idstools
|
||||
- user: socore
|
||||
- networks:
|
||||
- sobridge:
|
||||
- ipv4_address: {{ DOCKER.containers['so-idstools'].ip }}
|
||||
{% if proxy %}
|
||||
- environment:
|
||||
- http_proxy={{ proxy }}
|
||||
- https_proxy={{ proxy }}
|
||||
- no_proxy={{ salt['pillar.get']('manager:no_proxy') }}
|
||||
{% if DOCKER.containers['so-idstools'].extra_env %}
|
||||
{% for XTRAENV in DOCKER.containers['so-idstools'].extra_env %}
|
||||
- {{ XTRAENV }}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
{% elif DOCKER.containers['so-idstools'].extra_env %}
|
||||
- environment:
|
||||
{% for XTRAENV in DOCKER.containers['so-idstools'].extra_env %}
|
||||
- {{ XTRAENV }}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- binds:
|
||||
- /opt/so/conf/idstools/etc:/opt/so/idstools/etc:ro
|
||||
- /opt/so/rules/nids/suri:/opt/so/rules/nids/suri:rw
|
||||
- /nsm/rules/:/nsm/rules/:rw
|
||||
{% if DOCKER.containers['so-idstools'].custom_bind_mounts %}
|
||||
{% for BIND in DOCKER.containers['so-idstools'].custom_bind_mounts %}
|
||||
- {{ BIND }}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- extra_hosts:
|
||||
- {{ GLOBALS.manager }}:{{ GLOBALS.manager_ip }}
|
||||
{% if DOCKER.containers['so-idstools'].extra_hosts %}
|
||||
{% for XTRAHOST in DOCKER.containers['so-idstools'].extra_hosts %}
|
||||
- {{ XTRAHOST }}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- watch:
|
||||
- file: idstoolsetcsync
|
||||
- file: idstools_so-rule-update
|
||||
|
||||
delete_so-idstools_so-status.disabled:
|
||||
file.uncomment:
|
||||
- name: /opt/so/conf/so-status/so-status.conf
|
||||
- regex: ^so-idstools$
|
||||
|
||||
so-rule-update:
|
||||
cron.present:
|
||||
- name: /usr/sbin/so-rule-update > /opt/so/log/idstools/download_cron.log 2>&1
|
||||
- identifier: so-rule-update
|
||||
- user: root
|
||||
- minute: '1'
|
||||
- hour: '7'
|
||||
|
||||
# order this last to give so-idstools container time to be ready
|
||||
run_so-rule-update:
|
||||
cmd.run:
|
||||
- name: '/usr/sbin/so-rule-update > /opt/so/log/idstools/download_idstools_state.log 2>&1'
|
||||
- require:
|
||||
- docker_container: so-idstools
|
||||
- onchanges:
|
||||
- file: idstools_so-rule-update
|
||||
- file: idstoolsetcsync
|
||||
- file: synclocalnidsrules
|
||||
- order: last
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
@@ -1,16 +0,0 @@
|
||||
{%- set disabled_sids = salt['pillar.get']('idstools:sids:disabled', {}) -%}
|
||||
# idstools - disable.conf
|
||||
|
||||
# Example of disabling a rule by signature ID (gid is optional).
|
||||
# 1:2019401
|
||||
# 2019401
|
||||
|
||||
# Example of disabling a rule by regular expression.
|
||||
# - All regular expression matches are case insensitive.
|
||||
# re:heartbleed
|
||||
# re:MS(0[7-9]|10)-\d+
|
||||
{%- if disabled_sids != None %}
|
||||
{%- for sid in disabled_sids %}
|
||||
{{ sid }}
|
||||
{%- endfor %}
|
||||
{%- endif %}
|
||||
@@ -1,16 +0,0 @@
|
||||
{%- set enabled_sids = salt['pillar.get']('idstools:sids:enabled', {}) -%}
|
||||
# idstools-rulecat - enable.conf
|
||||
|
||||
# Example of enabling a rule by signature ID (gid is optional).
|
||||
# 1:2019401
|
||||
# 2019401
|
||||
|
||||
# Example of enabling a rule by regular expression.
|
||||
# - All regular expression matches are case insensitive.
|
||||
# re:heartbleed
|
||||
# re:MS(0[7-9]|10)-\d+
|
||||
{%- if enabled_sids != None %}
|
||||
{%- for sid in enabled_sids %}
|
||||
{{ sid }}
|
||||
{%- endfor %}
|
||||
{%- endif %}
|
||||
@@ -1,12 +0,0 @@
|
||||
{%- set modify_sids = salt['pillar.get']('idstools:sids:modify', {}) -%}
|
||||
# idstools-rulecat - modify.conf
|
||||
|
||||
# Format: <sid> "<from>" "<to>"
|
||||
|
||||
# Example changing the seconds for rule 2019401 to 3600.
|
||||
#2019401 "seconds \d+" "seconds 3600"
|
||||
{%- if modify_sids != None %}
|
||||
{%- for sid in modify_sids %}
|
||||
{{ sid }}
|
||||
{%- endfor %}
|
||||
{%- endif %}
|
||||
@@ -1,23 +0,0 @@
|
||||
{%- from 'vars/globals.map.jinja' import GLOBALS -%}
|
||||
{%- from 'soc/merged.map.jinja' import SOCMERGED -%}
|
||||
--suricata-version=7.0.3
|
||||
--merged=/opt/so/rules/nids/suri/all.rules
|
||||
--output=/nsm/rules/detect-suricata/custom_temp
|
||||
--local=/opt/so/rules/nids/suri/local.rules
|
||||
{%- if GLOBALS.md_engine == "SURICATA" %}
|
||||
--local=/opt/so/rules/nids/suri/extraction.rules
|
||||
--local=/opt/so/rules/nids/suri/filters.rules
|
||||
{%- endif %}
|
||||
--url=http://{{ GLOBALS.manager }}:7788/suricata/emerging-all.rules
|
||||
--disable=/opt/so/idstools/etc/disable.conf
|
||||
--enable=/opt/so/idstools/etc/enable.conf
|
||||
--modify=/opt/so/idstools/etc/modify.conf
|
||||
{%- if SOCMERGED.config.server.modules.suricataengine.customRulesets %}
|
||||
{%- for ruleset in SOCMERGED.config.server.modules.suricataengine.customRulesets %}
|
||||
{%- if 'url' in ruleset %}
|
||||
--url={{ ruleset.url }}
|
||||
{%- elif 'file' in ruleset %}
|
||||
--local={{ ruleset.file }}
|
||||
{%- endif %}
|
||||
{%- endfor %}
|
||||
{%- endif %}
|
||||
@@ -1,7 +0,0 @@
|
||||
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
Elastic License 2.0. #}
|
||||
|
||||
{% import_yaml 'idstools/defaults.yaml' as IDSTOOLSDEFAULTS with context %}
|
||||
{% set IDSTOOLSMERGED = salt['pillar.get']('idstools', IDSTOOLSDEFAULTS.idstools, merge=True) %}
|
||||
@@ -1 +0,0 @@
|
||||
# Add your custom Suricata rules in this file.
|
||||
@@ -1,72 +0,0 @@
|
||||
idstools:
|
||||
enabled:
|
||||
description: Enables or disables the IDStools process which is used by the Detection system.
|
||||
config:
|
||||
oinkcode:
|
||||
description: Enter your registration code or oinkcode for paid NIDS rulesets.
|
||||
title: Registration Code
|
||||
global: True
|
||||
forcedType: string
|
||||
helpLink: rules.html
|
||||
ruleset:
|
||||
description: 'Defines the ruleset you want to run. Options are ETOPEN or ETPRO. Once you have changed the ruleset here, you will need to wait for the rule update to take place (every 24 hours), or you can force the update by navigating to Detections --> Options dropdown menu --> Suricata --> Full Update. WARNING! Changing the ruleset will remove all existing non-overlapping Suricata rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
|
||||
global: True
|
||||
regex: ETPRO\b|ETOPEN\b
|
||||
helpLink: rules.html
|
||||
urls:
|
||||
description: This is a list of additional rule download locations. This feature is currently disabled.
|
||||
global: True
|
||||
multiline: True
|
||||
forcedType: "[]string"
|
||||
readonly: True
|
||||
helpLink: rules.html
|
||||
sids:
|
||||
disabled:
|
||||
description: Contains the list of NIDS rules (or regex patterns) disabled across the grid. This setting is readonly; Use the Detections screen to disable rules.
|
||||
global: True
|
||||
multiline: True
|
||||
forcedType: "[]string"
|
||||
regex: \d*|re:.*
|
||||
helpLink: managing-alerts.html
|
||||
readonlyUi: True
|
||||
advanced: true
|
||||
enabled:
|
||||
description: Contains the list of NIDS rules (or regex patterns) enabled across the grid. This setting is readonly; Use the Detections screen to enable rules.
|
||||
global: True
|
||||
multiline: True
|
||||
forcedType: "[]string"
|
||||
regex: \d*|re:.*
|
||||
helpLink: managing-alerts.html
|
||||
readonlyUi: True
|
||||
advanced: true
|
||||
modify:
|
||||
description: Contains the list of NIDS rules (SID "REGEX_SEARCH_TERM" "REGEX_REPLACE_TERM"). This setting is readonly; Use the Detections screen to modify rules.
|
||||
global: True
|
||||
multiline: True
|
||||
forcedType: "[]string"
|
||||
helpLink: managing-alerts.html
|
||||
readonlyUi: True
|
||||
advanced: true
|
||||
rules:
|
||||
local__rules:
|
||||
description: Contains the list of custom NIDS rules applied to the grid. This setting is readonly; Use the Detections screen to adjust rules.
|
||||
file: True
|
||||
global: True
|
||||
advanced: True
|
||||
title: Local Rules
|
||||
helpLink: local-rules.html
|
||||
readonlyUi: True
|
||||
filters__rules:
|
||||
description: If you are using Suricata for metadata, then you can set custom filters for that metadata here.
|
||||
file: True
|
||||
global: True
|
||||
advanced: True
|
||||
title: Filter Rules
|
||||
helpLink: suricata.html
|
||||
extraction__rules:
|
||||
description: If you are using Suricata for metadata, then you can set a list of MIME types for file extraction here.
|
||||
file: True
|
||||
global: True
|
||||
advanced: True
|
||||
title: Extraction Rules
|
||||
helpLink: suricata.html
|
||||
@@ -1,37 +0,0 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
|
||||
idstoolsdir:
|
||||
file.directory:
|
||||
- name: /opt/so/conf/idstools/etc
|
||||
- user: 939
|
||||
- group: 939
|
||||
- makedirs: True
|
||||
|
||||
idstoolsetcsync:
|
||||
file.recurse:
|
||||
- name: /opt/so/conf/idstools/etc
|
||||
- source: salt://idstools/etc
|
||||
- user: 939
|
||||
- group: 939
|
||||
- template: jinja
|
||||
|
||||
rulesdir:
|
||||
file.directory:
|
||||
- name: /opt/so/rules/nids/suri
|
||||
- user: 939
|
||||
- group: 939
|
||||
- makedirs: True
|
||||
|
||||
# Don't show changes because all.rules can be large
|
||||
synclocalnidsrules:
|
||||
file.recurse:
|
||||
- name: /opt/so/rules/nids/suri/
|
||||
- source: salt://idstools/rules/
|
||||
- user: 939
|
||||
- group: 939
|
||||
- show_changes: False
|
||||
- include_pat: 'E@.rules'
|
||||
@@ -1,12 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
|
||||
|
||||
. /usr/sbin/so-common
|
||||
|
||||
/usr/sbin/so-restart idstools $1
|
||||
@@ -1,12 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
|
||||
|
||||
. /usr/sbin/so-common
|
||||
|
||||
/usr/sbin/so-start idstools $1
|
||||
@@ -1,12 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
|
||||
|
||||
. /usr/sbin/so-common
|
||||
|
||||
/usr/sbin/so-stop idstools $1
|
||||
@@ -1,40 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# if this script isn't already running
|
||||
if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
|
||||
|
||||
. /usr/sbin/so-common
|
||||
|
||||
{%- from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{%- from 'idstools/map.jinja' import IDSTOOLSMERGED %}
|
||||
|
||||
{%- set proxy = salt['pillar.get']('manager:proxy') %}
|
||||
{%- set noproxy = salt['pillar.get']('manager:no_proxy', '') %}
|
||||
|
||||
{%- if proxy %}
|
||||
# Download the rules from the internet
|
||||
export http_proxy={{ proxy }}
|
||||
export https_proxy={{ proxy }}
|
||||
export no_proxy="{{ noproxy }}"
|
||||
{%- endif %}
|
||||
|
||||
mkdir -p /nsm/rules/suricata
|
||||
chown -R socore:socore /nsm/rules/suricata
|
||||
{%- if not GLOBALS.airgap %}
|
||||
# Download the rules from the internet
|
||||
{%- if IDSTOOLSMERGED.config.ruleset == 'ETOPEN' %}
|
||||
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force
|
||||
{%- elif IDSTOOLSMERGED.config.ruleset == 'ETPRO' %}
|
||||
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --etpro={{ IDSTOOLSMERGED.config.oinkcode }}
|
||||
{%- endif %}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
argstr=""
|
||||
for arg in "$@"; do
|
||||
argstr="${argstr} \"${arg}\""
|
||||
done
|
||||
|
||||
docker exec so-idstools /bin/bash -c "cd /opt/so/idstools/etc && idstools-rulecat --force ${argstr}"
|
||||
|
||||
fi
|
||||
@@ -9,7 +9,6 @@
|
||||
|
||||
include:
|
||||
- salt.minion
|
||||
- ssl
|
||||
|
||||
# Influx DB
|
||||
influxconfdir:
|
||||
|
||||
@@ -11,6 +11,7 @@
|
||||
{% set TOKEN = salt['pillar.get']('influxdb:token') %}
|
||||
|
||||
include:
|
||||
- influxdb.ssl
|
||||
- influxdb.config
|
||||
- influxdb.sostatus
|
||||
|
||||
@@ -59,6 +60,8 @@ so-influxdb:
|
||||
{% endif %}
|
||||
- watch:
|
||||
- file: influxdbconf
|
||||
- x509: influxdb_key
|
||||
- x509: influxdb_crt
|
||||
- require:
|
||||
- file: influxdbconf
|
||||
- x509: influxdb_key
|
||||
|
||||
salt/influxdb/ssl.sls (new file, 55 additions)
@@ -0,0 +1,55 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'ca/map.jinja' import CA %}
|
||||
|
||||
influxdb_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/influxdb.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/influxdb.key') -%}
|
||||
- prereq:
|
||||
- x509: /etc/pki/influxdb.crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
# Create a cert for talking to influxdb
|
||||
influxdb_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/influxdb.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/influxdb.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
influxkeyperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/influxdb.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
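A hedged way to spot-check the certificate the new influxdb ssl state produces, assuming the grid's intermediate CA lives at /etc/pki/tls/certs/intca.crt as mounted elsewhere in this changeset:

    # Check the SAN entries and expiry on the new InfluxDB cert
    openssl x509 -noout -text -in /etc/pki/influxdb.crt | grep -A1 'Subject Alternative Name'
    openssl x509 -noout -enddate -in /etc/pki/influxdb.crt
    # Verify it chains to the internal CA
    openssl verify -CAfile /etc/pki/tls/certs/intca.crt /etc/pki/influxdb.crt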
@@ -68,6 +68,8 @@ so-kafka:
|
||||
- file: kafka_server_jaas_properties
|
||||
{% endif %}
|
||||
- file: kafkacertz
|
||||
- x509: kafka_client_crt
|
||||
- file: kafka_pkcs12_perms
|
||||
- require:
|
||||
- file: kafkacertz
|
||||
|
||||
@@ -95,4 +97,4 @@ include:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
@@ -6,18 +6,11 @@
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'ca/map.jinja' import CA %}
|
||||
{% set kafka_password = salt['pillar.get']('kafka:config:password') %}
|
||||
|
||||
include:
|
||||
- ca.dirs
|
||||
{% set global_ca_server = [] %}
|
||||
{% set x509dict = salt['mine.get'](GLOBALS.manager | lower~'*', 'x509.get_pem_entries') %}
|
||||
{% for host in x509dict %}
|
||||
{% if 'manager' in host.split('_')|last or host.split('_')|last == 'standalone' %}
|
||||
{% do global_ca_server.append(host) %}
|
||||
{% endif %}
|
||||
{% endfor %}
|
||||
{% set ca_server = global_ca_server[0] %}
|
||||
- ca
|
||||
|
||||
{% if GLOBALS.pipeline == "KAFKA" %}
|
||||
|
||||
@@ -39,12 +32,12 @@ kafka_client_key:
|
||||
kafka_client_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/kafka-client.crt
|
||||
- ca_server: {{ ca_server }}
|
||||
- ca_server: {{ CA.server }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- signing_policy: kafka
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/kafka-client.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- days_remaining: 0
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
@@ -87,12 +80,12 @@ kafka_key:
|
||||
kafka_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/kafka.crt
|
||||
- ca_server: {{ ca_server }}
|
||||
- ca_server: {{ CA.server }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- signing_policy: kafka
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/kafka.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- days_remaining: 0
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
@@ -103,6 +96,7 @@ kafka_crt:
|
||||
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka.key -in /etc/pki/kafka.crt -export -out /etc/pki/kafka.p12 -nodes -passout pass:{{ kafka_password }}"
|
||||
- onchanges:
|
||||
- x509: /etc/pki/kafka.key
|
||||
|
||||
kafka_key_perms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
@@ -148,12 +142,12 @@ kafka_logstash_key:
|
||||
kafka_logstash_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/kafka-logstash.crt
|
||||
- ca_server: {{ ca_server }}
|
||||
- ca_server: {{ CA.server }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- signing_policy: kafka
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/kafka-logstash.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- days_remaining: 0
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
@@ -198,4 +192,4 @@ kafka_logstash_pkcs12_perms:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
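The days_remaining change from 0 to 7 means Salt now renews these Kafka certificates once fewer than seven days of validity remain. A hedged one-liner to see whether a cert is already inside that renewal window (openssl exits non-zero when the cert expires within the given number of seconds):

    openssl x509 -noout -checkend $(( 7 * 86400 )) -in /etc/pki/kafka.crt \
      && echo "more than 7 days of validity left" \
      || echo "inside the 7-day renewal window"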
@@ -75,6 +75,7 @@ kratosconfig:
|
||||
- group: 928
|
||||
- mode: 600
|
||||
- template: jinja
|
||||
- show_changes: False
|
||||
- defaults:
|
||||
KRATOSMERGED: {{ KRATOSMERGED }}
|
||||
|
||||
|
||||
@@ -46,6 +46,7 @@ kratos:
|
||||
ui_url: https://URL_BASE/
|
||||
login:
|
||||
ui_url: https://URL_BASE/login/
|
||||
lifespan: 60m
|
||||
error:
|
||||
ui_url: https://URL_BASE/login/
|
||||
registration:
|
||||
|
||||
@@ -182,6 +182,10 @@ kratos:
|
||||
global: True
|
||||
advanced: True
|
||||
helpLink: kratos.html
|
||||
lifespan:
|
||||
description: Defines the duration that a login form will remain valid.
|
||||
global: True
|
||||
helpLink: kratos.html
|
||||
error:
|
||||
ui_url:
|
||||
description: User accessible URL containing the Security Onion login page. Leave as default to ensure proper operation.
|
||||
|
||||
@@ -1,15 +1,5 @@
|
||||
logrotate:
|
||||
config:
|
||||
/opt/so/log/idstools/*_x_log:
|
||||
- daily
|
||||
- rotate 14
|
||||
- missingok
|
||||
- copytruncate
|
||||
- compress
|
||||
- create
|
||||
- extension .log
|
||||
- dateext
|
||||
- dateyesterday
|
||||
/opt/so/log/nginx/*_x_log:
|
||||
- daily
|
||||
- rotate 14
|
||||
|
||||
@@ -1,12 +1,5 @@
|
||||
logrotate:
|
||||
config:
|
||||
"/opt/so/log/idstools/*_x_log":
|
||||
description: List of logrotate options for this file.
|
||||
title: /opt/so/log/idstools/*.log
|
||||
advanced: True
|
||||
multiline: True
|
||||
global: True
|
||||
forcedType: "[]string"
|
||||
"/opt/so/log/nginx/*_x_log":
|
||||
description: List of logrotate options for this file.
|
||||
title: /opt/so/log/nginx/*.log
|
||||
|
||||
@@ -11,7 +11,6 @@
|
||||
{% set ASSIGNED_PIPELINES = LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %}
|
||||
|
||||
include:
|
||||
- ssl
|
||||
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
|
||||
- elasticsearch
|
||||
{% endif %}
|
||||
|
||||
@@ -63,7 +63,7 @@ logstash:
|
||||
settings:
|
||||
lsheap: 500m
|
||||
config:
|
||||
http_x_host: 0.0.0.0
|
||||
api_x_http_x_host: 0.0.0.0
|
||||
path_x_logs: /var/log/logstash
|
||||
pipeline_x_workers: 1
|
||||
pipeline_x_batch_x_size: 125
|
||||
|
||||
@@ -12,6 +12,7 @@
|
||||
{% set lsheap = LOGSTASH_MERGED.settings.lsheap %}
|
||||
|
||||
include:
|
||||
- ca
|
||||
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
|
||||
- elasticsearch.ca
|
||||
{% endif %}
|
||||
@@ -20,9 +21,9 @@ include:
|
||||
- kafka.ca
|
||||
- kafka.ssl
|
||||
{% endif %}
|
||||
- logstash.ssl
|
||||
- logstash.config
|
||||
- logstash.sostatus
|
||||
- ssl
|
||||
|
||||
so-logstash:
|
||||
docker_container.running:
|
||||
@@ -65,22 +66,18 @@ so-logstash:
|
||||
- /opt/so/log/logstash:/var/log/logstash:rw
|
||||
- /sys/fs/cgroup:/sys/fs/cgroup:ro
|
||||
- /opt/so/conf/logstash/etc/certs:/usr/share/logstash/certs:ro
|
||||
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-receiver'] %}
|
||||
- /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
|
||||
- /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
|
||||
{% endif %}
|
||||
- /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
|
||||
- /etc/pki/elasticfleet-logstash.crt:/usr/share/logstash/elasticfleet-logstash.crt:ro
|
||||
- /etc/pki/elasticfleet-logstash.key:/usr/share/logstash/elasticfleet-logstash.key:ro
|
||||
- /etc/pki/elasticfleet-lumberjack.crt:/usr/share/logstash/elasticfleet-lumberjack.crt:ro
|
||||
- /etc/pki/elasticfleet-lumberjack.key:/usr/share/logstash/elasticfleet-lumberjack.key:ro
|
||||
{% if GLOBALS.role != 'so-fleet' %}
|
||||
- /etc/pki/filebeat.crt:/usr/share/logstash/filebeat.crt:ro
|
||||
- /etc/pki/filebeat.p8:/usr/share/logstash/filebeat.key:ro
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %}
|
||||
- /etc/pki/ca.crt:/usr/share/filebeat/ca.crt:ro
|
||||
{% else %}
|
||||
- /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
|
||||
{% endif %}
|
||||
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode' ] %}
|
||||
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
|
||||
- /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
|
||||
- /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
|
||||
{% endif %}
|
||||
@@ -100,11 +97,22 @@ so-logstash:
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
- watch:
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-receiver'] %}
|
||||
- x509: etc_elasticfleet_logstash_key
|
||||
- x509: etc_elasticfleet_logstash_crt
|
||||
{% endif %}
|
||||
- file: lsetcsync
|
||||
- file: trusttheca
|
||||
{% if GLOBALS.is_manager %}
|
||||
- file: elasticsearch_cacerts
|
||||
- file: elasticsearch_capems
|
||||
{% endif %}
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
|
||||
- x509: etc_elasticfleet_logstash_crt
|
||||
- x509: etc_elasticfleet_logstash_key
|
||||
- x509: etc_elasticfleetlumberjack_crt
|
||||
- x509: etc_elasticfleetlumberjack_key
|
||||
{% if GLOBALS.role != 'so-fleet' %}
|
||||
- x509: etc_filebeat_crt
|
||||
- file: logstash_filebeat_p8
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
{% for assigned_pipeline in LOGSTASH_MERGED.assigned_pipelines.roles[GLOBALS.role.split('-')[1]] %}
|
||||
- file: ls_pipeline_{{assigned_pipeline}}
|
||||
{% for CONFIGFILE in LOGSTASH_MERGED.defined_pipelines[assigned_pipeline] %}
|
||||
@@ -115,17 +123,20 @@ so-logstash:
|
||||
- file: kafkacertz
|
||||
{% endif %}
|
||||
- require:
|
||||
{% if grains['role'] in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-receiver'] %}
|
||||
- file: trusttheca
|
||||
{% if GLOBALS.is_manager %}
|
||||
- file: elasticsearch_cacerts
|
||||
- file: elasticsearch_capems
|
||||
{% endif %}
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-fleet', 'so-heavynode', 'so-receiver'] %}
|
||||
- x509: etc_elasticfleet_logstash_crt
|
||||
- x509: etc_elasticfleet_logstash_key
|
||||
- x509: etc_elasticfleetlumberjack_crt
|
||||
- x509: etc_elasticfleetlumberjack_key
|
||||
{% if GLOBALS.role != 'so-fleet' %}
|
||||
- x509: etc_filebeat_crt
|
||||
{% endif %}
|
||||
{% if grains['role'] in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %}
|
||||
- x509: pki_public_ca_crt
|
||||
{% else %}
|
||||
- x509: trusttheca
|
||||
{% endif %}
|
||||
{% if grains.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-import'] %}
|
||||
- file: cacertz
|
||||
- file: capemz
|
||||
- file: logstash_filebeat_p8
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
{% if GLOBALS.pipeline == 'KAFKA' and GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone', 'so-searchnode'] %}
|
||||
- file: kafkacertz
|
||||
|
||||
@@ -5,10 +5,10 @@ input {
|
||||
codec => es_bulk
|
||||
request_headers_target_field => client_headers
|
||||
remote_host_target_field => client_host
|
||||
ssl => true
|
||||
ssl_enabled => true
|
||||
ssl_certificate_authorities => ["/usr/share/filebeat/ca.crt"]
|
||||
ssl_certificate => "/usr/share/logstash/filebeat.crt"
|
||||
ssl_key => "/usr/share/logstash/filebeat.key"
|
||||
ssl_verify_mode => "peer"
|
||||
ssl_client_authentication => "required"
|
||||
}
|
||||
}
|
||||
|
||||
@@ -2,11 +2,11 @@ input {
|
||||
elastic_agent {
|
||||
port => 5055
|
||||
tags => [ "elastic-agent", "input-{{ GLOBALS.hostname }}" ]
|
||||
ssl => true
|
||||
ssl_enabled => true
|
||||
ssl_certificate_authorities => ["/usr/share/filebeat/ca.crt"]
|
||||
ssl_certificate => "/usr/share/logstash/elasticfleet-logstash.crt"
|
||||
ssl_key => "/usr/share/logstash/elasticfleet-logstash.key"
|
||||
ssl_verify_mode => "force_peer"
|
||||
ssl_client_authentication => "required"
|
||||
ecs_compatibility => v8
|
||||
}
|
||||
}
|
||||
|
||||
@@ -2,7 +2,7 @@ input {
|
||||
elastic_agent {
|
||||
port => 5056
|
||||
tags => [ "elastic-agent", "fleet-lumberjack-input" ]
|
||||
ssl => true
|
||||
ssl_enabled => true
|
||||
ssl_certificate => "/usr/share/logstash/elasticfleet-lumberjack.crt"
|
||||
ssl_key => "/usr/share/logstash/elasticfleet-lumberjack.key"
|
||||
ecs_compatibility => v8
|
||||
|
||||
@@ -8,8 +8,8 @@ output {
|
||||
document_id => "%{[metadata][_id]}"
|
||||
index => "so-ip-mappings"
|
||||
silence_errors_in_log => ["version_conflict_engine_exception"]
|
||||
ssl => true
|
||||
ssl_certificate_verification => false
|
||||
ssl_enabled => true
|
||||
ssl_verification_mode => "none"
|
||||
}
|
||||
}
|
||||
else {
|
||||
@@ -25,8 +25,8 @@ output {
|
||||
document_id => "%{[metadata][_id]}"
|
||||
pipeline => "%{[metadata][pipeline]}"
|
||||
silence_errors_in_log => ["version_conflict_engine_exception"]
|
||||
ssl => true
|
||||
ssl_certificate_verification => false
|
||||
ssl_enabled => true
|
||||
ssl_verification_mode => "none"
|
||||
}
|
||||
}
|
||||
else {
|
||||
@@ -37,8 +37,8 @@ output {
|
||||
user => "{{ ES_USER }}"
|
||||
password => "{{ ES_PASS }}"
|
||||
pipeline => "%{[metadata][pipeline]}"
|
||||
ssl => true
|
||||
ssl_certificate_verification => false
|
||||
ssl_enabled => true
|
||||
ssl_verification_mode => "none"
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -49,8 +49,8 @@ output {
|
||||
data_stream => true
|
||||
user => "{{ ES_USER }}"
|
||||
password => "{{ ES_PASS }}"
|
||||
ssl => true
|
||||
ssl_certificate_verification => false
|
||||
ssl_enabled => true
|
||||
ssl_verification_mode=> "none"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -13,8 +13,8 @@ output {
|
||||
user => "{{ ES_USER }}"
|
||||
password => "{{ ES_PASS }}"
|
||||
index => "endgame-%{+YYYY.MM.dd}"
|
||||
ssl => true
|
||||
ssl_certificate_verification => false
|
||||
ssl_enabled => true
|
||||
ssl_verification_mode => "none"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
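These pipeline edits track the newer Logstash plugin option names (ssl becomes ssl_enabled, ssl_verify_mode becomes ssl_client_authentication, ssl_certificate_verification becomes ssl_verification_mode). A hedged post-upgrade check for leftover legacy option names, assuming the rendered pipeline configs live under /opt/so/conf/logstash/ on the node:

    grep -rn -e 'ssl => ' -e 'ssl_verify_mode' -e 'ssl_certificate_verification' /opt/so/conf/logstash/ \
      || echo "no legacy SSL options found"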
@@ -56,7 +56,7 @@ logstash:
|
||||
helpLink: logstash.html
|
||||
global: False
|
||||
config:
|
||||
http_x_host:
|
||||
api_x_http_x_host:
|
||||
description: Host interface to listen to connections.
|
||||
helpLink: logstash.html
|
||||
readonly: True
|
||||
|
||||
salt/logstash/ssl.sls (new file, 287 additions)
@@ -0,0 +1,287 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||
{% if sls.split('.')[0] in allowed_states %}
|
||||
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
|
||||
{% from 'ca/map.jinja' import CA %}
|
||||
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
|
||||
|
||||
{% if grains['role'] not in [ 'so-heavynode'] %}
|
||||
# Start -- Elastic Fleet Logstash Input Cert
|
||||
etc_elasticfleet_logstash_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticfleet-logstash.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticfleet-logstash.key') -%}
|
||||
- prereq:
|
||||
- x509: etc_elasticfleet_logstash_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
etc_elasticfleet_logstash_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticfleet-logstash.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticfleet-logstash.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-logstash.key -topk8 -out /etc/pki/elasticfleet-logstash.p8 -nocrypt"
|
||||
- onchanges:
|
||||
- x509: etc_elasticfleet_logstash_key
|
||||
|
||||
eflogstashperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-logstash.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetlogstashcrt:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-logstash.crt
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
|
||||
chownelasticfleetlogstashkey:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-logstash.key
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
# End -- Elastic Fleet Logstash Input Cert
|
||||
{% endif %} # endif is for not including HeavyNodes
|
||||
|
||||
# Start -- Elastic Fleet Node - Logstash Lumberjack Input / Output
|
||||
# Cert needed on: Managers, Receivers
|
||||
etc_elasticfleetlumberjack_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/elasticfleet-lumberjack.key
|
||||
- bits: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/elasticfleet-lumberjack.key') -%}
|
||||
- prereq:
|
||||
- x509: etc_elasticfleetlumberjack_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
etc_elasticfleetlumberjack_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/elasticfleet-lumberjack.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/elasticfleet-lumberjack.key
|
||||
- CN: {{ GLOBALS.node_ip }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-lumberjack.key -topk8 -out /etc/pki/elasticfleet-lumberjack.p8 -nocrypt"
|
||||
- onchanges:
|
||||
- x509: etc_elasticfleetlumberjack_key
|
||||
|
||||
eflogstashlumberjackperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-lumberjack.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
chownilogstashelasticfleetlumberjackp8:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-lumberjack.p8
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
|
||||
chownilogstashelasticfleetlogstashlumberjackcrt:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-lumberjack.crt
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
|
||||
chownilogstashelasticfleetlogstashlumberjackkey:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/elasticfleet-lumberjack.key
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
# End -- Elastic Fleet Node - Logstash Lumberjack Input / Output
|
||||
{% endif %}
|
||||
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-receiver'] %}
|
||||
etc_filebeat_key:
|
||||
x509.private_key_managed:
|
||||
- name: /etc/pki/filebeat.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/etc/pki/filebeat.key') -%}
|
||||
- prereq:
|
||||
- x509: etc_filebeat_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
# Request a cert and drop it where it needs to go to be distributed
|
||||
etc_filebeat_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /etc/pki/filebeat.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /etc/pki/filebeat.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/filebeat.key -topk8 -out /etc/pki/filebeat.p8 -nocrypt"
|
||||
- onchanges:
|
||||
- x509: etc_filebeat_key
|
||||
|
||||
fbperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/filebeat.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
logstash_filebeat_p8:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /etc/pki/filebeat.p8
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
|
||||
{% if grains.role not in ['so-heavynode', 'so-receiver'] %}
|
||||
# Create Symlinks to the keys so I can distribute it to all the things
|
||||
filebeatdir:
|
||||
file.directory:
|
||||
- name: /opt/so/saltstack/local/salt/filebeat/files
|
||||
- makedirs: True
|
||||
|
||||
fbkeylink:
|
||||
file.symlink:
|
||||
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.p8
|
||||
- target: /etc/pki/filebeat.p8
|
||||
- user: socore
|
||||
- group: socore
|
||||
|
||||
fbcrtlink:
|
||||
file.symlink:
|
||||
- name: /opt/so/saltstack/local/salt/filebeat/files/filebeat.crt
|
||||
- target: /etc/pki/filebeat.crt
|
||||
- user: socore
|
||||
- group: socore
|
||||
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
{% if GLOBALS.is_manager or GLOBALS.role in ['so-sensor', 'so-searchnode', 'so-heavynode', 'so-fleet', 'so-idh', 'so-receiver'] %}
|
||||
|
||||
fbcertdir:
|
||||
file.directory:
|
||||
- name: /opt/so/conf/filebeat/etc/pki
|
||||
- makedirs: True
|
||||
|
||||
conf_filebeat_key:
|
||||
x509.private_key_managed:
|
||||
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
|
||||
- keysize: 4096
|
||||
- backup: True
|
||||
- new: True
|
||||
{% if salt['file.file_exists']('/opt/so/conf/filebeat/etc/pki/filebeat.key') -%}
|
||||
- prereq:
|
||||
- x509: conf_filebeat_crt
|
||||
{%- endif %}
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
# Request a cert and drop it where it needs to go to be distributed
|
||||
conf_filebeat_crt:
|
||||
x509.certificate_managed:
|
||||
- name: /opt/so/conf/filebeat/etc/pki/filebeat.crt
|
||||
- ca_server: {{ CA.server }}
|
||||
- signing_policy: general
|
||||
- private_key: /opt/so/conf/filebeat/etc/pki/filebeat.key
|
||||
- CN: {{ GLOBALS.hostname }}
|
||||
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
|
||||
- days_remaining: 7
|
||||
- days_valid: 820
|
||||
- backup: True
|
||||
- timeout: 30
|
||||
- retry:
|
||||
attempts: 5
|
||||
interval: 30
|
||||
|
||||
# Convert the key to pkcs#8 so logstash will work correctly.
|
||||
filebeatpkcs:
|
||||
cmd.run:
|
||||
- name: "/usr/bin/openssl pkcs8 -in /opt/so/conf/filebeat/etc/pki/filebeat.key -topk8 -out /opt/so/conf/filebeat/etc/pki/filebeat.p8 -passout pass:"
|
||||
- onchanges:
|
||||
- x509: conf_filebeat_key
|
||||
|
||||
filebeatkeyperms:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /opt/so/conf/filebeat/etc/pki/filebeat.key
|
||||
- mode: 640
|
||||
- group: 939
|
||||
|
||||
chownfilebeatp8:
|
||||
file.managed:
|
||||
- replace: False
|
||||
- name: /opt/so/conf/filebeat/etc/pki/filebeat.p8
|
||||
- mode: 640
|
||||
- user: 931
|
||||
- group: 939
|
||||
|
||||
{% endif %}
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
test.fail_without_changes:
|
||||
- name: {{sls}}_state_not_allowed
|
||||
|
||||
{% endif %}
|
||||
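Because Logstash needs the filebeat key in PKCS#8 form, a hedged way to confirm the converted key still matches the issued certificate after this new ssl state runs:

    # Both commands should emit the same public key, so diff prints nothing on success
    diff <(openssl pkey -in /etc/pki/filebeat.p8 -pubout) \
         <(openssl x509 -in /etc/pki/filebeat.crt -pubkey -noout) \
      && echo "filebeat key and cert match"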
@@ -1,3 +1,8 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
elastic_curl_config_distributed:
|
||||
file.managed:
|
||||
- name: /opt/so/saltstack/local/salt/elasticsearch/curl.config
|
||||
|
||||
@@ -206,10 +206,33 @@ git_config_set_safe_dirs:
|
||||
- multivar:
|
||||
- /nsm/rules/custom-local-repos/local-sigma
|
||||
- /nsm/rules/custom-local-repos/local-yara
|
||||
- /nsm/rules/custom-local-repos/local-suricata
|
||||
- /nsm/securityonion-resources
|
||||
- /opt/so/conf/soc/ai_summary_repos/securityonion-resources
|
||||
- /nsm/airgap-resources/playbooks
|
||||
- /opt/so/conf/soc/playbooks
|
||||
|
||||
surinsmrulesdir:
|
||||
file.directory:
|
||||
- name: /nsm/rules/suricata/etopen
|
||||
- user: 939
|
||||
- group: 939
|
||||
- makedirs: True
|
||||
|
||||
suriextractionrules:
|
||||
file.managed:
|
||||
- name: /nsm/rules/suricata/so_extraction.rules
|
||||
- source: salt://suricata/files/so_extraction.rules
|
||||
- user: 939
|
||||
- group: 939
|
||||
|
||||
surifiltersrules:
|
||||
file.managed:
|
||||
- name: /nsm/rules/suricata/so_filters.rules
|
||||
- source: salt://suricata/files/so_filters.rules
|
||||
- user: 939
|
||||
- group: 939
|
||||
|
||||
{% else %}
|
||||
|
||||
{{sls}}_state_not_allowed:
|
||||
|
||||
@@ -1,3 +1,8 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
kibana_curl_config_distributed:
|
||||
file.managed:
|
||||
- name: /opt/so/conf/kibana/curl.config
|
||||
@@ -5,4 +10,4 @@ kibana_curl_config_distributed:
|
||||
- template: jinja
|
||||
- mode: 600
|
||||
- show_changes: False
|
||||
- makedirs: True
|
||||
- makedirs: True
|
||||
|
||||
@@ -1,3 +1,8 @@
|
||||
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||
# Elastic License 2.0.
|
||||
|
||||
include:
|
||||
- elasticsearch.auth
|
||||
- kratos
|
||||
|
||||
@@ -133,7 +133,7 @@ function getinstallinfo() {
|
||||
return 1
|
||||
fi
|
||||
|
||||
source <(echo $INSTALLVARS)
|
||||
export $(echo "$INSTALLVARS" | xargs)
|
||||
if [ $? -ne 0 ]; then
|
||||
log "ERROR" "Failed to source install variables"
|
||||
return 1
|
||||
@@ -604,16 +604,6 @@ function add_kratos_to_minion() {
|
||||
fi
|
||||
}
|
||||
|
||||
function add_idstools_to_minion() {
|
||||
printf '%s\n'\
|
||||
"idstools:"\
|
||||
" enabled: True"\
|
||||
" " >> $PILLARFILE
|
||||
if [ $? -ne 0 ]; then
|
||||
log "ERROR" "Failed to add idstools configuration to $PILLARFILE"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
function add_elastic_fleet_package_registry_to_minion() {
|
||||
printf '%s\n'\
|
||||
@@ -726,6 +716,18 @@ function checkMine() {
|
||||
}
|
||||
}
|
||||
|
||||
function create_ca_pillar() {
|
||||
local capillar=/opt/so/saltstack/local/pillar/ca/init.sls
|
||||
printf '%s\n'\
|
||||
"ca:"\
|
||||
" server: $MINION_ID"\
|
||||
" " > $capillar
|
||||
if [ $? -ne 0 ]; then
|
||||
log "ERROR" "Failed to add $MINION_ID to $capillar"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
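# Illustration only, not part of this change: for a minion ID such as
# "somanager_manager" (the exact format depends on the install), the printf
# above produces a pillar file along the lines of:
#   ca:
#    server: somanager_manager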
function createEVAL() {
|
||||
log "INFO" "Creating EVAL configuration for minion $MINION_ID"
|
||||
is_pcaplimit=true
|
||||
@@ -741,7 +743,6 @@ function createEVAL() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -762,7 +763,6 @@ function createSTANDALONE() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -779,7 +779,6 @@ function createMANAGER() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -796,7 +795,6 @@ function createMANAGERSEARCH() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -811,7 +809,6 @@ function createIMPORT() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -896,7 +893,6 @@ function createMANAGERHYPE() {
|
||||
add_soc_to_minion || return 1
|
||||
add_registry_to_minion || return 1
|
||||
add_kratos_to_minion || return 1
|
||||
add_idstools_to_minion || return 1
|
||||
add_elastic_fleet_package_registry_to_minion || return 1
|
||||
}
|
||||
|
||||
@@ -1029,6 +1025,7 @@ function setupMinionFiles() {
|
||||
managers=("EVAL" "STANDALONE" "IMPORT" "MANAGER" "MANAGERSEARCH")
|
||||
if echo "${managers[@]}" | grep -qw "$NODETYPE"; then
|
||||
add_sensoroni_with_analyze_to_minion || return 1
|
||||
create_ca_pillar || return 1
|
||||
else
|
||||
add_sensoroni_to_minion || return 1
|
||||
fi
|
||||
|
||||
@@ -87,6 +87,12 @@ check_err() {
|
||||
113)
|
||||
echo 'No route to host'
|
||||
;;
|
||||
160)
|
||||
echo 'Incompatiable Elasticsearch upgrade'
|
||||
;;
|
||||
161)
|
||||
echo 'Required intermediate Elasticsearch upgrade not complete'
|
||||
;;
|
||||
*)
|
||||
echo 'Unhandled error'
|
||||
echo "$err_msg"
|
||||
@@ -319,6 +325,19 @@ clone_to_tmp() {
|
||||
fi
|
||||
}
|
||||
|
||||
# there is a function like this in so-minion, but we cannot source it since so-minion requires arguments
|
||||
create_ca_pillar() {
|
||||
local ca_pillar_dir="/opt/so/saltstack/local/pillar/ca"
|
||||
local ca_pillar_file="${ca_pillar_dir}/init.sls"
|
||||
|
||||
echo "Updating CA pillar configuration"
|
||||
mkdir -p "$ca_pillar_dir"
|
||||
echo "ca: {}" > "$ca_pillar_file"
|
||||
|
||||
so-yaml.py add "$ca_pillar_file" ca.server "$MINIONID"
|
||||
chown -R socore:socore "$ca_pillar_dir"
|
||||
}
|
||||
|
||||
disable_logstash_heavynodes() {
|
||||
c=0
|
||||
printf "\nChecking for heavynodes and disabling Logstash if they exist\n"
|
||||
@@ -362,7 +381,6 @@ masterlock() {
|
||||
echo "base:" > $TOPFILE
|
||||
echo " $MINIONID:" >> $TOPFILE
|
||||
echo " - ca" >> $TOPFILE
|
||||
echo " - ssl" >> $TOPFILE
|
||||
echo " - elasticsearch" >> $TOPFILE
|
||||
}
|
||||
|
||||
@@ -426,6 +444,9 @@ preupgrade_changes() {
|
||||
[[ "$INSTALLEDVERSION" == 2.4.160 ]] && up_to_2.4.170
|
||||
[[ "$INSTALLEDVERSION" == 2.4.170 ]] && up_to_2.4.180
|
||||
[[ "$INSTALLEDVERSION" == 2.4.180 ]] && up_to_2.4.190
|
||||
[[ "$INSTALLEDVERSION" == 2.4.190 ]] && up_to_2.4.200
|
||||
[[ "$INSTALLEDVERSION" == 2.4.200 ]] && up_to_2.4.210
|
||||
[[ "$INSTALLEDVERSION" == 2.4.210 ]] && up_to_2.4.220
|
||||
true
|
||||
}
|
||||
|
||||
@@ -457,6 +478,9 @@ postupgrade_changes() {
|
||||
[[ "$POSTVERSION" == 2.4.160 ]] && post_to_2.4.170
|
||||
[[ "$POSTVERSION" == 2.4.170 ]] && post_to_2.4.180
|
||||
[[ "$POSTVERSION" == 2.4.180 ]] && post_to_2.4.190
|
||||
[[ "$POSTVERSION" == 2.4.190 ]] && post_to_2.4.200
|
||||
[[ "$POSTVERSION" == 2.4.200 ]] && post_to_2.4.210
|
||||
[[ "$POSTVERSION" == 2.4.210 ]] && post_to_2.4.220
|
||||
true
|
||||
}
|
||||
|
||||
@@ -613,9 +637,6 @@ post_to_2.4.180() {
|
||||
}
|
||||
|
||||
post_to_2.4.190() {
|
||||
echo "Regenerating Elastic Agent Installers"
|
||||
/sbin/so-elastic-agent-gen-installers
|
||||
|
||||
# Only need to update import / eval nodes
|
||||
if [[ "$MINION_ROLE" == "import" ]] || [[ "$MINION_ROLE" == "eval" ]]; then
|
||||
update_import_fleet_output
|
||||
@@ -636,6 +657,29 @@ post_to_2.4.190() {
|
||||
POSTVERSION=2.4.190
|
||||
}
|
||||
|
||||
post_to_2.4.200() {
|
||||
echo "Initiating Suricata idstools migration..."
|
||||
suricata_idstools_removal_post
|
||||
|
||||
POSTVERSION=2.4.200
|
||||
}
|
||||
|
||||
post_to_2.4.210() {
|
||||
echo "Rolling over Kratos index to apply new index template"
|
||||
|
||||
rollover_index "logs-kratos-so"
|
||||
|
||||
echo "Regenerating Elastic Agent Installers"
|
||||
/sbin/so-elastic-agent-gen-installers
|
||||
|
||||
POSTVERSION=2.4.210
|
||||
}
|
||||
|
||||
post_to_2.4.220() {
|
||||
echo "Nothing to apply"
|
||||
POSTVERSION=2.4.220
|
||||
}
|
||||
|
||||
repo_sync() {
|
||||
echo "Sync the local repo."
|
||||
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
|
||||
@@ -897,10 +941,30 @@ up_to_2.4.180() {
|
||||
}
|
||||
|
||||
up_to_2.4.190() {
|
||||
echo "Nothing to do for 2.4.190"
|
||||
INSTALLEDVERSION=2.4.190
|
||||
}
|
||||
|
||||
up_to_2.4.200() {
|
||||
echo "Backing up idstools config..."
|
||||
suricata_idstools_removal_pre
|
||||
|
||||
touch /opt/so/state/esfleet_logstash_config_pillar
|
||||
|
||||
INSTALLEDVERSION=2.4.200
|
||||
}
|
||||
|
||||
up_to_2.4.210() {
|
||||
# Elastic Update for this release, so download Elastic Agent files
|
||||
determine_elastic_agent_upgrade
|
||||
|
||||
INSTALLEDVERSION=2.4.190
|
||||
INSTALLEDVERSION=2.4.210
|
||||
}
|
||||
|
||||
up_to_2.4.220() {
|
||||
create_ca_pillar
|
||||
|
||||
INSTALLEDVERSION=2.4.220
|
||||
}
|
||||
|
||||
add_hydra_pillars() {
|
||||
@@ -986,6 +1050,8 @@ rollover_index() {
|
||||
}
|
||||
|
||||
suricata_idstools_migration() {
|
||||
# For 2.4.70
|
||||
|
||||
#Backup the pillars for idstools
|
||||
mkdir -p /nsm/backup/detections-migration/idstools
|
||||
rsync -av /opt/so/saltstack/local/pillar/idstools/* /nsm/backup/detections-migration/idstools
|
||||
@@ -1086,6 +1152,217 @@ playbook_migration() {
|
||||
echo "Playbook Migration is complete...."
|
||||
}
|
||||
|
||||
suricata_idstools_removal_pre() {
|
||||
# For SOUPs beginning with 2.4.200 - pre SOUP checks
|
||||
|
||||
# Create syncBlock file
|
||||
install -d -o 939 -g 939 -m 755 /opt/so/conf/soc/fingerprints
|
||||
install -o 939 -g 939 -m 644 /dev/null /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
|
||||
cat > /opt/so/conf/soc/fingerprints/suricataengine.syncBlock << EOF
|
||||
Suricata ruleset sync is blocked until this file is removed. **CRITICAL** Make sure that you have manually added any custom Suricata rulesets via SOC config before removing this file - review the documentation for more details: https://docs.securityonion.net/en/2.4/nids.html#sync-block
|
||||
EOF
|
||||
|
||||
# Remove possible symlink & create salt local rules dir
|
||||
[ -L /opt/so/saltstack/local/salt/suricata/rules ] && rm -f /opt/so/saltstack/local/salt/suricata/rules
|
||||
install -d -o 939 -g 939 /opt/so/saltstack/local/salt/suricata/rules/ || echo "Failed to create Suricata local rules directory"
|
||||
|
||||
# Backup custom rules & overrides
|
||||
mkdir -p /nsm/backup/detections-migration/2-4-200
|
||||
cp /usr/sbin/so-rule-update /nsm/backup/detections-migration/2-4-200
|
||||
cp /opt/so/conf/idstools/etc/rulecat.conf /nsm/backup/detections-migration/2-4-200
|
||||
|
||||
# Backup so-detection index via reindex
|
||||
echo "Creating sos-backup index template..."
|
||||
template_result=$(/sbin/so-elasticsearch-query '_index_template/sos-backup' -X PUT \
|
||||
--retry 5 --retry-delay 15 --retry-all-errors \
|
||||
-d '{"index_patterns":["sos-backup-*"],"priority":501,"template":{"settings":{"index":{"number_of_replicas":0,"number_of_shards":1}}}}')
|
||||
|
||||
if [[ -z "$template_result" ]] || ! echo "$template_result" | jq -e '.acknowledged == true' > /dev/null 2>&1; then
|
||||
echo "Error: Failed to create sos-backup index template"
|
||||
echo "$template_result"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
BACKUP_INDEX="sos-backup-detection-$(date +%Y%m%d-%H%M%S)"
|
||||
echo "Backing up so-detection index to $BACKUP_INDEX..."
|
||||
reindex_result=$(/sbin/so-elasticsearch-query '_reindex?wait_for_completion=true' \
|
||||
--retry 5 --retry-delay 15 --retry-all-errors \
|
||||
-X POST -d "{\"source\": {\"index\": \"so-detection\"}, \"dest\": {\"index\": \"$BACKUP_INDEX\"}}")
|
||||
|
||||
if [[ -z "$reindex_result" ]]; then
|
||||
echo "Error: Backup of detections failed - no response from Elasticsearch"
|
||||
exit 1
|
||||
elif echo "$reindex_result" | jq -e '.created >= 0' > /dev/null 2>&1; then
|
||||
echo "Backup complete: $(echo "$reindex_result" | jq -r '.created') documents copied"
|
||||
elif echo "$reindex_result" | grep -q "index_not_found_exception"; then
|
||||
echo "so-detection index does not exist, skipping backup"
|
||||
else
|
||||
echo "Error: Backup of detections failed"
|
||||
echo "$reindex_result"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
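# Hedged note, not executed anywhere in this script: if the backup ever needs to
# be restored, the reverse of the reindex above would look roughly like:
#   /sbin/so-elasticsearch-query '_reindex?wait_for_completion=true' -X POST \
#     -d '{"source":{"index":"sos-backup-detection-<TIMESTAMP>"},"dest":{"index":"so-detection"}}'
# substituting the timestamped backup index name echoed during the pre-upgrade step.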
suricata_idstools_removal_post() {
|
||||
# For SOUPs beginning with 2.4.200 - post SOUP checks
|
||||
|
||||
echo "Checking idstools configuration for custom modifications..."
|
||||
|
||||
# Normalize and hash file content for consistent comparison
|
||||
# Args: $1 - file path
|
||||
# Outputs: SHA256 hash to stdout
|
||||
# Returns: 0 on success, 1 on failure
|
||||
hash_normalized_file() {
|
||||
local file="$1"
|
||||
|
||||
if [[ ! -r "$file" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Ensure trailing newline for consistent hashing regardless of source file
|
||||
{ sed -E \
|
||||
-e 's/^[[:space:]]+//; s/[[:space:]]+$//' \
|
||||
-e '/^$/d' \
|
||||
-e 's|--url=http://[^:]+:7788|--url=http://MANAGER:7788|' \
|
||||
"$file"; echo; } | sed '/^$/d' | sha256sum | awk '{print $1}'
|
||||
}
|
||||
|
||||
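# Example usage (illustrative only, not invoked here): the helper can be run
# directly against either backed-up config to compare with the known defaults below, e.g.
#   hash_normalized_file /nsm/backup/detections-migration/2-4-200/rulecat.conf
#   hash_normalized_file /nsm/backup/detections-migration/2-4-200/so-rule-update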
# Known-default hashes for so-rule-update (ETOPEN ruleset)
|
||||
KNOWN_SO_RULE_UPDATE_HASHES=(
|
||||
# 2.4.100+ (suricata 7.0.3, non-airgap)
|
||||
"5fbd067ced86c8ec72ffb7e1798aa624123b536fb9d78f4b3ad8d3b45db1eae7" # 2.4.100-2.4.190 non-Airgap
|
||||
# 2.4.90+ airgap (same for 2.4.90 and 2.4.100+)
|
||||
"61f632c55791338c438c071040f1490066769bcce808b595b5cc7974a90e653a" # 2.4.90+ Airgap
|
||||
# 2.4.90 (suricata 6.0, non-airgap, comment inside proxy block)
|
||||
"0380ec52a05933244ab0f0bc506576e1d838483647b40612d5fe4b378e47aedd" # 2.4.90 non-Airgap
|
||||
# 2.4.10-2.4.80 (suricata 6.0, non-airgap, comment outside proxy block)
|
||||
"b6e4d1b5a78d57880ad038a9cd2cc6978aeb2dd27d48ea1a44dd866a2aee7ff4" # 2.4.10-2.4.80 non-Airgap
|
||||
# 2.4.10-2.4.80 airgap
|
||||
"b20146526ace2b142fde4664f1386a9a1defa319b3a1d113600ad33a1b037dad" # 2.4.10-2.4.80 Airgap
|
||||
# 2.4.5 and earlier (no pidof check, non-airgap)
|
||||
"d04f5e4015c348133d28a7840839e82d60009781eaaa1c66f7f67747703590dc" # 2.4.5 non-Airgap
|
||||
)
|
||||
|
||||
# Known-default hashes for rulecat.conf
|
||||
KNOWN_RULECAT_CONF_HASHES=(
|
||||
# 2.4.100+ (suricata 7.0.3)
|
||||
"302e75dca9110807f09ade2eec3be1fcfc8b2bf6cf2252b0269bb72efeefe67e" # 2.4.100-2.4.190 without SURICATA md_engine
|
||||
"8029b7718c324a9afa06a5cf180afde703da1277af4bdd30310a6cfa3d6398cb" # 2.4.100-2.4.190 with SURICATA md_engine
|
||||
# 2.4.80-2.4.90 (suricata 6.0, with --suricata-version and --output)
|
||||
"4d8b318e6950a6f60b02f307cf27c929efd39652990c1bd0c8820aa8a307e1e7" # 2.4.80-2.4.90 without SURICATA md_engine
|
||||
"a1ddf264c86c4e91c81c5a317f745a19466d4311e4533ec3a3c91fed04c11678" # 2.4.80-2.4.90 with SURICATA md_engine
|
||||
# 2.4.50-2.4.70 (/suri/ path, no --suricata-version)
|
||||
"86e3afb8d0f00c62337195602636864c98580a13ca9cc85029661a539deae6ae" # 2.4.50-2.4.70 without SURICATA md_engine
|
||||
"5a97604ca5b820a10273a2d6546bb5e00c5122ca5a7dfe0ba0bfbce5fc026f4b" # 2.4.50-2.4.70 with SURICATA md_engine
|
||||
# 2.4.20-2.4.40 (/nids/ path without /suri/)
|
||||
"d098ea9ecd94b5cca35bf33543f8ea8f48066a0785221fabda7fef43d2462c29" # 2.4.20-2.4.40 without SURICATA md_engine
|
||||
"9dbc60df22ae20d65738ba42e620392577857038ba92278e23ec182081d191cd" # 2.4.20-2.4.40 with SURICATA md_engine
|
||||
# 2.4.5-2.4.10 (/sorules/ path for extraction/filters)
|
||||
"490f6843d9fca759ee74db3ada9c702e2440b8393f2cfaf07bbe41aaa6d955c3" # 2.4.5-2.4.10 with SURICATA md_engine
|
||||
# Note: 2.4.5-2.4.10 without SURICATA md_engine has same hash as 2.4.20-2.4.40 without SURICATA md_engine
|
||||
)
|
||||
|
||||
# Check a config file against known hashes
|
||||
# Args: $1 - file path, $2 - array name of known hashes
|
||||
check_config_file() {
|
||||
local file="$1"
|
||||
local known_hashes_array="$2"
|
||||
local file_display_name=$(basename "$file")
|
||||
|
||||
if [[ ! -f "$file" ]]; then
|
||||
echo "Warning: $file not found"
|
||||
echo "$file_display_name not found - manual verification required" >> /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "Hashing $file..."
|
||||
local file_hash
|
||||
if ! file_hash=$(hash_normalized_file "$file"); then
|
||||
echo "Warning: Could not read $file"
|
||||
echo "$file_display_name not readable - manual verification required" >> /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo " Hash: $file_hash"
|
||||
|
||||
# Check if hash matches any known default
|
||||
local -n known_hashes=$known_hashes_array
|
||||
for known_hash in "${known_hashes[@]}"; do
|
||||
if [[ "$file_hash" == "$known_hash" ]]; then
|
||||
echo " Matches known default configuration"
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
|
||||
# No match - custom configuration detected
|
||||
echo "Does not match known default - custom configuration detected"
|
||||
echo "Custom $file_display_name detected (hash: $file_hash)" >> /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
|
||||
|
||||
# If this is so-rule-update, check for ETPRO license code and write out to the syncBlock file
|
||||
# If ETPRO is enabled, the license code already exists in the so-rule-update script, this is just making it easier to migrate
|
||||
if [[ "$file_display_name" == "so-rule-update" ]]; then
|
||||
local etpro_code
|
||||
etpro_code=$(grep -oP '\-\-etpro=\K[0-9a-fA-F]+' "$file" 2>/dev/null) || true
|
||||
if [[ -n "$etpro_code" ]]; then
|
||||
echo "ETPRO code found: $etpro_code" >> /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
|
||||
fi
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
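# --- Editor's note: illustrative sketch only, not part of the upstream soup script. ---
# To reproduce the normalized hash by hand (for example, to compare a saved copy of
# rulecat.conf against the known-default lists above), the same pipeline can be run directly:
#
#   { sed -E -e 's/^[[:space:]]+//; s/[[:space:]]+$//' -e '/^$/d' \
#       -e 's|--url=http://[^:]+:7788|--url=http://MANAGER:7788|' \
#       /opt/so/conf/idstools/etc/rulecat.conf; echo; } | sed '/^$/d' | sha256sum | awk '{print $1}'
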
# Check so-rule-update and rulecat.conf
SO_RULE_UPDATE="/usr/sbin/so-rule-update"
RULECAT_CONF="/opt/so/conf/idstools/etc/rulecat.conf"

custom_found=0

check_config_file "$SO_RULE_UPDATE" "KNOWN_SO_RULE_UPDATE_HASHES" || custom_found=1
check_config_file "$RULECAT_CONF" "KNOWN_RULECAT_CONF_HASHES" || custom_found=1

# Check for ETPRO rules on airgap systems
if [[ $is_airgap -eq 0 ]] && grep -q 'ETPRO ' /nsm/rules/suricata/emerging-all.rules 2>/dev/null; then
echo "ETPRO rules detected on airgap system - custom configuration"
echo "ETPRO rules detected on Airgap in /nsm/rules/suricata/emerging-all.rules" >> /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
custom_found=1
fi

# If no custom configs found, remove syncBlock
if [[ $custom_found -eq 0 ]]; then
echo "idstools migration completed successfully - removing Suricata engine syncBlock"
rm -f /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
else
echo "Custom idstools configuration detected - syncBlock remains in place"
echo "Review /opt/so/conf/soc/fingerprints/suricataengine.syncBlock for details"
fi

echo "Cleaning up idstools"
echo "Stopping and removing the idstools container..."
if [ -n "$(docker ps -q -f name=^so-idstools$)" ]; then
image_name=$(docker ps -a --filter name=^so-idstools$ --format '{{.Image}}' 2>/dev/null || true)
docker stop so-idstools || echo "Warning: failed to stop so-idstools container"
docker rm so-idstools || echo "Warning: failed to remove so-idstools container"

if [[ -n "$image_name" ]]; then
echo "Removing idstools image: $image_name"
docker rmi "$image_name" || echo "Warning: failed to remove image $image_name"
fi
fi

echo "Removing idstools symlink and scripts..."
rm -rf /usr/sbin/so-idstools*
sed -i '/^#\?so-idstools$/d' /opt/so/conf/so-status/so-status.conf
crontab -l | grep -v 'so-rule-update' | crontab -

# Backup the salt master config & manager pillar before editing it
cp /opt/so/saltstack/local/pillar/minions/$MINIONID.sls /nsm/backup/detections-migration/2-4-200/
cp /etc/salt/master /nsm/backup/detections-migration/2-4-200/
so-yaml.py remove /opt/so/saltstack/local/pillar/minions/$MINIONID.sls idstools
so-yaml.py removelistitem /etc/salt/master file_roots.base /opt/so/rules/nids

}

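# --- Editor's note: usage sketch only, not part of the upstream soup script. ---
# After soup completes, any detected customizations are listed in the syncBlock file,
# which can be reviewed (and, once the custom settings have been migrated, removed) like so:
#
#   cat /opt/so/conf/soc/fingerprints/suricataengine.syncBlock
#   rm -f /opt/so/conf/soc/fingerprints/suricataengine.syncBlock   # only after migrating the custom settings
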
determine_elastic_agent_upgrade() {
if [[ $is_airgap -eq 0 ]]; then
update_elastic_agent_airgap
@@ -1132,7 +1409,7 @@ unmount_update() {

update_airgap_rules() {
# Copy the rules over to update them for airgap.
rsync -a $UPDATE_DIR/agrules/suricata/* /nsm/rules/suricata/
rsync -a --delete $UPDATE_DIR/agrules/suricata/ /nsm/rules/suricata/etopen/
rsync -a $UPDATE_DIR/agrules/detect-sigma/* /nsm/rules/detect-sigma/
rsync -a $UPDATE_DIR/agrules/detect-yara/* /nsm/rules/detect-yara/
# Copy the securityonion-resources repo over for SOC Detection Summaries and checkout the published summaries branch
@@ -1381,6 +1658,243 @@ verify_latest_update_script() {
fi

}

verify_es_version_compatibility() {

local es_required_version_statefile="/opt/so/state/so_es_required_upgrade_version.txt"
local es_verification_script="/tmp/so_intermediate_upgrade_verification.sh"
# supported upgrade paths for SO-ES versions
declare -A es_upgrade_map=(
["8.14.3"]="8.17.3 8.18.4 8.18.6 8.18.8"
["8.17.3"]="8.18.4 8.18.6 8.18.8"
["8.18.4"]="8.18.6 8.18.8 9.0.8"
["8.18.6"]="8.18.8 9.0.8"
["8.18.8"]="9.0.8"
)

# Elasticsearch MUST upgrade through these versions
declare -A es_to_so_version=(
["8.18.8"]="2.4.190-20251024"
)

# Get current Elasticsearch version
if es_version_raw=$(so-elasticsearch-query / --fail --retry 5 --retry-delay 10); then
es_version=$(echo "$es_version_raw" | jq -r '.version.number' )
else
echo "Could not determine current Elasticsearch version to validate compatibility with post soup Elasticsearch version."
exit 160
fi

if ! target_es_version=$(so-yaml.py get $UPDATE_DIR/salt/elasticsearch/defaults.yaml elasticsearch.version | sed -n '1p'); then
# so-yaml.py failed to get the ES version from the upgrade version's elasticsearch/defaults.yaml file. Likely they are upgrading to an SO version older than 2.4.110, prior to the ES version pinning, and should be OKAY to continue with the upgrade.

# if so-yaml.py failed to get the ES version AND the version we are upgrading to is newer than 2.4.110 then we should bail
if [[ $(cat $UPDATE_DIR/VERSION | cut -d'.' -f3) > 110 ]]; then
echo "Couldn't determine the target Elasticsearch version (post soup version) to ensure compatibility with current Elasticsearch version. Exiting"
exit 160
fi

# allow upgrade to version < 2.4.110 without checking ES version compatibility
return 0

fi

# if this statefile exists then we have done an intermediate upgrade and we need to ensure that ALL ES nodes have been upgraded to the version in the statefile before allowing soup to continue
if [[ -f "$es_required_version_statefile" ]]; then
# required so verification script should have already been created
if [[ ! -f "$es_verification_script" ]]; then
create_intermediate_upgrade_verification_script $es_verification_script
fi

local es_required_version_statefile_value=$(cat $es_required_version_statefile)
echo -e "\n##############################################################################################################################\n"
echo "A previously required intermediate Elasticsearch upgrade was detected. Verifying that all Searchnodes/Heavynodes have successfully upgraded Elasticsearch to $es_required_version_statefile_value before proceeding with soup to avoid potential data loss!"
# create script using version in statefile
timeout --foreground 4000 bash "$es_verification_script" "$es_required_version_statefile_value" "$es_required_version_statefile"
if [[ $? -ne 0 ]]; then
echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"

echo "A previous required intermediate Elasticsearch upgrade to $es_required_version_statefile_value has yet to successfully complete across the grid. Please allow time for all Searchnodes/Heavynodes to have upgraded Elasticsearch to $es_required_version_statefile_value before running soup again to avoid potential data loss!"

echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"
exit 161
fi
echo -e "\n##############################################################################################################################\n"
fi

if [[ " ${es_upgrade_map[$es_version]} " =~ " $target_es_version " || "$es_version" == "$target_es_version" ]]; then
# supported upgrade
return 0
else
compatible_versions=${es_upgrade_map[$es_version]}
next_step_so_version=${es_to_so_version[${compatible_versions##* }]}
echo -e "\n##############################################################################################################################\n"
echo -e "You are currently running Security Onion $INSTALLEDVERSION. You will need to update to version $next_step_so_version before updating to $(cat $UPDATE_DIR/VERSION).\n"

echo "${compatible_versions##* }" > "$es_required_version_statefile"

# We expect to upgrade to the latest compatible minor version of ES
create_intermediate_upgrade_verification_script $es_verification_script

if [[ $is_airgap -eq 0 ]]; then
echo "You can download the $next_step_so_version ISO image from https://download.securityonion.net/file/securityonion/securityonion-$next_step_so_version.iso"
echo "*** Once you have updated to $next_step_so_version, you can then run soup again to update to $(cat $UPDATE_DIR/VERSION). ***"
echo -e "\n##############################################################################################################################\n"
exit 160
else
# preserve BRANCH value if set originally
if [[ -n "$BRANCH" ]]; then
local originally_requested_so_version="$BRANCH"
else
local originally_requested_so_version="2.4/main"
fi

echo "Starting automated intermediate upgrade to $next_step_so_version."
echo "After completion, the system will automatically attempt to upgrade to the latest version."
echo -e "\n##############################################################################################################################\n"
exec bash -c "BRANCH=$next_step_so_version soup -y && BRANCH=$next_step_so_version soup -y && \
echo -e \"\n##############################################################################################################################\n\" && \
echo -e \"Verifying Elasticsearch was successfully upgraded to ${compatible_versions##* } across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n\" \
&& timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh ${compatible_versions##* } $es_required_version_statefile && \
echo -e \"\n##############################################################################################################################\n\" \
&& BRANCH=$originally_requested_so_version soup -y && BRANCH=$originally_requested_so_version soup -y"
fi
fi

}

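# --- Editor's note: illustrative sketch only, not part of the upstream soup script. ---
# The compatibility gate above relies on a whitespace-padded substring match against the
# space-separated value stored in es_upgrade_map. The tiny demo function below (hypothetical
# names and versions) shows the same test in isolation and is never called by soup.
__editor_demo_upgrade_path_check() {
    declare -A demo_map=( ["8.17.3"]="8.18.4 8.18.6 8.18.8" )
    local current="8.17.3" target="$1"
    if [[ " ${demo_map[$current]} " =~ " $target " || "$current" == "$target" ]]; then
        echo "supported: $current -> $target"
    else
        echo "blocked: $current -> $target (intermediate upgrade required)"
    fi
}
# Example: __editor_demo_upgrade_path_check "8.18.6"   # supported
#          __editor_demo_upgrade_path_check "9.0.8"    # blocked
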
create_intermediate_upgrade_verification_script() {
# After an intermediate upgrade, verify that ALL nodes running Elasticsearch are at the expected version BEFORE proceeding to the next upgrade step. This is a CRITICAL step
local verification_script="$1"

cat << 'EOF' > "$verification_script"
#!/bin/bash

SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE="/root/so_intermediate_upgrade_verification_failures.log"
CURRENT_TIME=$(date +%Y%m%d.%H%M%S)
EXPECTED_ES_VERSION="$1"

if [[ -z "$EXPECTED_ES_VERSION" ]]; then
echo -e "\nExpected Elasticsearch version not provided. Usage: $0 <expected_es_version>"
exit 1
fi

if [[ -f "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE" ]]; then
mv "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE" "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE.$CURRENT_TIME"
fi

check_heavynodes_es_version() {
# Check if heavynodes are in this grid
if ! salt-key -l accepted | grep -q 'heavynode$'; then

# No heavynodes, skip version check
echo "No heavynodes detected in this Security Onion deployment. Skipping heavynode Elasticsearch version verification."
return 0
fi

echo -e "\nOne or more heavynodes detected. Verifying their Elasticsearch versions."

local retries=20
local retry_count=0
local delay=180

while [[ $retry_count -lt $retries ]]; do
# keep stderr with variable for logging
heavynode_versions=$(salt -C 'G@role:so-heavynode' cmd.run 'so-elasticsearch-query / --retry 3 --retry-delay 10 | jq ".version.number"' shell=/bin/bash --out=json 2> /dev/null)
local exit_status=$?

# Check that all heavynodes returned good data
if [[ $exit_status -ne 0 ]]; then
echo "Failed to retrieve Elasticsearch version from one or more heavynodes... Retrying in $delay seconds. Attempt $((retry_count + 1)) of $retries."
((retry_count++))
sleep $delay

continue
else
if echo "$heavynode_versions" | jq -s --arg expected "\"$EXPECTED_ES_VERSION\"" --exit-status 'all(.[]; . | to_entries | all(.[]; .value == $expected))' > /dev/null; then
echo -e "\nAll heavynodes are at the expected Elasticsearch version $EXPECTED_ES_VERSION."

return 0
else
echo "One or more heavynodes are not at the expected Elasticsearch version $EXPECTED_ES_VERSION. Rechecking in $delay seconds. Attempt $((retry_count + 1)) of $retries."
((retry_count++))
sleep $delay

continue
fi
fi
done

echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"
echo "One or more heavynodes is not at the expected Elasticsearch version $EXPECTED_ES_VERSION."
echo "Current versions:"
echo "$heavynode_versions" | jq -s 'add'
echo "$heavynode_versions" | jq -s 'add' >> "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE"
echo -e "\n Stopping automatic upgrade to latest Security Onion version. Heavynodes must ALL be at Elasticsearch version $EXPECTED_ES_VERSION before proceeding with the next upgrade step to avoid potential data loss!"
echo -e "\n Heavynodes will upgrade themselves to Elasticsearch $EXPECTED_ES_VERSION on their own, but this process can take a long time depending on network link between Manager and Heavynodes."
echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"

return 1
}

check_searchnodes_es_version() {
local retries=20
local retry_count=0
local delay=180

while [[ $retry_count -lt $retries ]]; do
# keep stderr with variable for logging
cluster_versions=$(so-elasticsearch-query _nodes/_all/version --retry 5 --retry-delay 10 --fail 2>&1)
local exit_status=$?

if [[ $exit_status -ne 0 ]]; then
echo "Failed to retrieve Elasticsearch versions from searchnodes... Retrying in $delay seconds. Attempt $((retry_count + 1)) of $retries."
((retry_count++))
sleep $delay

continue
else
if echo "$cluster_versions" | jq --arg expected "$EXPECTED_ES_VERSION" --exit-status '.nodes | to_entries | all(.[].value.version; . == $expected)' > /dev/null; then
echo "All Searchnodes are at the expected Elasticsearch version $EXPECTED_ES_VERSION."

return 0
else
echo "One or more Searchnodes is not at the expected Elasticsearch version $EXPECTED_ES_VERSION. Rechecking in $delay seconds. Attempt $((retry_count + 1)) of $retries."
((retry_count++))
sleep $delay

continue
fi
fi
done

echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"
echo "One or more Searchnodes is not at the expected Elasticsearch version $EXPECTED_ES_VERSION."
echo "Current versions:"
echo "$cluster_versions" | jq '.nodes | to_entries | map({(.value.name): .value.version}) | sort | add'
echo "$cluster_versions" >> "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE"
echo -e "\nStopping automatic upgrade to latest version. Searchnodes must ALL be at Elasticsearch version $EXPECTED_ES_VERSION before proceeding with the next upgrade step to avoid potential data loss!"
echo -e "\nSearchnodes will upgrade themselves to Elasticsearch $EXPECTED_ES_VERSION on their own, but this process can take a while depending on cluster size / network link between Manager and Searchnodes."
echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"

echo "$cluster_versions" > "$SOUP_INTERMEDIATE_UPGRADE_FAILURES_LOG_FILE"

return 1

}

# Need to add a check for heavynodes and ensure all heavynodes get their own "cluster" upgraded before moving on to final upgrade.
check_searchnodes_es_version || exit 1
check_heavynodes_es_version || exit 1

# Remove required version state file after successful verification
rm -f "$2"

exit 0

EOF
}

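# --- Editor's note: usage sketch only, not part of the upstream soup script. ---
# soup invokes the generated verification script with the required ES version and the
# statefile to clear on success, e.g. (values here are placeholders):
#
#   timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh \
#       8.18.8 /opt/so/state/so_es_required_upgrade_version.txt
#
# A zero exit code means every searchnode/heavynode reported the expected version and the
# statefile was removed; a non-zero exit code leaves the statefile in place so the next
# soup run re-checks before continuing.
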
# Keeping this block in case we need to do a hotfix that requires salt update
apply_hotfix() {
if [[ "$INSTALLEDVERSION" == "2.4.20" ]] ; then
@@ -1407,7 +1921,7 @@ apply_hotfix() {
mv /etc/pki/managerssl.crt /etc/pki/managerssl.crt.old
mv /etc/pki/managerssl.key /etc/pki/managerssl.key.old
systemctl_func "start" "salt-minion"
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
(wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
fi
else
echo "No actions required. ($INSTALLEDVERSION/$HOTFIXVERSION)"
@@ -1477,6 +1991,8 @@ main() {
echo "Verifying we have the latest soup script."
verify_latest_update_script

verify_es_version_compatibility

echo "Let's see if we need to update Security Onion."
upgrade_check
upgrade_space
@@ -1604,7 +2120,7 @@ main() {
echo ""
echo "Running a highstate. This could take several minutes."
set +e
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
(wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
highstate
set -e

@@ -1617,7 +2133,7 @@ main() {
check_saltmaster_status

echo "Running a highstate to complete the Security Onion upgrade on this manager. This could take several minutes."
(wait_for_salt_minion "$MINIONID" "5" '/dev/stdout' || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"
(wait_for_salt_minion "$MINIONID" "120" "4" "$SOUP_LOG" || fail "Salt minion was not running or ready.") 2>&1 | tee -a "$SOUP_LOG"

# Stop long-running scripts to allow potentially updated scripts to load on the next execution.
killall salt-relay.sh
@@ -1642,7 +2158,7 @@ main() {
if [[ $is_airgap -eq 0 ]]; then
echo ""
echo "Cleaning repos on remote Security Onion nodes."
salt -C 'not *_eval and not *_manager and not *_managersearch and not *_standalone and G@os:CentOS' cmd.run "yum clean all"
salt -C 'not *_eval and not *_manager* and not *_standalone and G@os:OEL' cmd.run "dnf clean all"
echo ""
fi
fi

@@ -6,9 +6,6 @@
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}

include:
- ssl

# Drop the correct nginx config based on role
nginxconfdir:
file.directory:

@@ -8,81 +8,14 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'nginx/map.jinja' import NGINXMERGED %}
{% set ca_server = GLOBALS.minion_id %}

include:
- nginx.ssl
- nginx.config
- nginx.sostatus


{% if grains.role not in ['so-fleet'] %}

{# if the user has selected to replace the crt and key in the ui #}
{% if NGINXMERGED.ssl.replace_cert %}

managerssl_key:
file.managed:
- name: /etc/pki/managerssl.key
- source: salt://nginx/ssl/ssl.key
- mode: 640
- group: 939
- watch_in:
- docker_container: so-nginx

managerssl_crt:
file.managed:
- name: /etc/pki/managerssl.crt
- source: salt://nginx/ssl/ssl.crt
- mode: 644
- watch_in:
- docker_container: so-nginx

{% else %}

managerssl_key:
x509.private_key_managed:
- name: /etc/pki/managerssl.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/managerssl.key') -%}
- prereq:
- x509: /etc/pki/managerssl.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx

# Create a cert for the reverse proxy
managerssl_crt:
x509.certificate_managed:
- name: /etc/pki/managerssl.crt
- ca_server: {{ ca_server }}
- signing_policy: managerssl
- private_key: /etc/pki/managerssl.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: "DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}, DNS:{{ GLOBALS.url_base }}"
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx

{% endif %}

msslkeyperms:
file.managed:
- replace: False
- name: /etc/pki/managerssl.key
- mode: 640
- group: 939

{% if GLOBALS.role != 'so-fleet' %}
{% set container_config = 'so-nginx' %}
make-rule-dir-nginx:
file.directory:
- name: /nsm/rules
@@ -92,15 +25,11 @@ make-rule-dir-nginx:
- user
- group
- show_changes: False

{% endif %}

{# if this is an so-fleet node then we want to use the port bindings, custom bind mounts defined for fleet #}
{% if GLOBALS.role == 'so-fleet' %}
{% set container_config = 'so-nginx-fleet-node' %}
{% else %}
{% set container_config = 'so-nginx' %}
{% endif %}
{% else %}
{# if this is an so-fleet node then we want to use the port bindings, custom bind mounts defined for fleet #}
{% set container_config = 'so-nginx-fleet-node' %}
{% endif %}

so-nginx:
docker_container.running:
@@ -154,18 +83,27 @@ so-nginx:
- watch:
- file: nginxconf
- file: nginxconfdir
- require:
- file: nginxconf
{% if GLOBALS.is_manager %}
{% if NGINXMERGED.ssl.replace_cert %}
{% if GLOBALS.is_manager %}
{% if NGINXMERGED.ssl.replace_cert %}
- file: managerssl_key
- file: managerssl_crt
{% else %}
{% else %}
- x509: managerssl_key
- x509: managerssl_crt
{% endif%}
{% endif%}
{% endif %}
- require:
- file: nginxconf
{% if GLOBALS.is_manager %}
{% if NGINXMERGED.ssl.replace_cert %}
- file: managerssl_key
- file: managerssl_crt
{% else %}
- x509: managerssl_key
- x509: managerssl_crt
{% endif%}
- file: navigatorconfig
{% endif %}
{% endif %}

delete_so-nginx_so-status.disabled:
file.uncomment:

salt/nginx/ssl.sls (new file, 87 lines)
@@ -0,0 +1,87 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'nginx/map.jinja' import NGINXMERGED %}
{% from 'ca/map.jinja' import CA %}

{% if GLOBALS.role != 'so-fleet' %}
{# if the user has selected to replace the crt and key in the ui #}
{% if NGINXMERGED.ssl.replace_cert %}

managerssl_key:
file.managed:
- name: /etc/pki/managerssl.key
- source: salt://nginx/ssl/ssl.key
- mode: 640
- group: 939
- watch_in:
- docker_container: so-nginx

managerssl_crt:
file.managed:
- name: /etc/pki/managerssl.crt
- source: salt://nginx/ssl/ssl.crt
- mode: 644
- watch_in:
- docker_container: so-nginx

{% else %}

managerssl_key:
x509.private_key_managed:
- name: /etc/pki/managerssl.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/managerssl.key') -%}
- prereq:
- x509: /etc/pki/managerssl.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx

# Create a cert for the reverse proxy
managerssl_crt:
x509.certificate_managed:
- name: /etc/pki/managerssl.crt
- ca_server: {{ CA.server }}
- signing_policy: general
- private_key: /etc/pki/managerssl.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: "DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}, DNS:{{ GLOBALS.url_base }}"
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
- watch_in:
- docker_container: so-nginx

{% endif %}

msslkeyperms:
file.managed:
- replace: False
- name: /etc/pki/managerssl.key
- mode: 640
- group: 939

{% endif %}

{% else %}

{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed

{% endif %}
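Editor's note (illustrative sketch, not part of the upstream diff): once the new nginx.ssl state ships, one way to sanity-check it on a manager is to compile it without applying it, for example:

    salt-call state.show_sls nginx.ssl

state.show_sls only renders the SLS (including the NGINXMERGED and CA map imports); whether it renders cleanly depends on the node's role and pillar data, so treat this as a quick check rather than a guarantee.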
@@ -4,13 +4,14 @@
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states%}

append_so-idstools_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-idstools
- unless: grep -q so-idstools /opt/so/conf/so-status/so-status.conf
stenoca:
file.directory:
- name: /opt/so/conf/steno/certs
- user: 941
- group: 939
- makedirs: True

{% else %}

@@ -57,12 +57,6 @@ stenoconf:
PCAPMERGED: {{ PCAPMERGED }}
STENO_BPF_COMPILED: "{{ STENO_BPF_COMPILED }}"

stenoca:
file.directory:
- name: /opt/so/conf/steno/certs
- user: 941
- group: 939

pcaptmpdir:
file.directory:
- name: /nsm/pcaptmp

@@ -10,6 +10,7 @@


include:
- pcap.ca
- pcap.config
- pcap.sostatus


@@ -7,9 +7,6 @@
{% if sls.split('.')[0] in allowed_states %}
{% from 'redis/map.jinja' import REDISMERGED %}

include:
- ssl

# Redis Setup
redisconfdir:
file.directory:

@@ -9,6 +9,8 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}

include:
- ca
- redis.ssl
- redis.config
- redis.sostatus

@@ -31,11 +33,7 @@ so-redis:
- /nsm/redis/data:/data:rw
- /etc/pki/redis.crt:/certs/redis.crt:ro
- /etc/pki/redis.key:/certs/redis.key:ro
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import'] %}
- /etc/pki/ca.crt:/certs/ca.crt:ro
{% else %}
- /etc/pki/tls/certs/intca.crt:/certs/ca.crt:ro
{% endif %}
{% if DOCKER.containers['so-redis'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-redis'].custom_bind_mounts %}
- {{ BIND }}
@@ -55,16 +53,14 @@ so-redis:
{% endif %}
- entrypoint: "redis-server /usr/local/etc/redis/redis.conf"
- watch:
- file: /opt/so/conf/redis/etc
- require:
- file: redisconf
- file: trusttheca
- x509: redis_crt
- x509: redis_key
- file: /opt/so/conf/redis/etc
- require:
- file: trusttheca
- x509: redis_crt
- x509: redis_key
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import'] %}
- x509: pki_public_ca_crt
{% else %}
- x509: trusttheca
{% endif %}

delete_so-redis_so-status.disabled:
file.uncomment:

salt/redis/ssl.sls (new file, 54 lines)
@@ -0,0 +1,54 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}

redis_key:
x509.private_key_managed:
- name: /etc/pki/redis.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/redis.key') -%}
- prereq:
- x509: /etc/pki/redis.crt
{%- endif %}
- retry:
attempts: 5
interval: 30

redis_crt:
x509.certificate_managed:
- name: /etc/pki/redis.crt
- ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: general
- private_key: /etc/pki/redis.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30

rediskeyperms:
file.managed:
- replace: False
- name: /etc/pki/redis.key
- mode: 640
- group: 939

{% else %}

{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed

{% endif %}
@@ -6,9 +6,6 @@
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}

include:
- ssl

# Create the config directory for the docker registry
dockerregistryconfdir:
file.directory:

@@ -9,6 +9,7 @@
{% from 'docker/docker.map.jinja' import DOCKER %}

include:
- registry.ssl
- registry.config
- registry.sostatus

@@ -53,6 +54,9 @@ so-dockerregistry:
- retry:
attempts: 5
interval: 30
- watch:
- x509: registry_crt
- x509: registry_key
- require:
- file: dockerregistryconf
- x509: registry_crt

salt/registry/ssl.sls (new file, 77 lines)
@@ -0,0 +1,77 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}

include:
- ca

# Delete directory if it exists at the key path
registry_key_cleanup:
file.absent:
- name: /etc/pki/registry.key
- onlyif:
- test -d /etc/pki/registry.key

registry_key:
x509.private_key_managed:
- name: /etc/pki/registry.key
- keysize: 4096
- backup: True
- new: True
- require:
- file: registry_key_cleanup
{% if salt['file.file_exists']('/etc/pki/registry.key') -%}
- prereq:
- x509: /etc/pki/registry.crt
{%- endif %}
- retry:
attempts: 15
interval: 10

# Delete directory if it exists at the crt path
registry_crt_cleanup:
file.absent:
- name: /etc/pki/registry.crt
- onlyif:
- test -d /etc/pki/registry.crt

# Create a cert for the docker registry
registry_crt:
x509.certificate_managed:
- name: /etc/pki/registry.crt
- ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.manager }}, IP:{{ GLOBALS.manager_ip }}
- signing_policy: general
- private_key: /etc/pki/registry.key
- CN: {{ GLOBALS.manager }}
- days_remaining: 7
- days_valid: 820
- backup: True
- require:
- file: registry_crt_cleanup
- timeout: 30
- retry:
attempts: 15
interval: 10


regkeyperms:
file.managed:
- replace: False
- name: /etc/pki/registry.key
- mode: 640
- group: 939

{% else %}

{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed

{% endif %}
@@ -46,33 +46,6 @@ def start(interval=60):
mine_update(minion)
continue

# if a manager check that the ca in in the mine and it is correct
if minion.split('_')[-1] in ['manager', 'managersearch', 'eval', 'standalone', 'import']:
x509 = __salt__['saltutil.runner']('mine.get', tgt=minion, fun='x509.get_pem_entries')
try:
ca_crt = x509[minion]['/etc/pki/ca.crt']
log.debug('checkmine engine: found minion %s has ca_crt: %s' % (minion, ca_crt))
# since the cert is defined, make sure it is valid
import salt.modules.x509_v2 as x509_v2
if not x509_v2.verify_private_key('/etc/pki/ca.key', '/etc/pki/ca.crt'):
log.error('checkmine engine: found minion %s does\'t have a valid ca_crt in the mine' % (minion))
log.error('checkmine engine: %s: ca_crt: %s' % (minion, ca_crt))
mine_delete(minion, 'x509.get_pem_entries')
mine_update(minion)
continue
else:
log.debug('checkmine engine: found minion %s has a valid ca_crt in the mine' % (minion))
except IndexError:
log.error('checkmine engine: found minion %s does\'t have a ca_crt in the mine' % (minion))
mine_delete(minion, 'x509.get_pem_entries')
mine_update(minion)
continue
except KeyError:
log.error('checkmine engine: found minion %s is not in the mine' % (minion))
mine_flush(minion)
mine_update(minion)
continue

# Update the mine if the ip in the mine doesn't match returned from manage.alived
network_ip_addrs = __salt__['saltutil.runner']('mine.get', tgt=minion, fun='network.ip_addrs')
try:

@@ -6,30 +6,6 @@ engines:
interval: 60
- pillarWatch:
fpa:
- files:
- /opt/so/saltstack/local/pillar/idstools/soc_idstools.sls
- /opt/so/saltstack/local/pillar/idstools/adv_idstools.sls
pillar: idstools.config.ruleset
default: ETOPEN
actions:
from:
'*':
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-rule-update
- files:
- /opt/so/saltstack/local/pillar/idstools/soc_idstools.sls
- /opt/so/saltstack/local/pillar/idstools/adv_idstools.sls
pillar: idstools.config.oinkcode
default: ''
actions:
from:
'*':
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-rule-update
- files:
- /opt/so/saltstack/local/pillar/global/soc_global.sls
- /opt/so/saltstack/local/pillar/global/adv_global.sls

@@ -18,10 +18,6 @@ mine_functions:
mine_functions:
network.ip_addrs:
- interface: {{ interface }}
{%- if role in ['so-eval','so-import','so-manager','so-managerhype','so-managersearch','so-standalone'] %}
x509.get_pem_entries:
- glob_path: '/etc/pki/ca.crt'
{% endif %}

mine_update_mine_functions:
module.run:

@@ -17,8 +17,8 @@ include:
- repo.client
- salt.mine_functions
- salt.minion.service_file
{% if GLOBALS.role in GLOBALS.manager_roles %}
- ca
{% if GLOBALS.is_manager %}
- ca.signing_policy
{% endif %}

{% if INSTALLEDSALTVERSION|string != SALTVERSION|string %}
@@ -111,7 +111,7 @@ salt_minion_service:
{% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
- file: set_log_levels
{% endif %}
{% if GLOBALS.role in GLOBALS.manager_roles %}
- file: /etc/salt/minion.d/signing_policies.conf
{% if GLOBALS.is_manager %}
- file: signing_policy
{% endif %}
- order: last

@@ -8,6 +8,9 @@


include:
{% if GLOBALS.is_sensor or GLOBALS.role == 'so-import' %}
- pcap.ca
{% endif %}
- sensoroni.config
- sensoroni.sostatus

@@ -16,7 +19,9 @@ so-sensoroni:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-soc:{{ GLOBALS.so_version }}
- network_mode: host
- binds:
{% if GLOBALS.is_sensor or GLOBALS.role == 'so-import' %}
- /opt/so/conf/steno/certs:/etc/stenographer/certs:rw
{% endif %}
- /nsm/pcap:/nsm/pcap:rw
- /nsm/import:/nsm/import:rw
- /nsm/pcapout:/nsm/pcapout:rw

@@ -0,0 +1,91 @@
Onion AI Session Report
==========================

## Session Details

**Session ID:** {{.Session.SessionId}}

**Title:** {{.Session.Title}}

**Created:** {{formatDateTime "Mon Jan 02 15:04:05 -0700 2006" .Session.CreateTime}}

**Updated:** {{formatDateTime "Mon Jan 02 15:04:05 -0700 2006" .Session.UpdateTime}}

{{ if .Session.DeleteTime }}
**Deleted:** {{ formatDateTime "Mon Jan 02 15:04:05 -0700 2006" .Session.DeleteTime}}
{{ end }}

**User ID:** {{getUserDetail "email" .Session.UserId}}

## Session Usage

**Total Input Tokens** {{.Session.Usage.TotalInputTokens}}

**Total Output Tokens** {{.Session.Usage.TotalOutputTokens}}

**Total Credits:** {{.Session.Usage.TotalCredits}}

**Total Messages:** {{.Session.Usage.TotalMessages}}

## Messages

{{ range $index, $msg := sortAssistantMessages "CreateTime" "asc" .History }}
#### Message {{ add $index 1 }}

**Created:** {{formatDateTime "Mon Jan 02 15:04:05 -0700 2006" $msg.CreateTime}}

**User ID:** {{getUserDetail "email" $msg.UserId}}

**Role:** {{$msg.Message.Role}}

{{ range $i, $block := $msg.Message.ContentBlocks }}

---

{{ if eq $block.Type "text" }}
**Text:** {{ stripEmoji $block.Text }}
{{ else if eq $block.Type "tool_use" }}
**Tool:** {{ $block.Name }}
{{ if $block.Input }}
**Parameters:**
{{ range $key, $value := parseJSON $block.Input }}
{{ if eq $key "limit" }}- {{ $key }}: {{ $value }}
{{ else }}- {{ $key }}: "{{ $value }}"
{{ end }}{{ end }}{{ end }}
{{ else if $block.ToolResult }}
**Tool Result:**
{{ if $block.ToolResult.Content }}
{{ range $j, $contentBlock := $block.ToolResult.Content }}
{{ if gt $j 0 }}

---

{{ end }}
{{ if $contentBlock.Text }}
{{ if $block.ToolResult.IsError }}
**Error:** {{ $contentBlock.Text }}
{{ else }}
{{ $contentBlock.Text }}
{{ end }}
{{ else if $contentBlock.Json }}
```json
{{ toJSON $contentBlock.Json }}
```
{{ end }}{{ end }}
{{ end }}{{ end }}{{ end }}

{{ if eq $msg.Message.Role "assistant" }}{{ if $msg.Message.Usage }}

---

**Message Usage:**

- Input Tokens: {{$msg.Message.Usage.InputTokens}}
- Output Tokens: {{$msg.Message.Usage.OutputTokens}}
- Credits: {{$msg.Message.Usage.Credits}}

{{end}}{{end}}

---

{{end}}
Some files were not shown because too many files have changed in this diff.