Mirror of https://github.com/Security-Onion-Solutions/securityonion.git (synced 2026-05-09 12:52:38 +02:00)
Compare commits
173 Commits
| SHA1 | Author | Date | |
|---|---|---|---|
| 3d11694d51 | |||
| 23255f88e0 | |||
| d30b52b327 | |||
| 3fad895d6a | |||
| fa8162de02 | |||
| 33abc429d1 | |||
| b22585ca90 | |||
| 9f2ca7012f | |||
| 21aeb68188 | |||
| 81e60ec5bf | |||
| 199c2746f1 | |||
| 8eca465ef6 | |||
| a45e59239f | |||
| 2ad0bcab7c | |||
| 070d150420 | |||
| 90ecbe90d8 | |||
| 813fa03dc3 | |||
| 02381fbbe9 | |||
| 0722b681b1 | |||
| 564815e836 | |||
| 88b30adf7f | |||
| b6acf3b522 | |||
| ba55468da8 | |||
| cdd217283d | |||
| 810a582717 | |||
| a6948e8dcb | |||
| 5f35554fdc | |||
| 0ecc7ae594 | |||
| fdfca469cc | |||
| 5f2ec76ba8 | |||
| b015c8ff14 | |||
| 7e70870a9e | |||
| eadad6c163 | |||
| 22b32a16dd | |||
| 22f869734e | |||
| 398bc9e4ed | |||
| 72dbb69a1c | |||
| 339959d1c0 | |||
| d5c0ec4404 | |||
| e616b4c120 | |||
| f240a99e22 | |||
| 614f32c5e0 | |||
| cd6707a566 | |||
| edd207a9d5 | |||
| 724d76965f | |||
| dbf4fb66a4 | |||
| 5f28e9b191 | |||
| 01bd3b6e06 | |||
| 1abfd77351 | |||
| 06a555fafb | |||
| 81c0f2b464 | |||
| d5dc28e526 | |||
| 7411031e11 | |||
| 247091766c | |||
| 7f93110d68 | |||
| 05f6503d61 | |||
| a149ea7e8f | |||
| bb71e44614 | |||
| 84197fb33b | |||
| 89a6e7c0dd | |||
| a902f667ba | |||
| f72c30abd0 | |||
| 37e9257698 | |||
| 72105f1f2f | |||
| ee89b78751 | |||
| 33ef138866 | |||
| 71da27dc8e | |||
| 80bf07ffd8 | |||
| b69e50542a | |||
| 3ecd19d085 | |||
| b6a3d1889c | |||
| 1cb34b089c | |||
| 1537ba5031 | |||
| 8225d41661 | |||
| ee437265fc | |||
| 3f46caaf02 | |||
| f3181b204a | |||
| dd39db4584 | |||
| 759880a800 | |||
| f5cd90d139 | |||
| 31383bd9d0 | |||
| ebb93b4fa7 | |||
| 21076af01e | |||
| f11e9da83a | |||
| 0fddcd8fe7 | |||
| 927eba566c | |||
| af9330a9dd | |||
| b3fbd5c7a4 | |||
| 5228668be0 | |||
| 7d07f3c8fe | |||
| d9a9029ce5 | |||
| 9fe53d9ccc | |||
| f7b80f5931 | |||
| f11d315fea | |||
| 2013bf9e30 | |||
| a2ffb92b8d | |||
| 8b6d11b118 | |||
| ba00ae8a7b | |||
| 470b3bd4da | |||
| c124186989 | |||
| d24808ff98 | |||
| 7d22f7bd58 | |||
| 88582c94e8 | |||
| cefbe01333 | |||
| 76a6997de2 | |||
| 16a4a42faf | |||
| 0e4623c728 | |||
| d598e20fbb | |||
| 8b0d4b2195 | |||
| cf414423b1 | |||
| 0405a66c72 | |||
| da7c2995b0 | |||
| 696a1a729c | |||
| 5fa7006f11 | |||
| 5634aed679 | |||
| a232cd89cc | |||
| dd40e44530 | |||
| 47d226e189 | |||
| 440537140b | |||
| 29e13b2c0b | |||
| 2006a07637 | |||
| abcad9fde0 | |||
| a43947cca5 | |||
| f51de6569f | |||
| b0584a4dc5 | |||
| 08f34d408f | |||
| 6298397534 | |||
| 9ccd0acb4f | |||
| 1ffdcab3be | |||
| da1045e052 | |||
| 55be1f1119 | |||
| 9272afa9e5 | |||
| 378d1ec81b | |||
| c1b1452bd9 | |||
| cdbacdcd7e | |||
| 6b8a6267da | |||
| 89e49d0bf3 | |||
| 2dfa83dd7d | |||
| f0b67a415a | |||
| b87af8ea3d | |||
| 46e38d39bb | |||
| 81afbd32d4 | |||
| e9c4f40735 | |||
| 61bdfb1a4b | |||
| 9ec4a26f97 | |||
| 358a2e6d3f | |||
| 762e73faf5 | |||
| ef3cfc8722 | |||
| 28d31f4840 | |||
| 2166bb749a | |||
| 868cd11874 | |||
| 7356f3affd | |||
| dd56e7f1ac | |||
| 075b592471 | |||
| 51a3c04c3d | |||
| 1a8aae3039 | |||
| 8101bc4941 | |||
| 88de246ce3 | |||
| 3643b57167 | |||
| 5b3ca98b80 | |||
| 51e0ca2602 | |||
| 664f3fd18a | |||
| 76f4ccf8c8 | |||
| 2a37ad82b2 | |||
| 80540da52f | |||
| e4ba3d6a2a | |||
| 3dec6986b6 | |||
| bbfb58ea4e | |||
| c91deb97b1 | |||
| dc2598d5cf | |||
| ff45e5ebc6 | |||
| 1e2b51eae6 | |||
| 58d332ea94 |
```diff
@@ -10,6 +10,7 @@ body:
       options:
         -
         - 3.0.0
+        - 3.1.0
         - Other (please provide detail below)
     validations:
       required: true
```
```diff
@@ -0,0 +1,22 @@
+## Description
+
+<!--
+Explain the purpose of the pull request. Be brief or detailed depending on the scope of the changes.
+-->
+
+## Related Issues
+
+<!--
+Optionally, list any related issues that this pull request addresses.
+-->
+
+## Checklist
+
+- [ ] I have read and followed the [CONTRIBUTING.md](https://github.com/Security-Onion-Solutions/securityonion/blob/3/main/CONTRIBUTING.md) file.
+- [ ] I have read and agree to the terms of the [Contributor License Agreement](https://securityonionsolutions.com/cla)
+
+## Questions or Comments
+
+<!--
+If you have any questions or comments about this pull request, add them here.
+-->
```
```diff
@@ -1,24 +0,0 @@
-name: contrib
-on:
-  issue_comment:
-    types: [created]
-  pull_request_target:
-    types: [opened,closed,synchronize]
-
-jobs:
-  CLAssistant:
-    runs-on: ubuntu-latest
-    steps:
-      - name: "Contributor Check"
-        if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
-        uses: cla-assistant/github-action@v2.3.1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
-        with:
-          path-to-signatures: 'signatures_v1.json'
-          path-to-document: 'https://securityonionsolutions.com/cla'
-          allowlist: dependabot[bot],jertel,dougburks,TOoSmOotH,defensivedepth,m0duspwnens
-          remote-organization-name: Security-Onion-Solutions
-          remote-repository-name: licensing
-
```
+1 -1

```diff
@@ -23,7 +23,7 @@
 
 * Link the PR to the related issue, either using [keywords](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in the PR description, or [manually](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#manually-linking-a-pull-request-to-an-issue).
 
-* **Pull requests should be opened against the `dev` branch of this repo**, and should clearly describe the problem and solution.
+* **Pull requests should be opened against the current `?/dev` branch of this repo**, and should clearly describe the problem and solution.
 
 * Be sure you have tested your changes and are confident they will not break other parts of the product.
 
```
```diff
@@ -1,2 +0,0 @@
-elasticsearch:
-  index_settings:
```
```diff
@@ -0,0 +1,12 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+# Per-minion Telegraf Postgres credentials. so-telegraf-cred on the manager is
+# the single writer; it mutates /opt/so/saltstack/local/pillar/telegraf/creds.sls
+# under flock. Pillar_roots order (local before default) means the populated
+# copy shadows this default on any real grid; this file exists so the pillar
+# key is always defined on fresh installs and when no minions have creds yet.
+telegraf:
+  postgres_creds: {}
```
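The comment block above describes a single-writer pattern: one process on the manager mutates the shared creds file while holding an exclusive `flock`, so concurrent runs cannot interleave writes. A minimal Python sketch of that pattern, assuming a JSON file standing in for the real `.sls` pillar (the path, format, and function name are illustrative, not the actual so-telegraf-cred implementation):

```python
import fcntl
import json
import os
import tempfile


def update_creds(path, minion, password):
    """Single-writer update: exclusive flock plus atomic rename.

    The flock serializes concurrent writers; writing to a temp file and
    renaming means readers see either the old or the new file, never a
    half-written one (rename is atomic on POSIX filesystems).
    """
    with open(path + ".lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # block until we are the only writer
        creds = {}
        if os.path.exists(path):
            with open(path) as f:
                creds = json.load(f)
        creds[minion] = password
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(creds, f)
        os.rename(tmp, path)  # atomic swap; lock releases when 'lock' closes
```

The defaults file above then only has to guarantee the `telegraf:postgres_creds` key exists; the locked writer owns the populated copy that shadows it.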
+21 -3

```diff
@@ -17,6 +17,7 @@ base:
     - sensoroni.adv_sensoroni
     - telegraf.soc_telegraf
     - telegraf.adv_telegraf
+    - telegraf.creds
     - versionlock.soc_versionlock
     - versionlock.adv_versionlock
    - soc.license
@@ -38,6 +39,9 @@ base:
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
     - elasticsearch.auth
 {% endif %}
+{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
+    - postgres.auth
+{% endif %}
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
     - kibana.secrets
 {% endif %}
@@ -60,6 +64,8 @@ base:
     - redis.adv_redis
     - influxdb.soc_influxdb
     - influxdb.adv_influxdb
+    - postgres.soc_postgres
+    - postgres.adv_postgres
     - elasticsearch.nodes
     - elasticsearch.soc_elasticsearch
     - elasticsearch.adv_elasticsearch
@@ -97,10 +103,12 @@ base:
     - node_data.ips
     - secrets
     - healthcheck.eval
-    - elasticsearch.index_templates
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
     - elasticsearch.auth
 {% endif %}
+{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
+    - postgres.auth
+{% endif %}
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
     - kibana.secrets
 {% endif %}
@@ -126,6 +134,8 @@ base:
     - redis.adv_redis
     - influxdb.soc_influxdb
     - influxdb.adv_influxdb
+    - postgres.soc_postgres
+    - postgres.adv_postgres
     - backup.soc_backup
     - backup.adv_backup
     - zeek.soc_zeek
@@ -142,10 +152,12 @@ base:
     - logstash.nodes
     - logstash.soc_logstash
     - logstash.adv_logstash
-    - elasticsearch.index_templates
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
     - elasticsearch.auth
 {% endif %}
+{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
+    - postgres.auth
+{% endif %}
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
     - kibana.secrets
 {% endif %}
@@ -160,6 +172,8 @@ base:
     - redis.adv_redis
     - influxdb.soc_influxdb
     - influxdb.adv_influxdb
+    - postgres.soc_postgres
+    - postgres.adv_postgres
     - elasticsearch.nodes
     - elasticsearch.soc_elasticsearch
     - elasticsearch.adv_elasticsearch
@@ -256,10 +270,12 @@ base:
   '*_import':
     - node_data.ips
     - secrets
-    - elasticsearch.index_templates
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
     - elasticsearch.auth
 {% endif %}
+{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
+    - postgres.auth
+{% endif %}
 {% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
     - kibana.secrets
 {% endif %}
@@ -285,6 +301,8 @@ base:
     - redis.adv_redis
     - influxdb.soc_influxdb
     - influxdb.adv_influxdb
+    - postgres.soc_postgres
+    - postgres.adv_postgres
     - zeek.soc_zeek
     - zeek.adv_zeek
     - bpf.soc_bpf
```
```diff
@@ -29,10 +29,14 @@
     'manager',
     'nginx',
     'influxdb',
+    'postgres',
+    'postgres.auth',
     'soc',
     'kratos',
     'hydra',
     'elasticfleet',
+    'elasticfleet.manager',
+    'elasticsearch.cluster',
     'elastic-fleet-package-registry',
     'utility'
 ] %}
@@ -77,7 +81,7 @@
   ),
   'so-heavynode': (
     sensor_states +
-    ['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
+    ['elasticagent', 'elasticsearch', 'elasticsearch.cluster', 'logstash', 'redis', 'nginx']
   ),
   'so-idh': (
     ['idh']
```
```diff
@@ -32,3 +32,4 @@ so_config_backup:
     - daymonth: '*'
     - month: '*'
     - dayweek: '*'
+
```
```diff
@@ -54,6 +54,20 @@ x509_signing_policies:
     - extendedKeyUsage: serverAuth
     - days_valid: 820
     - copypath: /etc/pki/issued_certs/
+  postgres:
+    - minions: '*'
+    - signing_private_key: /etc/pki/ca.key
+    - signing_cert: /etc/pki/ca.crt
+    - C: US
+    - ST: Utah
+    - L: Salt Lake City
+    - basicConstraints: "critical CA:false"
+    - keyUsage: "critical keyEncipherment"
+    - subjectKeyIdentifier: hash
+    - authorityKeyIdentifier: keyid,issuer:always
+    - extendedKeyUsage: serverAuth
+    - days_valid: 820
+    - copypath: /etc/pki/issued_certs/
   elasticfleet:
     - minions: '*'
     - signing_private_key: /etc/pki/ca.key
```
```diff
@@ -31,6 +31,7 @@ container_list() {
       "so-hydra"
       "so-nginx"
       "so-pcaptools"
+      "so-postgres"
       "so-soc"
       "so-suricata"
       "so-telegraf"
@@ -55,6 +56,7 @@ container_list() {
       "so-logstash"
       "so-nginx"
       "so-pcaptools"
+      "so-postgres"
       "so-redis"
       "so-soc"
      "so-strelka-backend"
@@ -186,8 +188,14 @@ update_docker_containers() {
       if [ -z "$HOSTNAME" ]; then
         HOSTNAME=$(hostname)
       fi
-      docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
-      docker push $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
+      docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1 || {
+        echo "Unable to tag $image" >> "$LOG_FILE" 2>&1
+        exit 1
+      }
+      docker push $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1 || {
+        echo "Unable to push $image" >> "$LOG_FILE" 2>&1
+        exit 1
+      }
     fi
   else
     echo "There is a problem downloading the $image image. Details: " >> "$LOG_FILE" 2>&1
```
```diff
@@ -227,7 +227,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|from NIC checksum offloading" # zeek reporter.log
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|marked for removal" # docker container getting recycled
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tcp 127.0.0.1:6791: bind: address already in use" # so-elastic-fleet agent restarting. Seen starting w/ 8.18.8 https://github.com/elastic/kibana/issues/201459
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint).*user so_kibana lacks the required permissions \[logs-\1" # Known issue with 3 integrations using kibana_system role vs creating unique api creds with proper permissions.
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint|armis|o365_metrics|microsoft_sentinel|snyk).*user so_kibana lacks the required permissions \[(logs|metrics)-\1" # Known issue with integrations starting transform jobs that are explicitly not allowed to start as a system user. (installed as so_elastic / so_kibana)
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|manifest unknown" # appears in so-dockerregistry log for so-tcpreplay following docker upgrade to 29.2.1-1
 fi
 
```
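The widened exclusion above relies on a backreference: group 1 captures the integration name and `\1` requires the same name to reappear in the permissions index, so an error about one integration cannot be masked by another's exclusion. A quick way to sanity-check that behavior with Python's `re` (the sample log lines here are invented; in the script itself the pattern is one branch of a larger grep alternation):

```python
import re

# Reconstruction of the updated exclusion branch for testing purposes.
pattern = (
    r"TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint"
    r"|armis|o365_metrics|microsoft_sentinel|snyk).*"
    r"user so_kibana lacks the required permissions \[(logs|metrics)-\1"
)

# Same integration on both sides: excluded as a known benign error.
line = ("[TransformTask] [logs-armis.device-default] failed: "
        "user so_kibana lacks the required permissions [logs-armis.device]")
assert re.search(pattern, line)

# Mismatched integrations: NOT excluded, so it still surfaces as an error.
other = ("[TransformTask] [logs-tychon.x-default] failed: "
         "user so_kibana lacks the required permissions [logs-armis.y]")
assert re.search(pattern, other) is None
```

The second capture group `(logs|metrics)` lets the same branch cover both data stream types that these integrations write.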
```diff
@@ -9,7 +9,7 @@
 
 . /usr/sbin/so-common
 
-software_raid=("SOSMN" "SOSMN-DE02" "SOSSNNV" "SOSSNNV-DE02" "SOS10k-DE02" "SOS10KNV" "SOS10KNV-DE02" "SOS10KNV-DE02" "SOS2000-DE02" "SOS-GOFAST-LT-DE02" "SOS-GOFAST-MD-DE02" "SOS-GOFAST-HV-DE02")
+software_raid=("SOSMN" "SOSMN-DE02" "SOSSNNV" "SOSSNNV-DE02" "SOS10k-DE02" "SOS10KNV" "SOS10KNV-DE02" "SOS10KNV-DE02" "SOS2000-DE02" "SOS-GOFAST-LT-DE02" "SOS-GOFAST-MD-DE02" "SOS-GOFAST-HV-DE02" "HVGUEST")
 hardware_raid=("SOS1000" "SOS1000F" "SOSSN7200" "SOS5000" "SOS4000")
 
 {%- if salt['grains.get']('sosmodel', '') %}
@@ -87,6 +87,11 @@ check_boss_raid() {
 }
 
 check_software_raid() {
+  if [[ ! -f /proc/mdstat ]]; then
+    SWRAID=0
+    return
+  fi
+
   SWRC=$(grep "_" /proc/mdstat)
   if [[ -n $SWRC ]]; then
     # RAID is failed in some way
@@ -107,8 +112,10 @@ if [[ "$is_hwraid" == "true" ]]; then
 fi
 if [[ "$is_softwareraid" == "true" ]]; then
   check_software_raid
+  if [ "$model" != "HVGUEST" ]; then
   check_boss_raid
 fi
+fi
 
 sum=$(($SWRAID + $BOSSRAID + $HWRAID))
 
```
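The software RAID check above boils down to two tests: if `/proc/mdstat` does not exist there is no md RAID to inspect (the new guard, which matters for the added `HVGUEST` virtual model), and otherwise an underscore anywhere in the status output marks a degraded array, e.g. a mirror reporting `[U_]` instead of `[UU]`. The same logic in a self-contained Python sketch (the sample mdstat text is illustrative):

```python
import os


def software_raid_degraded(mdstat_path="/proc/mdstat"):
    """Mirror of the shell logic in the diff above.

    Missing /proc/mdstat means no software RAID at all (SWRAID=0 in the
    script); an "_" in the status lines means a member device is missing.
    """
    if not os.path.exists(mdstat_path):  # guard added in the diff
        return False
    with open(mdstat_path) as f:
        return "_" in f.read()  # same test as: grep "_" /proc/mdstat
```

A healthy two-disk mirror prints a line like `[2/2] [UU]`, while a degraded one prints `[2/1] [U_]`, which is exactly what the underscore test catches.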
```diff
@@ -237,3 +237,11 @@ docker:
     extra_hosts: []
     extra_env: []
     ulimits: []
+  'so-postgres':
+    final_octet: 47
+    port_bindings:
+      - 0.0.0.0:5432:5432
+    custom_bind_mounts: []
+    extra_hosts: []
+    extra_env: []
+    ulimits: []
```
```diff
@@ -0,0 +1,123 @@
+{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+   or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
+   this file except in compliance with the Elastic License 2.0. #}
+
+
+{% import_json '/opt/so/state/esfleet_content_package_components.json' as ADDON_CONTENT_PACKAGE_COMPONENTS %}
+{% import_json '/opt/so/state/esfleet_component_templates.json' as INSTALLED_COMPONENT_TEMPLATES %}
+{% import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
+
+{% set CORE_ESFLEET_PACKAGES = ELASTICFLEETDEFAULTS.get('elasticfleet', {}).get('packages', {}) %}
+{% set ADDON_CONTENT_INTEGRATION_DEFAULTS = {} %}
+{% set DEBUG_STUFF = {} %}
+
+{% for pkg in ADDON_CONTENT_PACKAGE_COMPONENTS %}
+  {% if pkg.name in CORE_ESFLEET_PACKAGES %}
+    {# skip core content packages #}
+  {% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
+    {# generate defaults for each content package #}
+    {% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0 %}
+      {% for pattern in pkg.dataStreams %}
+        {# in ES 9.3.2 'input' type integrations no longer create default component templates and instead they wait for user input during 'integration' setup (fleet ui config)
+           title: generic is an artifact of that and is not in use #}
+        {% if pattern.title == "generic" %}
+          {% continue %}
+        {% endif %}
+        {% if "metrics-" in pattern.name %}
+          {% set integration_type = "metrics-" %}
+        {% elif "logs-" in pattern.name %}
+          {% set integration_type = "logs-" %}
+        {% else %}
+          {% set integration_type = "" %}
+        {% endif %}
+        {# on content integrations the component name is user defined at the time it is added to an agent policy #}
+        {% set component_name = pattern.title %}
+        {% set index_pattern = pattern.name %}
+        {# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
+        {% set component_name_x = component_name.replace(".","_x_") %}
+        {# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml eg. so-logs-1password_x_item_usages . The _x_ is replaced later on in elasticsearch/template.map.jinja #}
+        {% set integration_key = "so-" ~ integration_type ~ pkg.name + '_x_' ~ component_name_x %}
+        {# Default integration settings #}
+        {% set integration_defaults = {
+          "index_sorting": false,
+          "index_template": {
+            "composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
+            "data_stream": {
+              "allow_custom_routing": false,
+              "hidden": false
+            },
+            "ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
+            "index_patterns": [index_pattern],
+            "priority": 501,
+            "template": {
+              "settings": {
+                "index": {
+                  "lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
+                  "number_of_replicas": 0
+                }
+              }
+            }
+          },
+          "policy": {
+            "phases": {
+              "cold": {
+                "actions": {
+                  "allocate":{
+                    "number_of_replicas": ""
+                  },
+                  "set_priority": {"priority": 0}
+                },
+                "min_age": "60d"
+              },
+              "delete": {
+                "actions": {
+                  "delete": {}
+                },
+                "min_age": "365d"
+              },
+              "hot": {
+                "actions": {
+                  "rollover": {
+                    "max_age": "30d",
+                    "max_primary_shard_size": "50gb"
+                  },
+                  "forcemerge":{
+                    "max_num_segments": ""
+                  },
+                  "shrink":{
+                    "max_primary_shard_size": "",
+                    "method": "COUNT",
+                    "number_of_shards": ""
+                  },
+                  "set_priority": {"priority": 100}
+                },
+                "min_age": "0ms"
+              },
+              "warm": {
+                "actions": {
+                  "allocate": {
+                    "number_of_replicas": ""
+                  },
+                  "forcemerge": {
+                    "max_num_segments": ""
+                  },
+                  "shrink":{
+                    "max_primary_shard_size": "",
+                    "method": "COUNT",
+                    "number_of_shards": ""
+                  },
+                  "set_priority": {"priority": 50}
+                },
+                "min_age": "30d"
+              }
+            }
+          }
+        } %}
+
+
+        {% do ADDON_CONTENT_INTEGRATION_DEFAULTS.update({integration_key: integration_defaults}) %}
+      {% endfor %}
+    {% else %}
+    {% endif %}
+  {% endif %}
+{% endfor %}
```
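The `component_name_x` substitution in the template above encodes dots so a dotted component name can survive as a flat pillar key, and (per the template's own comment) the `_x_` marker is turned back into dots later in elasticsearch/template.map.jinja. The round trip is easy to verify in plain Python (key shape taken from the comment's `so-logs-1password_x_item_usages` example; the helper names are illustrative):

```python
def mangle(name: str) -> str:
    # dots would be interpreted as nested keys in pillar data, so encode them
    return name.replace(".", "_x_")


def unmangle(name: str) -> str:
    # reversal performed downstream (template.map.jinja, per the comment)
    return name.replace("_x_", ".")


# Key shape from the template: "so-" ~ type ~ pkg.name ~ "_x_" ~ mangled title
key = "so-logs-1password_x_" + mangle("item_usages")
assert key == "so-logs-1password_x_item_usages"

# A dotted component name survives the round trip intact.
assert unmangle(mangle("endpoint.events.process")) == "endpoint.events.process"
```

Note the scheme assumes no component name contains a literal `_x_` of its own; if one did, `unmangle` would turn it into a dot as well.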
```diff
@@ -1,5 +1,6 @@
 elasticfleet:
   enabled: False
+  patch_version: 9.3.3+build202604082258 # Elastic Agent specific patch release.
   enable_manager_output: True
   config:
     server:
```
@@ -17,65 +17,17 @@ include:
|
|||||||
- logstash.ssl
|
- logstash.ssl
|
||||||
- elasticfleet.config
|
- elasticfleet.config
|
||||||
- elasticfleet.sostatus
|
- elasticfleet.sostatus
|
||||||
|
{%- if GLOBALS.role != "so-fleet" %}
|
||||||
|
- elasticfleet.manager
|
||||||
|
{%- endif %}
|
||||||
|
|
||||||
{% if grains.role not in ['so-fleet'] %}
|
{% if GLOBALS.role != "so-fleet" %}
|
||||||
# Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
|
# Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
|
||||||
wait_for_elasticsearch_elasticfleet:
|
wait_for_elasticsearch_elasticfleet:
|
||||||
cmd.run:
|
cmd.run:
|
||||||
- name: so-elasticsearch-wait
|
- name: so-elasticsearch-wait
|
||||||
{% endif %}
|
|
||||||
|
|
||||||
# If enabled, automatically update Fleet Logstash Outputs
|
|
||||||
{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-import', 'so-eval', 'so-fleet'] %}
|
|
||||||
so-elastic-fleet-auto-configure-logstash-outputs:
|
|
||||||
cmd.run:
|
|
||||||
- name: /usr/sbin/so-elastic-fleet-outputs-update
|
|
||||||
- retry:
|
|
||||||
attempts: 4
|
|
||||||
interval: 30
|
|
||||||
|
|
||||||
{# Separate from above in order to catch elasticfleet-logstash.crt changes and force update to fleet output policy #}
|
|
||||||
so-elastic-fleet-auto-configure-logstash-outputs-force:
|
|
||||||
cmd.run:
|
|
||||||
- name: /usr/sbin/so-elastic-fleet-outputs-update --certs
|
|
||||||
- retry:
|
|
||||||
attempts: 4
|
|
||||||
interval: 30
|
|
||||||
- onchanges:
|
|
||||||
- x509: etc_elasticfleet_logstash_crt
|
|
||||||
- x509: elasticfleet_kafka_crt
|
|
||||||
{% endif %}
|
|
||||||
|
|
||||||
# If enabled, automatically update Fleet Server URLs & ES Connection
|
|
||||||
{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-fleet'] %}
|
|
||||||
so-elastic-fleet-auto-configure-server-urls:
|
|
||||||
cmd.run:
|
|
||||||
- name: /usr/sbin/so-elastic-fleet-urls-update
|
|
||||||
- retry:
|
|
||||||
attempts: 4
|
|
||||||
interval: 30
|
|
||||||
{% endif %}
|
|
||||||
|
|
||||||
# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
|
|
||||||
{% if grains.role not in ['so-fleet'] %}
|
|
||||||
so-elastic-fleet-auto-configure-elasticsearch-urls:
|
|
||||||
cmd.run:
|
|
||||||
- name: /usr/sbin/so-elastic-fleet-es-url-update
|
|
||||||
- retry:
|
|
||||||
attempts: 4
|
|
||||||
interval: 30
|
|
||||||
|
|
||||||
so-elastic-fleet-auto-configure-artifact-urls:
|
|
||||||
cmd.run:
|
|
||||||
- name: /usr/sbin/so-elastic-fleet-artifacts-url-update
|
|
||||||
- retry:
|
|
||||||
attempts: 4
|
|
||||||
interval: 30
|
|
||||||
|
|
||||||
{% endif %}
|
|
||||||
|
|
||||||
# Sync Elastic Agent artifacts to Fleet Node
|
# Sync Elastic Agent artifacts to Fleet Node
|
||||||
{% if grains.role in ['so-fleet'] %}
|
|
||||||
elasticagent_syncartifacts:
|
elasticagent_syncartifacts:
|
||||||
file.recurse:
|
file.recurse:
|
||||||
- name: /nsm/elastic-fleet/artifacts/beats
|
- name: /nsm/elastic-fleet/artifacts/beats
|
||||||
@@ -149,57 +101,6 @@ so-elastic-fleet:
|
|||||||
- x509: etc_elasticfleet_crt
|
- x509: etc_elasticfleet_crt
|
||||||
{% endif %}
|
{% endif %}

{% if GLOBALS.role != "so-fleet" %}

so-elastic-fleet-package-statefile:
  file.managed:
    - name: /opt/so/state/elastic_fleet_packages.txt
    - contents: {{ELASTICFLEETMERGED.packages}}

so-elastic-fleet-package-upgrade:
  cmd.run:
    - name: /usr/sbin/so-elastic-fleet-package-upgrade
    - retry:
        attempts: 3
        interval: 10
    - onchanges:
      - file: /opt/so/state/elastic_fleet_packages.txt

so-elastic-fleet-integrations:
  cmd.run:
    - name: /usr/sbin/so-elastic-fleet-integration-policy-load
    - retry:
        attempts: 3
        interval: 10

so-elastic-agent-grid-upgrade:
  cmd.run:
    - name: /usr/sbin/so-elastic-agent-grid-upgrade
    - retry:
        attempts: 12
        interval: 5

so-elastic-fleet-integration-upgrade:
  cmd.run:
    - name: /usr/sbin/so-elastic-fleet-integration-upgrade
    - retry:
        attempts: 3
        interval: 10

{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
so-elastic-fleet-addon-integrations:
  cmd.run:
    - name: /usr/sbin/so-elastic-fleet-optional-integrations-load

{% if ELASTICFLEETMERGED.config.defend_filters.enable_auto_configuration %}
so-elastic-defend-manage-filters-file-watch:
  cmd.run:
    - name: python3 /sbin/so-elastic-defend-manage-filters.py -c /opt/so/conf/elasticsearch/curl.config -d /opt/so/conf/elastic-fleet/defend-exclusions/disabled-filters.yaml -i /nsm/securityonion-resources/event_filters/ -i /opt/so/conf/elastic-fleet/defend-exclusions/rulesets/custom-filters/ &>> /opt/so/log/elasticfleet/elastic-defend-manage-filters.log
    - onchanges:
      - file: elasticdefendcustom
      - file: elasticdefenddisabled
{% endif %}
{% endif %}

delete_so-elastic-fleet_so-status.disabled:
  file.uncomment:
    - name: /opt/so/conf/so-status/so-status.conf
+9 -2
@@ -9,16 +9,22 @@
 "namespace": "so",
 "description": "Zeek Import logs",
 "policy_id": "so-grid-nodes_general",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/import/*/zeek/logs/*.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "import",
 "pipeline": "",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -34,7 +40,8 @@
 "fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -15,19 +15,25 @@
 "version": ""
 },
 "name": "kratos-logs",
+"namespace": "so",
 "description": "Kratos logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/kratos/kratos.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "kratos",
 "pipeline": "kratos",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -48,10 +54,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -9,16 +9,22 @@
 "namespace": "so",
 "description": "Zeek logs",
 "policy_id": "so-grid-nodes_general",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/zeek/logs/current/*.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "zeek",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
 "exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%})(\\..+)?\\.log$"],
@@ -30,10 +36,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -5,7 +5,7 @@
 "package": {
 "name": "endpoint",
 "title": "Elastic Defend",
-"version": "9.0.2",
+"version": "9.3.0",
 "requires_root": true
 },
 "enabled": true,
@@ -6,21 +6,23 @@
 "name": "agent-monitor",
 "namespace": "",
 "description": "",
+"policy_id": "so-grid-nodes_general",
 "policy_ids": [
 "so-grid-nodes_general"
 ],
-"output_id": null,
 "vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/agents/agent-monitor.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "agentmonitor",
 "pipeline": "elasticagent.monitor",
 "parsers": "",
@@ -34,15 +36,16 @@
 "ignore_older": "72h",
 "clean_inactive": -1,
 "harvester_limit": 0,
-"fingerprint": true,
+"fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": 64,
-"file_identity_native": false,
+"file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
-}
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
 }
+},
+"force": true
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "hydra-logs",
+"namespace": "so",
 "description": "Hydra logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/hydra/hydra.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "hydra",
 "pipeline": "hydra",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -34,10 +40,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "idh-logs",
+"namespace": "so",
 "description": "IDH integration",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/idh/opencanary.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "idh",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,26 +4,32 @@
 "version": ""
 },
 "name": "import-evtx-logs",
+"namespace": "so",
 "description": "Import Windows EVTX logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/import/*/evtx/*.json"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "import",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
 "exclude_files": [
 "\\.gz$"
 ],
 "include_files": [],
-"processors": "- dissect:\n    tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n    field: \"log.file.path\"\n    target_prefix: \"\"\n- decode_json_fields:\n    fields: [\"message\"]\n    target: \"\"\n- drop_fields:\n    fields: [\"host\"]\n    ignore_missing: true\n- add_fields:\n    target: data_stream\n    fields:\n      type: logs\n      dataset: system.security\n- add_fields:\n    target: event\n    fields:\n      dataset: system.security\n      module: system\n      imported: true\n- add_fields:\n    target: \"@metadata\"\n    fields:\n      pipeline: logs-system.security-2.6.1\n- if:\n    equals:\n      winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: windows.sysmon_operational\n    - add_fields:\n        target: event\n        fields:\n          dataset: windows.sysmon_operational\n          module: windows\n          imported: true\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n    equals:\n      winlog.channel: 'Application'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: system.application\n    - add_fields:\n        target: event\n        fields:\n          dataset: system.application\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-system.application-2.6.1\n- if:\n    equals:\n      winlog.channel: 'System'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: system.system\n    - add_fields:\n        target: event\n        fields:\n          dataset: system.system\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-system.system-2.6.1\n \n- if:\n    equals:\n      winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: windows.powershell_operational\n    - add_fields:\n        target: event\n        fields:\n          dataset: windows.powershell_operational\n          module: windows\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n    target: data_stream\n    fields:\n      dataset: import",
+"processors": "- dissect:\n    tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n    field: \"log.file.path\"\n    target_prefix: \"\"\n- decode_json_fields:\n    fields: [\"message\"]\n    target: \"\"\n- drop_fields:\n    fields: [\"host\"]\n    ignore_missing: true\n- add_fields:\n    target: data_stream\n    fields:\n      type: logs\n      dataset: system.security\n- add_fields:\n    target: event\n    fields:\n      dataset: system.security\n      module: system\n      imported: true\n- add_fields:\n    target: \"@metadata\"\n    fields:\n      pipeline: logs-system.security-2.15.0\n- if:\n    equals:\n      winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: windows.sysmon_operational\n    - add_fields:\n        target: event\n        fields:\n          dataset: windows.sysmon_operational\n          module: windows\n          imported: true\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-windows.sysmon_operational-3.8.0\n- if:\n    equals:\n      winlog.channel: 'Application'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: system.application\n    - add_fields:\n        target: event\n        fields:\n          dataset: system.application\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-system.application-2.15.0\n- if:\n    equals:\n      winlog.channel: 'System'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: system.system\n    - add_fields:\n        target: event\n        fields:\n          dataset: system.system\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-system.system-2.15.0\n \n- if:\n    equals:\n      winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n  then: \n    - add_fields:\n        target: data_stream\n        fields:\n          dataset: windows.powershell_operational\n    - add_fields:\n        target: event\n        fields:\n          dataset: windows.powershell_operational\n          module: windows\n    - add_fields:\n        target: \"@metadata\"\n        fields:\n          pipeline: logs-windows.powershell_operational-3.8.0\n- add_fields:\n    target: data_stream\n    fields:\n      dataset: import",
 "tags": [
 "import"
 ],
@@ -33,10 +39,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "import-suricata-logs",
+"namespace": "so",
 "description": "Import Suricata logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/import/*/suricata/eve*.json"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "import",
 "pipeline": "suricata.common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -32,10 +38,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,14 +4,18 @@
 "version": ""
 },
 "name": "rita-logs",
+"namespace": "so",
 "description": "RITA Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
@@ -19,6 +23,8 @@
 "/nsm/rita/exploded-dns.csv",
 "/nsm/rita/long-connections.csv"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "rita",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
 "exclude_files": [
@@ -33,10 +39,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "so-ip-mappings",
+"namespace": "so",
 "description": "IP Description mappings",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/custom-mappings/ip-descriptions.csv"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "hostnamemappings",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
 "exclude_files": [
@@ -32,10 +38,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "soc-auth-sync-logs",
+"namespace": "so",
 "description": "Security Onion - Elastic Auth Sync - Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/soc/sync.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "soc",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,20 +4,26 @@
 "version": ""
 },
 "name": "soc-detections-logs",
+"namespace": "so",
 "description": "Security Onion Console - Detections Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/soc/detections_runtime-status_sigma.log",
 "/opt/so/log/soc/detections_runtime-status_yara.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "soc",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -35,10 +41,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "soc-salt-relay-logs",
+"namespace": "so",
 "description": "Security Onion - Salt Relay - Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/soc/salt-relay.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "soc",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -33,10 +39,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "soc-sensoroni-logs",
+"namespace": "so",
 "description": "Security Onion - Sensoroni - Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/sensoroni/sensoroni.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "soc",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "soc-server-logs",
+"namespace": "so",
 "description": "Security Onion Console Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/opt/so/log/soc/sensoroni-server.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "soc",
 "pipeline": "common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -33,10 +39,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
|
|||||||
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "strelka-logs",
+"namespace": "so",
 "description": "Strelka Logs",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/strelka/log/strelka.log"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "strelka",
 "pipeline": "strelka.file",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
@@ -4,19 +4,25 @@
 "version": ""
 },
 "name": "suricata-logs",
+"namespace": "so",
 "description": "Suricata integration",
 "policy_id": "so-grid-nodes_general",
-"namespace": "so",
+"policy_ids": [
+"so-grid-nodes_general"
+],
+"vars": {},
 "inputs": {
 "filestream-filestream": {
 "enabled": true,
 "streams": {
-"filestream.generic": {
+"filestream.filestream": {
 "enabled": true,
 "vars": {
 "paths": [
 "/nsm/suricata/eve*.json"
 ],
+"compression_gzip": false,
+"use_logs_stream": false,
 "data_stream.dataset": "suricata",
 "pipeline": "suricata.common",
 "parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
 "harvester_limit": 0,
 "fingerprint": false,
 "fingerprint_offset": 0,
-"fingerprint_length": "64",
 "file_identity_native": true,
 "exclude_lines": [],
-"include_lines": []
+"include_lines": [],
+"delete_enabled": false
 }
 }
 }
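The recurring edit across the four policy diffs above replaces the single `policy_id` field with a `policy_ids` array. A rough jq sketch of that shape change (the sample document is hypothetical; the field names come from the diffs):

```shell
# Derive the newer policy_ids array form from an older single policy_id field.
# Input document is a made-up minimal example, not a real exported policy.
echo '{"name": "suricata-logs", "policy_id": "so-grid-nodes_general"}' |
  jq -c '. + {policy_ids: [.policy_id]}'
```

Note the diffs keep `policy_id` alongside the new array rather than dropping it.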
@@ -0,0 +1,123 @@
+{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
+this file except in compliance with the Elastic License 2.0. #}
+
+
+{% import_json '/opt/so/state/esfleet_input_package_components.json' as ADDON_INPUT_PACKAGE_COMPONENTS %}
+{% import_json '/opt/so/state/esfleet_component_templates.json' as INSTALLED_COMPONENT_TEMPLATES %}
+{% import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
+
+{% set CORE_ESFLEET_PACKAGES = ELASTICFLEETDEFAULTS.get('elasticfleet', {}).get('packages', {}) %}
+{% set ADDON_INPUT_INTEGRATION_DEFAULTS = {} %}
+{% set DEBUG_STUFF = {} %}
+
+{% for pkg in ADDON_INPUT_PACKAGE_COMPONENTS %}
+{% if pkg.name in CORE_ESFLEET_PACKAGES %}
+{# skip core input packages #}
+{% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
+{# generate defaults for each input package #}
+{% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0 %}
+{% for pattern in pkg.dataStreams %}
+{# in ES 9.3.2 'input' type integrations no longer create default component templates and instead they wait for user input during 'integration' setup (fleet ui config)
+title: generic is an artifact of that and is not in use #}
+{% if pattern.title == "generic" %}
+{% continue %}
+{% endif %}
+{% if "metrics-" in pattern.name %}
+{% set integration_type = "metrics-" %}
+{% elif "logs-" in pattern.name %}
+{% set integration_type = "logs-" %}
+{% else %}
+{% set integration_type = "" %}
+{% endif %}
+{# on input integrations the component name is user defined at the time it is added to an agent policy #}
+{% set component_name = pattern.title %}
+{% set index_pattern = pattern.name %}
+{# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
+{% set component_name_x = component_name.replace(".","_x_") %}
+{# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml eg. so-logs-1password_x_item_usages . The _x_ is replaced later on in elasticsearch/template.map.jinja #}
+{% set integration_key = "so-" ~ integration_type ~ pkg.name + '_x_' ~ component_name_x %}
+{# Default integration settings #}
+{% set integration_defaults = {
+  "index_sorting": false,
+  "index_template": {
+    "composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
+    "data_stream": {
+      "allow_custom_routing": false,
+      "hidden": false
+    },
+    "ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
+    "index_patterns": [index_pattern],
+    "priority": 501,
+    "template": {
+      "settings": {
+        "index": {
+          "lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
+          "number_of_replicas": 0
+        }
+      }
+    }
+  },
+  "policy": {
+    "phases": {
+      "cold": {
+        "actions": {
+          "allocate":{
+            "number_of_replicas": ""
+          },
+          "set_priority": {"priority": 0}
+        },
+        "min_age": "60d"
+      },
+      "delete": {
+        "actions": {
+          "delete": {}
+        },
+        "min_age": "365d"
+      },
+      "hot": {
+        "actions": {
+          "rollover": {
+            "max_age": "30d",
+            "max_primary_shard_size": "50gb"
+          },
+          "forcemerge":{
+            "max_num_segments": ""
+          },
+          "shrink":{
+            "max_primary_shard_size": "",
+            "method": "COUNT",
+            "number_of_shards": ""
+          },
+          "set_priority": {"priority": 100}
+        },
+        "min_age": "0ms"
+      },
+      "warm": {
+        "actions": {
+          "allocate": {
+            "number_of_replicas": ""
+          },
+          "forcemerge": {
+            "max_num_segments": ""
+          },
+          "shrink":{
+            "max_primary_shard_size": "",
+            "method": "COUNT",
+            "number_of_shards": ""
+          },
+          "set_priority": {"priority": 50}
+        },
+        "min_age": "30d"
+      }
+    }
+  }
+} %}
+
+
+{% do ADDON_INPUT_INTEGRATION_DEFAULTS.update({integration_key: integration_defaults}) %}
+{% do DEBUG_STUFF.update({integration_key: "Generating defaults for "+ pkg.name })%}
+{% endfor %}
+{% endif %}
+{% endif %}
+{% endfor %}
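The pillar key built by this template concatenates the package name and a `_x_`-escaped stream title. A plain-bash rendition of that naming, using hypothetical names (for input packages, `pattern.title` is user defined; the `_x_` swap is undone later in elasticsearch/template.map.jinja):

```shell
# Rebuild an input-package pillar key the way the template above does.
# All names here are hypothetical illustrations.
pkg_name="http_endpoint"
integration_type="logs-"
component_name="mycustomstream"             # pattern.title, chosen by the user
component_name_x="${component_name//./_x_}" # "." -> "_x_" so SOC pillar merge works
integration_key="so-${integration_type}${pkg_name}_x_${component_name_x}"
echo "$integration_key"
```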
@@ -59,8 +59,8 @@
 {# skip core integrations #}
 {% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
 {# generate defaults for each integration #}
-{% if pkg.es_index_patterns is defined and pkg.es_index_patterns is not none %}
-{% for pattern in pkg.es_index_patterns %}
+{% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0 %}
+{% for pattern in pkg.dataStreams %}
 {% if "metrics-" in pattern.name %}
 {% set integration_type = "metrics-" %}
 {% elif "logs-" in pattern.name %}
@@ -75,44 +75,27 @@
 {% if component_name in WEIRD_INTEGRATIONS %}
 {% set component_name = WEIRD_INTEGRATIONS[component_name] %}
 {% endif %}
-
-{# create duplicate of component_name, so we can split generics from @custom component templates in the index template below and overwrite the default @package when needed
-eg. having to replace unifiedlogs.generic@package with filestream.generic@package, but keep the ability to customize unifiedlogs.generic@custom and its ILM policy #}
-{% set custom_component_name = component_name %}
-
-{# duplicate integration_type to assist with sometimes needing to overwrite component templates with 'logs-filestream.generic@package' (there is no metrics-filestream.generic@package) #}
-{% set generic_integration_type = integration_type %}
-
 {# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
 {% set component_name_x = component_name.replace(".","_x_") %}
 {# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml eg. so-logs-1password_x_item_usages . The _x_ is replaced later on in elasticsearch/template.map.jinja #}
 {% set integration_key = "so-" ~ integration_type ~ component_name_x %}
-
-{# if its a .generic template make sure that a .generic@package for the integration exists. Else default to logs-filestream.generic@package #}
-{% if ".generic" in component_name and integration_type ~ component_name ~ "@package" not in INSTALLED_COMPONENT_TEMPLATES %}
-{# these generic templates by default are directed to index_pattern of 'logs-generic-*', overwrite that here to point to eg gcp_pubsub.generic-* #}
-{% set index_pattern = integration_type ~ component_name ~ "-*" %}
-{# includes use of .generic component template, but it doesn't exist in installed component templates. Redirect it to filestream.generic@package #}
-{% set component_name = "filestream.generic" %}
-{% set generic_integration_type = "logs-" %}
-{% endif %}
-
 {# Default integration settings #}
 {% set integration_defaults = {
 "index_sorting": false,
 "index_template": {
-"composed_of": [generic_integration_type ~ component_name ~ "@package", integration_type ~ custom_component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
+"composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
 "data_stream": {
 "allow_custom_routing": false,
 "hidden": false
 },
-"ignore_missing_component_templates": [integration_type ~ custom_component_name ~ "@custom"],
+"ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
 "index_patterns": [index_pattern],
 "priority": 501,
 "template": {
 "settings": {
 "index": {
-"lifecycle": {"name": "so-" ~ integration_type ~ custom_component_name ~ "-logs"},
+"lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
 "number_of_replicas": 0
 }
 }
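For ordinary (non-input) integrations, the pillar key is just the `_x_`-escaped component name. A bash sketch of that naming, using the example the template's own comment gives (so-logs-1password_x_item_usages):

```shell
# Rebuild an integration pillar key the way the template above does.
# The component name comes from the template's own example comment.
integration_type="logs-"
component_name="1password.item_usages"
component_name_x="${component_name//./_x_}" # "." -> "_x_" so SOC pillar merge works
integration_key="so-${integration_type}${component_name_x}"
echo "$integration_key"
```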
@@ -0,0 +1,112 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls in allowed_states %}
+{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
+
+include:
+  - elasticfleet.config
+
+# If enabled, automatically update Fleet Logstash Outputs
+{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-import', 'so-eval'] %}
+so-elastic-fleet-auto-configure-logstash-outputs:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-outputs-update
+    - retry:
+        attempts: 4
+        interval: 30
+
+{# Separate from above in order to catch elasticfleet-logstash.crt changes and force update to fleet output policy #}
+so-elastic-fleet-auto-configure-logstash-outputs-force:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-outputs-update --certs
+    - retry:
+        attempts: 4
+        interval: 30
+    - onchanges:
+      - x509: etc_elasticfleet_logstash_crt
+      - x509: elasticfleet_kafka_crt
+{% endif %}
+
+# If enabled, automatically update Fleet Server URLs & ES Connection
+so-elastic-fleet-auto-configure-server-urls:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-urls-update
+    - retry:
+        attempts: 4
+        interval: 30
+
+# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
+so-elastic-fleet-auto-configure-elasticsearch-urls:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-es-url-update
+    - retry:
+        attempts: 4
+        interval: 30
+
+so-elastic-fleet-auto-configure-artifact-urls:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-artifacts-url-update
+    - retry:
+        attempts: 4
+        interval: 30
+
+so-elastic-fleet-package-statefile:
+  file.managed:
+    - name: /opt/so/state/elastic_fleet_packages.txt
+    - contents: {{ELASTICFLEETMERGED.packages}}
+
+so-elastic-fleet-package-upgrade:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-package-upgrade
+    - retry:
+        attempts: 3
+        interval: 10
+    - onchanges:
+      - file: /opt/so/state/elastic_fleet_packages.txt
+
+so-elastic-fleet-integrations:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-integration-policy-load
+    - retry:
+        attempts: 3
+        interval: 10
+
+so-elastic-agent-grid-upgrade:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-agent-grid-upgrade
+    - retry:
+        attempts: 12
+        interval: 5
+
+so-elastic-fleet-integration-upgrade:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-integration-upgrade
+    - retry:
+        attempts: 3
+        interval: 10
+
+{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
+so-elastic-fleet-addon-integrations:
+  cmd.run:
+    - name: /usr/sbin/so-elastic-fleet-optional-integrations-load
+
+{% if ELASTICFLEETMERGED.config.defend_filters.enable_auto_configuration %}
+so-elastic-defend-manage-filters-file-watch:
+  cmd.run:
+    - name: python3 /sbin/so-elastic-defend-manage-filters.py -c /opt/so/conf/elasticsearch/curl.config -d /opt/so/conf/elastic-fleet/defend-exclusions/disabled-filters.yaml -i /nsm/securityonion-resources/event_filters/ -i /opt/so/conf/elastic-fleet/defend-exclusions/rulesets/custom-filters/ &>> /opt/so/log/elasticfleet/elastic-defend-manage-filters.log
+    - onchanges:
+      - file: elasticdefendcustom
+      - file: elasticdefenddisabled
+{% endif %}
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
@@ -135,9 +135,33 @@ elastic_fleet_bulk_package_install() {
   fi
 }
 
-elastic_fleet_installed_packages() {
-  if ! fleet_api "epm/packages/installed?perPage=500"; then
+elastic_fleet_get_package_list_by_type() {
+  if ! output=$(fleet_api "epm/packages"); then
     return 1
+  else
+    is_integration=$(jq '[.items[] | select(.type=="integration") | .name ]' <<< "$output")
+    is_input=$(jq '[.items[] | select(.type=="input") | .name ]' <<< "$output")
+    is_content=$(jq '[.items[] | select(.type=="content") | .name ]' <<< "$output")
+    jq -n --argjson is_integration "${is_integration:-[]}" \
+          --argjson is_input "${is_input:-[]}" \
+          --argjson is_content "${is_content:-[]}" \
+          '{"integration": $is_integration,"input": $is_input, "content": $is_content}'
+  fi
+}
+
+elastic_fleet_installed_packages_components() {
+  package_type=${1,,}
+  if [[ "$package_type" != "integration" && "$package_type" != "input" && "$package_type" != "content" ]]; then
+    echo "Error: Invalid package type ${package_type}. Valid types are 'integration', 'input', or 'content'."
+    return 1
+  fi
+
+  packages_by_type=$(elastic_fleet_get_package_list_by_type)
+  packages=$(jq --arg package_type "$package_type" '.[$package_type]' <<< "$packages_by_type")
+
+  if ! output=$(fleet_api "epm/packages/installed?perPage=500"); then
+    return 1
+  else
+    jq -c --argjson packages "$packages" '[.items[] | select(.name | IN($packages[])) | {name: .name, dataStreams: .dataStreams}]' <<< "$output"
+  fi
 }
 
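The jq grouping in `elastic_fleet_get_package_list_by_type` can be exercised standalone with an inline sample in place of the Fleet API response (the package names below are illustrative, not a real API reply):

```shell
# Demo of the jq select/group pattern used above, on a hypothetical sample.
output='{"items":[{"name":"suricata","type":"integration"},{"name":"http_endpoint","type":"input"},{"name":"detection_rules","type":"content"}]}'
is_integration=$(jq '[.items[] | select(.type=="integration") | .name ]' <<< "$output")
is_input=$(jq '[.items[] | select(.type=="input") | .name ]' <<< "$output")
is_content=$(jq '[.items[] | select(.type=="content") | .name ]' <<< "$output")
# --argjson parses each list as JSON, so the result holds real arrays.
jq -n -c --argjson is_integration "${is_integration:-[]}" \
         --argjson is_input "${is_input:-[]}" \
         --argjson is_content "${is_content:-[]}" \
         '{"integration": $is_integration,"input": $is_input, "content": $is_content}'
```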
@@ -5,7 +5,13 @@
 # this file except in compliance with the Elastic License 2.0.
 
 . /usr/sbin/so-common
+. /usr/sbin/so-elastic-fleet-common
 {%- import_yaml 'elasticsearch/defaults.yaml' as ELASTICSEARCHDEFAULTS %}
+{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
+{# Optionally override Elasticsearch version for Elastic Agent patch releases #}
+{%- if ELASTICFLEETDEFAULTS.elasticfleet.patch_version is defined %}
+{%- do ELASTICSEARCHDEFAULTS.elasticsearch.update({'version': ELASTICFLEETDEFAULTS.elasticfleet.patch_version}) %}
+{%- endif %}
 
 # Only run on Managers
 if ! is_manager_node; then
@@ -14,11 +20,8 @@ if ! is_manager_node; then
 fi
 
 # Get current list of Grid Node Agents that need to be upgraded
-RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%3A%20{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%20AND%20policy_id%3A%20so-grid-nodes_%2A&showInactive=false&getStatusSummary=true" --retry 3 --retry-delay 30 --fail 2>/dev/null)
-
-# Check to make sure that the server responded with good data - else, bail from script
-CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON")
-if [ "$CHECKSUM" -ne 1 ]; then
+if ! RAW_JSON=$(fleet_api "agents?perPage=20&page=1&kuery=NOT%20agent.version%3A%20{{ELASTICSEARCHDEFAULTS.elasticsearch.version | urlencode }}%20AND%20policy_id%3A%20so-grid-nodes_%2A&showInactive=false&getStatusSummary=true" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
   printf "Failed to query for current Grid Agents...\n"
   exit 1
 fi
@@ -31,10 +34,12 @@ if [ "$OUTDATED_LIST" != '[]' ]; then
   printf "Initiating upgrades for $AGENTNUMBERS Agents to Elastic {{ELASTICSEARCHDEFAULTS.elasticsearch.version}}...\n\n"
 
   # Generate updated JSON payload
-  JSON_STRING=$(jq -n --arg ELASTICVERSION {{ELASTICSEARCHDEFAULTS.elasticsearch.version}} --arg UPDATELIST $OUTDATED_LIST '{"version": $ELASTICVERSION,"agents": $UPDATELIST }')
+  JSON_STRING=$(jq -n --arg ELASTICVERSION "{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}" --argjson UPDATELIST "$OUTDATED_LIST" '{"version": $ELASTICVERSION,"agents": $UPDATELIST }')
 
   # Update Node Agents
-  curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "http://localhost:5601/api/fleet/agents/bulk_upgrade" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+  if ! fleet_api "agents/bulk_upgrade" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
+    printf "Failed to initiate Agent upgrades...\n"
+  fi
 else
   printf "No Agents need updates... Exiting\n\n"
   exit 0
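The payload fix above swaps `--arg` for `--argjson` on the agent list: `--arg` would embed the list as one string, while `--argjson` parses it so the payload carries a real JSON array. A minimal demo with hypothetical agent IDs and version:

```shell
# --argjson parses OUTDATED_LIST as JSON; --arg would have produced
# "agents": "[\"agent-1\",\"agent-2\"]" (a string) instead of an array.
OUTDATED_LIST='["agent-1","agent-2"]'   # hypothetical agent IDs
payload=$(jq -n -c --arg ELASTICVERSION "8.14.3" --argjson UPDATELIST "$OUTDATED_LIST" \
  '{"version": $ELASTICVERSION, "agents": $UPDATELIST}')
echo "$payload"
```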
@@ -18,7 +18,9 @@ INSTALLED_PACKAGE_LIST=/tmp/esfleet_installed_packages.json
 BULK_INSTALL_PACKAGE_LIST=/tmp/esfleet_bulk_install.json
 BULK_INSTALL_PACKAGE_TMP=/tmp/esfleet_bulk_install_tmp.json
 BULK_INSTALL_OUTPUT=/opt/so/state/esfleet_bulk_install_results.json
-PACKAGE_COMPONENTS=/opt/so/state/esfleet_package_components.json
+INTEGRATION_PACKAGE_COMPONENTS=/opt/so/state/esfleet_package_components.json
+INPUT_PACKAGE_COMPONENTS=/opt/so/state/esfleet_input_package_components.json
+CONTENT_PACKAGE_COMPONENTS=/opt/so/state/esfleet_content_package_components.json
 COMPONENT_TEMPLATES=/opt/so/state/esfleet_component_templates.json
 
 PENDING_UPDATE=false
@@ -179,10 +181,13 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
 else
   echo "Elastic integrations don't appear to need installation/updating..."
 fi
-# Write out file for generating index/component/ilm templates
-if latest_installed_package_list=$(elastic_fleet_installed_packages); then
-  echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
-fi
+# Write out file for generating index/component/ilm templates, keeping each package type separate
+for package_type in "INTEGRATION" "INPUT" "CONTENT"; do
+  if latest_installed_package_list=$(elastic_fleet_installed_packages_components "$package_type"); then
+    outfile="${package_type}_PACKAGE_COMPONENTS"
+    echo $latest_installed_package_list > "${!outfile}"
+  fi
+done
 if retry 3 1 "so-elasticsearch-query / --fail --output /dev/null"; then
   # Refresh installed component template list
   latest_component_templates_list=$(so-elasticsearch-query _component_template | jq '.component_templates[] | .name' | jq -s '.')
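The new loop resolves `"<TYPE>_PACKAGE_COMPONENTS"` to the matching path variable via `${!outfile}`, bash's indirect expansion. A throwaway demo of that mechanism (paths here are made up):

```shell
# ${!var} expands the variable whose name is stored in var.
INTEGRATION_PACKAGE_COMPONENTS=/tmp/demo_integration.json  # hypothetical path
INPUT_PACKAGE_COMPONENTS=/tmp/demo_input.json              # hypothetical path
for package_type in "INTEGRATION" "INPUT"; do
  outfile="${package_type}_PACKAGE_COMPONENTS"
  echo "writing to ${!outfile}"
done
```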
@@ -0,0 +1,164 @@
|
|||||||
|
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||||
|
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||||
|
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||||
|
# Elastic License 2.0.
|
||||||
|
|
||||||
|
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||||
|
{% if sls in allowed_states %}
|
||||||
|
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||||
|
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
|
||||||
|
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS, SO_MANAGED_INDICES %}
|
||||||
|
{% if GLOBALS.role != 'so-heavynode' %}
|
||||||
|
{% from 'elasticsearch/template.map.jinja' import ALL_ADDON_SETTINGS %}
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
|
escomponenttemplates:
|
||||||
|
file.recurse:
|
||||||
|
- name: /opt/so/conf/elasticsearch/templates/component
|
||||||
|
- source: salt://elasticsearch/templates/component
|
||||||
|
- user: 930
|
||||||
|
- group: 939
|
||||||
|
- clean: True
|
||||||
|
- onchanges_in:
|
||||||
|
- file: so-elasticsearch-templates-reload
|
||||||
|
- show_changes: False
|
||||||
|
|
||||||
|
# Clean up legacy and non-SO managed templates from the elasticsearch/templates/index/ directory
|
||||||
|
so_index_template_dir:
|
||||||
|
file.directory:
|
||||||
|
- name: /opt/so/conf/elasticsearch/templates/index
|
||||||
|
- clean: True
|
||||||
|
{%- if SO_MANAGED_INDICES %}
|
||||||
|
- require:
|
||||||
|
{%- for index in SO_MANAGED_INDICES %}
|
||||||
|
- file: so_index_template_{{index}}
|
||||||
|
{%- endfor %}
|
||||||
|
{%- endif %}
|
||||||
|
|
||||||
|
# Auto-generate index templates for SO managed indices (directly defined in elasticsearch/defaults.yaml)
|
||||||
|
# These index templates are for the core SO datasets and are always required
|
||||||
|
{% for index, settings in ES_INDEX_SETTINGS.items() %}
|
||||||
|
{% if settings.index_template is defined %}
|
||||||
|
so_index_template_{{index}}:
|
||||||
|
file.managed:
|
||||||
|
- name: /opt/so/conf/elasticsearch/templates/index/{{ index }}-template.json
|
||||||
|
- source: salt://elasticsearch/base-template.json.jinja
|
||||||
|
- defaults:
|
||||||
|
TEMPLATE_CONFIG: {{ settings.index_template }}
|
||||||
|
- template: jinja
|
||||||
|
- onchanges_in:
|
||||||
|
- file: so-elasticsearch-templates-reload
|
||||||
|
{% endif %}
|
||||||
|
{% endfor %}
|
||||||
|
|
||||||
|
{% if GLOBALS.role != "so-heavynode" %}
|
||||||
|
# Auto-generate optional index templates for integration | input | content packages
|
||||||
|
# These index templates are not used by default (until user adds package to an agent policy).
|
||||||
|
# Pre-configured with standard defaults, and incorporated into SOC configuration for user customization.
|
||||||
|
{% for index,settings in ALL_ADDON_SETTINGS.items() %}
|
||||||
|
{% if settings.index_template is defined %}
|
||||||
|
addon_index_template_{{index}}:
|
||||||
|
file.managed:
|
||||||
|
- name: /opt/so/conf/elasticsearch/templates/addon-index/{{ index }}-template.json
|
||||||
|
- source: salt://elasticsearch/base-template.json.jinja
|
||||||
|
- defaults:
|
||||||
|
TEMPLATE_CONFIG: {{ settings.index_template }}
|
||||||
|
- template: jinja
|
||||||
|
- show_changes: False
|
||||||
|
- onchanges_in:
|
||||||
|
- file: addon-elasticsearch-templates-reload
|
||||||
|
{% endif %}
|
||||||
|
{% endfor %}
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
|
{% if GLOBALS.role in GLOBALS.manager_roles %}
|
||||||
|
so-es-cluster-settings:
|
||||||
|
cmd.run:
|
||||||
|
- name: /usr/sbin/so-elasticsearch-cluster-settings
|
||||||
|
- cwd: /opt/so
|
||||||
|
- template: jinja
|
||||||
|
- require:
|
||||||
|
- docker_container: so-elasticsearch
|
||||||
|
- file: elasticsearch_sbin_jinja
|
||||||
|
- http: wait_for_so-elasticsearch
|
||||||
|
{% endif %}
|
||||||
|
|
||||||
|
# heavynodes will only load ILM policies for SO managed indices. (Indicies defined in elasticsearch/defaults.yaml)
|
||||||
|
so-elasticsearch-ilm-policy-load:
  cmd.run:
    - name: /usr/sbin/so-elasticsearch-ilm-policy-load
    - cwd: /opt/so
    - require:
      - docker_container: so-elasticsearch
      - file: so-elasticsearch-ilm-policy-load-script
    - onchanges:
      - file: so-elasticsearch-ilm-policy-load-script

so-elasticsearch-templates-reload:
  file.absent:
    - name: /opt/so/state/estemplates.txt

addon-elasticsearch-templates-reload:
  file.absent:
    - name: /opt/so/state/addon_estemplates.txt

# so-elasticsearch-templates-load will have its first successful run during the 'so-elastic-fleet-setup' script
so-elasticsearch-templates:
  cmd.run:
{%- if GLOBALS.role == "so-heavynode" %}
    - name: /usr/sbin/so-elasticsearch-templates-load --heavynode
{%- else %}
    - name: /usr/sbin/so-elasticsearch-templates-load
{%- endif %}
    - cwd: /opt/so
    - template: jinja
    - require:
      - docker_container: so-elasticsearch
      - file: elasticsearch_sbin_jinja

so-elasticsearch-pipelines:
  cmd.run:
    - name: /usr/sbin/so-elasticsearch-pipelines {{ GLOBALS.hostname }}
    - require:
      - docker_container: so-elasticsearch
      - file: so-elasticsearch-pipelines-script

so-elasticsearch-roles-load:
  cmd.run:
    - name: /usr/sbin/so-elasticsearch-roles-load
    - cwd: /opt/so
    - template: jinja
    - require:
      - docker_container: so-elasticsearch
      - file: elasticsearch_sbin_jinja

{% if grains.role in ['so-managersearch', 'so-manager', 'so-managerhype'] %}
{% set ap = "absent" %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-heavynode'] %}
{% if ELASTICSEARCHMERGED.index_clean %}
{% set ap = "present" %}
{% else %}
{% set ap = "absent" %}
{% endif %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
so-elasticsearch-indices-delete:
  cron.{{ap}}:
    - name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
    - identifier: so-elasticsearch-indices-delete
    - user: root
    - minute: '*/5'
    - hour: '*'
    - daymonth: '*'
    - month: '*'
    - dayweek: '*'
{% endif %}

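The role-driven `cron.{{ap}}` toggle above can be sketched in plain Python (a behavioral sketch only; the function name is illustrative, while the role names and the `index_clean` flag come from the Jinja logic above):

```python
def cron_action(role: str, index_clean: bool) -> str:
    """Mirror the Jinja logic that decides whether the
    so-elasticsearch-indices-delete cron job is present or absent."""
    # Manager roles that do not clean indices locally never run the cron.
    if role in ('so-managersearch', 'so-manager', 'so-managerhype'):
        return 'absent'
    # Nodes that manage their own indices honor the index_clean setting.
    if role in ('so-eval', 'so-standalone', 'so-heavynode'):
        return 'present' if index_clean else 'absent'
    raise ValueError(f'no cron state defined for role {role}')

print(cron_action('so-standalone', True))   # -> present
print(cron_action('so-manager', True))      # -> absent
```

Note that `so-managersearch` and `so-manager` appear in both the first `{% if %}` and the final role list, so they still render the cron state, just always as `cron.absent`.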
{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}

@@ -66,6 +66,8 @@ so-elasticsearch-ilm-policy-load-script:
     - group: 939
     - mode: 754
     - template: jinja
+    - defaults:
+        GLOBALS: {{ GLOBALS }}
     - show_changes: False
 
 so-elasticsearch-pipelines-script:
@@ -91,6 +93,13 @@ estemplatedir:
     - group: 939
     - makedirs: True
 
+esaddontemplatedir:
+  file.directory:
+    - name: /opt/so/conf/elasticsearch/templates/addon-index
+    - user: 930
+    - group: 939
+    - makedirs: True
+
 esrolesdir:
   file.directory:
     - name: /opt/so/conf/elasticsearch/roles

@@ -1,6 +1,6 @@
 elasticsearch:
   enabled: false
-  version: 9.0.8
+  version: 9.3.3
   index_clean: true
   vm:
     max_map_count: 1048576

+16 -125
@@ -10,8 +10,6 @@
 {% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_NODES %}
 {% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_SEED_HOSTS %}
 {% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
-{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
-{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
 
 include:
   - ca
@@ -19,6 +17,9 @@ include:
   - elasticsearch.ssl
   - elasticsearch.config
   - elasticsearch.sostatus
+{%- if GLOBALS.role != "so-searchnode" %}
+  - elasticsearch.cluster
+{%- endif%}
 
 so-elasticsearch:
   docker_container.running:
@@ -101,134 +102,24 @@ so-elasticsearch:
     - cmd: auth_users_roles_inode
     - cmd: auth_users_inode
 
+wait_for_so-elasticsearch:
+  http.wait_for_successful_query:
+    - name: "https://localhost:9200/"
+    - username: 'so_elastic'
+    - password: '{{ ELASTICSEARCHMERGED.auth.users.so_elastic_user.pass }}'
+    - ssl: True
+    - verify_ssl: False
+    - status: 200
+    - wait_for: 300
+    - request_interval: 15
+    - require:
+      - docker_container: so-elasticsearch
+
 delete_so-elasticsearch_so-status.disabled:
   file.uncomment:
     - name: /opt/so/conf/so-status/so-status.conf
     - regex: ^so-elasticsearch$
 
-{% if GLOBALS.role != "so-searchnode" %}
-escomponenttemplates:
-  file.recurse:
-    - name: /opt/so/conf/elasticsearch/templates/component
-    - source: salt://elasticsearch/templates/component
-    - user: 930
-    - group: 939
-    - clean: True
-    - onchanges_in:
-      - file: so-elasticsearch-templates-reload
-    - show_changes: False
-
-# Auto-generate templates from defaults file
-{% for index, settings in ES_INDEX_SETTINGS.items() %}
-{% if settings.index_template is defined %}
-es_index_template_{{index}}:
-  file.managed:
-    - name: /opt/so/conf/elasticsearch/templates/index/{{ index }}-template.json
-    - source: salt://elasticsearch/base-template.json.jinja
-    - defaults:
-        TEMPLATE_CONFIG: {{ settings.index_template }}
-    - template: jinja
-    - show_changes: False
-    - onchanges_in:
-      - file: so-elasticsearch-templates-reload
-{% endif %}
-{% endfor %}
-
-{% if TEMPLATES %}
-# Sync custom templates to /opt/so/conf/elasticsearch/templates
-{% for TEMPLATE in TEMPLATES %}
-es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
-  file.managed:
-    - source: salt://elasticsearch/templates/index/{{TEMPLATE}}
-{% if 'jinja' in TEMPLATE.split('.')[-1] %}
-    - name: /opt/so/conf/elasticsearch/templates/index/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}
-    - template: jinja
-{% else %}
-    - name: /opt/so/conf/elasticsearch/templates/index/{{TEMPLATE.split('/')[1]}}
-{% endif %}
-    - user: 930
-    - group: 939
-    - show_changes: False
-    - onchanges_in:
-      - file: so-elasticsearch-templates-reload
-{% endfor %}
-{% endif %}
-
-{% if GLOBALS.role in GLOBALS.manager_roles %}
-so-es-cluster-settings:
-  cmd.run:
-    - name: /usr/sbin/so-elasticsearch-cluster-settings
-    - cwd: /opt/so
-    - template: jinja
-    - require:
-      - docker_container: so-elasticsearch
-      - file: elasticsearch_sbin_jinja
-{% endif %}
-
-so-elasticsearch-ilm-policy-load:
-  cmd.run:
-    - name: /usr/sbin/so-elasticsearch-ilm-policy-load
-    - cwd: /opt/so
-    - require:
-      - docker_container: so-elasticsearch
-      - file: so-elasticsearch-ilm-policy-load-script
-    - onchanges:
-      - file: so-elasticsearch-ilm-policy-load-script
-
-so-elasticsearch-templates-reload:
-  file.absent:
-    - name: /opt/so/state/estemplates.txt
-
-so-elasticsearch-templates:
-  cmd.run:
-    - name: /usr/sbin/so-elasticsearch-templates-load
-    - cwd: /opt/so
-    - template: jinja
-    - require:
-      - docker_container: so-elasticsearch
-      - file: elasticsearch_sbin_jinja
-
-so-elasticsearch-pipelines:
-  cmd.run:
-    - name: /usr/sbin/so-elasticsearch-pipelines {{ GLOBALS.hostname }}
-    - require:
-      - docker_container: so-elasticsearch
-      - file: so-elasticsearch-pipelines-script
-
-so-elasticsearch-roles-load:
-  cmd.run:
-    - name: /usr/sbin/so-elasticsearch-roles-load
-    - cwd: /opt/so
-    - template: jinja
-    - require:
-      - docker_container: so-elasticsearch
-      - file: elasticsearch_sbin_jinja
-
-{% if grains.role in ['so-managersearch', 'so-manager', 'so-managerhype'] %}
-{% set ap = "absent" %}
-{% endif %}
-{% if grains.role in ['so-eval', 'so-standalone', 'so-heavynode'] %}
-{% if ELASTICSEARCHMERGED.index_clean %}
-{% set ap = "present" %}
-{% else %}
-{% set ap = "absent" %}
-{% endif %}
-{% endif %}
-{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
-so-elasticsearch-indices-delete:
-  cron.{{ap}}:
-    - name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
-    - identifier: so-elasticsearch-indices-delete
-    - user: root
-    - minute: '*/5'
-    - hour: '*'
-    - daymonth: '*'
-    - month: '*'
-    - dayweek: '*'
-{% endif %}
-
-{% endif %}
-
 {% else %}
 
 {{sls}}_state_not_allowed:

+74 -13
@@ -10,24 +10,28 @@
   "processors": [
     {
       "set": {
+        "tag": "set_ecs_version_f5923549",
         "field": "ecs.version",
         "value": "8.17.0"
       }
     },
     {
       "set": {
+        "tag": "set_observer_vendor_ad9d35cc",
         "field": "observer.vendor",
         "value": "netgate"
       }
     },
     {
       "set": {
+        "tag": "set_observer_type_5dddf3ba",
         "field": "observer.type",
         "value": "firewall"
       }
     },
     {
       "rename": {
+        "tag": "rename_message_to_event_original_56a77271",
         "field": "message",
         "target_field": "event.original",
         "ignore_missing": true,
@@ -36,12 +40,14 @@
     },
     {
       "set": {
+        "tag": "set_event_kind_de80643c",
         "field": "event.kind",
         "value": "event"
       }
     },
     {
       "set": {
+        "tag": "set_event_timezone_4ca44cac",
         "field": "event.timezone",
         "value": "{{{_tmp.tz_offset}}}",
         "if": "ctx._tmp?.tz_offset != null && ctx._tmp?.tz_offset != 'local'"
@@ -49,6 +55,7 @@
     },
     {
       "grok": {
+        "tag": "grok_event_original_27d9c8c7",
         "description": "Parse syslog header",
         "field": "event.original",
         "patterns": [
@@ -72,6 +79,7 @@
     },
     {
       "date": {
+        "tag": "date__tmp_timestamp8601_to_timestamp_6ac9d3ce",
         "if": "ctx._tmp.timestamp8601 != null",
         "field": "_tmp.timestamp8601",
         "target_field": "@timestamp",
@@ -82,6 +90,7 @@
     },
     {
       "date": {
+        "tag": "date__tmp_timestamp_to_timestamp_f21e536e",
         "if": "ctx.event?.timezone != null && ctx._tmp?.timestamp != null",
         "field": "_tmp.timestamp",
         "target_field": "@timestamp",
@@ -95,6 +104,7 @@
     },
     {
       "grok": {
+        "tag": "grok_process_name_cef3d489",
         "description": "Set Event Provider",
         "field": "process.name",
         "patterns": [
@@ -107,71 +117,83 @@
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-firewall",
+        "tag": "pipeline_e16851a7",
+        "name": "logs-pfsense.log-1.25.2-firewall",
         "if": "ctx.event.provider == 'filterlog'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-openvpn",
+        "tag": "pipeline_828590b5",
+        "name": "logs-pfsense.log-1.25.2-openvpn",
         "if": "ctx.event.provider == 'openvpn'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-ipsec",
+        "tag": "pipeline_9d37039c",
+        "name": "logs-pfsense.log-1.25.2-ipsec",
         "if": "ctx.event.provider == 'charon'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-dhcp",
-        "if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\"].contains(ctx.event.provider)"
+        "tag": "pipeline_ad56bbca",
+        "name": "logs-pfsense.log-1.25.2-dhcp",
+        "if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\", \"dnsmasq-dhcp\"].contains(ctx.event.provider)"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-unbound",
+        "tag": "pipeline_dd85553d",
+        "name": "logs-pfsense.log-1.25.2-unbound",
         "if": "ctx.event.provider == 'unbound'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-haproxy",
+        "tag": "pipeline_720ed255",
+        "name": "logs-pfsense.log-1.25.2-haproxy",
         "if": "ctx.event.provider == 'haproxy'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-php-fpm",
+        "tag": "pipeline_456beba5",
+        "name": "logs-pfsense.log-1.25.2-php-fpm",
         "if": "ctx.event.provider == 'php-fpm'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-squid",
+        "tag": "pipeline_a0d89375",
+        "name": "logs-pfsense.log-1.25.2-squid",
         "if": "ctx.event.provider == 'squid'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-snort",
+        "tag": "pipeline_c2f1ed55",
+        "name": "logs-pfsense.log-1.25.2-snort",
         "if": "ctx.event.provider == 'snort'"
       }
     },
     {
       "pipeline": {
-        "name": "logs-pfsense.log-1.23.1-suricata",
+        "tag": "pipeline_33db1c9e",
+        "name": "logs-pfsense.log-1.25.2-suricata",
         "if": "ctx.event.provider == 'suricata'"
       }
     },
     {
       "drop": {
-        "if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"suricata\"].contains(ctx.event?.provider)"
+        "tag": "drop_9d7c46f8",
+        "if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dnsmasq-dhcp\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"suricata\"].contains(ctx.event?.provider)"
       }
     },
     {
       "append": {
+        "tag": "append_event_category_4780a983",
         "field": "event.category",
         "value": "network",
         "if": "ctx.network != null"
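The provider-based routing and fall-through drop in the hunk above can be sketched as a table lookup (a sketch only; `route` is an illustrative helper, the suffixes abbreviate the `logs-pfsense.log-1.25.2-*` sub-pipeline names):

```python
# Providers with a dedicated sub-pipeline; anything else is dropped.
ROUTES = {
    'filterlog': 'firewall', 'openvpn': 'openvpn', 'charon': 'ipsec',
    'dhcpd': 'dhcp', 'dhclient': 'dhcp', 'dhcp6c': 'dhcp', 'dnsmasq-dhcp': 'dhcp',
    'unbound': 'unbound', 'haproxy': 'haproxy', 'php-fpm': 'php-fpm',
    'squid': 'squid', 'snort': 'snort', 'suricata': 'suricata',
}

def route(provider):
    """Return the sub-pipeline suffix for event.provider, or None to drop."""
    # Mirrors the "drop" processor: unknown providers get no pipeline.
    return ROUTES.get(provider)

print(route('charon'))  # -> ipsec
print(route('sshd'))    # -> None (event dropped)
```

The 1.25.x change visible in the diff is the addition of `dnsmasq-dhcp` to both the dhcp route and the drop exclusion list.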
@@ -179,6 +201,7 @@
     },
     {
       "convert": {
+        "tag": "convert_source_address_to_source_ip_f5632a20",
         "field": "source.address",
         "target_field": "source.ip",
         "type": "ip",
@@ -188,6 +211,7 @@
     },
     {
       "convert": {
+        "tag": "convert_destination_address_to_destination_ip_f1388f0c",
         "field": "destination.address",
         "target_field": "destination.ip",
         "type": "ip",
@@ -197,6 +221,7 @@
     },
     {
       "set": {
+        "tag": "set_network_type_1f1d940a",
         "field": "network.type",
         "value": "ipv6",
         "if": "ctx.source?.ip != null && ctx.source.ip.contains(\":\")"
@@ -204,6 +229,7 @@
     },
     {
       "set": {
+        "tag": "set_network_type_69deca38",
         "field": "network.type",
         "value": "ipv4",
         "if": "ctx.source?.ip != null && ctx.source.ip.contains(\".\")"
@@ -211,6 +237,7 @@
     },
     {
       "geoip": {
+        "tag": "geoip_source_ip_to_source_geo_da2e41b2",
         "field": "source.ip",
         "target_field": "source.geo",
         "ignore_missing": true
@@ -218,6 +245,7 @@
     },
     {
       "geoip": {
+        "tag": "geoip_destination_ip_to_destination_geo_ab5e2968",
         "field": "destination.ip",
         "target_field": "destination.geo",
         "ignore_missing": true
@@ -225,6 +253,7 @@
     },
     {
       "geoip": {
+        "tag": "geoip_source_ip_to_source_as_28d69883",
         "ignore_missing": true,
         "database_file": "GeoLite2-ASN.mmdb",
         "field": "source.ip",
@@ -237,6 +266,7 @@
     },
     {
       "geoip": {
+        "tag": "geoip_destination_ip_to_destination_as_8a007787",
         "database_file": "GeoLite2-ASN.mmdb",
         "field": "destination.ip",
         "target_field": "destination.as",
@@ -249,6 +279,7 @@
     },
     {
       "rename": {
+        "tag": "rename_source_as_asn_to_source_as_number_a917047d",
         "field": "source.as.asn",
         "target_field": "source.as.number",
         "ignore_missing": true
@@ -256,6 +287,7 @@
     },
     {
       "rename": {
+        "tag": "rename_source_as_organization_name_to_source_as_organization_name_f1362d0b",
         "field": "source.as.organization_name",
         "target_field": "source.as.organization.name",
         "ignore_missing": true
@@ -263,6 +295,7 @@
     },
     {
       "rename": {
+        "tag": "rename_destination_as_asn_to_destination_as_number_3b459fcd",
         "field": "destination.as.asn",
         "target_field": "destination.as.number",
         "ignore_missing": true
@@ -270,6 +303,7 @@
     },
     {
       "rename": {
+        "tag": "rename_destination_as_organization_name_to_destination_as_organization_name_814bd459",
         "field": "destination.as.organization_name",
         "target_field": "destination.as.organization.name",
         "ignore_missing": true
@@ -277,12 +311,14 @@
     },
     {
       "community_id": {
+        "tag": "community_id_d2308e7a",
         "target_field": "network.community_id",
         "ignore_failure": true
       }
     },
     {
       "grok": {
+        "tag": "grok_observer_ingress_interface_name_968018d3",
         "field": "observer.ingress.interface.name",
         "patterns": [
           "%{DATA}.%{NONNEGINT:observer.ingress.vlan.id}"
@@ -293,6 +329,7 @@
     },
     {
       "set": {
+        "tag": "set_network_vlan_id_efd4d96a",
         "field": "network.vlan.id",
         "copy_from": "observer.ingress.vlan.id",
         "ignore_empty_value": true
@@ -300,6 +337,7 @@
     },
     {
       "append": {
+        "tag": "append_related_ip_c1a6356b",
         "field": "related.ip",
         "value": "{{{destination.ip}}}",
         "allow_duplicates": false,
@@ -308,6 +346,7 @@
     },
     {
       "append": {
+        "tag": "append_related_ip_8121c591",
         "field": "related.ip",
         "value": "{{{source.ip}}}",
         "allow_duplicates": false,
@@ -316,6 +355,7 @@
     },
     {
       "append": {
+        "tag": "append_related_ip_53b62ed8",
         "field": "related.ip",
         "value": "{{{source.nat.ip}}}",
         "allow_duplicates": false,
@@ -324,6 +364,7 @@
     },
     {
       "append": {
+        "tag": "append_related_hosts_6f162628",
         "field": "related.hosts",
         "value": "{{{destination.domain}}}",
         "if": "ctx.destination?.domain != null"
@@ -331,6 +372,7 @@
     },
     {
       "append": {
+        "tag": "append_related_user_c036eec2",
         "field": "related.user",
         "value": "{{{user.name}}}",
         "if": "ctx.user?.name != null"
@@ -338,6 +380,7 @@
     },
     {
       "set": {
+        "tag": "set_network_direction_cb1e3125",
         "field": "network.direction",
         "value": "{{{network.direction}}}bound",
         "if": "ctx.network?.direction != null && ctx.network?.direction =~ /^(in|out)$/"
@@ -345,6 +388,7 @@
     },
     {
       "remove": {
+        "tag": "remove_a82e20f2",
         "field": [
           "_tmp"
         ],
@@ -353,11 +397,21 @@
     },
     {
       "script": {
+        "tag": "script_a7f2c062",
         "lang": "painless",
         "description": "This script processor iterates over the whole document to remove fields with null values.",
         "source": "void handleMap(Map map) {\n for (def x : map.values()) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n map.values().removeIf(v -> v == null || (v instanceof String && v == \"-\"));\n}\nvoid handleList(List list) {\n for (def x : list) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n}\nhandleMap(ctx);\n"
       }
     },
+    {
+      "append": {
+        "tag": "append_preserve_original_event_on_error",
+        "field": "tags",
+        "value": "preserve_original_event",
+        "allow_duplicates": false,
+        "if": "ctx.error?.message != null"
+      }
+    },
     {
       "pipeline": {
         "name": "global@custom",
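The Painless null-removal script above can be rendered in Python for clarity (a behavioral sketch only; `strip_nulls` is an illustrative name, mirroring `handleMap`/`handleList`):

```python
def strip_nulls(obj):
    """Recursively drop map entries whose value is null or the
    literal string "-", descending into nested maps and lists."""
    if isinstance(obj, dict):
        for v in obj.values():
            strip_nulls(v)          # recurse first, like handleMap
        for k in [k for k, v in obj.items() if v is None or v == '-']:
            del obj[k]              # then removeIf on this level
    elif isinstance(obj, list):
        for v in obj:
            strip_nulls(v)          # handleList: lists are traversed only
    return obj

doc = {'a': None, 'b': '-', 'c': {'d': None, 'e': 1}, 'f': [{'g': '-'}]}
print(strip_nulls(doc))  # -> {'c': {'e': 1}, 'f': [{}]}
```

As in the Painless version, list elements themselves are never removed, only offending keys inside maps.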
@@ -405,7 +459,14 @@
     {
       "append": {
         "field": "error.message",
-        "value": "{{{ _ingest.on_failure_message }}}"
+        "value": "Processor '{{{ _ingest.on_failure_processor_type }}}' {{#_ingest.on_failure_processor_tag}}with tag '{{{ _ingest.on_failure_processor_tag }}}' {{/_ingest.on_failure_processor_tag}}in pipeline '{{{ _ingest.pipeline }}}' failed with message '{{{ _ingest.on_failure_message }}}'"
+      }
+    },
+    {
+      "append": {
+        "field": "tags",
+        "value": "preserve_original_event",
+        "allow_duplicates": false
       }
     }
   ]
@@ -45,3 +45,7 @@ appender.rolling_json.strategy.action.condition.nested_condition.age = 1D
 rootLogger.level = info
 rootLogger.appenderRef.rolling.ref = rolling
 rootLogger.appenderRef.rolling_json.ref = rolling_json
+
+# Suppress NotEntitledException WARNs (ES 9.3.3 bug)
+logger.entitlement_security.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.x-pack-security.org.elasticsearch.security.org.elasticsearch.xpack.security
+logger.entitlement_security.level = error
@@ -14,16 +14,43 @@
 
 {% set ES_INDEX_SETTINGS_ORIG = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings %}
+
+{% set ALL_ADDON_INTEGRATION_DEFAULTS = {} %}
+{% set ALL_ADDON_SETTINGS_ORIG = {} %}
+{% set ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES = {} %}
+{% set ALL_ADDON_SETTINGS = {} %}
 {# start generation of integration default index_settings #}
-{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') and salt['file.file_exists']('/opt/so/state/esfleet_component_templates.json') %}
-{% set check_package_components = salt['file.stats']('/opt/so/state/esfleet_package_components.json') %}
-{% if check_package_components.size > 1 %}
+{% if salt['file.file_exists']('/opt/so/state/esfleet_component_templates.json') %}
+{# import integration type defaults #}
+{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') %}
+{% set check_integration_package_components = salt['file.stats']('/opt/so/state/esfleet_package_components.json') %}
+{% if check_integration_package_components.size > 1 %}
 {% from 'elasticfleet/integration-defaults.map.jinja' import ADDON_INTEGRATION_DEFAULTS %}
-{% for index, settings in ADDON_INTEGRATION_DEFAULTS.items() %}
-{% do ES_INDEX_SETTINGS_ORIG.update({index: settings}) %}
-{% endfor %}
+{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_INTEGRATION_DEFAULTS) %}
 {% endif %}
 {% endif %}
+
+{# import input type defaults #}
+{% if salt['file.file_exists']('/opt/so/state/esfleet_input_package_components.json') %}
+{% set check_input_package_components = salt['file.stats']('/opt/so/state/esfleet_input_package_components.json') %}
+{% if check_input_package_components.size > 1 %}
+{% from 'elasticfleet/input-defaults.map.jinja' import ADDON_INPUT_INTEGRATION_DEFAULTS %}
+{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_INPUT_INTEGRATION_DEFAULTS) %}
+{% endif %}
+{% endif %}
+
+{# import content type defaults #}
+{% if salt['file.file_exists']('/opt/so/state/esfleet_content_package_components.json') %}
+{% set check_content_package_components = salt['file.stats']('/opt/so/state/esfleet_content_package_components.json') %}
+{% if check_content_package_components.size > 1 %}
+{% from 'elasticfleet/content-defaults.map.jinja' import ADDON_CONTENT_INTEGRATION_DEFAULTS %}
+{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_CONTENT_INTEGRATION_DEFAULTS) %}
+{% endif %}
+{% endif %}
+
+{% for index, settings in ALL_ADDON_INTEGRATION_DEFAULTS.items() %}
+{% do ALL_ADDON_SETTINGS_ORIG.update({index: settings}) %}
+{% endfor %}
+{% endif %}
 {# end generation of integration default index_settings #}
 
 {% set ES_INDEX_SETTINGS_GLOBAL_OVERRIDES = {} %}
@@ -31,25 +58,33 @@
|
|||||||
 {% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update({index: salt['defaults.merge'](ELASTICSEARCHDEFAULTS.elasticsearch.index_settings[index], PILLAR_GLOBAL_OVERRIDES, in_place=False)}) %}
 {% endfor %}

+{% if ALL_ADDON_SETTINGS_ORIG.keys() | length > 0 %}
+{% for index in ALL_ADDON_SETTINGS_ORIG.keys() %}
+{% do ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES.update({index: salt['defaults.merge'](ALL_ADDON_SETTINGS_ORIG[index], PILLAR_GLOBAL_OVERRIDES, in_place=False)}) %}
+{% endfor %}
+{% endif %}

 {% set ES_INDEX_SETTINGS = {} %}
-{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update(salt['defaults.merge'](ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
-{% for index, settings in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.items() %}
+{% macro create_final_index_template(DEFINED_SETTINGS, GLOBAL_OVERRIDES, FINAL_INDEX_SETTINGS) %}
+{% do GLOBAL_OVERRIDES.update(salt['defaults.merge'](GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
+{% for index, settings in GLOBAL_OVERRIDES.items() %}

 {# prevent this action from being performed on custom defined indices. #}
 {# the custom defined index is not present in either of the dictionaries and fails to render. #}
-{% if index in ES_INDEX_SETTINGS_ORIG and index in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES %}
+{% if index in DEFINED_SETTINGS and index in GLOBAL_OVERRIDES %}

 {# don't merge policy from the global_overrides if policy isn't defined in the original index settings #}
 {# this will prevent so-elasticsearch-ilm-policy-load from trying to load policy on non-ILM-managed indices #}
-{% if not ES_INDEX_SETTINGS_ORIG[index].policy is defined and ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
-{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].pop('policy') %}
+{% if not DEFINED_SETTINGS[index].policy is defined and GLOBAL_OVERRIDES[index].policy is defined %}
+{% do GLOBAL_OVERRIDES[index].pop('policy') %}
 {% endif %}

 {# this prevents an index from inheriting a policy phase from global overrides if it wasn't defined in the defaults. #}
-{% if ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
-{% for phase in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.copy() %}
-{% if ES_INDEX_SETTINGS_ORIG[index].policy.phases[phase] is not defined %}
-{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.pop(phase) %}
+{% if GLOBAL_OVERRIDES[index].policy is defined %}
+{% for phase in GLOBAL_OVERRIDES[index].policy.phases.copy() %}
+{% if DEFINED_SETTINGS[index].policy.phases[phase] is not defined %}
+{% do GLOBAL_OVERRIDES[index].policy.phases.pop(phase) %}
 {% endif %}
 {% endfor %}
 {% endif %}
@@ -111,5 +146,14 @@
 {% endfor %}
 {% endif %}

-{% do ES_INDEX_SETTINGS.update({index | replace("_x_", "."): ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index]}) %}
+{% do FINAL_INDEX_SETTINGS.update({index | replace("_x_", "."): GLOBAL_OVERRIDES[index]}) %}
+{% endfor %}
+{% endmacro %}
+
+{{ create_final_index_template(ES_INDEX_SETTINGS_ORIG, ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_SETTINGS) }}
+{{ create_final_index_template(ALL_ADDON_SETTINGS_ORIG, ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES, ALL_ADDON_SETTINGS) }}
+
+{% set SO_MANAGED_INDICES = [] %}
+{% for index, settings in ES_INDEX_SETTINGS.items() %}
+{% do SO_MANAGED_INDICES.append(index) %}
 {% endfor %}
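The macro above merges pillar overrides into each index's defaults, then prunes any ILM policy phase that the defaults never declared. The pruning rule can be sketched in plain bash, with associative arrays standing in for the Jinja dictionaries (an illustration of the rule only, not the Jinja code itself):

```shell
#!/usr/bin/env bash
# Sketch of the pruning rule in create_final_index_template:
# an override phase survives only if the defaults also define it.
declare -A default_phases=([hot]=1 [warm]=1)
declare -A override_phases=([hot]="30d" [warm]="60d" [delete]="90d")

prune_override_phases() {
  local phase
  for phase in "${!override_phases[@]}"; do
    if [[ -z "${default_phases[$phase]+x}" ]]; then
      # Mirrors GLOBAL_OVERRIDES[index].policy.phases.pop(phase)
      unset "override_phases[$phase]"
    fi
  done
}

prune_override_phases
printf '%s\n' "${!override_phases[@]}" | sort   # prints: hot, then warm
```

The `delete` phase is dropped because the defaults never defined it, which is exactly what keeps so-elasticsearch-ilm-policy-load from loading phases on indices that never opted into them.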
@@ -6,8 +6,19 @@
 # Elastic License 2.0.

 . /usr/sbin/so-common

-if [ "$1" == "" ]; then
-  curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L https://localhost:9200/_component_template | jq '.component_templates[] |.name'| sort
+if [[ -z "$1" ]]; then
+  if output=$(so-elasticsearch-query "_component_template" --retry 3 --retry-delay 1 --fail); then
+    jq '[.component_templates[] | .name] | sort' <<< "$output"
+  else
+    echo "Failed to retrieve component templates from Elasticsearch."
+    exit 1
+  fi
 else
-  curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L https://localhost:9200/_component_template/$1 | jq
+  if output=$(so-elasticsearch-query "_component_template/$1" --retry 3 --retry-delay 1 --fail); then
+    jq <<< "$output"
+  else
+    echo "Failed to retrieve component template '$1' from Elasticsearch."
+    exit 1
+  fi
 fi
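The rewrite above replaces bare curl pipelines with a capture-and-branch idiom: `if output=$(cmd); then` stores the command's stdout and branches on its exit status in a single step. A self-contained sketch of the idiom (the stub functions stand in for so-elasticsearch-query):

```shell
#!/usr/bin/env bash
# Capture a command's stdout and branch on its exit status in one step.
# fetch_ok / fetch_fail are stand-ins for a real query command.
fetch_ok()   { echo '{"status":"green"}'; return 0; }
fetch_fail() { echo ''; return 1; }

if output=$(fetch_ok); then
  result="got: $output"
else
  result="request failed"
fi
echo "$result"   # prints: got: {"status":"green"}

if output=$(fetch_fail); then
  result="got: $output"
else
  result="request failed"
fi
echo "$result"   # prints: request failed
```

The assignment's exit status is that of the command substitution, so the failure branch runs even though the assignment itself "succeeds" syntactically.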
@@ -0,0 +1,276 @@
+#!/bin/bash
+
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+. /usr/sbin/so-common
+
+SO_STATEFILE_SUCCESS=/opt/so/state/estemplates.txt
+ADDON_STATEFILE_SUCCESS=/opt/so/state/addon_estemplates.txt
+ELASTICSEARCH_TEMPLATES_DIR="/opt/so/conf/elasticsearch/templates"
+SO_TEMPLATES_DIR="${ELASTICSEARCH_TEMPLATES_DIR}/index"
+ADDON_TEMPLATES_DIR="${ELASTICSEARCH_TEMPLATES_DIR}/addon-index"
+SO_LOAD_FAILURES=0
+ADDON_LOAD_FAILURES=0
+SO_LOAD_FAILURES_NAMES=()
+ADDON_LOAD_FAILURES_NAMES=()
+IS_HEAVYNODE="false"
+FORCE="false"
+VERBOSE="false"
+SHOULD_EXIT_ON_FAILURE="true"
+
+# If soup is running, ignore errors
+pgrep soup >/dev/null && SHOULD_EXIT_ON_FAILURE="false"
+
+while [[ $# -gt 0 ]]; do
+  case "$1" in
+    --heavynode)
+      IS_HEAVYNODE="true"
+      ;;
+    --force)
+      FORCE="true"
+      ;;
+    --verbose)
+      VERBOSE="true"
+      ;;
+    *)
+      echo "Usage: $0 [options]"
+      echo "Options:"
+      echo "  --heavynode  Only loads index templates specific to heavynodes"
+      echo "  --force      Force reload all templates regardless of statefiles (default: false)"
+      echo "  --verbose    Enable verbose output"
+      exit 1
+      ;;
+  esac
+  shift
+done
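The flag loop above is the standard `while`/`case`/`shift` pattern: each iteration inspects `$1` and then `shift` drops it, so the loop consumes exactly one argument per pass. A minimal standalone version (flag names chosen to match the script, behavior simplified):

```shell
#!/usr/bin/env bash
# Minimal flag parser in the same style: consume one argument per loop.
parse_flags() {
  FORCE="false"; VERBOSE="false"
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --force)   FORCE="true" ;;
      --verbose) VERBOSE="true" ;;
      *)         echo "unknown flag: $1" >&2; return 1 ;;
    esac
    shift   # drop the flag we just handled
  done
}

parse_flags --force --verbose
echo "FORCE=$FORCE VERBOSE=$VERBOSE"   # prints: FORCE=true VERBOSE=true
```

Because unknown flags hit the `*)` arm, a typo fails fast instead of being silently ignored, the same property the real script gets from printing usage and exiting 1.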
+
+load_template() {
+  local uri="$1"
+  local file="$2"
+
+  echo "Loading template file $file"
+  if ! output=$(retry 3 3 "so-elasticsearch-query $uri -d@$file -XPUT" "{\"acknowledged\":true}"); then
+    echo "$output"
+    return 1
+  elif [[ "$VERBOSE" == "true" ]]; then
+    echo "$output"
+  fi
+}
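load_template delegates to the `retry` helper sourced from so-common, whose implementation lives outside this diff. Judging from the call sites, it takes an attempt count, a delay, a command string, and an optional expected-output substring. A simplified stand-in with that inferred shape (the real helper may differ, e.g. in how it treats exit codes):

```shell
#!/usr/bin/env bash
# Simplified stand-in for the retry helper sourced from so-common:
#   retry <attempts> <delay> <command> [expected-substring]
# Signature inferred from the calls above; illustrative only.
retry() {
  local attempts=$1 delay=$2 cmd=$3 expected=${4-}
  local n out
  for ((n = 1; n <= attempts; n++)); do
    out=$(eval "$cmd" 2>&1)
    if [[ -z "$expected" || "$out" == *"$expected"* ]]; then
      echo "$out"
      return 0
    fi
    sleep "$delay"
  done
  echo "$out"
  return 1
}

# Succeeds immediately: the output contains the expected marker.
retry 3 0 'echo "{\"acknowledged\":true}"' '{"acknowledged":true}'
```

Note how load_template captures the helper's output either way: on failure it always echoes it, on success only when `--verbose` was given.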
+
+check_required_component_template_exists() {
+  local required
+  local missing
+  local file=$1
+
+  required=$(jq '[((.composed_of //[]) - (.ignore_missing_component_templates // []))[]]' "$file")
+  missing=$(jq -n --argjson required "$required" --argjson component_templates "$component_templates" '(($required) - ($component_templates))')
+
+  if [[ $(jq length <<<"$missing") -gt 0 ]]; then
+    return 1
+  fi
+}
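The jq expressions compute required-minus-existing: `.composed_of` minus `.ignore_missing_component_templates`, then subtract the cluster's known component templates (jq's `-` on arrays removes matching elements). The same set-difference check in pure bash, with a hard-coded "existing" set for illustration (the component names here are made up):

```shell
#!/usr/bin/env bash
# Pure-bash sketch of the required-minus-existing check done with jq above.
missing_components() {
  local -A existing=([ecs-mappings]=1 [so-common]=1)   # illustrative set
  local -a missing=()
  local name
  for name in "$@"; do
    if [[ -z "${existing[$name]+x}" ]]; then
      missing+=("$name")
    fi
  done
  if ((${#missing[@]} > 0)); then
    printf '%s\n' "${missing[@]}"
    return 1
  fi
}

missing_components ecs-mappings && echo "all present"     # prints: all present
missing_components ecs-mappings custom-mappings || true   # prints: custom-mappings
```

Like the real function, it returns nonzero when any required component is missing, which is what makes the caller skip the index template instead of loading it with dangling `composed_of` references.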
+
+check_heavynode_compatiable_index_template() {
+  # The only templates that are relevant to heavynodes are from datasets defined in elasticagent/files/elastic-agent.yml.jinja.
+  # Heavynodes do not have fleet server packages installed and do not support elastic agents reporting directly to them.
+  local -A heavynode_index_templates=(
+    ["so-import"]=1
+    ["so-syslog"]=1
+    ["so-logs-soc"]=1
+    ["so-suricata"]=1
+    ["so-suricata.alerts"]=1
+    ["so-zeek"]=1
+    ["so-strelka"]=1
+  )
+
+  local template_name="$1"
+
+  if [[ ! -v heavynode_index_templates["$template_name"] ]]; then
+    return 1
+  fi
+}
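The heavynode filter uses a bash associative array as a set and tests membership with `[[ -v map[key] ]]` (bash 4.3+), avoiding a linear scan over the allowed names. The idiom in isolation:

```shell
#!/usr/bin/env bash
# Using an associative array as a set: constant-time membership via [[ -v ... ]].
declare -A allowed=([so-import]=1 [so-zeek]=1 [so-suricata]=1)

is_allowed() {
  local name="$1"
  [[ -v allowed["$name"] ]]
}

is_allowed so-zeek && echo "so-zeek: allowed"          # prints: so-zeek: allowed
is_allowed so-logs-aws || echo "so-logs-aws: skipped"  # prints: so-logs-aws: skipped
```

The values (`=1`) are irrelevant; only key presence matters, which is why `-v` is used rather than testing the value.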
+
+load_component_templates() {
+  local printed_name="$1"
+  local pattern="${ELASTICSEARCH_TEMPLATES_DIR}/component/$2"
+  local append_mappings="${3:-"false"}"
+
+  echo -e "\nLoading $printed_name component templates...\n"
+
+  if ! compgen -G "${pattern}/*.json" > /dev/null; then
+    echo "No $printed_name component templates found in ${pattern}, skipping."
+    return
+  fi
+
+  for component in "$pattern"/*.json; do
+    tmpl_name=$(basename "${component%.json}")
+
+    if [[ "$append_mappings" == "true" ]]; then
+      # avoid duplicating "-mappings" if it already exists in the component template filename
+      tmpl_name="${tmpl_name%-mappings}-mappings"
+    fi
+
+    if ! load_template "_component_template/${tmpl_name}" "$component"; then
+      SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
+      SO_LOAD_FAILURES_NAMES+=("$component")
+    fi
+  done
+}
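Two idioms in load_component_templates are worth calling out: `compgen -G pattern` succeeds only if the glob matches at least one file (so an empty directory is detected without `nullglob` tricks), and `${name%-mappings}-mappings` appends a suffix idempotently, stripping it first if already present. Both in a standalone sketch:

```shell
#!/usr/bin/env bash
# Glob-existence test: compgen -G prints matches and fails if there are none.
dir=$(mktemp -d)
if ! compgen -G "${dir}/*.json" > /dev/null; then
  echo "no templates found"   # prints: no templates found
fi
touch "${dir}/sample.json"
compgen -G "${dir}/*.json" > /dev/null && echo "templates found"   # prints: templates found
rm -r "$dir"

# Idempotent suffix: strip "-mappings" if present, then append it once.
for name in ecs-base ecs-base-mappings; do
  echo "${name%-mappings}-mappings"   # prints: ecs-base-mappings (both times)
done
```

The suffix trick is what lets ECS component files be named with or without `-mappings` while always producing a `-mappings` component template name.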
+
+check_elasticsearch_responsive() {
+  # Cannot load templates if Elasticsearch is not responding.
+  # NOTE: Slightly faster exit w/ failure than the previous "retry 240 1"; if there is a problem with Elasticsearch,
+  # the script should exit sooner rather than hang at the 'so-elasticsearch-templates' salt state.
+  retry 3 15 "so-elasticsearch-query / --output /dev/null --fail" ||
+    fail "Elasticsearch is not responding. Please review Elasticsearch logs /opt/so/log/elasticsearch/securityonion.log for more details. Additionally, consider running so-elasticsearch-troubleshoot."
+}
+
+index_templates_exist() {
+  local templates_dir="$1"
+
+  if [[ ! -d "$templates_dir" ]]; then
+    return 1
+  fi
+
+  compgen -G "${templates_dir}/*.json" > /dev/null
+}
+
+should_load_addon_templates() {
+  if [[ "$IS_HEAVYNODE" == "true" ]]; then
+    return 1
+  fi
+
+  # Skip statefile checks when forcing template load
+  if [[ "$FORCE" != "true" ]]; then
+    if [[ ! -f "$SO_STATEFILE_SUCCESS" || -f "$ADDON_STATEFILE_SUCCESS" ]]; then
+      return 1
+    fi
+  fi
+
+  index_templates_exist "$ADDON_TEMPLATES_DIR"
+}
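should_load_addon_templates encodes the ordering contract: addon templates load only after the core statefile exists and only if the addon statefile does not yet exist, unless `--force` bypasses both checks. Stripped of the filesystem details, the gate reduces to a small predicate:

```shell
#!/usr/bin/env bash
# Distilled gating predicate: load addons only when core templates succeeded
# and addons have not already succeeded, unless forced.
should_load_addons() {
  local force="$1" core_done="$2" addons_done="$3"
  if [[ "$force" != "true" ]]; then
    if [[ "$core_done" != "true" || "$addons_done" == "true" ]]; then
      return 1
    fi
  fi
  return 0
}

should_load_addons false true false && echo "load"     # prints: load
should_load_addons false false false || echo "wait"    # prints: wait
should_load_addons true false true && echo "forced"    # prints: forced
```

The two statefiles act as a tiny state machine: core success unlocks the addon stage, and addon success makes subsequent runs a no-op until `--force` resets the behavior.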
+
+if [[ "$FORCE" == "true" || ! -f "$SO_STATEFILE_SUCCESS" ]] && index_templates_exist "$SO_TEMPLATES_DIR"; then
+  check_elasticsearch_responsive
+
+  if [[ "$IS_HEAVYNODE" == "false" ]]; then
+    # TODO: Better way to check if fleet server is installed vs checking for Elastic Defend component template.
+    fleet_check="logs-endpoint.alerts@package"
+    if ! so-elasticsearch-query "_component_template/$fleet_check" --output /dev/null --retry 5 --retry-delay 3 --fail; then
+      # This check prevents so-elasticsearch-templates-load from running before so-elastic-fleet-setup has run.
+      echo -e "\nPackage $fleet_check not yet installed. Fleet Server may not be fully configured yet."
+      # Fleet Server is required because some SO index templates depend on components installed via
+      # specific integrations, e.g. Elastic Defend. These are components that we do not manually create / manage
+      # via /opt/so/saltstack/salt/elasticsearch/templates/component/
+
+      exit 0
+    fi
+  fi
+
+  # load_component_templates "Name" "directory" "append '-mappings'?"
+  load_component_templates "ECS" "ecs" "true"
+  load_component_templates "Elastic Agent" "elastic-agent"
+  load_component_templates "Security Onion" "so"
+
+  component_templates=$(so-elasticsearch-component-templates-list)
+  echo -e "Loading Security Onion index templates...\n"
+  for so_idx_tmpl in "${SO_TEMPLATES_DIR}"/*.json; do
+    tmpl_name=$(basename "${so_idx_tmpl%-template.json}")
+
+    if [[ "$IS_HEAVYNODE" == "true" ]]; then
+      # TODO: Better way to load only heavynode specific templates
+      if ! check_heavynode_compatiable_index_template "$tmpl_name"; then
+        if [[ "$VERBOSE" == "true" ]]; then
+          echo "Skipping over $so_idx_tmpl, template is not a heavynode specific index template."
+        fi
+
+        continue
+      fi
+    fi
+
+    if check_required_component_template_exists "$so_idx_tmpl"; then
+      if ! load_template "_index_template/$tmpl_name" "$so_idx_tmpl"; then
+        SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
+        SO_LOAD_FAILURES_NAMES+=("$so_idx_tmpl")
+      fi
+    else
+      echo "Skipping over $so_idx_tmpl due to missing required component template(s)."
+      SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
+      SO_LOAD_FAILURES_NAMES+=("$so_idx_tmpl")
+
+      continue
+    fi
+  done
+
+  if [[ $SO_LOAD_FAILURES -eq 0 ]]; then
+    echo "All Security Onion core templates loaded successfully."
+
+    touch "$SO_STATEFILE_SUCCESS"
+  else
+    echo "Encountered $SO_LOAD_FAILURES failure(s) loading templates:"
+    for failed_template in "${SO_LOAD_FAILURES_NAMES[@]}"; do
+      echo "  - $failed_template"
+    done
+    if [[ "$SHOULD_EXIT_ON_FAILURE" == "true" ]]; then
+      fail "Failed to load all Security Onion core templates successfully."
+    fi
+  fi
+elif ! index_templates_exist "$SO_TEMPLATES_DIR"; then
+  echo "No Security Onion core index templates found in ${SO_TEMPLATES_DIR}, skipping."
+elif [[ -f "$SO_STATEFILE_SUCCESS" ]]; then
+  echo "Security Onion core templates already loaded"
+fi
+
+# Start loading addon templates
+if should_load_addon_templates; then
+  check_elasticsearch_responsive
+
+  echo -e "\nLoading addon integration index templates...\n"
+  component_templates=$(so-elasticsearch-component-templates-list)
+
+  for addon_idx_tmpl in "${ADDON_TEMPLATES_DIR}"/*.json; do
+    tmpl_name=$(basename "${addon_idx_tmpl%-template.json}")
+
+    if check_required_component_template_exists "$addon_idx_tmpl"; then
+      if ! load_template "_index_template/${tmpl_name}" "$addon_idx_tmpl"; then
+        ADDON_LOAD_FAILURES=$((ADDON_LOAD_FAILURES + 1))
+        ADDON_LOAD_FAILURES_NAMES+=("$addon_idx_tmpl")
+      fi
+    else
+      echo "Skipping over $addon_idx_tmpl due to missing required component template(s)."
+      ADDON_LOAD_FAILURES=$((ADDON_LOAD_FAILURES + 1))
+      ADDON_LOAD_FAILURES_NAMES+=("$addon_idx_tmpl")
+
+      continue
+    fi
+  done
+
+  if [[ $ADDON_LOAD_FAILURES -eq 0 ]]; then
+    echo "All addon integration templates loaded successfully."
+
+    touch "$ADDON_STATEFILE_SUCCESS"
+  else
+    echo "Encountered $ADDON_LOAD_FAILURES failure(s) loading addon integration templates:"
+    for failed_template in "${ADDON_LOAD_FAILURES_NAMES[@]}"; do
+      echo "  - $failed_template"
+    done
+    if [[ "$SHOULD_EXIT_ON_FAILURE" == "true" ]]; then
+      fail "Failed to load all addon integration templates successfully."
+    fi
+  fi
+
+elif [[ ! -f "$SO_STATEFILE_SUCCESS" && "$IS_HEAVYNODE" == "false" ]]; then
+  echo "Skipping loading addon integration templates until Security Onion core templates have been loaded."
+
+elif [[ -f "$ADDON_STATEFILE_SUCCESS" && "$IS_HEAVYNODE" == "false" && "$FORCE" == "false" ]]; then
+  echo "Addon integration templates already loaded"
+fi
@@ -7,6 +7,9 @@
 . /usr/sbin/so-common

 {%- from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
+{%- if GLOBALS.role != "so-heavynode" %}
+{%- from 'elasticsearch/template.map.jinja' import ALL_ADDON_SETTINGS %}
+{%- endif %}

 {%- for index, settings in ES_INDEX_SETTINGS.items() %}
 {%- if settings.policy is defined %}
@@ -33,3 +36,13 @@
 {%- endif %}
 {%- endfor %}
 echo
+{%- if GLOBALS.role != "so-heavynode" %}
+{%- for index, settings in ALL_ADDON_SETTINGS.items() %}
+{%- if settings.policy is defined %}
+echo
+echo "Setting up {{ index }}-logs policy..."
+curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
+echo
+{%- endif %}
+{%- endfor %}
+{%- endif %}
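The curl call above PUTs the rendered policy to Elasticsearch's `_ilm/policy` API with a body of the shape `{ "policy": ... }`. For orientation, a representative ILM policy body of that shape (an illustrative example, not a policy from this repo):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": { "max_age": "30d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

The `settings.policy` dictionary rendered by `tojson` supplies exactly the inner `phases` structure, which is why the earlier map.jinja logic prunes phases the defaults never declared before this script runs.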
@@ -1,165 +0,0 @@
-#!/bin/bash
-
-# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
-# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
-# https://securityonion.net/license; you may not use this file except in compliance with the
-# Elastic License 2.0.
-{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
-{% from 'vars/globals.map.jinja' import GLOBALS %}
-
-STATE_FILE_INITIAL=/opt/so/state/estemplates_initial_load_attempt.txt
-STATE_FILE_SUCCESS=/opt/so/state/estemplates.txt
-
-if [[ -f $STATE_FILE_INITIAL ]]; then
-  # The initial template load has already run. As this is a subsequent load, all dependencies should
-  # already be satisified. Therefore, immediately exit/abort this script upon any template load failure
-  # since this is an unrecoverable failure.
-  should_exit_on_failure=1
-else
-  # This is the initial template load, and there likely are some components not yet setup in Elasticsearch.
-  # Therefore load as many templates as possible at this time and if an error occurs proceed to the next
-  # template. But if at least one template fails to load do not mark the templates as having been loaded.
-  # This will allow the next load to resume the load of the templates that failed to load initially.
-  should_exit_on_failure=0
-  echo "This is the initial template load"
-fi
-
-# If soup is running, ignore errors
-pgrep soup > /dev/null && should_exit_on_failure=0
-
-load_failures=0
-
-load_template() {
-  uri=$1
-  file=$2
-
-  echo "Loading template file $i"
-  if ! retry 3 1 "so-elasticsearch-query $uri -d@$file -XPUT" "{\"acknowledged\":true}"; then
-    if [[ $should_exit_on_failure -eq 1 ]]; then
-      fail "Could not load template file: $file"
-    else
-      load_failures=$((load_failures+1))
-      echo "Incremented load failure counter: $load_failures"
-    fi
-  fi
-}
-
-if [ ! -f $STATE_FILE_SUCCESS ]; then
-  echo "State file $STATE_FILE_SUCCESS not found. Running so-elasticsearch-templates-load."
-
-  . /usr/sbin/so-common
-
-{% if GLOBALS.role != 'so-heavynode' %}
-  if [ -f /usr/sbin/so-elastic-fleet-common ]; then
-    . /usr/sbin/so-elastic-fleet-common
-  fi
-{% endif %}
-
-  default_conf_dir=/opt/so/conf
-
-  # Define a default directory to load pipelines from
-  ELASTICSEARCH_TEMPLATES="$default_conf_dir/elasticsearch/templates/"
-
-{% if GLOBALS.role == 'so-heavynode' %}
-  file="/opt/so/conf/elasticsearch/templates/index/so-common-template.json"
-{% else %}
-  file="/usr/sbin/so-elastic-fleet-common"
-{% endif %}
-
-  if [ -f "$file" ]; then
-    # Wait for ElasticSearch to initialize
-    echo -n "Waiting for ElasticSearch..."
-    retry 240 1 "so-elasticsearch-query / -k --output /dev/null --silent --head --fail" || fail "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
-{% if GLOBALS.role != 'so-heavynode' %}
-    TEMPLATE="logs-endpoint.alerts@package"
-    INSTALLED=$(so-elasticsearch-query _component_template/$TEMPLATE | jq -r .component_templates[0].name)
-    if [ "$INSTALLED" != "$TEMPLATE" ]; then
-      echo
-      echo "Packages not yet installed."
-      echo
-      exit 0
-    fi
-{% endif %}
-
-    touch $STATE_FILE_INITIAL
-
-    cd ${ELASTICSEARCH_TEMPLATES}/component/ecs
-
-    echo "Loading ECS component templates..."
-    for i in *; do
-      TEMPLATE=$(echo $i | cut -d '.' -f1)
-      load_template "_component_template/${TEMPLATE}-mappings" "$i"
-    done
-    echo
-
-    cd ${ELASTICSEARCH_TEMPLATES}/component/elastic-agent
-
-    echo "Loading Elastic Agent component templates..."
-{% if GLOBALS.role == 'so-heavynode' %}
-    component_pattern="so-*"
-{% else %}
-    component_pattern="*"
-{% endif %}
-    for i in $component_pattern; do
-      TEMPLATE=${i::-5}
-      load_template "_component_template/$TEMPLATE" "$i"
-    done
-    echo
-
-    # Load SO-specific component templates
-    cd ${ELASTICSEARCH_TEMPLATES}/component/so
-
-    echo "Loading Security Onion component templates..."
-    for i in *; do
-      TEMPLATE=$(echo $i | cut -d '.' -f1);
-      load_template "_component_template/$TEMPLATE" "$i"
-    done
-    echo
-
-    # Load SO index templates
-    cd ${ELASTICSEARCH_TEMPLATES}/index
-
-    echo "Loading Security Onion index templates..."
-    shopt -s extglob
-{% if GLOBALS.role == 'so-heavynode' %}
-    pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*|*endpoint*|*elasticsearch*|*generic*|*fleet_server*|*soc*)"
-{% else %}
-    pattern="*"
-{% endif %}
-    # Index templates will be skipped if the following conditions are met:
-    # 1. The template is part of the "so-logs-" template group
-    # 2. The template name does not correlate to at least one existing component template
-    # In this situation, the script will treat the skipped template as a temporary failure
-    # and allow the templates to be loaded again on the next run or highstate, whichever
-    # comes first.
-    COMPONENT_LIST=$(so-elasticsearch-component-templates-list)
-    for i in $pattern; do
-      TEMPLATE=${i::-14}
-      COMPONENT_PATTERN=${TEMPLATE:3}
-      MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -vE "detections|osquery")
-      if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" && ! "$COMPONENT_PATTERN" =~ \.generic|logs-winlog\.winlog ]]; then
-        load_failures=$((load_failures+1))
-        echo "Component template does not exist for $COMPONENT_PATTERN. The index template will not be loaded. Load failures: $load_failures"
-      else
-        load_template "_index_template/$TEMPLATE" "$i"
-      fi
-    done
-  else
-{% if GLOBALS.role == 'so-heavynode' %}
-    echo "Common template does not exist. Exiting..."
-{% else %}
-    echo "Elastic Fleet not configured. Exiting..."
-{% endif %}
-    exit 0
-  fi
-
-  cd - >/dev/null
-
-  if [[ $load_failures -eq 0 ]]; then
-    echo "All templates loaded successfully"
-    touch $STATE_FILE_SUCCESS
-  else
-    echo "Encountered $load_failures templates that were unable to load, likely due to missing dependencies that will be available later; will retry on next highstate"
-  fi
-else
-  echo "Templates already loaded"
-fi
@@ -11,6 +11,7 @@
 'so-kratos',
 'so-hydra',
 'so-nginx',
+'so-postgres',
 'so-redis',
 'so-soc',
 'so-strelka-coordinator',
@@ -34,6 +35,7 @@
 'so-hydra',
 'so-logstash',
 'so-nginx',
+'so-postgres',
 'so-redis',
 'so-soc',
 'so-strelka-coordinator',
@@ -77,6 +79,7 @@
 'so-kratos',
 'so-hydra',
 'so-nginx',
+'so-postgres',
 'so-soc'
 ] %}
@@ -98,6 +98,10 @@ firewall:
 tcp:
 - 8086
 udp: []
+postgres:
+  tcp:
+  - 5432
+  udp: []
 kafka_controller:
 tcp:
 - 9093
@@ -193,6 +197,7 @@ firewall:
 - kibana
 - redis
 - influxdb
+- postgres
 - elasticsearch_rest
 - elasticsearch_node
 - localrules
@@ -379,6 +384,7 @@ firewall:
 - kibana
 - redis
 - influxdb
+- postgres
 - elasticsearch_rest
 - elasticsearch_node
 - docker_registry
@@ -590,6 +596,7 @@ firewall:
 - kibana
 - redis
 - influxdb
+- postgres
 - elasticsearch_rest
 - elasticsearch_node
 - docker_registry
@@ -799,6 +806,7 @@ firewall:
 - kibana
 - redis
 - influxdb
+- postgres
 - elasticsearch_rest
 - elasticsearch_node
 - docker_registry
@@ -1011,6 +1019,7 @@ firewall:
 - kibana
 - redis
 - influxdb
+- postgres
 - elasticsearch_rest
 - elasticsearch_node
 - docker_registry
@@ -1,5 +1,6 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 {% from 'docker/docker.map.jinja' import DOCKERMERGED %}
+{% from 'telegraf/map.jinja' import TELEGRAFMERGED %}
 {% import_yaml 'firewall/defaults.yaml' as FIREWALL_DEFAULT %}

 {# add our ip to self #}
@@ -55,4 +56,16 @@

 {% endif %}

+{# Open Postgres (5432) to minion hostgroups when Telegraf is configured to write to Postgres #}
+{% set TG_OUT = TELEGRAFMERGED.output | upper %}
+{% if TG_OUT in ['POSTGRES', 'BOTH'] %}
+{% if role.startswith('manager') or role == 'standalone' or role == 'eval' %}
+{% for r in ['sensor', 'searchnode', 'heavynode', 'receiver', 'fleet', 'idh', 'desktop', 'import'] %}
+{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
+{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('postgres') %}
+{% endif %}
+{% endfor %}
+{% endif %}
+{% endif %}

 {% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}
@@ -11,18 +11,14 @@ global:
|
|||||||
regexFailureMessage: You must enter a valid IP address or CIDR.
|
regexFailureMessage: You must enter a valid IP address or CIDR.
|
||||||
mdengine:
|
mdengine:
|
||||||
description: Which engine to use for meta data generation. Options are ZEEK and SURICATA.
|
description: Which engine to use for meta data generation. Options are ZEEK and SURICATA.
|
||||||
regex: ^(ZEEK|SURICATA)$
|
|
||||||
options:
|
options:
|
||||||
- ZEEK
|
- ZEEK
|
||||||
- SURICATA
|
- SURICATA
|
||||||
regexFailureMessage: You must enter either ZEEK or SURICATA.
|
|
||||||
global: True
|
global: True
|
||||||
pcapengine:
|
pcapengine:
|
||||||
description: Which engine to use for generating pcap. Currently only SURICATA is supported.
|
description: Which engine to use for generating pcap. Currently only SURICATA is supported.
|
||||||
regex: ^(SURICATA)$
|
|
||||||
options:
|
options:
|
||||||
- SURICATA
|
- SURICATA
|
||||||
regexFailureMessage: You must enter either SURICATA.
|
|
||||||
global: True
|
global: True
|
||||||
ids:
|
ids:
|
||||||
description: Which IDS engine to use. Currently only Suricata is supported.
|
description: Which IDS engine to use. Currently only Suricata is supported.
|
||||||
@@ -42,11 +38,9 @@ global:
|
|||||||
advanced: True
|
advanced: True
|
||||||
pipeline:
|
pipeline:
|
||||||
description: Sets which pipeline technology for events to use. The use of Kafka requires a Security Onion Pro license.
|
description: Sets which pipeline technology for events to use. The use of Kafka requires a Security Onion Pro license.
|
||||||
regex: ^(REDIS|KAFKA)$
|
|
||||||
options:
|
options:
|
||||||
- REDIS
|
- REDIS
|
||||||
- KAFKA
|
- KAFKA
|
||||||
regexFailureMessage: You must enter either REDIS or KAFKA.
|
|
||||||
global: True
|
global: True
|
||||||
advanced: True
|
advanced: True
|
||||||
repo_host:
|
repo_host:
|
||||||
@@ -85,7 +85,10 @@ influxdb:
     description: The log level to use for outputting log statements. Allowed values are debug, info, or error.
     global: True
     advanced: false
-    regex: ^(info|debug|error)$
+    options:
+      - info
+      - debug
+      - error
     helpLink: influxdb
   metrics-disabled:
     description: If true, the HTTP endpoint that exposes internal InfluxDB metrics will be inaccessible.
@@ -140,7 +143,9 @@ influxdb:
     description: Determines the type of storage used for secrets. Allowed values are bolt or vault.
     global: True
     advanced: True
-    regex: ^(bolt|vault)$
+    options:
+      - bolt
+      - vault
     helpLink: influxdb
   session-length:
     description: Number of minutes that a user login session can remain authenticated.
@@ -260,7 +265,9 @@ influxdb:
     description: The type of data store to use for HTTP resources. Allowed values are disk or memory. Memory should not be used for production Security Onion installations.
     global: True
     advanced: True
-    regex: ^(disk|memory)$
+    options:
+      - disk
+      - memory
     helpLink: influxdb
   tls-cert:
     description: The container path to the certificate to use for TLS encryption of the HTTP requests and responses.
@@ -131,7 +131,10 @@ kafka:
   ssl_x_keystore_x_type:
     description: The key store file format.
     title: ssl.keystore.type
-    regex: ^(JKS|PKCS12|PEM)$
+    options:
+      - JKS
+      - PKCS12
+      - PEM
     helpLink: kafka
   ssl_x_truststore_x_location:
     description: The trust store file location within the Docker container.
@@ -160,7 +163,11 @@ kafka:
   security_x_protocol:
     description: 'Broker communication protocol. Options are: SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT'
    title: security.protocol
-    regex: ^(SASL_SSL|PLAINTEXT|SSL|SASL_PLAINTEXT)
+    options:
+      - SASL_SSL
+      - PLAINTEXT
+      - SSL
+      - SASL_PLAINTEXT
     helpLink: kafka
   ssl_x_keystore_x_location:
     description: The key store file location within the Docker container.
@@ -174,7 +181,10 @@ kafka:
   ssl_x_keystore_x_type:
     description: The key store file format.
     title: ssl.keystore.type
-    regex: ^(JKS|PKCS12|PEM)$
+    options:
+      - JKS
+      - PKCS12
+      - PEM
     helpLink: kafka
   ssl_x_truststore_x_location:
     description: The trust store file location within the Docker container.
@@ -22,7 +22,7 @@ kibana:
       - default
       - file
   migrations:
-    discardCorruptObjects: "8.18.8"
+    discardCorruptObjects: "9.3.3"
   telemetry:
     enabled: False
   xpack:
@@ -9,5 +9,5 @@ SESSIONCOOKIE=$(curl -K /opt/so/conf/elasticsearch/curl.config -c - -X GET http:
 # Disable certain Features from showing up in the Kibana UI
 echo
 echo "Setting up default Kibana Space:"
-curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","securitySolutionCasesV3","inventory","dataQuality","searchSynonyms","enterpriseSearchApplications","enterpriseSearchAnalytics","securitySolutionTimeline","securitySolutionNotes","entityManager"]} ' >> /opt/so/log/kibana/misc.log
+curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","securitySolutionCasesV3","inventory","dataQuality","searchSynonyms","searchQueryRules","enterpriseSearchApplications","enterpriseSearchAnalytics","securitySolutionTimeline","securitySolutionNotes","securitySolutionRulesV1","entityManager","streams","cloudConnect","slo"]} ' >> /opt/so/log/kibana/misc.log
 echo
@@ -3,8 +3,8 @@ kratos:
     description: Enables or disables the Kratos authentication system. WARNING - Disabling this process will cause the grid to malfunction. Re-enabling this setting will require manual effort via SSH.
     forcedType: bool
     advanced: True
+    readonly: True
     helpLink: kratos
-
   oidc:
     enabled:
       description: Set to True to enable OIDC / Single Sign-On (SSO) to SOC. Requires a valid Security Onion license key.
@@ -21,8 +21,12 @@ kratos:
       description: "Specify the provider type. Required. Valid values are: auth0, generic, github, google, microsoft"
      global: True
       forcedType: string
-      regex: "auth0|generic|github|google|microsoft"
-      regexFailureMessage: "Valid values are: auth0, generic, github, google, microsoft"
+      options:
+        - auth0
+        - generic
+        - github
+        - google
+        - microsoft
       helpLink: oidc
     client_id:
       description: Specify the client ID, also referenced as the application ID. Required.
@@ -43,8 +47,9 @@ kratos:
       description: The source of the subject identifier. Typically 'userinfo'. Only used when provider is 'microsoft'.
       global: True
       forcedType: string
-      regex: me|userinfo
-      regexFailureMessage: "Valid values are: me, userinfo"
+      options:
+        - me
+        - userinfo
       helpLink: oidc
     auth_url:
      description: Provider's auth URL. Required when provider is 'generic'.
@@ -133,7 +133,7 @@ function getinstallinfo() {
     return 1
   fi
 
-  export $(echo "$INSTALLVARS" | xargs)
+  while read -r var; do export "$var"; done <<< "$INSTALLVARS"
   if [ $? -ne 0 ]; then
     log "ERROR" "Failed to source install variables"
     return 1
@@ -281,6 +281,39 @@ function deleteMinionFiles () {
   fi
 }
 
+# Remove this minion's postgres Telegraf credential from the shared creds
+# pillar and drop the matching role in Postgres. Always returns 0 so a dead
+# or unreachable so-postgres doesn't block minion deletion — in that case we
+# log a warning and leave the role behind for manual cleanup.
+function remove_postgres_telegraf_from_minion() {
+  local MINION_SAFE
+  MINION_SAFE=$(echo "$MINION_ID" | tr '.-' '__' | tr '[:upper:]' '[:lower:]')
+  local PG_USER="so_telegraf_${MINION_SAFE}"
+
+  log "INFO" "Removing postgres telegraf cred for $MINION_ID"
+
+  so-telegraf-cred remove "$MINION_ID" >/dev/null 2>&1 || true
+
+  if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^so-postgres$'; then
+    if ! docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf >/dev/null 2>&1 <<EOSQL
+DO \$\$
+BEGIN
+  IF EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$PG_USER') THEN
+    EXECUTE format('REASSIGN OWNED BY %I TO so_telegraf', '$PG_USER');
+    EXECUTE format('DROP OWNED BY %I', '$PG_USER');
+    EXECUTE format('DROP ROLE %I', '$PG_USER');
+  END IF;
+END
+\$\$;
+EOSQL
+    then
+      log "WARN" "Failed to drop postgres role $PG_USER; pillar entry was removed — drop manually if the role persists"
+    fi
+  else
+    log "WARN" "so-postgres container is not running; skipping DB role cleanup for $PG_USER"
+  fi
+}
+
 # Create the minion file
 function ensure_socore_ownership() {
   log "INFO" "Setting socore ownership on minion files"
@@ -542,6 +575,17 @@ function add_telegraf_to_minion() {
     log "ERROR" "Failed to add telegraf configuration to $PILLARFILE"
     return 1
   fi
+
+  # Provision the per-minion postgres Telegraf credential in the shared
+  # telegraf/creds.sls pillar. so-telegraf-cred is the only writer; it
+  # generates a password on first add and is a no-op on re-add so the cred
+  # is stable across repeated so-minion runs. postgres.telegraf_users on the
+  # manager creates/updates the DB role from the same pillar.
+  so-telegraf-cred add "$MINION_ID"
+  if [ $? -ne 0 ]; then
+    log "ERROR" "Failed to provision postgres telegraf cred for $MINION_ID"
+    return 1
+  fi
 }
 
 function add_influxdb_to_minion() {
@@ -1069,6 +1113,7 @@ case "$OPERATION" in
 
   "delete")
    log "INFO" "Removing minion $MINION_ID"
+    remove_postgres_telegraf_from_minion
    deleteMinionFiles || {
      log "ERROR" "Failed to delete minion files for $MINION_ID"
      exit 1
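Both `remove_postgres_telegraf_from_minion` above and `so-telegraf-cred` derive the Postgres role name the same way, by mapping `.` and `-` to `_` and lowercasing, so the add and remove paths stay symmetric. A Python sketch of that derivation (the helper name is illustrative, not from the repo):

```python
def pg_role_for_minion(minion_id):
    """Postgres role name for a minion's Telegraf credential.

    Mirrors the shell pipeline tr '.-' '__' | tr '[:upper:]' '[:lower:]':
    '.' and '-' become '_', then everything is lowercased.
    """
    safe = minion_id.translate(str.maketrans('.-', '__')).lower()
    return 'so_telegraf_' + safe


print(pg_role_for_minion('SO-Sensor01_sensor'))  # so_telegraf_so_sensor01_sensor
```

If the two derivations ever diverged, `remove` would target a role name that `add` never created, leaving orphaned Postgres roles behind.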
Executable
+329
@@ -0,0 +1,329 @@
+#!/usr/bin/env python3
+
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+"""
+so-pillar-import — populate the so_pillar.* schema in so-postgres from the
+on-disk Salt pillar tree.
+
+Reads /opt/so/saltstack/local/pillar/, decomposes each .sls file into a
+(scope, role|minion_id, pillar_path, data) tuple, and UPSERTs it into
+so_pillar.pillar_entry. Idempotent — re-running with no SLS edits produces
+no version bumps because the audit trigger only writes a row when data
+actually changes.
+
+Bootstrap and mine-driven files are skipped (see EXCLUDE_BASENAMES /
+EXCLUDE_PATH_FRAGMENTS below). Files containing Jinja templates ({% or {{) are
+also skipped — those stay disk-authoritative and ext_pillar_first: False
+means they render before the PG overlay anyway.
+
+All SQL goes through `docker exec so-postgres psql` so no separate DSN
+config is required at first-install time. Designed to be called by
+salt/postgres/schema_pillar.sls (initial seed) and by salt/manager/tools/
+sbin/so-minion (per-minion sync on add/delete).
+"""
+
+import argparse
+import json
+import os
+import shlex
+import subprocess
+import sys
+from pathlib import Path
+
+import yaml
+
+
+PILLAR_LOCAL_ROOT = Path("/opt/so/saltstack/local/pillar")
+PILLAR_DEFAULT_ROOT = Path("/opt/so/saltstack/default/pillar")
+DOCKER_CONTAINER = "so-postgres"
+PG_SUPERUSER = "postgres"
+PG_DATABASE = "securityonion"
+
+# Files that must NEVER move to Postgres. These are read by Salt before
+# Postgres is reachable, or contain renderer-time computed values (mine, etc.).
+EXCLUDE_BASENAMES = {
+    "secrets.sls",
+    "auth.sls",  # postgres/auth.sls bootstrap
+    "top.sls",
+}
+# Path fragments to skip — these are renderer-time computed pillars
+# (Salt mine, file_exists guards, etc.) that have to stay on disk.
+EXCLUDE_PATH_FRAGMENTS = (
+    "/elasticsearch/nodes.sls",
+    "/redis/nodes.sls",
+    "/kafka/nodes.sls",
+    "/hypervisor/nodes.sls",
+    "/logstash/nodes.sls",
+    "/node_data/ips.sls",
+    "/postgres/auth.sls",
+    "/elasticsearch/auth.sls",
+    "/kibana/secrets.sls",
+)
+
+
+def log(level, msg):
+    print(f"[{level}] {msg}", file=sys.stderr)
+
+
+def is_jinja_templated(content_bytes):
+    return b"{%" in content_bytes or b"{{" in content_bytes
+
+
+def classify(path):
+    """Return (scope, role_name, minion_id, pillar_path) for a pillar file
+    or None to skip it. role_name is None for now — the importer leaves role
+    membership to the so_pillar.minion trigger and the salt/auth reactor."""
+    rel_str = str(path)
+    if path.name in EXCLUDE_BASENAMES:
+        return None
+    for frag in EXCLUDE_PATH_FRAGMENTS:
+        if frag in rel_str:
+            return None
+
+    # /local/pillar/minions/<id>.sls or adv_<id>.sls
+    if path.parent.name == "minions":
+        stem = path.stem  # filename without .sls
+        if stem.startswith("adv_"):
+            mid = stem[4:]
+            return ("minion", None, mid, f"minions.adv_{mid}")
+        return ("minion", None, stem, f"minions.{stem}")
+
+    # /local/pillar/<section>/<file>.sls
+    if path.parent.parent == PILLAR_LOCAL_ROOT or path.parent.parent == PILLAR_DEFAULT_ROOT:
+        section = path.parent.name
+        stem = path.stem
+        # Only soc_<section>.sls and adv_<section>.sls are SOC-managed pillar
+        # surfaces. Other files (e.g. nodes.sls, auth.sls, *.token) are
+        # either covered by EXCLUDE_PATH_FRAGMENTS or are bootstrap surfaces
+        # we leave alone for now.
+        if stem.startswith("soc_") or stem.startswith("adv_"):
+            return ("global", None, None, f"{section}.{stem}")
+        return None
+
+    return None
+
+
+def parse_yaml_file(path):
+    with open(path, "rb") as f:
+        content = f.read()
+    if not content.strip():
+        return {}
+    if is_jinja_templated(content):
+        return None
+    data = yaml.safe_load(content)
+    if data is None:
+        return {}
+    if not isinstance(data, dict):
+        return {"_raw": data}
+    return data
+
+
+def derive_node_type(minion_id):
+    """Conventional Security Onion minion ids are <host>_<role>. Take the
+    last underscore-delimited token as the canonical role suffix."""
+    parts = minion_id.rsplit("_", 1)
+    if len(parts) == 2:
+        return parts[1]
+    return None
+
+
+def docker_psql(sql, *, db=PG_DATABASE, user=PG_SUPERUSER, on_error_stop=True, capture=True):
+    """Run sql via docker exec ... psql. Returns stdout as str."""
+    args = [
+        "docker", "exec", "-i", DOCKER_CONTAINER,
+        "psql", "-U", user, "-d", db, "-tA", "-q",
+    ]
+    if on_error_stop:
+        args += ["-v", "ON_ERROR_STOP=1"]
+    proc = subprocess.run(
+        args, input=sql.encode(),
+        capture_output=capture, check=False,
+    )
+    if proc.returncode != 0:
+        sys.stderr.write(proc.stderr.decode(errors="replace"))
+        raise RuntimeError(f"docker exec psql failed (rc={proc.returncode})")
+    return proc.stdout.decode(errors="replace")
+
+
+def upsert_minion(minion_id, node_type):
+    sql = (
+        "INSERT INTO so_pillar.minion (minion_id, node_type) "
+        f"VALUES ({pg_str(minion_id)}, {pg_str(node_type) if node_type else 'NULL'}) "
+        "ON CONFLICT (minion_id) DO UPDATE SET node_type = EXCLUDED.node_type;"
+    )
+    docker_psql(sql)
+
+
+def delete_minion(minion_id):
+    """CASCADE removes pillar_entry + role_member rows."""
+    sql = f"DELETE FROM so_pillar.minion WHERE minion_id = {pg_str(minion_id)};"
+    docker_psql(sql)
+
+
+def upsert_pillar_entry(scope, role_name, minion_id, pillar_path, data, reason):
+    """Insert or update the row keyed by the partial unique index that
+    matches scope. Audit trigger handles history; versioning trigger bumps
+    version only when data changes."""
+    data_json = json.dumps(data)
+    role_sql = pg_str(role_name) if role_name else "NULL"
+    minion_sql = pg_str(minion_id) if minion_id else "NULL"
+    reason_sql = pg_str(reason)
+
+    if scope == "global":
+        conflict = "(pillar_path) WHERE scope='global'"
+    elif scope == "role":
+        conflict = "(role_name, pillar_path) WHERE scope='role'"
+    elif scope == "minion":
+        conflict = "(minion_id, pillar_path) WHERE scope='minion'"
+    else:
+        raise ValueError(f"unknown scope {scope!r}")
+
+    sql = (
+        "BEGIN;\n"
+        f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);\n"
+        f"INSERT INTO so_pillar.pillar_entry "
+        f"(scope, role_name, minion_id, pillar_path, data, change_reason) "
+        f"VALUES ({pg_str(scope)}, {role_sql}, {minion_sql}, {pg_str(pillar_path)}, {pg_jsonb(data_json)}, {reason_sql}) "
+        f"ON CONFLICT {conflict} DO UPDATE "
+        f"SET data = EXCLUDED.data, change_reason = EXCLUDED.change_reason;\n"
+        "COMMIT;\n"
+    )
+    docker_psql(sql)
+
+
+def pg_str(s):
+    """Escape a Python str for inclusion in literal SQL. Pillar content has
+    already been validated as YAML; we just need standard SQL escaping."""
+    if s is None:
+        return "NULL"
+    return "'" + str(s).replace("'", "''") + "'"
+
+
+def pg_jsonb(json_str):
+    return pg_str(json_str) + "::jsonb"
+
+
+def walk_pillar_root(root, paths):
+    if not root.is_dir():
+        return
+    for path in root.rglob("*.sls"):
+        if path.is_file():
+            paths.append(path)
+
+
+def import_minion(minion_id, node_type, dry_run, reason):
+    """Re-import every pillar file for a single minion."""
+    if not minion_id:
+        raise ValueError("minion_id required for --scope minion")
+
+    upsert_minion(minion_id, node_type)
+    log("INFO", f"Upserted minion row {minion_id} (node_type={node_type})")
+
+    targets = [
+        PILLAR_LOCAL_ROOT / "minions" / f"{minion_id}.sls",
+        PILLAR_LOCAL_ROOT / "minions" / f"adv_{minion_id}.sls",
+    ]
+    for path in targets:
+        if not path.exists():
+            log("INFO", f"  (no file at {path})")
+            continue
+        klass = classify(path)
+        if not klass:
+            log("INFO", f"  skip {path} (excluded)")
+            continue
+        scope, role, mid, pillar_path = klass
+        data = parse_yaml_file(path)
+        if data is None:
+            log("WARN", f"  skip {path} (Jinja-templated; stays disk-only)")
+            continue
+        if dry_run:
+            log("DRY", f"  would upsert {scope}/{pillar_path} = {len(json.dumps(data))} bytes")
+            continue
+        upsert_pillar_entry(scope, role, mid, pillar_path, data, reason)
+        log("INFO", f"  imported {scope}/{pillar_path}")
+
+
+def import_all(dry_run, reason):
+    """Walk the entire local pillar tree and import every eligible file."""
+    paths = []
+    walk_pillar_root(PILLAR_LOCAL_ROOT, paths)
+
+    imported = 0
+    skipped = 0
+    minions_seen = set()
+
+    for path in sorted(paths):
+        klass = classify(path)
+        if not klass:
+            skipped += 1
+            continue
+        scope, role, minion_id, pillar_path = klass
+        data = parse_yaml_file(path)
+        if data is None:
+            log("WARN", f"skip {path} (Jinja-templated; stays disk-only)")
+            skipped += 1
+            continue
+
+        if scope == "minion" and minion_id not in minions_seen:
+            node_type = derive_node_type(minion_id)
+            if not dry_run:
+                upsert_minion(minion_id, node_type)
+            minions_seen.add(minion_id)
+
+        if dry_run:
+            log("DRY", f"would upsert {scope}/{pillar_path} ({len(json.dumps(data))} bytes)")
+        else:
+            upsert_pillar_entry(scope, role, minion_id, pillar_path, data, reason)
+            log("INFO", f"imported {scope}/{pillar_path}")
+        imported += 1
+
+    log("INFO", f"done: {imported} imported, {skipped} skipped")
+
+
+def main():
+    ap = argparse.ArgumentParser(description=__doc__)
+    ap.add_argument("--scope", choices=("global", "role", "minion", "all"), default="all")
+    ap.add_argument("--minion-id")
+    ap.add_argument("--node-type", help="override node_type for --scope minion (default: derived from minion_id)")
+    ap.add_argument("--delete", action="store_true",
+                    help="With --scope minion, remove the minion row (and its pillar rows via CASCADE)")
+    ap.add_argument("--dry-run", action="store_true")
+    ap.add_argument("--diff", action="store_true",
+                    help="(reserved) print structural diffs vs current DB content")
+    ap.add_argument("--yes", action="store_true",
+                    help="Skip confirmation prompts (currently unused; reserved)")
+    ap.add_argument("--reason", default="so-pillar-import",
+                    help="change_reason recorded in pillar_entry_history")
+    args = ap.parse_args()
+
+    try:
+        if args.scope == "minion":
+            if not args.minion_id:
+                ap.error("--minion-id required when --scope minion")
+            if args.delete:
+                if args.dry_run:
+                    log("DRY", f"would delete {args.minion_id}")
+                else:
+                    delete_minion(args.minion_id)
+                    log("INFO", f"deleted {args.minion_id}")
+            else:
+                node_type = args.node_type or derive_node_type(args.minion_id)
+                import_minion(args.minion_id, node_type, args.dry_run, args.reason)
+        elif args.scope == "all":
+            import_all(args.dry_run, args.reason)
+        else:
+            log("ERROR", f"--scope {args.scope} not yet implemented; use --scope all or --scope minion")
+            return 2
+    except Exception as e:
+        log("ERROR", str(e))
+        return 1
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
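The classification rules in `classify()` above can be condensed into a small standalone sketch (exclusion lists and the default-root check dropped for brevity; `classify_sketch` is an illustrative name, and it returns a shorter tuple than the real script):

```python
from pathlib import Path

ROOT = Path('/opt/so/saltstack/local/pillar')


def classify_sketch(path):
    """(scope, minion_id, pillar_path) for a pillar file, or None to skip."""
    # Per-minion files: minions/<id>.sls or minions/adv_<id>.sls
    if path.parent.name == 'minions':
        stem = path.stem
        mid = stem[4:] if stem.startswith('adv_') else stem
        return ('minion', mid, 'minions.' + stem)
    # Section files: only soc_* / adv_* surfaces are SOC-managed.
    if path.parent.parent == ROOT and (path.stem.startswith('soc_')
                                       or path.stem.startswith('adv_')):
        return ('global', None, path.parent.name + '.' + path.stem)
    return None


print(classify_sketch(ROOT / 'minions' / 'adv_mgr_manager.sls'))
print(classify_sketch(ROOT / 'kafka' / 'soc_kafka.sls'))
print(classify_sketch(ROOT / 'kafka' / 'nodes.sls'))
```

Note the real script reaches None for `kafka/nodes.sls` via EXCLUDE_PATH_FRAGMENTS rather than the prefix check, but the resulting row set is the same.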
Executable
+54
@@ -0,0 +1,54 @@
+#!/bin/bash
+
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+# Single writer for the Telegraf Postgres credentials pillar. Thin wrapper
+# around so-yaml.py that generates a password on first add and no-ops on
+# re-add so the cred is stable across repeated so-minion runs.
+#
+# Note: so-yaml.py splits keys on '.' with no escape. SO minion ids are
+# dot-free by construction (setup/so-functions:1884 takes the short_name
+# before the first '.'), so using the raw minion id as the key is safe.
+
+CREDS=/opt/so/saltstack/local/pillar/telegraf/creds.sls
+
+usage() {
+  echo "Usage: $0 <add|remove> <minion_id>" >&2
+  exit 2
+}
+
+seed_creds_file() {
+  mkdir -p "$(dirname "$CREDS")" || return 1
+  if [[ ! -f "$CREDS" ]]; then
+    (umask 027 && printf 'telegraf:\n postgres_creds: {}\n' > "$CREDS") || return 1
+    chown socore:socore "$CREDS" 2>/dev/null || true
+    chmod 640 "$CREDS" || return 1
+  fi
+}
+
+OP=$1
+MID=$2
+[[ -z "$OP" || -z "$MID" ]] && usage
+
+case "$OP" in
+  add)
+    SAFE=$(echo "$MID" | tr '.-' '__' | tr '[:upper:]' '[:lower:]')
+    seed_creds_file || exit 1
+    if so-yaml.py get -r "$CREDS" "telegraf.postgres_creds.${MID}.user" >/dev/null 2>&1; then
+      exit 0
+    fi
+    PASS=$(tr -dc 'A-Za-z0-9~!@#^&*()_=+[]|;:,.<>?-' < /dev/urandom | head -c 72)
+    so-yaml.py replace "$CREDS" "telegraf.postgres_creds.${MID}.user" "so_telegraf_${SAFE}" >/dev/null
+    so-yaml.py replace "$CREDS" "telegraf.postgres_creds.${MID}.pass" "$PASS" >/dev/null
+    ;;
+  remove)
+    [[ -f "$CREDS" ]] || exit 0
+    so-yaml.py remove "$CREDS" "telegraf.postgres_creds.${MID}" >/dev/null 2>&1 || true
+    ;;
+  *)
+    usage
+    ;;
+esac
@@ -13,6 +13,64 @@ import json

lockFile = "/tmp/so-yaml.lock"

# postsalt: so-yaml supports three backend modes for PG-managed pillar paths:
#
#   dual     — write disk + mirror to so_pillar.*. Reads from disk.
#              Used during the migration transition when disk is still
#              canonical and PG runs as a shadow.
#   postgres — write to so_pillar.* only. Reads from so_pillar.*. No disk
#              file is touched. The end state once cutover is complete.
#   disk     — disk only, no PG. Emergency rollback escape hatch.
#
# Bootstrap and mine-driven files (secrets.sls, ca/init.sls, */nodes.sls,
# top.sls, etc.) are always handled on disk regardless of mode — those paths
# are explicitly excluded by so_yaml_postgres.locate() raising SkipPath.
#
# Mode resolution: SO_YAML_BACKEND env var, then /opt/so/conf/so-yaml/mode,
# then default 'dual' (safe upgrade behavior — flipping to 'postgres' is
# done by schema_pillar.sls after the schema is in place and the importer
# has run at least once).

MODE_FILE = "/opt/so/conf/so-yaml/mode"
VALID_MODES = ("dual", "postgres", "disk")
DEFAULT_MODE = "dual"

try:
    sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
    import so_yaml_postgres
    _SO_YAML_PG_AVAILABLE = True
except Exception as _exc:
    _SO_YAML_PG_AVAILABLE = False


def _resolveBackendMode():
    env = os.environ.get("SO_YAML_BACKEND")
    if env and env in VALID_MODES:
        return env
    try:
        with open(MODE_FILE, "r") as fh:
            value = fh.read().strip()
            if value in VALID_MODES:
                return value
    except (IOError, OSError):
        pass
    return DEFAULT_MODE


_BACKEND_MODE = _resolveBackendMode()


def _isPgManaged(filename):
    """True when so-yaml should route this file's reads/writes through
    so_pillar.*. False for bootstrap/mine-driven files that always live on
    disk, and for arbitrary YAML paths outside the pillar tree."""
    if not _SO_YAML_PG_AVAILABLE:
        return False
    try:
        return so_yaml_postgres.is_pg_managed(filename)
    except Exception:
        return False


def showUsage(args):
    print('Usage: {} <COMMAND> <YAML_FILE> [ARGS...]'.format(sys.argv[0]), file=sys.stderr)
@@ -25,8 +83,14 @@ def showUsage(args):
    print(' get [-r] - Displays (to stdout) the value stored in the given key. Requires KEY arg. Use -r for raw output without YAML formatting.', file=sys.stderr)
    print(' remove   - Removes a yaml key, if it exists. Requires KEY arg.', file=sys.stderr)
    print(' replace  - Replaces (or adds) a new key and sets its value. Requires KEY and VALUE args.', file=sys.stderr)
    print(' purge    - Delete the YAML file from disk and remove its rows from so_pillar.* (no KEY arg).', file=sys.stderr)
    print(' help     - Prints this usage information.', file=sys.stderr)
    print('', file=sys.stderr)
    print(' Backend mode:', file=sys.stderr)
    print('   Resolved from $SO_YAML_BACKEND, then /opt/so/conf/so-yaml/mode, default "dual".', file=sys.stderr)
    print('   Valid values: dual | postgres | disk. Bootstrap pillar files (secrets, ca, *.nodes.sls)', file=sys.stderr)
    print('   are always handled on disk regardless of mode.', file=sys.stderr)
    print('', file=sys.stderr)
    print(' Where:', file=sys.stderr)
    print('   YAML_FILE - Path to the file that will be modified. Ex: /opt/so/conf/service/conf.yaml', file=sys.stderr)
    print('   KEY       - YAML key, does not support \' or " characters at this time. Ex: level1.level2', file=sys.stderr)
@@ -39,14 +103,128 @@ def showUsage(args):


def loadYaml(filename):
    """Load a YAML file's content as a dict.

    PG-canonical mode (`postgres`): for PG-managed paths, read from
    so_pillar.pillar_entry. A missing row is treated as an empty dict so
    that `replace`/`add` on a fresh path can populate it from scratch.

    Other modes / non-PG-managed paths: read from disk as today.
    """
    if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
        try:
            data = so_yaml_postgres.read_yaml(filename)
        except so_yaml_postgres.SkipPath:
            data = None
        except Exception as e:
            print(f"so-yaml: pg read failed for {filename}: {e}", file=sys.stderr)
            sys.exit(1)
        return data if data is not None else {}

    try:
        with open(filename, "r") as file:
            content = file.read()
        return yaml.safe_load(content)
    except FileNotFoundError:
        print(f"File not found: {filename}", file=sys.stderr)
        sys.exit(1)
    except Exception as e:
        print(f"Error reading file {filename}: {e}", file=sys.stderr)
        sys.exit(1)


def writeYaml(filename, content):
    """Persist `content` for `filename`.

    PG-canonical mode + PG-managed path: write only to so_pillar.*. A PG
    failure is fatal (no disk fallback) — caller must retry.

    Dual mode: write disk, then mirror to PG (failures are warnings).

    Disk mode or non-PG-managed path: write disk only.
    """
    if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
        if not _SO_YAML_PG_AVAILABLE:
            print("so-yaml: PG-canonical mode requires so_yaml_postgres module", file=sys.stderr)
            sys.exit(1)
        ok, msg = so_yaml_postgres.write_yaml(
            filename, content,
            reason="so-yaml " + " ".join(sys.argv[1:2]))
        if not ok:
            print(f"so-yaml: pg write failed for {filename}: {msg}", file=sys.stderr)
            sys.exit(1)
        return None

    file = open(filename, "w")
    result = yaml.safe_dump(content, file)
    file.close()

    if _BACKEND_MODE == "dual":
        _mirrorToPostgres(filename, content)
    return result


def _mirrorToPostgres(filename, content):
    """Best-effort dual-write of a YAML mutation into so_pillar.*. Skips
    files outside the PG-managed pillar surface (secrets.sls,
    elasticsearch/nodes.sls, etc.) and silently degrades when so-postgres
    is unreachable. Disk write is canonical in dual mode; this never
    raises.

    Only real PG failures (`pg write failed: ...`) are logged so the
    common cases (skipped path, postgres not running) don't pollute
    stderr."""
    if not _SO_YAML_PG_AVAILABLE:
        return
    try:
        ok, msg = so_yaml_postgres.write_yaml(filename, content,
                                              reason="so-yaml " + " ".join(sys.argv[1:2]))
        if not ok and msg.startswith("pg write failed"):
            print(f"so-yaml: {msg}", file=sys.stderr)
    except Exception as e:  # pragma: no cover — defensive: never break disk write
        print(f"so-yaml: pg mirror exception: {e}", file=sys.stderr)


def purgeFile(filename):
    """Delete a YAML file from disk and remove the matching rows from
    so_pillar.*. Idempotent — missing file/row counts as success.

    PG-canonical mode + PG-managed path: PG delete is canonical. If a stale
    disk file from the dual-write era happens to still exist, it's removed
    too as a cleanup courtesy. PG failure is fatal in this mode.

    Dual / disk modes: remove disk first; PG cleanup is best-effort."""
    if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
        if not _SO_YAML_PG_AVAILABLE:
            print("so-yaml: PG-canonical mode requires so_yaml_postgres module", file=sys.stderr)
            return 1
        ok, msg = so_yaml_postgres.purge_yaml(filename, reason="so-yaml purge")
        if not ok:
            print(f"so-yaml: pg purge failed for {filename}: {msg}", file=sys.stderr)
            return 1
        if os.path.exists(filename):
            try:
                os.remove(filename)
            except Exception as e:
                print(f"so-yaml: warn — could not remove stale disk file {filename}: {e}", file=sys.stderr)
        return 0

    if os.path.exists(filename):
        try:
            os.remove(filename)
        except Exception as e:
            print(f"Failed to remove {filename}: {e}", file=sys.stderr)
            return 1

    if _BACKEND_MODE == "dual" and _SO_YAML_PG_AVAILABLE:
        try:
            ok, msg = so_yaml_postgres.purge_yaml(filename,
                                                  reason="so-yaml purge")
            if not ok and msg.startswith("pg purge failed"):
                print(f"so-yaml: {msg}", file=sys.stderr)
        except Exception as e:
            print(f"so-yaml: pg purge exception: {e}", file=sys.stderr)
    return 0


def appendItem(content, key, listItem):
@@ -285,6 +463,7 @@ def add(args):

def removeKey(content, key):
    pieces = key.split(".", 1)
    if len(pieces) > 1:
        if pieces[0] in content:
            removeKey(content[pieces[0]], pieces[1])
    else:
        content.pop(key, None)
@@ -363,6 +542,18 @@ def get(args):
    return 0


def purge(args):
    """purge YAML_FILE — delete the file from disk and remove the matching
    rows from so_pillar.* in so-postgres. Used by so-minion's delete path
    (in place of `rm -f`) so the audit log captures the deletion and
    role_member rows get cleaned up via FK CASCADE on so_pillar.minion."""
    if len(args) != 1:
        print('Missing filename arg', file=sys.stderr)
        showUsage(None)
        return 1
    return purgeFile(args[0])


def main():
    args = sys.argv[1:]
@@ -380,6 +571,7 @@ def main():
        "get": get,
        "remove": remove,
        "replace": replace,
        "purge": purge,
    }

    code = 1
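The loadYaml/writeYaml/purgeFile docstrings above pin down which store each backend mode touches for a PG-managed path. Condensed into a tiny pure function for reference (a sketch of the documented contract, not a function that appears in this diff):

```python
def backends_touched(mode, pg_managed):
    """Which stores a write hits, per the writeYaml contract:
    non-PG-managed paths and 'disk' mode stay on disk; 'postgres' mode
    is PG-only; 'dual' writes disk (canonical) then mirrors to PG."""
    if not pg_managed or mode == "disk":
        return ("disk",)
    if mode == "postgres":
        return ("postgres",)
    if mode == "dual":
        return ("disk", "postgres")  # disk canonical, PG best-effort
    raise ValueError(mode)
```

The same table is what TestSoYamlBackendMode asserts below, one (mode, path) combination per test.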
@@ -973,3 +973,347 @@ class TestReplaceListObject(unittest.TestCase):

        expected = "key1:\n- id: '1'\n  status: updated\n- id: '2'\n  status: inactive\n"
        self.assertEqual(actual, expected)


class TestLoadYaml(unittest.TestCase):

    def test_load_yaml_missing_file(self):
        with patch('sys.exit', new=MagicMock()) as sysmock:
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                soyaml.loadYaml("/tmp/so-yaml_test-does-not-exist.yaml")
                sysmock.assert_called_with(1)
                self.assertIn("File not found:", mock_stderr.getvalue())

    def test_load_yaml_read_error(self):
        with patch('sys.exit', new=MagicMock()) as sysmock:
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                with patch('builtins.open', side_effect=PermissionError("denied")):
                    soyaml.loadYaml("/tmp/so-yaml_test-unreadable.yaml")
                sysmock.assert_called_with(1)
                self.assertIn("Error reading file", mock_stderr.getvalue())


class TestPurge(unittest.TestCase):

    def test_purge_missing_arg(self):
        # showUsage calls sys.exit(1); patch it like the other tests do.
        with patch('sys.exit', new=MagicMock()):
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                rc = soyaml.purge([])
                self.assertEqual(rc, 1)
                self.assertIn("Missing filename", mock_stderr.getvalue())

    def test_purge_existing_file(self):
        filename = "/tmp/so-yaml_test_purge.yaml"
        with open(filename, "w") as f:
            f.write("key: value\n")
        # Disable PG mirror so the test doesn't shell out to docker.
        with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
            rc = soyaml.purge([filename])
        self.assertEqual(rc, 0)
        import os as _os
        self.assertFalse(_os.path.exists(filename))

    def test_purge_missing_file_idempotent(self):
        filename = "/tmp/so-yaml_test_purge_missing.yaml"
        import os as _os
        if _os.path.exists(filename):
            _os.remove(filename)
        with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
            rc = soyaml.purge([filename])
        self.assertEqual(rc, 0)

class TestSoYamlPostgres(unittest.TestCase):
    """Tests the path-locator and write/purge contract of the dual-write
    backend module without actually contacting Postgres."""

    def setUp(self):
        import importlib
        self.mod = importlib.import_module("so_yaml_postgres")

    def test_locate_global_soc(self):
        scope, role, mid, path = self.mod.locate(
            "/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
        self.assertEqual(scope, "global")
        self.assertIsNone(role)
        self.assertIsNone(mid)
        self.assertEqual(path, "soc.soc_soc")

    def test_locate_global_advanced(self):
        scope, role, mid, path = self.mod.locate(
            "/opt/so/saltstack/local/pillar/soc/adv_soc.sls")
        self.assertEqual(scope, "global")
        self.assertEqual(path, "soc.adv_soc")

    def test_locate_minion(self):
        scope, role, mid, path = self.mod.locate(
            "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
        self.assertEqual(scope, "minion")
        self.assertEqual(mid, "h1_sensor")
        self.assertEqual(path, "minions.h1_sensor")

    def test_locate_minion_advanced(self):
        scope, role, mid, path = self.mod.locate(
            "/opt/so/saltstack/local/pillar/minions/adv_h1_sensor.sls")
        self.assertEqual(scope, "minion")
        self.assertEqual(mid, "h1_sensor")
        self.assertEqual(path, "minions.adv_h1_sensor")

    def test_locate_skip_secrets(self):
        with self.assertRaises(self.mod.SkipPath):
            self.mod.locate("/opt/so/saltstack/local/pillar/secrets.sls")

    def test_locate_skip_postgres_auth(self):
        with self.assertRaises(self.mod.SkipPath):
            self.mod.locate("/opt/so/saltstack/local/pillar/postgres/auth.sls")

    def test_locate_skip_mine_driven(self):
        with self.assertRaises(self.mod.SkipPath):
            self.mod.locate("/opt/so/saltstack/local/pillar/elasticsearch/nodes.sls")

    def test_locate_skip_top(self):
        with self.assertRaises(self.mod.SkipPath):
            self.mod.locate("/opt/so/saltstack/local/pillar/top.sls")

    def test_locate_skip_unrelated(self):
        with self.assertRaises(self.mod.SkipPath):
            self.mod.locate("/etc/hostname")

    def test_pg_str_escapes(self):
        self.assertEqual(self.mod._pg_str("a'b"), "'a''b'")
        self.assertEqual(self.mod._pg_str(None), "NULL")

    def test_conflict_target(self):
        self.assertIn("scope='global'", self.mod._conflict_target("global"))
        self.assertIn("scope='role'", self.mod._conflict_target("role"))
        self.assertIn("scope='minion'", self.mod._conflict_target("minion"))
        with self.assertRaises(ValueError):
            self.mod._conflict_target("bogus")

    def test_write_yaml_skips_disk_only_path(self):
        with patch.object(self.mod, '_is_enabled', return_value=True):
            ok, msg = self.mod.write_yaml(
                "/opt/so/saltstack/local/pillar/secrets.sls",
                {"secrets": {"foo": "bar"}})
            self.assertFalse(ok)
            self.assertIn("disk-only", msg)

    def test_write_yaml_unreachable(self):
        with patch.object(self.mod, '_is_enabled', return_value=False):
            ok, msg = self.mod.write_yaml(
                "/opt/so/saltstack/local/pillar/soc/soc_soc.sls",
                {"soc": {"foo": "bar"}})
            self.assertFalse(ok)
            self.assertEqual(msg, "postgres unreachable")

    def test_is_pg_managed_true(self):
        self.assertTrue(self.mod.is_pg_managed(
            "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))
        self.assertTrue(self.mod.is_pg_managed(
            "/opt/so/saltstack/local/pillar/soc/soc_soc.sls"))

    def test_is_pg_managed_false_for_bootstrap(self):
        self.assertFalse(self.mod.is_pg_managed(
            "/opt/so/saltstack/local/pillar/secrets.sls"))
        self.assertFalse(self.mod.is_pg_managed(
            "/opt/so/saltstack/local/pillar/postgres/auth.sls"))
        self.assertFalse(self.mod.is_pg_managed(
            "/opt/so/saltstack/local/pillar/elasticsearch/nodes.sls"))

    def test_read_yaml_unreachable(self):
        with patch.object(self.mod, '_is_enabled', return_value=False):
            self.assertIsNone(self.mod.read_yaml(
                "/opt/so/saltstack/local/pillar/soc/soc_soc.sls"))

    def test_read_yaml_skips_disk_only(self):
        with patch.object(self.mod, '_is_enabled', return_value=True):
            with self.assertRaises(self.mod.SkipPath):
                self.mod.read_yaml(
                    "/opt/so/saltstack/local/pillar/secrets.sls")

    def test_read_yaml_returns_data(self):
        with patch.object(self.mod, '_is_enabled', return_value=True):
            with patch.object(self.mod, '_docker_psql',
                              return_value='{"soc": {"foo": "bar"}}\n'):
                data = self.mod.read_yaml(
                    "/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
                self.assertEqual(data, {"soc": {"foo": "bar"}})

    def test_read_yaml_returns_none_when_no_row(self):
        with patch.object(self.mod, '_is_enabled', return_value=True):
            with patch.object(self.mod, '_docker_psql', return_value=''):
                data = self.mod.read_yaml(
                    "/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
                self.assertIsNone(data)

    def test_read_yaml_minion_query_shape(self):
        captured = {}

        def fake_psql(sql):
            captured['sql'] = sql
            return '{"host": {"mainip": "10.0.0.1"}}'

        with patch.object(self.mod, '_is_enabled', return_value=True):
            with patch.object(self.mod, '_docker_psql', side_effect=fake_psql):
                data = self.mod.read_yaml(
                    "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
        self.assertEqual(data, {"host": {"mainip": "10.0.0.1"}})
        self.assertIn("scope='minion'", captured['sql'])
        self.assertIn("'h1_sensor'", captured['sql'])
        self.assertIn("'minions.h1_sensor'", captured['sql'])

    def test_is_enabled_public_alias(self):
        with patch.object(self.mod, '_is_enabled', return_value=True):
            self.assertTrue(self.mod.is_enabled())
        with patch.object(self.mod, '_is_enabled', return_value=False):
            self.assertFalse(self.mod.is_enabled())

class TestSoYamlBackendMode(unittest.TestCase):
    """Tests so-yaml's backend-mode resolution and PG-canonical routing
    for read/write/purge. The PG calls themselves are stubbed; what we're
    asserting is that the right backend is chosen for each (mode, path)
    combination."""

    def test_resolve_mode_env_overrides_file(self):
        with patch.dict('os.environ', {'SO_YAML_BACKEND': 'postgres'}):
            self.assertEqual(soyaml._resolveBackendMode(), 'postgres')
        with patch.dict('os.environ', {'SO_YAML_BACKEND': 'disk'}):
            self.assertEqual(soyaml._resolveBackendMode(), 'disk')

    def test_resolve_mode_invalid_env_falls_back(self):
        with patch.dict('os.environ', {'SO_YAML_BACKEND': 'garbage'}, clear=False):
            with patch('builtins.open', side_effect=IOError):
                self.assertEqual(soyaml._resolveBackendMode(), 'dual')

    def test_resolve_mode_default_dual(self):
        env = {k: v for k, v in __import__('os').environ.items()
               if k != 'SO_YAML_BACKEND'}
        with patch.dict('os.environ', env, clear=True):
            with patch('builtins.open', side_effect=IOError):
                self.assertEqual(soyaml._resolveBackendMode(), 'dual')

    def test_is_pg_managed_proxies(self):
        with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
            self.assertTrue(soyaml._isPgManaged(
                "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))
            self.assertFalse(soyaml._isPgManaged(
                "/opt/so/saltstack/local/pillar/secrets.sls"))

    def test_is_pg_managed_false_when_module_unavailable(self):
        with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
            self.assertFalse(soyaml._isPgManaged(
                "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))

    def test_load_yaml_postgres_mode_reads_pg(self):
        with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
            with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                  return_value=True):
                    with patch.object(soyaml.so_yaml_postgres, 'read_yaml',
                                      return_value={"a": 1}):
                        result = soyaml.loadYaml(
                            "/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
                        self.assertEqual(result, {"a": 1})

    def test_load_yaml_postgres_mode_returns_empty_when_no_row(self):
        with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
            with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                  return_value=True):
                    with patch.object(soyaml.so_yaml_postgres, 'read_yaml',
                                      return_value=None):
                        result = soyaml.loadYaml(
                            "/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
                        self.assertEqual(result, {})

    def test_load_yaml_postgres_mode_reads_disk_for_bootstrap(self):
        import tempfile, os as _os
        with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
            f.write("foo: bar\n")
            tmp = f.name
        try:
            with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
                with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                    with patch.object(soyaml.so_yaml_postgres,
                                      'is_pg_managed', return_value=False):
                        result = soyaml.loadYaml(tmp)
                        self.assertEqual(result, {"foo": "bar"})
        finally:
            _os.unlink(tmp)

    def test_write_yaml_postgres_mode_skips_disk(self):
        import tempfile, os as _os
        with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
            tmp = f.name
        _os.unlink(tmp)
        try:
            with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
                with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                    with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                      return_value=True):
                        with patch.object(soyaml.so_yaml_postgres, 'write_yaml',
                                          return_value=(True, 'ok')) as mock_w:
                            soyaml.writeYaml(tmp, {"x": 1})
                            self.assertFalse(_os.path.exists(tmp))
                            mock_w.assert_called_once()
        finally:
            if _os.path.exists(tmp):
                _os.unlink(tmp)

    def test_write_yaml_postgres_mode_failure_is_fatal(self):
        with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
            with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                  return_value=True):
                    with patch.object(soyaml.so_yaml_postgres, 'write_yaml',
                                      return_value=(False, 'pg write failed: connection refused')):
                        with patch('sys.exit', new=MagicMock()) as sysmock:
                            with patch('sys.stderr', new=StringIO()) as mock_err:
                                soyaml.writeYaml(
                                    "/opt/so/saltstack/local/pillar/soc/soc_soc.sls",
                                    {"x": 1})
                                sysmock.assert_called_with(1)

    def test_write_yaml_disk_mode_skips_pg(self):
        import tempfile, os as _os
        with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
            tmp = f.name
        try:
            with patch.object(soyaml, '_BACKEND_MODE', 'disk'):
                with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                    with patch.object(soyaml.so_yaml_postgres, 'write_yaml') as mock_w:
                        soyaml.writeYaml(tmp, {"x": 1})
                        mock_w.assert_not_called()
            with open(tmp) as f:
                self.assertIn('x: 1', f.read())
        finally:
            _os.unlink(tmp)

    def test_purge_postgres_mode_calls_pg_only(self):
        import tempfile, os as _os
        with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
            tmp = f.name
        _os.unlink(tmp)
        with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
            with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                  return_value=True):
                    with patch.object(soyaml.so_yaml_postgres, 'purge_yaml',
                                      return_value=(True, 'ok')) as mock_p:
                        rc = soyaml.purgeFile(tmp)
                        self.assertEqual(rc, 0)
                        mock_p.assert_called_once()

    def test_purge_postgres_mode_failure_returns_nonzero(self):
        with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
            with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
                with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
                                  return_value=True):
                    with patch.object(soyaml.so_yaml_postgres, 'purge_yaml',
                                      return_value=(False, 'pg purge failed: x')):
                        with patch('sys.stderr', new=StringIO()):
                            rc = soyaml.purgeFile(
                                "/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
                            self.assertEqual(rc, 1)
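The read tests above stub `_docker_psql` with a bare JSON document plus trailing newline, or empty output when no row matches. A minimal sketch of how that `psql -tA` output could map to `read_yaml`'s return values (the actual `read_yaml` body is not shown in this excerpt, so this parsing is an assumption inferred from those tests):

```python
import json


def parse_psql_json(stdout):
    # psql -tA prints one unaligned, tuples-only value per row;
    # empty output means the query matched no row.
    text = stdout.strip()
    if not text:
        return None
    return json.loads(text)
```

This is consistent with `test_read_yaml_returns_data` (JSON string in, dict out) and `test_read_yaml_returns_none_when_no_row` (empty string in, `None` out).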
@@ -0,0 +1,320 @@
|
|||||||
|
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||||
|
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||||
|
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||||
|
# Elastic License 2.0.
|
||||||
|
|
||||||
|
"""
|
||||||
|
so_yaml_postgres — Postgres-backed dual-write helpers for so-yaml.py.
|
||||||
|
|
||||||
|
so-yaml.py writes YAML pillar files on disk; this module mirrors those
|
||||||
|
writes into so_pillar.* in so-postgres so ext_pillar and the SOC
|
||||||
|
PostgresConfigstore see the same data. During the postsalt transition
|
||||||
|
disk is canonical; PG writes are best-effort and never fail the disk
|
||||||
|
operation.
|
||||||
|
|
||||||
|
Connection: shells out to `docker exec so-postgres psql -U postgres -d
|
||||||
|
securityonion`. Same pattern so-pillar-import uses; avoids needing a
|
||||||
|
separate DSN config at install time. Performance is fine because so-yaml
|
||||||
|
is invoked from infrequent code paths (setup scripts, so-minion,
|
||||||
|
so-firewall); SOC's hot path uses the in-process pgxpool in
|
||||||
|
PostgresConfigstore, not so-yaml.
|
||||||
|
|
||||||
|
Path-to-row mapping mirrors PostgresConfigstore.locateSetting in
|
||||||
|
securityonion-soc:
|
||||||
|
|
||||||
|
/opt/so/saltstack/local/pillar/<section>/soc_<section>.sls
|
||||||
|
-> scope=global, pillar_path=<section>.soc_<section>
|
||||||
|
/opt/so/saltstack/local/pillar/<section>/adv_<section>.sls
|
||||||
|
-> scope=global, pillar_path=<section>.adv_<section>
|
||||||
|
/opt/so/saltstack/local/pillar/minions/<id>.sls
|
||||||
|
-> scope=minion, minion_id=<id>, pillar_path=minions.<id>
|
||||||
|
/opt/so/saltstack/local/pillar/minions/adv_<id>.sls
|
||||||
|
-> scope=minion, minion_id=<id>, pillar_path=minions.adv_<id>
|
||||||
|
|
||||||
|
Files outside that mapping (notably secrets.sls, postgres/auth.sls,
|
||||||
|
elasticsearch/nodes.sls, etc.) are skipped — they stay disk-only forever
|
||||||
|
or render dynamically and don't belong in PG.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import shlex
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
|
||||||
|
DOCKER_CONTAINER = os.environ.get("SO_PILLAR_PG_CONTAINER", "so-postgres")
|
||||||
|
PG_DATABASE = os.environ.get("SO_PILLAR_PG_DATABASE", "securityonion")
|
||||||
|
PG_USER = os.environ.get("SO_PILLAR_PG_USER", "postgres")
|
||||||
|
|
||||||
|
# File paths whose mutations stay disk-only forever. Mirrors EXCLUDE_*
|
||||||
|
# in so-pillar-import.
|
||||||
|
DISK_ONLY_PATHS = (
|
||||||
|
"/opt/so/saltstack/local/pillar/secrets.sls",
|
||||||
|
"/opt/so/saltstack/local/pillar/postgres/auth.sls",
|
||||||
|
"/opt/so/saltstack/local/pillar/elasticsearch/auth.sls",
|
||||||
|
"/opt/so/saltstack/local/pillar/kibana/secrets.sls",
|
||||||
|
)
|
||||||
|
DISK_ONLY_FRAGMENTS = (
|
||||||
|
"/elasticsearch/nodes.sls",
|
||||||
|
"/redis/nodes.sls",
|
||||||
|
"/kafka/nodes.sls",
|
||||||
|
"/hypervisor/nodes.sls",
|
||||||
|
"/logstash/nodes.sls",
|
||||||
|
"/node_data/ips.sls",
|
||||||
|
"/top.sls",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class SkipPath(Exception):
|
||||||
|
"""Raised when a file path is intentionally not mirrored to PG."""
|
||||||
|
|
||||||
|
|
||||||
|
def is_enabled():
|
||||||
|
"""Public alias for callers that want to probe PG reachability without
|
||||||
|
relying on a leading-underscore private name."""
|
||||||
|
return _is_enabled()
|
||||||
|
|
||||||
|
|
||||||
|
def _is_enabled():
|
||||||
|
"""PG dual-write only fires if so-postgres is reachable. Cheap probe.
|
||||||
|
Returns True when docker exec succeeds, False otherwise. We never
|
||||||
|
want a PG hiccup to fail a disk write on a manager whose Postgres is
|
||||||
|
momentarily unreachable."""
|
||||||
|
try:
|
||||||
|
proc = subprocess.run(
|
||||||
|
["docker", "exec", DOCKER_CONTAINER,
|
||||||
|
"pg_isready", "-h", "127.0.0.1", "-U", PG_USER, "-q"],
|
||||||
|
capture_output=True, timeout=5, check=False,
|
||||||
|
)
|
||||||
|
return proc.returncode == 0
|
||||||
|
except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def locate(path):
    """Translate a so-yaml file path to (scope, role_name, minion_id, pillar_path).

    Raises SkipPath when the file is not part of the PG-managed surface."""
    norm = os.path.normpath(path)

    if norm in DISK_ONLY_PATHS:
        raise SkipPath(f"{path}: explicit disk-only allowlist")
    for frag in DISK_ONLY_FRAGMENTS:
        if frag in norm:
            raise SkipPath(f"{path}: matches disk-only fragment {frag}")

    parent = os.path.basename(os.path.dirname(norm))
    grandparent = os.path.basename(os.path.dirname(os.path.dirname(norm)))
    name = os.path.basename(norm)
    if not name.endswith(".sls"):
        raise SkipPath(f"{path}: not a .sls file")
    stem = name[:-4]

    if parent == "minions":
        if stem.startswith("adv_"):
            mid = stem[4:]
            return ("minion", None, mid, f"minions.adv_{mid}")
        return ("minion", None, stem, f"minions.{stem}")

    # /local/pillar/<section>/<file>.sls
    if grandparent == "pillar" and parent:
        if stem.startswith("soc_") or stem.startswith("adv_"):
            return ("global", None, None, f"{parent}.{stem}")
        raise SkipPath(f"{path}: <section>/{stem}.sls is not a soc_/adv_ file")

    raise SkipPath(f"{path}: unrecognised pillar layout")
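For illustration, the path-to-row mapping performed by locate() can be sketched as a standalone function; the paths below are hypothetical examples, and this sketch returns None instead of raising SkipPath:

```python
# Standalone sketch (illustrative, not the module itself) of the mapping from
# a so-yaml file path to (scope, role_name, minion_id, pillar_path).
import os

def locate_sketch(path):
    norm = os.path.normpath(path)
    name = os.path.basename(norm)
    parent = os.path.basename(os.path.dirname(norm))
    grandparent = os.path.basename(os.path.dirname(os.path.dirname(norm)))
    if not name.endswith(".sls"):
        return None
    stem = name[:-4]
    if parent == "minions":
        # minions/<id>.sls and minions/adv_<id>.sls map to minion scope
        if stem.startswith("adv_"):
            mid = stem[4:]
            return ("minion", None, mid, f"minions.adv_{mid}")
        return ("minion", None, stem, f"minions.{stem}")
    if grandparent == "pillar" and (stem.startswith("soc_") or stem.startswith("adv_")):
        # pillar/<section>/soc_*.sls and adv_*.sls map to global scope
        return ("global", None, None, f"{parent}.{stem}")
    return None

print(locate_sketch("/opt/so/saltstack/local/pillar/minions/adv_n1.sls"))
# ('minion', None, 'n1', 'minions.adv_n1')
```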
def _pg_str(s):
    if s is None:
        return "NULL"
    return "'" + str(s).replace("'", "''") + "'"


def _docker_psql(sql):
    """Run sql via docker exec ... psql. Returns stdout. Caller catches
    exceptions and downgrades to a warning."""
    proc = subprocess.run(
        ["docker", "exec", "-i", DOCKER_CONTAINER,
         "psql", "-U", PG_USER, "-d", PG_DATABASE,
         "-tA", "-q", "-v", "ON_ERROR_STOP=1"],
        input=sql.encode(), capture_output=True, check=False, timeout=30,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.decode(errors="replace") or
                           f"docker exec psql exit {proc.returncode}")
    return proc.stdout.decode(errors="replace")
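The quoting rule in _pg_str() can be exercised in isolation; a minimal sketch:

```python
# Sketch of the SQL-literal quoting _pg_str() applies before values are
# inlined into SQL text: single quotes are doubled and None becomes NULL.
def pg_str(s):
    if s is None:
        return "NULL"
    return "'" + str(s).replace("'", "''") + "'"

print(pg_str("O'Brien"))  # 'O''Brien'
print(pg_str(None))       # NULL
```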
def _conflict_target(scope):
    if scope == "global":
        return "(pillar_path) WHERE scope='global'"
    if scope == "role":
        return "(role_name, pillar_path) WHERE scope='role'"
    if scope == "minion":
        return "(minion_id, pillar_path) WHERE scope='minion'"
    raise ValueError(f"unknown scope {scope!r}")
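The conflict target feeds the upsert that the writer builds; a sketch of how the scope-specific clause slots into the SQL (the 'nids.soc_nids' value is a hypothetical example, and the WHERE predicates assume per-scope partial unique indexes on so_pillar.pillar_entry):

```python
# Sketch of assembling an ON CONFLICT upsert from the per-scope target.
def conflict_target(scope):
    targets = {
        "global": "(pillar_path) WHERE scope='global'",
        "role": "(role_name, pillar_path) WHERE scope='role'",
        "minion": "(minion_id, pillar_path) WHERE scope='minion'",
    }
    return targets[scope]

sql = ("INSERT INTO so_pillar.pillar_entry (scope, pillar_path, data) "
       "VALUES ('global', 'nids.soc_nids', '{}'::jsonb) "
       f"ON CONFLICT {conflict_target('global')} "
       "DO UPDATE SET data = EXCLUDED.data;")
print(sql)
```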
def is_pg_managed(path):
    """True if this path maps to a so_pillar.* row (locate() succeeds).
    Bootstrap and mine-driven files return False — they always live on
    disk regardless of so-yaml's backend mode."""
    try:
        locate(path)
        return True
    except SkipPath:
        return False


def read_yaml(path):
    """Return the content dict stored in so_pillar.pillar_entry for `path`,
    or None when no row exists. Raises SkipPath when `path` is not part of
    the PG-managed surface (caller should read disk in that case).

    Used by so-yaml.py PG-canonical mode so `replace`, `get`, etc. resolve
    against the database rather than a stale (or absent) disk file."""
    if not _is_enabled():
        return None
    scope, role, minion_id, pillar_path = locate(path)

    if scope == "minion":
        sql = ("SELECT data FROM so_pillar.pillar_entry "
               "WHERE scope='minion' "
               f"AND minion_id={_pg_str(minion_id)} "
               f"AND pillar_path={_pg_str(pillar_path)}")
    elif scope == "role":
        sql = ("SELECT data FROM so_pillar.pillar_entry "
               "WHERE scope='role' "
               f"AND role_name={_pg_str(role)} "
               f"AND pillar_path={_pg_str(pillar_path)}")
    else:
        sql = ("SELECT data FROM so_pillar.pillar_entry "
               "WHERE scope='global' "
               f"AND pillar_path={_pg_str(pillar_path)}")

    try:
        out = _docker_psql(sql).strip()
    except Exception:
        return None
    if not out:
        return None
    try:
        return json.loads(out)
    except (ValueError, TypeError):
        return None
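The output handling in read_yaml() relies on psql's -tA mode printing the jsonb column as one unadorned line. A sketch of that parsing step (the sample row content is hypothetical):

```python
# Sketch: with `psql -tA` the jsonb column arrives as a single plain line;
# an empty result means no row, and anything unparsable is treated as None.
import json

def parse_psql_output(out):
    out = out.strip()
    if not out:
        return None
    try:
        return json.loads(out)
    except (ValueError, TypeError):
        return None

print(parse_psql_output('{"nids": {"rules": {"enabled": true}}}\n'))
print(parse_psql_output(""))  # None
```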
def write_yaml(path, content_dict, *, reason="so-yaml dual-write"):
    """Mirror the disk write at `path` (whose content was just rendered as
    `content_dict`) into so_pillar.pillar_entry. Best-effort: any failure
    is swallowed so the caller (so-yaml.py) does not see it as fatal."""
    if not _is_enabled():
        return False, "postgres unreachable"
    try:
        scope, role, minion_id, pillar_path = locate(path)
    except SkipPath as e:
        return False, str(e)

    data_json = json.dumps(content_dict if content_dict is not None else {})
    role_sql = _pg_str(role)
    minion_sql = _pg_str(minion_id)
    reason_sql = _pg_str(reason)
    conflict = _conflict_target(scope)

    sql_parts = []
    if scope == "minion":
        # FK requires the minion row before pillar_entry can reference it.
        sql_parts.append(
            f"INSERT INTO so_pillar.minion (minion_id) VALUES ({minion_sql}) "
            "ON CONFLICT (minion_id) DO NOTHING;"
        )
    sql_parts.append(
        "BEGIN;\n"
        f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);\n"
        "INSERT INTO so_pillar.pillar_entry "
        "(scope, role_name, minion_id, pillar_path, data, change_reason) "
        f"VALUES ({_pg_str(scope)}, {role_sql}, {minion_sql}, "
        f"{_pg_str(pillar_path)}, {_pg_str(data_json)}::jsonb, {reason_sql}) "
        f"ON CONFLICT {conflict} DO UPDATE "
        "SET data = EXCLUDED.data, change_reason = EXCLUDED.change_reason;\n"
        "COMMIT;\n"
    )

    try:
        _docker_psql("\n".join(sql_parts))
    except Exception as e:
        return False, f"pg write failed: {e}"
    return True, "ok"


def purge_yaml(path, *, reason="so-yaml purge"):
    """Mirror the disk file deletion at `path` by deleting the matching
    pillar_entry rows. For minion files also deletes the so_pillar.minion
    row (CASCADE removes pillar_entry + role_member rows)."""
    if not _is_enabled():
        return False, "postgres unreachable"
    try:
        scope, role, minion_id, pillar_path = locate(path)
    except SkipPath as e:
        return False, str(e)

    reason_sql = _pg_str(reason)
    parts = ["BEGIN;",
             f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);"]

    if scope == "minion":
        # If both <id>.sls and adv_<id>.sls are gone the trigger / CASCADE
        # cleans up role_member; otherwise we just remove this one row.
        parts.append(
            "DELETE FROM so_pillar.pillar_entry "
            f"WHERE scope='minion' AND minion_id={_pg_str(minion_id)} "
            f"AND pillar_path={_pg_str(pillar_path)};"
        )
        parts.append(
            f"DELETE FROM so_pillar.minion WHERE minion_id={_pg_str(minion_id)} "
            "AND NOT EXISTS (SELECT 1 FROM so_pillar.pillar_entry "
            f"WHERE minion_id={_pg_str(minion_id)});"
        )
    else:
        parts.append(
            "DELETE FROM so_pillar.pillar_entry "
            f"WHERE scope={_pg_str(scope)} AND pillar_path={_pg_str(pillar_path)};"
        )

    parts.append("COMMIT;")

    try:
        _docker_psql("\n".join(parts))
    except Exception as e:
        return False, f"pg purge failed: {e}"
    return True, "ok"
# CLI for diagnostics. Not exercised by so-yaml.py itself.
def _main(argv):
    import argparse
    ap = argparse.ArgumentParser()
    ap.add_argument("op", choices=("locate", "ping"))
    ap.add_argument("path", nargs="?")
    args = ap.parse_args(argv)

    if args.op == "ping":
        ok = _is_enabled()
        print("ok" if ok else "unreachable")
        return 0 if ok else 1
    if args.op == "locate":
        if not args.path:
            ap.error("locate requires PATH")
        try:
            scope, role, minion_id, pillar_path = locate(args.path)
            print(f"scope={scope} role={role} minion_id={minion_id} pillar_path={pillar_path}")
            return 0
        except SkipPath as e:
            print(f"SKIP: {e}", file=sys.stderr)
            return 2

    return 1


if __name__ == "__main__":
    sys.exit(_main(sys.argv[1:]))
+119 -45

@@ -24,6 +24,14 @@ BACKUPTOPFILE=/opt/so/saltstack/default/salt/top.sls.backup
 SALTUPGRADED=false
 SALT_CLOUD_INSTALLED=false
 SALT_CLOUD_CONFIGURED=false
+# Check if salt-cloud is installed
+if rpm -q salt-cloud &>/dev/null; then
+  SALT_CLOUD_INSTALLED=true
+fi
+# Check if salt-cloud is configured
+if [[ -f /etc/salt/cloud.profiles.d/socloud.conf ]]; then
+  SALT_CLOUD_CONFIGURED=true
+fi
 # used to display messages to the user at the end of soup
 declare -a FINAL_MESSAGE_QUEUE=()

@@ -305,7 +313,7 @@ clone_to_tmp() {
 # Make a temp location for the files
 mkdir -p /tmp/sogh
 cd /tmp/sogh
-SOUP_BRANCH="-b 2.4/main"
+SOUP_BRANCH="-b 3/main"
 if [ -n "$BRANCH" ]; then
   SOUP_BRANCH="-b $BRANCH"
 fi

@@ -363,6 +371,7 @@ preupgrade_changes() {
 echo "Checking to see if changes are needed."

 [[ "$INSTALLEDVERSION" =~ ^2\.4\.21[0-9]+$ ]] && up_to_3.0.0
+[[ "$INSTALLEDVERSION" == "3.0.0" ]] && up_to_3.1.0
 true
 }

@@ -371,6 +380,7 @@ postupgrade_changes() {
 echo "Running post upgrade processes."

 [[ "$POSTVERSION" =~ ^2\.4\.21[0-9]+$ ]] && post_to_3.0.0
+[[ "$POSTVERSION" == "3.0.0" ]] && post_to_3.1.0
 true
 }

@@ -445,7 +455,6 @@ migrate_pcap_to_suricata() {
 }

 up_to_3.0.0() {
-determine_elastic_agent_upgrade
 migrate_pcap_to_suricata

 INSTALLEDVERSION=3.0.0
@@ -469,6 +478,87 @@ post_to_3.0.0() {

 ### 3.0.0 End ###

+### 3.1.0 Scripts ###
+
+elasticsearch_backup_index_templates() {
+  echo "Backing up current elasticsearch index templates in /opt/so/conf/elasticsearch/templates/index/ to /nsm/backup/3.0.0_elasticsearch_index_templates.tar.gz"
+  tar -czf /nsm/backup/3.0.0_elasticsearch_index_templates.tar.gz -C /opt/so/conf/elasticsearch/templates/index/ .
+}
+
+ensure_postgres_local_pillar() {
+  # Postgres was added as a service after 3.0.0, so the new pillar/top.sls
+  # references postgres.soc_postgres / postgres.adv_postgres unconditionally.
+  # Managers upgrading from 3.0.0 have no /opt/so/saltstack/local/pillar/postgres/
+  # (make_some_dirs only runs at install time), so the stubs must be created
+  # here before salt-master restarts against the new top.sls.
+  echo "Ensuring postgres local pillar stubs exist."
+  local dir=/opt/so/saltstack/local/pillar/postgres
+  mkdir -p "$dir"
+  [[ -f "$dir/soc_postgres.sls" ]] || touch "$dir/soc_postgres.sls"
+  [[ -f "$dir/adv_postgres.sls" ]] || touch "$dir/adv_postgres.sls"
+  chown -R socore:socore "$dir"
+}
+
+ensure_postgres_secret() {
+  # On a fresh install, generate_passwords + secrets_pillar seed
+  # secrets:postgres_pass in /opt/so/saltstack/local/pillar/secrets.sls. That
+  # code path is skipped on upgrade (secrets.sls already exists from 3.0.0
+  # with import_pass/influx_pass but no postgres_pass), so the postgres
+  # container's POSTGRES_PASSWORD_FILE and SOC's PG_ADMIN_PASS would be empty
+  # after highstate. Generate one now if missing.
+  local secrets_file=/opt/so/saltstack/local/pillar/secrets.sls
+  if [[ ! -f "$secrets_file" ]]; then
+    echo "WARNING: $secrets_file missing; skipping postgres_pass backfill."
+    return 0
+  fi
+  if so-yaml.py get -r "$secrets_file" secrets.postgres_pass >/dev/null 2>&1; then
+    echo "secrets.postgres_pass already set; leaving as-is."
+    return 0
+  fi
+  echo "Seeding secrets.postgres_pass in $secrets_file."
+  so-yaml.py add "$secrets_file" secrets.postgres_pass "$(get_random_value)"
+  chown socore:socore "$secrets_file"
+}
+
+up_to_3.1.0() {
+  ensure_postgres_local_pillar
+  ensure_postgres_secret
+  determine_elastic_agent_upgrade
+  elasticsearch_backup_index_templates
+  # Clear existing component template state file.
+  rm -f /opt/so/state/esfleet_component_templates.json
+
+  INSTALLEDVERSION=3.1.0
+}
+
+post_to_3.1.0() {
+  /usr/sbin/so-kibana-space-defaults
+  # ensure manager has new version of socloud.conf
+  if [[ $SALT_CLOUD_CONFIGURED == true ]]; then
+    salt-call state.apply salt.cloud.config concurrent=True
+  fi
+
+  # Backfill the Telegraf creds pillar for every accepted minion. so-telegraf-cred
+  # add is idempotent — it no-ops when an entry already exists — so this is safe
+  # to run on every soup. The subsequent state.apply creates/updates the matching
+  # Postgres roles from the reconciled pillar.
+  echo "Reconciling Telegraf Postgres creds for accepted minions."
+  for mid in $(salt-key --out=json --list=accepted 2>/dev/null | jq -r '.minions[]?' 2>/dev/null); do
+    [[ -n "$mid" ]] || continue
+    /usr/sbin/so-telegraf-cred add "$mid" || echo "  warning: so-telegraf-cred add $mid failed" >&2
+  done
+  # Run through the master (not --local) so state compilation uses the
+  # master's configured file_roots; the manager's /etc/salt/minion has no
+  # file_roots of its own and --local would fail with "No matching sls found".
+  salt-call state.apply postgres.telegraf_users queue=True || true
+
+  POSTVERSION=3.1.0
+}
+
+### 3.1.0 End ###
+
 repo_sync() {
 echo "Sync the local repo."
 su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -636,15 +726,6 @@ upgrade_check_salt() {
 upgrade_salt() {
 echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
 echo ""
-# Check if salt-cloud is installed
-if rpm -q salt-cloud &>/dev/null; then
-  SALT_CLOUD_INSTALLED=true
-fi
-# Check if salt-cloud is configured
-if [[ -f /etc/salt/cloud.profiles.d/socloud.conf ]]; then
-  SALT_CLOUD_CONFIGURED=true
-fi
-
 echo "Removing yum versionlock for Salt."
 echo ""
 yum versionlock delete "salt"
@@ -728,12 +809,12 @@ verify_es_version_compatibility() {
 local is_active_intermediate_upgrade=1
 # supported upgrade paths for SO-ES versions
 declare -A es_upgrade_map=(
-  ["8.18.8"]="9.0.8"
+  ["9.0.8"]="9.3.3"
 )

 # Elasticsearch MUST upgrade through these versions
 declare -A es_to_so_version=(
-  ["8.18.8"]="2.4.190-20251024"
+  ["9.0.8"]="3.0.0-20260331"
 )

 # Get current Elasticsearch version

@@ -745,26 +826,17 @@ verify_es_version_compatibility() {
 exit 160
 fi

-if ! target_es_version_raw=$(so-yaml.py get $UPDATE_DIR/salt/elasticsearch/defaults.yaml elasticsearch.version); then
-  # so-yaml.py failed to get the ES version from upgrade versions elasticsearch/defaults.yaml file. Likely they are upgrading to an SO version older than 2.4.110 prior to the ES version pinning and should be OKAY to continue with the upgrade.
-
-  # if so-yaml.py failed to get the ES version AND the version we are upgrading to is newer than 2.4.110 then we should bail
-  if [[ $(cat $UPDATE_DIR/VERSION | cut -d'.' -f3) > 110 ]]; then
+if ! target_es_version=$(so-yaml.py get -r $UPDATE_DIR/salt/elasticsearch/defaults.yaml elasticsearch.version); then
   echo "Couldn't determine the target Elasticsearch version (post soup version) to ensure compatibility with current Elasticsearch version. Exiting"

   exit 160
 fi
-
-  # allow upgrade to version < 2.4.110 without checking ES version compatibility
-  return 0
-else
-  target_es_version=$(sed -n '1p' <<< "$target_es_version_raw")
-fi

 for statefile in "${es_required_version_statefile_base}"-*; do
 [[ -f $statefile ]] || continue

-local es_required_version_statefile_value=$(cat "$statefile")
+local es_required_version_statefile_value
+es_required_version_statefile_value=$(cat "$statefile")

 if [[ "$es_required_version_statefile_value" == "$target_es_version" ]]; then
 echo "Intermediate upgrade to ES $target_es_version is in progress. Skipping Elasticsearch version compatibility check."
@@ -773,7 +845,7 @@ verify_es_version_compatibility() {
 fi

 # use sort to check if es_required_statefile_value is < the current es_version.
-if [[ "$(printf '%s\n' $es_required_version_statefile_value $es_version | sort -V | head -n1)" == "$es_required_version_statefile_value" ]]; then
+if [[ "$(printf '%s\n' "$es_required_version_statefile_value" "$es_version" | sort -V | head -n1)" == "$es_required_version_statefile_value" ]]; then
 rm -f "$statefile"
 continue
 fi

@@ -784,8 +856,7 @@ verify_es_version_compatibility() {

 echo -e "\n##############################################################################################################################\n"
 echo "A previously required intermediate Elasticsearch upgrade was detected. Verifying that all Searchnodes/Heavynodes have successfully upgraded Elasticsearch to $es_required_version_statefile_value before proceeding with soup to avoid potential data loss! This command can take up to an hour to complete."
-timeout --foreground 4000 bash "$es_verification_script" "$es_required_version_statefile_value" "$statefile"
-if [[ $? -ne 0 ]]; then
+if ! timeout --foreground 4000 bash "$es_verification_script" "$es_required_version_statefile_value" "$statefile"; then
 echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"

 echo "A previous required intermediate Elasticsearch upgrade to $es_required_version_statefile_value has yet to successfully complete across the grid. Please allow time for all Searchnodes/Heavynodes to have upgraded Elasticsearch to $es_required_version_statefile_value before running soup again to avoid potential data loss!"

@@ -802,6 +873,7 @@ verify_es_version_compatibility() {
 return 0
 fi

+# shellcheck disable=SC2076 # Do not want a regex here eg usage " 8.18.8 9.0.8 " =~ " 9.0.8 "
 if [[ " ${es_upgrade_map[$es_version]} " =~ " $target_es_version " || "$es_version" == "$target_es_version" ]]; then
 # supported upgrade
 return 0

@@ -810,7 +882,7 @@ verify_es_version_compatibility() {
 if [[ -z "$compatible_versions" ]]; then
 # If current ES version is not explicitly defined in the upgrade map, we know they have an intermediate upgrade to do.
 # We default to the lowest ES version defined in es_to_so_version as $first_es_required_version
-local first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
+first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
 next_step_so_version=${es_to_so_version[$first_es_required_version]}
 required_es_upgrade_version="$first_es_required_version"
 else

@@ -829,7 +901,7 @@ verify_es_version_compatibility() {
 if [[ $is_airgap -eq 0 ]]; then
 run_airgap_intermediate_upgrade
 else
-if [[ ! -z $ISOLOC ]]; then
+if [[ -n $ISOLOC ]]; then
 originally_requested_iso_location="$ISOLOC"
 fi
 # Make sure ISOLOC is not set. Network installs that used soup -f would have ISOLOC set.
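The `sort -V | head -n1` idiom used in the hunks above selects the lowest version string. An illustrative Python equivalent (not part of soup), assuming plain x.y.z versions with an optional -date suffix:

```python
# Sketch of GNU `sort -V | head -n1` for dotted version strings: compare the
# numeric components of each version and return the lowest one.
def version_key(v):
    # "2.4.190-20251024" -> (2, 4, 190); the date suffix is ignored here.
    return tuple(int(p) for p in v.split("-")[0].split("."))

def lowest(versions):
    return min(versions, key=version_key)

print(lowest(["9.0.8", "8.18.8"]))    # 8.18.8
print(lowest(["2.4.190", "2.4.21"]))  # 2.4.21
```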
@@ -861,7 +933,8 @@ wait_for_salt_minion_with_restart() {
 }

 run_airgap_intermediate_upgrade() {
-local originally_requested_so_version=$(cat $UPDATE_DIR/VERSION)
+local originally_requested_so_version
+originally_requested_so_version=$(cat "$UPDATE_DIR/VERSION")
 # preserve ISOLOC value, so we can try to use it post intermediate upgrade
 local originally_requested_iso_location="$ISOLOC"

@@ -873,7 +946,8 @@ run_airgap_intermediate_upgrade() {

 while [[ -z "$next_iso_location" ]] || [[ ! -f "$next_iso_location" && ! -b "$next_iso_location" ]]; do
 # List removable devices if any are present
-local removable_devices=$(lsblk -no PATH,SIZE,TYPE,MOUNTPOINTS,RM | awk '$NF==1')
+local removable_devices
+removable_devices=$(lsblk -no PATH,SIZE,TYPE,MOUNTPOINTS,RM | awk '$NF==1')
 if [[ -n "$removable_devices" ]]; then
 echo "PATH SIZE TYPE MOUNTPOINTS RM"
 echo "$removable_devices"

@@ -894,21 +968,21 @@ run_airgap_intermediate_upgrade() {

 echo "Using $next_iso_location for required intermediary upgrade."
 exec bash <<EOF
-ISOLOC=$next_iso_location soup -y && \
-ISOLOC=$next_iso_location soup -y && \
+ISOLOC="$next_iso_location" soup -y && \
+ISOLOC="$next_iso_location" soup -y && \

 echo -e "\n##############################################################################################################################\n" && \
 echo -e "Verifying Elasticsearch was successfully upgraded to $required_es_upgrade_version across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n" && \

-timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh $required_es_upgrade_version $es_required_version_statefile && \
+timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh "$required_es_upgrade_version" "$es_required_version_statefile" && \

 echo -e "\n##############################################################################################################################\n" && \

 # automatically start the next soup if the original ISO isn't using the same block device we just used
 if [[ -n "$originally_requested_iso_location" ]] && [[ "$originally_requested_iso_location" != "$next_iso_location" ]]; then
 umount /tmp/soagupdate
-ISOLOC=$originally_requested_iso_location soup -y && \
-ISOLOC=$originally_requested_iso_location soup -y
+ISOLOC="$originally_requested_iso_location" soup -y && \
+ISOLOC="$originally_requested_iso_location" soup -y
 else
 echo "Could not automatically start next soup to $originally_requested_so_version. Soup will now exit here at $(cat /etc/soversion)" && \

@@ -924,29 +998,29 @@ run_network_intermediate_upgrade() {
 if [[ -n "$BRANCH" ]]; then
 local originally_requested_so_branch="$BRANCH"
 else
-local originally_requested_so_branch="2.4/main"
+local originally_requested_so_branch="3/main"
 fi

 echo "Starting automated intermediate upgrade to $next_step_so_version."
 echo "After completion, the system will automatically attempt to upgrade to the latest version."
 echo -e "\n##############################################################################################################################\n"
 exec bash << EOF
-BRANCH=$next_step_so_version soup -y && \
-BRANCH=$next_step_so_version soup -y && \
+BRANCH="$next_step_so_version" soup -y && \
+BRANCH="$next_step_so_version" soup -y && \

 echo -e "\n##############################################################################################################################\n" && \
 echo -e "Verifying Elasticsearch was successfully upgraded to $required_es_upgrade_version across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n" && \

-timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh $required_es_upgrade_version $es_required_version_statefile && \
+timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh "$required_es_upgrade_version" "$es_required_version_statefile" && \

 echo -e "\n##############################################################################################################################\n" && \
 if [[ -n "$originally_requested_iso_location" ]]; then
 # nonairgap soup that used -f originally, runs intermediate upgrade using network + BRANCH, later coming back to the original ISO for the last soup
-ISOLOC=$originally_requested_iso_location soup -y && \
-ISOLOC=$originally_requested_iso_location soup -y
+ISOLOC="$originally_requested_iso_location" soup -y && \
+ISOLOC="$originally_requested_iso_location" soup -y
 else
-BRANCH=$originally_requested_so_branch soup -y && \
-BRANCH=$originally_requested_so_branch soup -y
+BRANCH="$originally_requested_so_branch" soup -y && \
+BRANCH="$originally_requested_so_branch" soup -y
 fi
 echo -e "\n##############################################################################################################################\n"
 EOF

@@ -25,8 +25,33 @@ manager_run_es_soc:
       - salt: {{NEWNODE}}_update_mine
 {% endif %}
 
+# so-minion has already added the new minion's entry to telegraf/creds.sls
+# via so-telegraf-cred before this orch fires. Reconcile the Postgres role
+# on the manager so the new minion can authenticate on its first highstate,
+# then refresh the minion's pillar so its telegraf.conf renders with the
+# freshly-written cred.
+manager_create_postgres_telegraf_role:
+  salt.state:
+    - tgt: {{ MANAGER }}
+    - sls:
+      - postgres.telegraf_users
+    - queue: True
+    - require:
+      - salt: {{NEWNODE}}_update_mine
+
+{{NEWNODE}}_refresh_pillar:
+  salt.function:
+    - name: saltutil.refresh_pillar
+    - tgt: {{ NEWNODE }}
+    - kwarg:
+        wait: True
+    - require:
+      - salt: manager_create_postgres_telegraf_role
+
 {{NEWNODE}}_run_highstate:
   salt.state:
     - tgt: {{ NEWNODE }}
     - highstate: True
     - queue: True
+    - require:
+      - salt: {{NEWNODE}}_refresh_pillar
@@ -0,0 +1,112 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+# Driven by the so_pillar_changed reactor. Translates a so_pillar.pillar_entry
+# change into (cache.clear_pillar -> saltutil.refresh_pillar -> state.apply)
+# on the appropriate target.
+#
+# Routing rules live in the DISPATCH map below — one entry per
+# (pillar_path prefix) -> (state sls, role grains). Add new services here
+# rather than wiring more reactors.
+#
+# Idempotent: state.apply is idempotent; if the pillar value didn't actually
+# change anything observable, the affected state runs as a no-op. Bulk imports
+# and replays are safe.
+
+{% set change = salt['pillar.get']('so_pillar_change', {}) %}
+{% set scope = change.get('scope') %}
+{% set role = change.get('role_name') %}
+{% set minion = change.get('minion_id') %}
+{% set changes = change.get('changes', []) %}
+
+{# (pillar_path prefix) -> {sls: <state to apply>, roles: <role grains that run it>}
+   roles are grain values (e.g. 'so-sensor'), used to compute compound targets
+   when the change is global or role-scoped. #}
+{% set DISPATCH = {
+    'suricata.':      {'sls': 'suricata.config',      'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+    'sensor.':        {'sls': 'suricata.config',      'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+    'zeek.':          {'sls': 'zeek.config',          'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+    'stenographer.':  {'sls': 'stenographer.config',  'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+    'pcap.':          {'sls': 'pcap.config',          'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+    'logstash.':      {'sls': 'logstash.config',      'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-receiver']},
+    'redis.':         {'sls': 'redis.config',         'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
+    'kafka.':         {'sls': 'kafka.config',         'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-receiver', 'so-searchnode']},
+    'elasticsearch.': {'sls': 'elasticsearch.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-searchnode', 'so-heavynode', 'so-standalone']},
+    'kibana.':        {'sls': 'kibana.config',        'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
+    'soc.':           {'sls': 'soc.config',           'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
+    'telegraf.':      {'sls': 'telegraf.config',      'roles': ['*']},
+    'fleet.':         {'sls': 'fleet.config',         'roles': ['so-fleet']},
+    'strelka.':       {'sls': 'strelka.config',       'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
+} %}
+
+{# Collect a deduplicated set of (sls, target_kind) actions. target_kind is
+   either 'minion:<id>' (scope=minion) or 'roles:so-x,so-y' (scope=role/global). #}
+{% set actions = {} %}
+
+{% for c in changes %}
+  {% set path = c.get('pillar_path', '') %}
+  {% for prefix, action in DISPATCH.items() %}
+    {% if path.startswith(prefix) %}
+      {% set sls = action['sls'] %}
+      {% if scope == 'minion' and minion %}
+        {% set key = sls ~ '|minion|' ~ minion %}
+        {% set _ = actions.update({key: {'sls': sls, 'tgt': minion, 'tgt_type': 'glob'}}) %}
+      {% else %}
+        {% set role_targets = action['roles'] %}
+        {% if '*' in role_targets %}
+          {% set tgt = '*' %}
+          {% set tgt_type = 'glob' %}
+        {% else %}
+          {% set tgt = ('I@role:' ~ role_targets|join(' or I@role:')) %}
+          {% set tgt_type = 'compound' %}
+        {% endif %}
+        {% set key = sls ~ '|' ~ tgt %}
+        {% set _ = actions.update({key: {'sls': sls, 'tgt': tgt, 'tgt_type': tgt_type}}) %}
+      {% endif %}
+    {% endif %}
+  {% endfor %}
+{% endfor %}
+
+{% if actions %}
+
+{% for key, action in actions.items() %}
+{% set safe_id = loop.index0 | string %}
+
+so_pillar_reload_clear_cache_{{ safe_id }}:
+  salt.runner:
+    - name: cache.clear_pillar
+    - tgt: '{{ action.tgt }}'
+    - tgt_type: '{{ action.tgt_type }}'
+
+so_pillar_reload_refresh_pillar_{{ safe_id }}:
+  salt.function:
+    - name: saltutil.refresh_pillar
+    - tgt: '{{ action.tgt }}'
+    - tgt_type: '{{ action.tgt_type }}'
+    - kwarg:
+        wait: True
+    - require:
+      - salt: so_pillar_reload_clear_cache_{{ safe_id }}
+
+so_pillar_reload_apply_state_{{ safe_id }}:
+  salt.state:
+    - tgt: '{{ action.tgt }}'
+    - tgt_type: '{{ action.tgt_type }}'
+    - sls:
+      - {{ action.sls }}
+    - queue: True
+    - require:
+      - salt: so_pillar_reload_refresh_pillar_{{ safe_id }}
+{% endfor %}
+
+{% else %}
+
+{# No DISPATCH entry matched. Pillar still gets refreshed so any other states
+   read fresh values, but no service-specific reload is invoked. #}
+so_pillar_reload_unmapped_path_noop:
+  test.nop
+{% do salt.log.info('orch.so_pillar_reload: no dispatch match for %s' % changes) %}
+
+{% endif %}
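The dispatch/dedup logic the Jinja above implements can be sketched in plain Python to make its behavior easy to test. This is illustrative only: the `DISPATCH` map is trimmed to two entries and `plan_actions` is a stand-in name, nothing here ships in the orch.

```python
# Mirrors the orch template: match pillar_path prefixes against DISPATCH,
# then deduplicate (sls, target) pairs so one change set yields one reload
# per service/target combination.
DISPATCH = {
    'suricata.': {'sls': 'suricata.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
    'telegraf.': {'sls': 'telegraf.config', 'roles': ['*']},
}

def plan_actions(changes, scope, minion=None):
    actions = {}
    for c in changes:
        path = c.get('pillar_path', '')
        for prefix, action in DISPATCH.items():
            if not path.startswith(prefix):
                continue
            sls = action['sls']
            if scope == 'minion' and minion:
                # Per-minion change: target exactly that minion.
                actions[f"{sls}|minion|{minion}"] = {
                    'sls': sls, 'tgt': minion, 'tgt_type': 'glob'}
            else:
                roles = action['roles']
                if '*' in roles:
                    tgt, tgt_type = '*', 'glob'
                else:
                    # Compound target ORing every role grain that runs this sls.
                    tgt = 'I@role:' + ' or I@role:'.join(roles)
                    tgt_type = 'compound'
                actions[f"{sls}|{tgt}"] = {
                    'sls': sls, 'tgt': tgt, 'tgt_type': tgt_type}
    return actions
```

Two changes under the same prefix collapse into a single action, which is why bulk imports stay cheap.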
@@ -0,0 +1,37 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls in allowed_states %}
+
+{% set DIGITS = "1234567890" %}
+{% set LOWERCASE = "qwertyuiopasdfghjklzxcvbnm" %}
+{% set UPPERCASE = "QWERTYUIOPASDFGHJKLZXCVBNM" %}
+{% set SYMBOLS = "~!@#^&*()-_=+[]|;:,.<>?" %}
+{% set CHARS = DIGITS~LOWERCASE~UPPERCASE~SYMBOLS %}
+{% set so_postgres_user_pass = salt['pillar.get']('postgres:auth:users:so_postgres_user:pass', salt['random.get_str'](72, chars=CHARS)) %}
+
+# Admin cred only. Per-minion Telegraf creds live in telegraf/creds.sls,
+# managed by /usr/sbin/so-telegraf-cred (called from so-minion).
+postgres_auth_pillar:
+  file.managed:
+    - name: /opt/so/saltstack/local/pillar/postgres/auth.sls
+    - mode: 640
+    - reload_pillar: True
+    - contents: |
+        postgres:
+          auth:
+            users:
+              so_postgres_user:
+                user: so_postgres
+                pass: "{{ so_postgres_user_pass }}"
+    - show_changes: False
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
@@ -0,0 +1,111 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls.split('.')[0] in allowed_states %}
+{% from 'postgres/map.jinja' import PGMERGED %}
+
+# Postgres Setup
+postgresconfdir:
+  file.directory:
+    - name: /opt/so/conf/postgres
+    - user: 939
+    - group: 939
+    - makedirs: True
+
+postgressecretsdir:
+  file.directory:
+    - name: /opt/so/conf/postgres/secrets
+    - user: 939
+    - group: 939
+    - mode: 700
+    - require:
+      - file: postgresconfdir
+
+postgresdatadir:
+  file.directory:
+    - name: /nsm/postgres
+    - user: 939
+    - group: 939
+    - makedirs: True
+
+postgreslogdir:
+  file.directory:
+    - name: /opt/so/log/postgres
+    - user: 939
+    - group: 939
+    - makedirs: True
+
+postgresinitdir:
+  file.directory:
+    - name: /opt/so/conf/postgres/init
+    - user: 939
+    - group: 939
+    - require:
+      - file: postgresconfdir
+
+postgresinitusers:
+  file.managed:
+    - name: /opt/so/conf/postgres/init/init-users.sh
+    - source: salt://postgres/files/init-users.sh
+    - user: 939
+    - group: 939
+    - mode: 755
+
+postgresconf:
+  file.managed:
+    - name: /opt/so/conf/postgres/postgresql.conf
+    - source: salt://postgres/files/postgresql.conf.jinja
+    - user: 939
+    - group: 939
+    - template: jinja
+    - defaults:
+        PGMERGED: {{ PGMERGED }}
+
+postgreshba:
+  file.managed:
+    - name: /opt/so/conf/postgres/pg_hba.conf
+    - source: salt://postgres/files/pg_hba.conf
+    - user: 939
+    - group: 939
+    - mode: 640
+
+postgres_super_secret:
+  file.managed:
+    - name: /opt/so/conf/postgres/secrets/postgres_password
+    - user: 939
+    - group: 939
+    - mode: 600
+    - contents_pillar: 'secrets:postgres_pass'
+    - show_changes: False
+    - require:
+      - file: postgressecretsdir
+
+postgres_app_secret:
+  file.managed:
+    - name: /opt/so/conf/postgres/secrets/so_postgres_pass
+    - user: 939
+    - group: 939
+    - mode: 600
+    - contents_pillar: 'postgres:auth:users:so_postgres_user:pass'
+    - show_changes: False
+    - require:
+      - file: postgressecretsdir
+
+postgres_sbin:
+  file.recurse:
+    - name: /usr/sbin
+    - source: salt://postgres/tools/sbin
+    - user: root
+    - group: root
+    - file_mode: 755
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
@@ -0,0 +1,19 @@
+postgres:
+  enabled: True
+  telegraf:
+    retention_days: 14
+  config:
+    listen_addresses: '*'
+    port: 5432
+    max_connections: 100
+    shared_buffers: 256MB
+    ssl: 'on'
+    ssl_cert_file: '/conf/postgres.crt'
+    ssl_key_file: '/conf/postgres.key'
+    ssl_ca_file: '/conf/ca.crt'
+    hba_file: '/conf/pg_hba.conf'
+    log_destination: 'stderr'
+    logging_collector: 'off'
+    log_min_messages: 'warning'
+    shared_preload_libraries: pg_cron
+    cron.database_name: so_telegraf
@@ -0,0 +1,33 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls.split('.')[0] in allowed_states %}
+
+include:
+  - postgres.sostatus
+
+so-postgres:
+  docker_container.absent:
+    - force: True
+
+so-postgres_so-status.disabled:
+  file.comment:
+    - name: /opt/so/conf/so-status/so-status.conf
+    - regex: ^so-postgres$
+
+so_postgres_backup:
+  cron.absent:
+    - name: /usr/sbin/so-postgres-backup > /dev/null 2>&1
+    - identifier: so_postgres_backup
+    - user: root
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
@@ -0,0 +1,109 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+{% from 'allowed_states.map.jinja' import allowed_states %}
+{% if sls.split('.')[0] in allowed_states %}
+{% from 'vars/globals.map.jinja' import GLOBALS %}
+{% from 'docker/docker.map.jinja' import DOCKERMERGED %}
+{% set SO_POSTGRES_USER = salt['pillar.get']('postgres:auth:users:so_postgres_user:user', 'so_postgres') %}
+
+include:
+  - postgres.auth
+  - postgres.ssl
+  - postgres.config
+  - postgres.sostatus
+  - postgres.telegraf_users
+
+so-postgres:
+  docker_container.running:
+    - image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-postgres:{{ GLOBALS.so_version }}
+    - hostname: so-postgres
+    - networks:
+      - sobridge:
+        - ipv4_address: {{ DOCKERMERGED.containers['so-postgres'].ip }}
+    - port_bindings:
+      {% for BINDING in DOCKERMERGED.containers['so-postgres'].port_bindings %}
+      - {{ BINDING }}
+      {% endfor %}
+    - environment:
+      - POSTGRES_DB=securityonion
+      # Passwords are delivered via mounted 0600 secret files, not plaintext env vars.
+      # The upstream postgres image resolves POSTGRES_PASSWORD_FILE; entrypoint.sh and
+      # init-users.sh resolve SO_POSTGRES_PASS_FILE the same way.
+      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
+      - SO_POSTGRES_USER={{ SO_POSTGRES_USER }}
+      - SO_POSTGRES_PASS_FILE=/run/secrets/so_postgres_pass
+      {% if DOCKERMERGED.containers['so-postgres'].extra_env %}
+      {% for XTRAENV in DOCKERMERGED.containers['so-postgres'].extra_env %}
+      - {{ XTRAENV }}
+      {% endfor %}
+      {% endif %}
+    - binds:
+      - /opt/so/log/postgres/:/log:rw
+      - /nsm/postgres:/var/lib/postgresql/data:rw
+      - /opt/so/conf/postgres/postgresql.conf:/conf/postgresql.conf:ro
+      - /opt/so/conf/postgres/pg_hba.conf:/conf/pg_hba.conf:ro
+      - /opt/so/conf/postgres/secrets:/run/secrets:ro
+      - /opt/so/conf/postgres/init/init-users.sh:/docker-entrypoint-initdb.d/init-users.sh:ro
+      - /etc/pki/postgres.crt:/conf/postgres.crt:ro
+      - /etc/pki/postgres.key:/conf/postgres.key:ro
+      - /etc/pki/tls/certs/intca.crt:/conf/ca.crt:ro
+      {% if DOCKERMERGED.containers['so-postgres'].custom_bind_mounts %}
+      {% for BIND in DOCKERMERGED.containers['so-postgres'].custom_bind_mounts %}
+      - {{ BIND }}
+      {% endfor %}
+      {% endif %}
+    {% if DOCKERMERGED.containers['so-postgres'].extra_hosts %}
+    - extra_hosts:
+      {% for XTRAHOST in DOCKERMERGED.containers['so-postgres'].extra_hosts %}
+      - {{ XTRAHOST }}
+      {% endfor %}
+    {% endif %}
+    {% if DOCKERMERGED.containers['so-postgres'].ulimits %}
+    - ulimits:
+      {% for ULIMIT in DOCKERMERGED.containers['so-postgres'].ulimits %}
+      - {{ ULIMIT.name }}={{ ULIMIT.soft }}:{{ ULIMIT.hard }}
+      {% endfor %}
+    {% endif %}
+    - watch:
+      - file: postgresconf
+      - file: postgreshba
+      - file: postgresinitusers
+      - file: postgres_super_secret
+      - file: postgres_app_secret
+      - x509: postgres_crt
+      - x509: postgres_key
+    - require:
+      - file: postgresconf
+      - file: postgreshba
+      - file: postgresinitusers
+      - file: postgres_super_secret
+      - file: postgres_app_secret
+      - x509: postgres_crt
+      - x509: postgres_key
+
+delete_so-postgres_so-status.disabled:
+  file.uncomment:
+    - name: /opt/so/conf/so-status/so-status.conf
+    - regex: ^so-postgres$
+
+so_postgres_backup:
+  cron.present:
+    - name: /usr/sbin/so-postgres-backup > /dev/null 2>&1
+    - identifier: so_postgres_backup
+    - user: root
+    - minute: '5'
+    - hour: '0'
+    - daymonth: '*'
+    - month: '*'
+    - dayweek: '*'
+
+{% else %}
+
+{{sls}}_state_not_allowed:
+  test.fail_without_changes:
+    - name: {{sls}}_state_not_allowed
+
+{% endif %}
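The `*_FILE` convention referenced in the environment block above resolves in a fixed order: a direct env var wins, otherwise the mounted secret file is read. A small sketch of that resolution (illustrative, with a hypothetical `resolve_secret` helper; this is not code from the image):

```python
import os

def resolve_secret(name: str):
    """Resolve NAME from the environment, falling back to NAME_FILE.

    Mirrors the POSTGRES_PASSWORD_FILE / SO_POSTGRES_PASS_FILE convention:
    if NAME is set, use it; otherwise read the file NAME_FILE points at
    (e.g. a 0600 file bind-mounted under /run/secrets).
    """
    value = os.environ.get(name)
    if value:
        return value
    path = os.environ.get(name + "_FILE")
    if path and os.path.isfile(path):
        with open(path) as fh:
            return fh.read().strip()
    return None
```

Keeping the password in a root-owned file rather than the container environment means `docker inspect` and `/proc/<pid>/environ` never expose it.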
@@ -0,0 +1,34 @@
+#!/bin/bash
+set -e
+
+# Create or update application user for SOC platform access
+# This script runs on first database initialization via docker-entrypoint-initdb.d
+# The password is properly escaped to handle special characters
+if [ -z "${SO_POSTGRES_PASS:-}" ] && [ -n "${SO_POSTGRES_PASS_FILE:-}" ] && [ -r "$SO_POSTGRES_PASS_FILE" ]; then
+    SO_POSTGRES_PASS="$(< "$SO_POSTGRES_PASS_FILE")"
+fi
+psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
+	DO \$\$
+	BEGIN
+	  IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '${SO_POSTGRES_USER}') THEN
+	    EXECUTE format('CREATE ROLE %I WITH LOGIN PASSWORD %L', '${SO_POSTGRES_USER}', '${SO_POSTGRES_PASS}');
+	  ELSE
+	    EXECUTE format('ALTER ROLE %I WITH PASSWORD %L', '${SO_POSTGRES_USER}', '${SO_POSTGRES_PASS}');
+	  END IF;
+	END
+	\$\$;
+	GRANT ALL PRIVILEGES ON DATABASE "$POSTGRES_DB" TO "$SO_POSTGRES_USER";
+	-- Lock the SOC database down at the connect layer; PUBLIC gets CONNECT
+	-- by default, which would let per-minion telegraf roles open sessions
+	-- here. They have no schema/table grants inside so reads fail, but
+	-- revoking CONNECT closes the soft edge entirely.
+	REVOKE CONNECT ON DATABASE "$POSTGRES_DB" FROM PUBLIC;
+	GRANT CONNECT ON DATABASE "$POSTGRES_DB" TO "$SO_POSTGRES_USER";
+EOSQL
+
+# Bootstrap the Telegraf metrics database. Per-minion roles + schemas are
+# reconciled on every state.apply by postgres/telegraf_users.sls; this block
+# only ensures the shared database exists on first initialization.
+if ! psql -U "$POSTGRES_USER" -tAc "SELECT 1 FROM pg_database WHERE datname='so_telegraf'" | grep -q 1; then
+    psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -c "CREATE DATABASE so_telegraf"
+fi
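The `DO` block above leans on `format('%I', ...)` and `format('%L', ...)` so the role name and password are quoted as a SQL identifier and literal respectively. Those quoting rules reduce to doubling the relevant quote character, sketched here (simplified; PostgreSQL's real `quote_literal` also handles `E''` escapes and NULLs):

```python
def quote_ident(name: str) -> str:
    # PostgreSQL %I: wrap in double quotes, doubling embedded double quotes.
    return '"' + name.replace('"', '""') + '"'

def quote_literal(value: str) -> str:
    # PostgreSQL %L (simplified): wrap in single quotes, doubling embedded ones.
    return "'" + value.replace("'", "''") + "'"

# Hypothetical hostile password: the doubled quote keeps it inside the literal.
stmt = "ALTER ROLE {} WITH PASSWORD {}".format(
    quote_ident("so_postgres"),
    quote_literal("p'ss;--word"),
)
```

This is why the generated statements stay intact even when the credential contains characters that would otherwise terminate the string.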
@@ -0,0 +1,16 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+#
+# Managed by Salt — do not edit by hand.
+# Client authentication config: only local (Unix socket) connections and TLS-wrapped TCP
+# connections are accepted. Plain-text `host ...` lines are intentionally omitted so a
+# misconfigured client with sslmode=disable cannot negotiate a cleartext session.
+
+# Local connections (Unix socket, container-internal) use peer/trust.
+local   all   all                 trust
+
+# TCP connections MUST use TLS (hostssl) and authenticate with SCRAM.
+hostssl all   all   0.0.0.0/0     scram-sha-256
+hostssl all   all   ::/0          scram-sha-256
@@ -0,0 +1,8 @@
+{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+   or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+   https://securityonion.net/license; you may not use this file except in compliance with the
+   Elastic License 2.0. #}
+
+{% for key, value in PGMERGED.config.items() %}
+{{ key }} = '{{ value | string | replace("'", "''") }}'
+{% endfor %}
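The template above emits every merged config key as a single-quoted GUC value, doubling any embedded single quotes so the line stays a valid postgresql.conf setting. The same transform in Python, for illustration (a `render_pg_conf` stand-in, not part of the repo):

```python
def render_pg_conf(config: dict) -> str:
    # Same transform as postgresql.conf.jinja: stringify each value and emit
    # it single-quoted, with embedded single quotes doubled.
    lines = []
    for key, value in config.items():
        lines.append("{} = '{}'".format(key, str(value).replace("'", "''")))
    return "\n".join(lines)

conf = render_pg_conf({'port': 5432, 'listen_addresses': '*'})
```

Quoting every value uniformly (including numerics, which PostgreSQL accepts quoted) keeps the template free of per-key type handling.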
@@ -0,0 +1,124 @@
+-- so_pillar schema: queryable, versioned, audited pillar config store.
+-- Replaces flat-file Salt pillar consumed via salt.pillar.postgres ext_pillar.
+-- Idempotent. Run via salt/postgres/schema_pillar.sls inside the so-postgres container.
+
+CREATE SCHEMA IF NOT EXISTS so_pillar;
+
+CREATE TABLE IF NOT EXISTS so_pillar.scope (
+    scope_kind  text PRIMARY KEY,
+    precedence  int  NOT NULL,
+    description text
+);
+
+INSERT INTO so_pillar.scope(scope_kind, precedence, description) VALUES
+    ('global', 100, 'Applies to every minion'),
+    ('role',   200, 'Applies to minions whose minion_id matches a top.sls compound role match'),
+    ('minion', 300, 'Applies only to a single minion (per-minion overlay)')
+ON CONFLICT (scope_kind) DO NOTHING;
+
+CREATE TABLE IF NOT EXISTS so_pillar.role (
+    role_name   text PRIMARY KEY,
+    match_kind  text NOT NULL CHECK (match_kind IN ('compound','grain','glob','list')),
+    match_expr  text NOT NULL,
+    description text
+);
+
+CREATE TABLE IF NOT EXISTS so_pillar.minion (
+    minion_id   text PRIMARY KEY,
+    node_type   text,
+    hostname    text,
+    extra_roles text[] NOT NULL DEFAULT '{}',
+    created_at  timestamptz NOT NULL DEFAULT now(),
+    updated_at  timestamptz NOT NULL DEFAULT now()
+);
+
+CREATE TABLE IF NOT EXISTS so_pillar.role_member (
+    role_name text NOT NULL REFERENCES so_pillar.role(role_name) ON DELETE CASCADE,
+    minion_id text NOT NULL REFERENCES so_pillar.minion(minion_id) ON DELETE CASCADE,
+    source    text NOT NULL DEFAULT 'computed' CHECK (source IN ('computed','manual','imported')),
+    PRIMARY KEY (role_name, minion_id)
+);
+
+CREATE INDEX IF NOT EXISTS ix_role_member_minion ON so_pillar.role_member(minion_id);
+
+-- pillar_entry holds the actual data. as_json=True ext_pillar reads `data` directly.
+CREATE TABLE IF NOT EXISTS so_pillar.pillar_entry (
+    id            bigserial PRIMARY KEY,
+    scope         text NOT NULL REFERENCES so_pillar.scope(scope_kind),
+    role_name     text REFERENCES so_pillar.role(role_name) ON DELETE CASCADE,
+    minion_id     text REFERENCES so_pillar.minion(minion_id) ON DELETE CASCADE,
+    pillar_path   text NOT NULL,
+    data          jsonb NOT NULL,
+    is_secret     boolean NOT NULL DEFAULT false,
+    sort_key      int NOT NULL DEFAULT 0,
+    version       int NOT NULL DEFAULT 1,
+    updated_at    timestamptz NOT NULL DEFAULT now(),
+    updated_by    text NOT NULL DEFAULT current_user,
+    change_reason text,
+    CONSTRAINT pillar_entry_scope_target CHECK (
+        (scope='global' AND role_name IS NULL AND minion_id IS NULL)
+        OR (scope='role'   AND role_name IS NOT NULL AND minion_id IS NULL)
+        OR (scope='minion' AND role_name IS NULL     AND minion_id IS NOT NULL)
+    ),
+    -- Reserved namespaces that MUST stay rendered from SLS (mine-driven). Nothing
+    -- under these prefixes is allowed in the database; the merge logic relies on
+    -- ext_pillar leaving these subtrees alone.
+    CONSTRAINT pillar_entry_reserved_paths CHECK (
+        pillar_path NOT LIKE 'elasticsearch.nodes%'
+        AND pillar_path NOT LIKE 'redis.nodes%'
+        AND pillar_path NOT LIKE 'kafka.nodes%'
+        AND pillar_path NOT LIKE 'hypervisor.nodes%'
+        AND pillar_path NOT LIKE 'logstash.nodes%'
+        AND pillar_path NOT LIKE 'node_data.ips%'
+    )
+);
+
+CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_global ON so_pillar.pillar_entry(pillar_path)
+    WHERE scope = 'global';
+CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_role ON so_pillar.pillar_entry(role_name, pillar_path)
+    WHERE scope = 'role';
+CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_minion ON so_pillar.pillar_entry(minion_id, pillar_path)
+    WHERE scope = 'minion';
+
+CREATE INDEX IF NOT EXISTS ix_pillar_entry_minion_hot ON so_pillar.pillar_entry(minion_id)
+    WHERE scope = 'minion';
+CREATE INDEX IF NOT EXISTS ix_pillar_entry_role_hot ON so_pillar.pillar_entry(role_name)
+    WHERE scope = 'role';
+
+-- Append-only audit log for every change to pillar_entry. No FK to entry so DELETE
+-- history survives the row removal.
+CREATE TABLE IF NOT EXISTS so_pillar.pillar_entry_history (
+    history_id    bigserial PRIMARY KEY,
+    entry_id      bigint,
+    op            text NOT NULL CHECK (op IN ('INSERT','UPDATE','DELETE')),
+    scope         text NOT NULL,
+    role_name     text,
+    minion_id     text,
+    pillar_path   text NOT NULL,
+    old_data      jsonb,
+    new_data      jsonb,
+    is_secret     boolean,
+    version       int,
+    changed_at    timestamptz NOT NULL DEFAULT now(),
+    changed_by    text NOT NULL DEFAULT current_user,
+    change_reason text
+);
+
+CREATE INDEX IF NOT EXISTS ix_pillar_history_entry ON so_pillar.pillar_entry_history(entry_id, changed_at DESC);
+CREATE INDEX IF NOT EXISTS ix_pillar_history_minion ON so_pillar.pillar_entry_history(minion_id, changed_at DESC);
+CREATE INDEX IF NOT EXISTS ix_pillar_history_role ON so_pillar.pillar_entry_history(role_name, changed_at DESC);
+
+-- Drift watch — populated by a pg_cron job that re-renders the on-disk SLS files
+-- and compares them to pillar_entry. Cleared once cutover completes.
+CREATE TABLE IF NOT EXISTS so_pillar.drift_log (
+    id          bigserial PRIMARY KEY,
+    scope       text NOT NULL,
+    role_name   text,
+    minion_id   text,
+    pillar_path text NOT NULL,
+    disk_data   jsonb,
+    db_data     jsonb,
+    detected_at timestamptz NOT NULL DEFAULT now()
+);
+
+CREATE INDEX IF NOT EXISTS ix_drift_log_detected ON so_pillar.drift_log(detected_at DESC);
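The scope precedences seeded above (global=100, role=200, minion=300) only matter at merge time: higher-precedence layers deep-merge over lower ones, leaf by leaf. A minimal sketch with hypothetical pillar values (the real merge happens inside the master's ext_pillar machinery, not in this snippet):

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    # Recursive dict merge: overlay wins at leaves, nested dicts merge.
    out = dict(base)
    for k, v in overlay.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

# Layers applied in ascending precedence: global, then role, then minion.
layers = [
    {'suricata': {'enabled': True, 'pins': 4}},   # global (100)
    {'suricata': {'pins': 8}},                    # role   (200)
    {'suricata': {'enabled': False}},             # minion (300)
]
merged = {}
for layer in layers:
    merged = deep_merge(merged, layer)
# merged == {'suricata': {'enabled': False, 'pins': 8}}
```

The per-minion overlay flips `enabled` without disturbing the role-level `pins` override, which is exactly the behavior the three-scope design is after.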
@@ -0,0 +1,49 @@
-- Views consumed by the Salt master's salt.pillar.postgres ext_pillar with
-- as_json=True. Each view exposes data ordered by (sort_key, pillar_path) so
-- the deep-merge in ext_pillar resolves precedence deterministically.
--
-- ext_pillar always binds exactly one parameter to the query: (minion_id,).
-- Master-config queries reference these views and add WHERE clauses, e.g.:
--   SELECT data FROM so_pillar.v_pillar_role WHERE minion_id = %s
--   SELECT data FROM so_pillar.v_pillar_minion WHERE minion_id = %s
-- For v_pillar_global the binding is satisfied with `WHERE %s IS NOT NULL`.

CREATE OR REPLACE VIEW so_pillar.v_pillar_global AS
SELECT pillar_path, sort_key, data
FROM so_pillar.pillar_entry
WHERE scope = 'global'
  AND is_secret = false
ORDER BY sort_key, pillar_path;

-- Role view exposes minion_id so the master-config WHERE clause can filter to
-- the rows that apply to the requesting minion. JOIN to role_member fans out
-- one row per (role assignment, pillar entry) tuple.
CREATE OR REPLACE VIEW so_pillar.v_pillar_role AS
SELECT rm.minion_id,
       pe.role_name,
       pe.pillar_path,
       pe.sort_key,
       pe.data
FROM so_pillar.pillar_entry pe
JOIN so_pillar.role_member rm ON rm.role_name = pe.role_name
WHERE pe.scope = 'role'
  AND pe.is_secret = false;

CREATE OR REPLACE VIEW so_pillar.v_pillar_minion AS
SELECT minion_id,
       pillar_path,
       sort_key,
       data
FROM so_pillar.pillar_entry
WHERE scope = 'minion'
  AND is_secret = false;

-- v_pillar_secrets is filled in by 004_secrets.sql once pgcrypto is available;
-- placeholder here returns no rows so initial schema deploy succeeds even on a
-- container that has not yet loaded pgcrypto.
CREATE OR REPLACE VIEW so_pillar.v_pillar_secrets AS
SELECT NULL::text  AS minion_id,
       NULL::text  AS pillar_path,
       NULL::int   AS sort_key,
       '{}'::jsonb AS data
WHERE false;
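The comments above say ordering by (sort_key, pillar_path) makes the ext_pillar deep-merge deterministic. A minimal Python sketch of that idea — not the actual ext_pillar implementation; `deep_merge` and `merge_pillar_rows` are hypothetical helpers that model merging view rows in the views' ORDER BY order, with later rows winning on conflicts:

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay wins on scalar conflicts."""
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def merge_pillar_rows(rows):
    """rows: iterable of (pillar_path, sort_key, data) tuples as the
    v_pillar_* views return them. Sorting reproduces the ORDER BY, so the
    merged result is the same no matter how the rows arrive."""
    merged = {}
    for _path, _key, data in sorted(rows, key=lambda r: (r[1], r[0])):
        merged = deep_merge(merged, data)
    return merged

rows = [
    ("zeek.config",     20, {"zeek": {"lb_procs": 4}}),
    ("global.defaults", 10, {"zeek": {"lb_procs": 1, "enabled": True}}),
]
print(merge_pillar_rows(rows))  # → {'zeek': {'lb_procs': 4, 'enabled': True}}
```

Because the sort key is part of the data model rather than insertion order, two masters rendering pillars from the same rows always agree.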
@@ -0,0 +1,120 @@
|
|||||||
|
-- Audit trigger: every INSERT/UPDATE/DELETE on so_pillar.pillar_entry writes a
|
||||||
|
-- row to pillar_entry_history. Captures the actor (current_user), reason
|
||||||
|
-- (passed via SET LOCAL so_pillar.change_reason), and full before/after data.
|
||||||
|
|
||||||
|
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_audit() RETURNS trigger
|
||||||
|
LANGUAGE plpgsql AS $fn$
|
||||||
|
DECLARE
|
||||||
|
v_reason text := current_setting('so_pillar.change_reason', true);
|
||||||
|
BEGIN
|
||||||
|
IF (TG_OP = 'INSERT') THEN
|
||||||
|
INSERT INTO so_pillar.pillar_entry_history(
|
||||||
|
entry_id, op, scope, role_name, minion_id, pillar_path,
|
||||||
|
old_data, new_data, is_secret, version, changed_by, change_reason)
|
||||||
|
VALUES (NEW.id, 'INSERT', NEW.scope, NEW.role_name, NEW.minion_id, NEW.pillar_path,
|
||||||
|
NULL, NEW.data, NEW.is_secret, NEW.version, NEW.updated_by, v_reason);
|
||||||
|
RETURN NEW;
|
||||||
|
ELSIF (TG_OP = 'UPDATE') THEN
|
||||||
|
IF OLD.data IS DISTINCT FROM NEW.data
|
||||||
|
OR OLD.is_secret IS DISTINCT FROM NEW.is_secret THEN
|
||||||
|
INSERT INTO so_pillar.pillar_entry_history(
|
||||||
|
entry_id, op, scope, role_name, minion_id, pillar_path,
|
||||||
|
old_data, new_data, is_secret, version, changed_by, change_reason)
|
||||||
|
VALUES (NEW.id, 'UPDATE', NEW.scope, NEW.role_name, NEW.minion_id, NEW.pillar_path,
|
||||||
|
OLD.data, NEW.data, NEW.is_secret, NEW.version, NEW.updated_by, v_reason);
|
||||||
|
END IF;
|
||||||
|
RETURN NEW;
|
||||||
|
ELSIF (TG_OP = 'DELETE') THEN
|
||||||
|
INSERT INTO so_pillar.pillar_entry_history(
|
||||||
|
entry_id, op, scope, role_name, minion_id, pillar_path,
|
||||||
|
old_data, new_data, is_secret, version, changed_by, change_reason)
|
||||||
|
VALUES (OLD.id, 'DELETE', OLD.scope, OLD.role_name, OLD.minion_id, OLD.pillar_path,
|
||||||
|
OLD.data, NULL, OLD.is_secret, OLD.version, current_user, v_reason);
|
||||||
|
RETURN OLD;
|
||||||
|
END IF;
|
||||||
|
RETURN NULL;
|
||||||
|
END
|
||||||
|
$fn$;
|
||||||
|
|
||||||
|
DROP TRIGGER IF EXISTS pillar_entry_audit ON so_pillar.pillar_entry;
|
||||||
|
CREATE TRIGGER pillar_entry_audit
|
||||||
|
AFTER INSERT OR UPDATE OR DELETE ON so_pillar.pillar_entry
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_pillar_entry_audit();
|
||||||
|
|
||||||
|
-- updated_at + version maintenance: bump version on every UPDATE that changes data.
|
||||||
|
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_versioning() RETURNS trigger
|
||||||
|
LANGUAGE plpgsql AS $fn$
|
||||||
|
BEGIN
|
||||||
|
IF (TG_OP = 'UPDATE') THEN
|
||||||
|
IF OLD.data IS DISTINCT FROM NEW.data
|
||||||
|
OR OLD.is_secret IS DISTINCT FROM NEW.is_secret THEN
|
||||||
|
NEW.version := OLD.version + 1;
|
||||||
|
NEW.updated_at := now();
|
||||||
|
ELSE
|
||||||
|
NEW.version := OLD.version;
|
||||||
|
NEW.updated_at := OLD.updated_at;
|
||||||
|
END IF;
|
||||||
|
END IF;
|
||||||
|
RETURN NEW;
|
||||||
|
END
|
||||||
|
$fn$;
|
||||||
|
|
||||||
|
DROP TRIGGER IF EXISTS pillar_entry_versioning ON so_pillar.pillar_entry;
|
||||||
|
CREATE TRIGGER pillar_entry_versioning
|
||||||
|
BEFORE UPDATE ON so_pillar.pillar_entry
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_pillar_entry_versioning();
|
||||||
|
|
||||||
|
-- Recompute role_member rows for a minion based on node_type.
|
||||||
|
-- Compound matchers in pillar/top.sls are pure suffix patterns of the form
|
||||||
|
-- '*_<rolename>' plus the special multi-role 'manager/managersearch/managerhype'
|
||||||
|
-- bucket. node_type is split on common dashes/underscores; any token that
|
||||||
|
-- matches a known role_name produces a role_member row.
|
||||||
|
CREATE OR REPLACE FUNCTION so_pillar.fn_recompute_role_members(p_minion_id text)
|
||||||
|
RETURNS void LANGUAGE plpgsql AS $fn$
|
||||||
|
DECLARE
|
||||||
|
v_node_type text;
|
||||||
|
v_extra text[];
|
||||||
|
v_role text;
|
||||||
|
BEGIN
|
||||||
|
SELECT node_type, extra_roles INTO v_node_type, v_extra
|
||||||
|
FROM so_pillar.minion WHERE minion_id = p_minion_id;
|
||||||
|
|
||||||
|
IF v_node_type IS NULL THEN
|
||||||
|
RETURN;
|
||||||
|
END IF;
|
||||||
|
|
||||||
|
DELETE FROM so_pillar.role_member
|
||||||
|
WHERE minion_id = p_minion_id AND source = 'computed';
|
||||||
|
|
||||||
|
-- Main role from node_type.
|
||||||
|
IF EXISTS (SELECT 1 FROM so_pillar.role WHERE role_name = lower(v_node_type)) THEN
|
||||||
|
INSERT INTO so_pillar.role_member(role_name, minion_id, source)
|
||||||
|
VALUES (lower(v_node_type), p_minion_id, 'computed')
|
||||||
|
ON CONFLICT DO NOTHING;
|
||||||
|
END IF;
|
||||||
|
|
||||||
|
-- Extra roles supplied by the importer / reactor for compound matchers
|
||||||
|
-- that need to apply multiple buckets (e.g. managersearch also gets the
|
||||||
|
-- 'manager' bucket per top.sls line 36 grouping).
|
||||||
|
FOREACH v_role IN ARRAY COALESCE(v_extra, '{}'::text[]) LOOP
|
||||||
|
IF EXISTS (SELECT 1 FROM so_pillar.role WHERE role_name = v_role) THEN
|
||||||
|
INSERT INTO so_pillar.role_member(role_name, minion_id, source)
|
||||||
|
VALUES (v_role, p_minion_id, 'computed')
|
||||||
|
ON CONFLICT DO NOTHING;
|
||||||
|
END IF;
|
||||||
|
END LOOP;
|
||||||
|
END
|
||||||
|
$fn$;
|
||||||
|
|
||||||
|
CREATE OR REPLACE FUNCTION so_pillar.fn_minion_after_change() RETURNS trigger
|
||||||
|
LANGUAGE plpgsql AS $fn$
|
||||||
|
BEGIN
|
||||||
|
PERFORM so_pillar.fn_recompute_role_members(COALESCE(NEW.minion_id, OLD.minion_id));
|
||||||
|
RETURN COALESCE(NEW, OLD);
|
||||||
|
END
|
||||||
|
$fn$;
|
||||||
|
|
||||||
|
DROP TRIGGER IF EXISTS minion_role_sync ON so_pillar.minion;
|
||||||
|
CREATE TRIGGER minion_role_sync
|
||||||
|
AFTER INSERT OR UPDATE OF node_type, extra_roles ON so_pillar.minion
|
||||||
|
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_minion_after_change();
|
||||||
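The membership logic in fn_recompute_role_members above can be summarised outside plpgsql. A toy Python model — `KNOWN_ROLES` stands in for the so_pillar.role table and `computed_roles` is a hypothetical name, not code from the repo — showing the two sources of membership (lowered node_type, plus importer-supplied extra roles) and that unknown names are silently skipped:

```python
# Stand-in for the so_pillar.role table (hypothetical subset).
KNOWN_ROLES = {"manager", "managersearch", "managerhype", "sensor", "searchnode"}

def computed_roles(node_type, extra_roles=()):
    """Model of fn_recompute_role_members: the lowered node_type becomes the
    main role if present in the role table; extra_roles add further buckets;
    names absent from the role table are dropped without error."""
    roles = set()
    if node_type and node_type.lower() in KNOWN_ROLES:
        roles.add(node_type.lower())
    roles.update(r for r in extra_roles if r in KNOWN_ROLES)
    return sorted(roles)

# managersearch also carries the 'manager' bucket, supplied as an extra role.
print(computed_roles("MANAGERSEARCH", extra_roles=["manager", "bogus"]))
# → ['manager', 'managersearch']
```

The DELETE ... WHERE source = 'computed' step in the real function makes the recompute idempotent: manually added role_member rows survive, computed ones are rebuilt from scratch.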
@@ -0,0 +1,130 @@
-- pgcrypto-backed secret storage for pillar_entry rows where is_secret = true.
-- The plaintext value is encrypted with a symmetric key held in a server-side
-- GUC (so_pillar.master_key) which is set per-role via ALTER ROLE so the key
-- never touches a flat file readable by Salt itself.

CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA public;

-- Encrypt a JSONB value using the configured master key. Stored as a JSONB
-- envelope {"_enc": "<armored ciphertext>"} so the same column type is reused.
CREATE OR REPLACE FUNCTION so_pillar.fn_encrypt_jsonb(p_value jsonb)
RETURNS jsonb LANGUAGE plpgsql AS $fn$
DECLARE
    v_key text := current_setting('so_pillar.master_key', true);
BEGIN
    IF v_key IS NULL OR v_key = '' THEN
        RAISE EXCEPTION 'so_pillar.master_key GUC not configured';
    END IF;
    RETURN jsonb_build_object(
        '_enc',
        encode(pgp_sym_encrypt(p_value::text, v_key), 'base64')
    );
END
$fn$;

-- Decrypt the envelope produced by fn_encrypt_jsonb. SECURITY DEFINER so callers
-- with no direct access to pgcrypto/master_key can still pull plaintext via the
-- v_pillar_secrets view.
CREATE OR REPLACE FUNCTION so_pillar.fn_decrypt_jsonb(p_envelope jsonb)
RETURNS jsonb LANGUAGE plpgsql SECURITY DEFINER AS $fn$
DECLARE
    v_key text := current_setting('so_pillar.master_key', true);
    v_ct  text;
BEGIN
    IF v_key IS NULL OR v_key = '' THEN
        RAISE EXCEPTION 'so_pillar.master_key GUC not configured';
    END IF;
    v_ct := p_envelope->>'_enc';
    IF v_ct IS NULL THEN
        RETURN p_envelope; -- not encrypted; pass through
    END IF;
    RETURN pgp_sym_decrypt(decode(v_ct, 'base64'), v_key)::jsonb;
END
$fn$;

REVOKE ALL ON FUNCTION so_pillar.fn_decrypt_jsonb(jsonb) FROM PUBLIC;

-- Secrets view consumed by ext_pillar. Decrypts at the boundary so Salt sees
-- plaintext JSONB. Filters the rows to those that apply to the requesting
-- minion via current_setting, since views can't take parameters and ext_pillar
-- can only bind one parameter per query.
--
-- Master-config query: SELECT data FROM so_pillar.v_pillar_secrets WHERE %s IS NOT NULL
-- The %s satisfies the bound parameter; the view itself reads the minion_id
-- from a session GUC set by a small wrapper function (see fn_pillar_secrets).
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_secrets(p_minion_id text)
RETURNS TABLE(data jsonb)
LANGUAGE sql STABLE SECURITY DEFINER AS $fn$
    SELECT so_pillar.fn_decrypt_jsonb(pe.data)
    FROM so_pillar.pillar_entry pe
    WHERE pe.is_secret = true
      AND (   pe.scope = 'global'
           OR (pe.scope = 'role'
               AND pe.role_name IN (
                   SELECT role_name FROM so_pillar.role_member
                   WHERE minion_id = p_minion_id))
           OR (pe.scope = 'minion' AND pe.minion_id = p_minion_id))
    ORDER BY pe.sort_key, pe.pillar_path;
$fn$;

-- Replace the placeholder view from 002 with a parameterised version. Master
-- config query becomes:
--   SELECT data FROM so_pillar.fn_pillar_secrets(%s) AS s
DROP VIEW IF EXISTS so_pillar.v_pillar_secrets;
CREATE OR REPLACE VIEW so_pillar.v_pillar_secrets AS
SELECT NULL::text  AS minion_id,
       NULL::text  AS pillar_path,
       NULL::int   AS sort_key,
       '{}'::jsonb AS data
WHERE false;
COMMENT ON VIEW so_pillar.v_pillar_secrets IS
'Deprecated placeholder; use SELECT data FROM so_pillar.fn_pillar_secrets(minion_id) instead';

-- Convenience helper for so-yaml.py and the importer to set a secret without
-- ever exposing the master_key to the caller. SECURITY DEFINER means the
-- caller does not need read access to so_pillar.master_key.
CREATE OR REPLACE FUNCTION so_pillar.fn_set_secret(
    p_scope         text,
    p_role_name     text,
    p_minion_id     text,
    p_pillar_path   text,
    p_value         jsonb,
    p_change_reason text DEFAULT NULL
) RETURNS bigint LANGUAGE plpgsql SECURITY DEFINER AS $fn$
DECLARE
    v_envelope jsonb := so_pillar.fn_encrypt_jsonb(p_value);
    v_id       bigint;
BEGIN
    PERFORM set_config('so_pillar.change_reason',
                       COALESCE(p_change_reason, 'fn_set_secret'),
                       true);

    INSERT INTO so_pillar.pillar_entry(
        scope, role_name, minion_id, pillar_path, data, is_secret, change_reason)
    VALUES (p_scope, p_role_name, p_minion_id, p_pillar_path, v_envelope, true, p_change_reason)
    ON CONFLICT (pillar_path) WHERE scope='global' DO UPDATE
        SET data = EXCLUDED.data, is_secret = true, change_reason = EXCLUDED.change_reason
    RETURNING id INTO v_id;

    IF v_id IS NULL THEN
        UPDATE so_pillar.pillar_entry
        SET data = v_envelope, is_secret = true, change_reason = p_change_reason
        WHERE scope = p_scope
          AND COALESCE(role_name,'') = COALESCE(p_role_name,'')
          AND COALESCE(minion_id,'') = COALESCE(p_minion_id,'')
          AND pillar_path = p_pillar_path
        RETURNING id INTO v_id;

        IF v_id IS NULL THEN
            INSERT INTO so_pillar.pillar_entry(
                scope, role_name, minion_id, pillar_path, data, is_secret, change_reason)
            VALUES (p_scope, p_role_name, p_minion_id, p_pillar_path, v_envelope, true, p_change_reason)
            RETURNING id INTO v_id;
        END IF;
    END IF;

    RETURN v_id;
END
$fn$;

REVOKE ALL ON FUNCTION so_pillar.fn_set_secret(text,text,text,text,jsonb,text) FROM PUBLIC;
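The envelope convention above is simple enough to model in a few lines. A Python sketch of the {"_enc": ...} shape and the pass-through rule in fn_decrypt_jsonb — to stay runnable, base64 stands in for pgp_sym_encrypt/pgp_sym_decrypt, so this is an illustration of the envelope format only, NOT encryption:

```python
import base64
import json

def encrypt_jsonb(value: dict, key: str) -> dict:
    """Model of fn_encrypt_jsonb's envelope shape. base64 replaces
    pgp_sym_encrypt purely for illustration; it provides no secrecy."""
    if not key:
        raise RuntimeError("so_pillar.master_key GUC not configured")
    ct = base64.b64encode(json.dumps(value).encode()).decode()
    return {"_enc": ct}

def decrypt_jsonb(envelope: dict, key: str) -> dict:
    """Model of fn_decrypt_jsonb, including the plaintext pass-through:
    an object without an '_enc' key is returned unchanged."""
    if not key:
        raise RuntimeError("so_pillar.master_key GUC not configured")
    ct = envelope.get("_enc")
    if ct is None:
        return envelope  # not encrypted; pass through
    return json.loads(base64.b64decode(ct))

env = encrypt_jsonb({"api_key": "hunter2"}, key="master-key")
assert set(env) == {"_enc"}  # only the envelope key is ever stored
assert decrypt_jsonb(env, "master-key") == {"api_key": "hunter2"}
assert decrypt_jsonb({"plain": 1}, "master-key") == {"plain": 1}
```

Reusing the jsonb column for both plaintext and envelope rows is what lets fn_decrypt_jsonb sit safely in front of mixed data: it only transforms rows that actually carry the `_enc` marker.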
@@ -0,0 +1,39 @@
-- Seed the so_pillar.role table with the role buckets defined in pillar/top.sls.
-- The match_expr column preserves the original Salt compound expression purely
-- as documentation; PG-side membership is materialised in role_member.
-- Idempotent: ON CONFLICT lets re-application leave existing rows untouched.

INSERT INTO so_pillar.role(role_name, match_kind, match_expr, description) VALUES
  ('manager',       'compound', '*_manager or *_managersearch or *_managerhype',
   'Manager-class node. Includes managersearch and managerhype subtypes.'),
  ('managersearch', 'compound', '*_managersearch',
   'Combined manager + searchnode role.'),
  ('managerhype',   'compound', '*_managerhype',
   'Combined manager + hypervisor role.'),
  ('sensor',        'compound', '*_sensor',
   'Sensor node running zeek/suricata/strelka.'),
  ('eval',          'compound', '*_eval',
   'Single-node evaluation install (manager + sensor + storage on one host).'),
  ('standalone',    'compound', '*_standalone',
   'Single-node production install (no distributed cluster).'),
  ('heavynode',     'compound', '*_heavynode',
   'Distributed sensor node carrying its own logstash + ES.'),
  ('idh',           'compound', '*_idh',
   'Intrusion-detection-honeypot node.'),
  ('searchnode',    'compound', '*_searchnode',
   'Distributed Elasticsearch search node.'),
  ('receiver',      'compound', '*_receiver',
   'Kafka receiver node.'),
  ('import',        'compound', '*_import',
   'Single-node import-only install.'),
  ('fleet',         'compound', '*_fleet',
   'Elastic Fleet server node.'),
  ('hypervisor',    'compound', '*_hypervisor',
   'Hypervisor host (libvirt). Hosts VM minions.'),
  ('desktop',       'compound', '*_desktop',
   'Desktop minion (no firewall/nginx pillars apply).'),
  ('not_desktop',   'compound', '* and not *_desktop',
   'Pseudo-role; matches every minion that is not a desktop. Used for global firewall/nginx.'),
  ('libvirt',       'grain',    'salt-cloud:driver:libvirt',
   'Pseudo-role; matches any minion with grain salt-cloud.driver = libvirt.')
ON CONFLICT (role_name) DO NOTHING;
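The match_expr column is documentation, but the expressions above use only two shapes: glob terms joined by `or`, and the `* and not *_desktop` exclusion. A toy Python evaluator for just that subset (not Salt's real compound matcher; `matches` is a hypothetical helper) makes the semantics concrete:

```python
from fnmatch import fnmatch

def matches(minion_id: str, match_expr: str) -> bool:
    """Evaluate the subset of Salt compound syntax used by the seed rows:
    glob terms joined by ' or ', plus the single '<glob> and not <glob>'
    exclusion form. A sketch, not Salt's compound matcher."""
    if " and not " in match_expr:
        pos, neg = match_expr.split(" and not ", 1)
        return fnmatch(minion_id, pos.strip()) and not fnmatch(minion_id, neg.strip())
    return any(fnmatch(minion_id, term.strip()) for term in match_expr.split(" or "))

assert matches("so01_managersearch", "*_manager or *_managersearch or *_managerhype")
assert not matches("so01_sensor", "*_manager or *_managersearch or *_managerhype")
assert matches("so02_sensor", "* and not *_desktop")
assert not matches("kiosk_desktop", "* and not *_desktop")
```

Because membership is materialised in role_member rather than re-evaluated per request, the database never needs to run these expressions at pillar-render time.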
@@ -0,0 +1,106 @@
-- Roles + Row-Level Security policies for the so_pillar schema.
-- Three roles:
--   so_pillar_master       — connected by salt-master ext_pillar. Read-only.
--                            RLS forces it to skip is_secret rows; reads
--                            encrypted secrets only via fn_pillar_secrets().
--   so_pillar_writer       — connected by so-yaml dual-write and the SOC
--                            PostgresConfigstore. Read+write on pillar_entry,
--                            minion, role_member.
--   so_pillar_secret_owner — owns the master encryption key GUC; sole role
--                            allowed to call fn_set_secret directly. Other
--                            writers reach this function only via grants.
--
-- The existing app role so_postgres_user (created by init-users.sh) is granted
-- membership in so_pillar_writer so SOC keeps using its existing connection but
-- inherits pillar-write capability.

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_master') THEN
        CREATE ROLE so_pillar_master NOLOGIN;
    END IF;
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_writer') THEN
        CREATE ROLE so_pillar_writer NOLOGIN;
    END IF;
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_secret_owner') THEN
        CREATE ROLE so_pillar_secret_owner NOLOGIN;
    END IF;
END
$$;

GRANT USAGE ON SCHEMA so_pillar TO so_pillar_master, so_pillar_writer, so_pillar_secret_owner;

-- Read access for ext_pillar through the views only.
GRANT SELECT ON so_pillar.v_pillar_global,
               so_pillar.v_pillar_role,
               so_pillar.v_pillar_minion
TO so_pillar_master;
GRANT EXECUTE ON FUNCTION so_pillar.fn_pillar_secrets(text) TO so_pillar_master;

-- Engine reads + drains the change queue from the salt-master process. It
-- needs SELECT to find unprocessed rows and UPDATE to mark them processed.
-- The queue contains only locator metadata (no pillar data), so the master
-- role's existing privilege footprint is unchanged in practice.
GRANT SELECT, UPDATE ON so_pillar.change_queue TO so_pillar_master;
GRANT USAGE ON SEQUENCE so_pillar.change_queue_id_seq TO so_pillar_master;
-- Writer needs INSERT (the trigger runs as table owner, so this is just for
-- direct testing / manual replays from psql).
GRANT INSERT ON so_pillar.change_queue TO so_pillar_writer;

-- Writer needs CRUD on pillar_entry/minion/role_member plus access to seed tables.
GRANT SELECT, INSERT, UPDATE, DELETE
ON so_pillar.pillar_entry,
   so_pillar.minion,
   so_pillar.role_member
TO so_pillar_writer;
GRANT SELECT ON so_pillar.role, so_pillar.scope TO so_pillar_writer;
GRANT SELECT, INSERT, UPDATE, DELETE ON so_pillar.drift_log TO so_pillar_writer;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA so_pillar TO so_pillar_writer;
GRANT SELECT ON so_pillar.pillar_entry_history TO so_pillar_writer;

-- Secret owner can call fn_set_secret directly; writer goes through it via the
-- function's SECURITY DEFINER attribute, which executes as the function owner.
GRANT EXECUTE ON FUNCTION so_pillar.fn_set_secret(text,text,text,text,jsonb,text)
TO so_pillar_writer, so_pillar_secret_owner;

-- so_postgres_user (SOC's existing app user, created by init-users.sh) inherits
-- writer privilege so the PostgresConfigstore in SOC can mutate pillars without
-- a second connection pool. Inheritance is the PostgreSQL default (NOINHERIT
-- must be explicit), so this just works.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = current_setting('so_pillar.app_role', true))
    THEN
        EXECUTE format('GRANT so_pillar_writer TO %I',
                       current_setting('so_pillar.app_role', true));
    ELSIF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_postgres_user') THEN
        GRANT so_pillar_writer TO so_postgres_user;
    END IF;
END
$$;

-- RLS on pillar_entry: master sees only non-secret rows. Writer sees all
-- (it must, to UPDATE secret rows when so-yaml replaces them). Secret rows
-- still require fn_decrypt_jsonb to read plaintext.
ALTER TABLE so_pillar.pillar_entry ENABLE ROW LEVEL SECURITY;
ALTER TABLE so_pillar.pillar_entry FORCE ROW LEVEL SECURITY;

DROP POLICY IF EXISTS pillar_entry_master_read ON so_pillar.pillar_entry;
DROP POLICY IF EXISTS pillar_entry_writer_all ON so_pillar.pillar_entry;
DROP POLICY IF EXISTS pillar_entry_owner_all ON so_pillar.pillar_entry;

CREATE POLICY pillar_entry_master_read ON so_pillar.pillar_entry
    FOR SELECT TO so_pillar_master
    USING (NOT is_secret);

CREATE POLICY pillar_entry_writer_all ON so_pillar.pillar_entry
    FOR ALL TO so_pillar_writer
    USING (true)
    WITH CHECK (true);

CREATE POLICY pillar_entry_owner_all ON so_pillar.pillar_entry
    FOR ALL TO so_pillar_secret_owner
    USING (true)
    WITH CHECK (true);

-- minion / role_member do not need RLS — they hold no secrets.
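Row-Level Security can be thought of as a per-role row filter: each policy's USING clause is a predicate evaluated against every candidate row. A toy Python model of the three SELECT-side policies above (hypothetical names; not how PostgreSQL implements RLS internally):

```python
ROWS = [
    {"pillar_path": "elasticsearch.config", "is_secret": False},
    {"pillar_path": "elasticsearch.auth",   "is_secret": True},
]

# role -> USING predicate, mirroring the CREATE POLICY statements above.
POLICIES = {
    "so_pillar_master":       lambda row: not row["is_secret"],
    "so_pillar_writer":       lambda row: True,
    "so_pillar_secret_owner": lambda row: True,
}

def visible_rows(role):
    """A row is returned only when the role's USING clause is true for it;
    rows failing the predicate are silently absent, not an error."""
    return [r["pillar_path"] for r in ROWS if POLICIES[role](r)]

assert visible_rows("so_pillar_master") == ["elasticsearch.config"]
assert visible_rows("so_pillar_writer") == ["elasticsearch.config", "elasticsearch.auth"]
```

Note the layering this gives: even though the writer's predicate returns every row, a secret row's data column is still only the `_enc` envelope; plaintext requires fn_decrypt_jsonb.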
@@ -0,0 +1,43 @@
-- Drift detection + retention via pg_cron. Optional — the schema_pillar.sls
-- state guards this file behind the postgres:so_pillar:drift_check_enabled
-- pillar flag because pg_cron may not be loaded on every install.

CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Retention: trim pillar_entry_history older than a year. Adjustable via the
-- so_pillar.history_retention_days GUC (default 365 if unset).
CREATE OR REPLACE FUNCTION so_pillar.fn_history_retain()
RETURNS void LANGUAGE plpgsql AS $fn$
DECLARE
    v_days int := COALESCE(current_setting('so_pillar.history_retention_days', true)::int, 365);
BEGIN
    DELETE FROM so_pillar.pillar_entry_history
    WHERE changed_at < (now() - (v_days::text || ' days')::interval);
END
$fn$;

-- Drift retention: keep two weeks of drift_log.
CREATE OR REPLACE FUNCTION so_pillar.fn_drift_retain()
RETURNS void LANGUAGE plpgsql AS $fn$
BEGIN
    DELETE FROM so_pillar.drift_log
    WHERE detected_at < (now() - interval '14 days');
END
$fn$;

-- pg_cron schedules (idempotent — unschedule any existing same-named job first).
DO $$
DECLARE
    v_jobid bigint;
BEGIN
    SELECT jobid INTO v_jobid FROM cron.job WHERE jobname = 'so_pillar_history_retain';
    IF v_jobid IS NOT NULL THEN PERFORM cron.unschedule(v_jobid); END IF;
    PERFORM cron.schedule('so_pillar_history_retain', '15 3 * * *',
                          'SELECT so_pillar.fn_history_retain();');

    SELECT jobid INTO v_jobid FROM cron.job WHERE jobname = 'so_pillar_drift_retain';
    IF v_jobid IS NOT NULL THEN PERFORM cron.unschedule(v_jobid); END IF;
    PERFORM cron.schedule('so_pillar_drift_retain', '20 3 * * *',
                          'SELECT so_pillar.fn_drift_retain();');
END
$$;
@@ -0,0 +1,77 @@
-- pg_notify-driven change fan-out for so_pillar.pillar_entry.
--
-- Two layers:
--   1. so_pillar.change_queue        — durable, drained by the salt-master
--                                      engine. Survives engine downtime,
--                                      de-duplicated by id, processed once.
--   2. pg_notify('so_pillar_change') — wakeup signal. Payload is the
--                                      change_queue row id and locator
--                                      (no secret data — channels are
--                                      snoopable by anyone with LISTEN).
--
-- The salt-master engine LISTENs on the channel for low-latency wakeup,
-- then SELECTs unprocessed change_queue rows so a missed notification
-- (engine restart, network blip) self-heals on the next event.

CREATE TABLE IF NOT EXISTS so_pillar.change_queue (
    id           bigserial PRIMARY KEY,
    scope        text NOT NULL,
    role_name    text,
    minion_id    text,
    pillar_path  text NOT NULL,
    op           text NOT NULL CHECK (op IN ('INSERT','UPDATE','DELETE')),
    enqueued_at  timestamptz NOT NULL DEFAULT now(),
    processed_at timestamptz
);

-- Hot index for the engine's drain query.
CREATE INDEX IF NOT EXISTS ix_change_queue_unprocessed
    ON so_pillar.change_queue (id)
    WHERE processed_at IS NULL;

-- Retention index: pg_cron job in 007 sweeps processed rows older than 7d.
CREATE INDEX IF NOT EXISTS ix_change_queue_processed_at
    ON so_pillar.change_queue (processed_at)
    WHERE processed_at IS NOT NULL;

CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_notify()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
    v_row record;
    v_id  bigint;
BEGIN
    IF TG_OP = 'DELETE' THEN
        v_row := OLD;
    ELSE
        v_row := NEW;
    END IF;

    INSERT INTO so_pillar.change_queue
        (scope, role_name, minion_id, pillar_path, op)
    VALUES
        (v_row.scope, v_row.role_name, v_row.minion_id, v_row.pillar_path, TG_OP)
    RETURNING id INTO v_id;

    -- Payload is the queue id + locator only. Engine joins back to
    -- pillar_entry if it needs the data — keeps secrets off the wire.
    PERFORM pg_notify('so_pillar_change', json_build_object(
        'queue_id',    v_id,
        'scope',       v_row.scope,
        'role_name',   v_row.role_name,
        'minion_id',   v_row.minion_id,
        'pillar_path', v_row.pillar_path,
        'op',          TG_OP
    )::text);

    RETURN NULL;
END;
$$;

DROP TRIGGER IF EXISTS tg_pillar_entry_notify ON so_pillar.pillar_entry;
CREATE TRIGGER tg_pillar_entry_notify
    AFTER INSERT OR UPDATE OR DELETE
    ON so_pillar.pillar_entry
    FOR EACH ROW
    EXECUTE FUNCTION so_pillar.fn_pillar_entry_notify();
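The two-layer design above (durable queue + lossy wakeup) is what makes missed notifications harmless. A Python sketch of the consumer side — `build_payload` mirrors the json_build_object() call in the trigger, and `drain` is a hypothetical model of the engine's drain step, not the actual salt-master engine code:

```python
import json

def build_payload(queue_id, scope, role_name, minion_id, pillar_path, op):
    """Mirror of the trigger's NOTIFY payload: queue id + locator only,
    never the pillar data itself (channels are snoopable via LISTEN)."""
    return json.dumps({
        "queue_id": queue_id, "scope": scope, "role_name": role_name,
        "minion_id": minion_id, "pillar_path": pillar_path, "op": op,
    })

def drain(queue):
    """Model of the engine's drain step: regardless of which NOTIFY woke it,
    process every row with processed_at IS NULL, so a missed notification
    self-heals on the next wakeup."""
    processed = []
    for row in queue:
        if row.get("processed_at") is None:
            processed.append(row["id"])
            row["processed_at"] = "now()"
    return processed

queue = [{"id": 1, "processed_at": "earlier"},
         {"id": 2, "processed_at": None},
         {"id": 3, "processed_at": None}]
assert drain(queue) == [2, 3]       # one wakeup drains every pending row

payload = json.loads(build_payload(7, "minion", None, "so01_sensor",
                                   "zeek.config", "UPDATE"))
assert "data" not in payload and payload["queue_id"] == 7
```

Draining by `processed_at IS NULL` rather than by payload id is the self-healing property the comments describe: the NOTIFY is merely a hint to wake up sooner.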
@@ -0,0 +1,14 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'postgres/map.jinja' import PGMERGED %}

include:
{% if PGMERGED.enabled %}
  - postgres.enabled
  - postgres.schema_pillar
{% else %}
  - postgres.disabled
{% endif %}
@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
   or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
   https://securityonion.net/license; you may not use this file except in compliance with the
   Elastic License 2.0. #}

{% import_yaml 'postgres/defaults.yaml' as PGDEFAULTS %}
{% set PGMERGED = salt['pillar.get']('postgres', PGDEFAULTS.postgres, merge=True) %}
@@ -0,0 +1,140 @@
|
|||||||
|
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
|
||||||
|
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
|
||||||
|
# https://securityonion.net/license; you may not use this file except in compliance with the
|
||||||
|
# Elastic License 2.0.
|
||||||
|
|
||||||
|
{% from 'allowed_states.map.jinja' import allowed_states %}
|
||||||
|
{% if sls.split('.')[0] in allowed_states %}
|
||||||
|
{% from 'vars/globals.map.jinja' import GLOBALS %}
|
||||||
|
|
||||||
|
# Deploys the so_pillar schema (tables, views, audit triggers, secrets,
|
||||||
|
# RLS, pg_cron retention) inside the so-postgres container. Idempotent —
|
||||||
|
# every CREATE / GRANT is wrapped in IF NOT EXISTS / ON CONFLICT or DO
|
||||||
|
# blocks so re-running the state is a no-op when the schema is current.
|
||||||
|
#
|
||||||
|
# Gated on the postgres:so_pillar:enabled feature flag (default false).
|
||||||
|
# Flip to true once the postsalt branch is ready to bring ext_pillar live.
|
||||||
|
|
||||||
|
include:
|
||||||
|
- postgres.enabled
|
||||||
|
|
||||||
|
{% set so_pillar_enabled = salt['pillar.get']('postgres:so_pillar:enabled', False) %}
|
||||||
|
{% if so_pillar_enabled %}
|
||||||
|
|
||||||
|
{% set drift_enabled = salt['pillar.get']('postgres:so_pillar:drift_check_enabled', False) %}
|
||||||
|
{% set schema_dir = '/opt/so/saltstack/default/salt/postgres/files/schema/pillar' %}
|
||||||
|
|
||||||
|
# Wait for postgres to actually accept TCP connections. Same idiom as
|
||||||
|
# telegraf_users.sls. The docker_container.running state returns earlier than
|
||||||
|
# the database is ready on first init.
|
||||||
|
so_pillar_postgres_wait_ready:
|
||||||
|
cmd.run:
|
||||||
|
- name: |
|
||||||
|
for i in $(seq 1 60); do
|
||||||
|
if docker exec so-postgres pg_isready -h 127.0.0.1 -U postgres -q 2>/dev/null; then
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
sleep 2
|
||||||
|
done
|
||||||
|
echo "so-postgres did not accept TCP connections within 120s" >&2
|
||||||
|
exit 1
|
||||||
|
- require:
|
||||||
|
- docker_container: so-postgres
|
||||||
|
|
||||||
|
{% set sql_files = [
  '001_schema.sql',
  '002_views.sql',
  '003_history_trigger.sql',
  '004_secrets.sql',
  '005_seed_roles.sql',
  '006_rls.sql',
] %}

{% if drift_enabled %}
{% do sql_files.append('007_drift_pgcron.sql') %}
{% endif %}

# 008 always applies: the pg_notify-driven change fan-out is what the
# pg_notify_pillar engine on the salt-master consumes. Without it, the
# reactor wiring sees no events.
{% do sql_files.append('008_change_notify.sql') %}

{% for sql_file in sql_files %}
so_pillar_apply_{{ sql_file | replace('.', '_') }}:
  cmd.run:
    - name: |
        docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion \
          < {{ schema_dir }}/{{ sql_file }}
    - require:
      - cmd: so_pillar_postgres_wait_ready
{% if not loop.first %}
      - cmd: so_pillar_apply_{{ sql_files[loop.index0 - 1] | replace('.', '_') }}
{% endif %}
{% endfor %}

# Set the master encryption key GUC on the secret-owner roles. The key itself
# is generated by setup/so-functions::secrets_pillar() (extended for postsalt)
# and lives in /opt/so/conf/postgres/so_pillar.key (mode 0400). Salt itself
# never reads the key; the value flows into PG via ALTER ROLE, so it sits
# only in the server's role catalog.
so_pillar_master_key_configure:
  cmd.run:
    - name: |
        if [ -r /opt/so/conf/postgres/so_pillar.key ]; then
          KEY="$(< /opt/so/conf/postgres/so_pillar.key)"
          docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion <<EOSQL
        ALTER ROLE so_pillar_secret_owner SET so_pillar.master_key = '$KEY';
        ALTER ROLE so_pillar_master SET so_pillar.master_key = '$KEY';
        ALTER ROLE so_pillar_writer SET so_pillar.master_key = '$KEY';
        EOSQL
        else
          echo "so_pillar.key not present yet; setup/so-functions must generate it before schema_pillar.sls" >&2
          exit 1
        fi
    - require:
      - cmd: so_pillar_apply_{{ sql_files[-1] | replace('.', '_') }}

# Run the importer once after the schema is in place. Idempotent: re-runs
# with no SLS edits produce zero row changes.
so_pillar_initial_import:
  cmd.run:
    - name: /usr/sbin/so-pillar-import --yes --reason 'schema_pillar.sls initial import'
    - require:
      - cmd: so_pillar_master_key_configure

# Flip so-yaml from dual-write to PG-canonical for managed paths now that
# the schema and importer are both in place. Bootstrap files (secrets.sls,
# postgres/auth.sls, ca/init.sls, *.nodes.sls, top.sls, ...) remain on disk
# regardless, because so_yaml_postgres.locate() raises SkipPath for them.
so_pillar_so_yaml_mode_dir:
  file.directory:
    - name: /opt/so/conf/so-yaml
    - user: socore
    - group: socore
    - mode: '0755'
    - makedirs: True

so_pillar_so_yaml_mode_postgres:
  file.managed:
    - name: /opt/so/conf/so-yaml/mode
    - contents: postgres
    - user: socore
    - group: socore
    - mode: '0644'
    - require:
      - file: so_pillar_so_yaml_mode_dir
      - cmd: so_pillar_initial_import

{% else %}

so_pillar_disabled_noop:
  test.nop

{% endif %}

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}

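The Jinja loop above chains each `so_pillar_apply_*` state onto its predecessor via `require`, so the numbered SQL files apply strictly in order and the chain halts at the first failure. A docker-free sketch of the same ordering contract; `apply_one` is a stub standing in for the real `docker exec ... psql -v ON_ERROR_STOP=1` call:

```shell
#!/bin/sh
# Apply numbered migration files in lexical order and stop at the first
# failure, mirroring ON_ERROR_STOP plus the require chain between states.
SCHEMA_DIR=$(mktemp -d)
for f in 001_schema.sql 002_views.sql 003_history_trigger.sql; do
  echo "-- $f" > "$SCHEMA_DIR/$f"
done

APPLIED=""
apply_one() {
  # Stand-in for: docker exec -i so-postgres psql -v ON_ERROR_STOP=1 ... < "$1"
  APPLIED="$APPLIED ${1##*/}"
}

# Glob expansion is already lexically sorted, which is why the files carry
# zero-padded numeric prefixes.
for f in "$SCHEMA_DIR"/*.sql; do
  if ! apply_one "$f"; then
    echo "stopping at ${f##*/}" >&2
    exit 1
  fi
done
echo "applied:$APPLIED"
```

The zero-padded prefixes do the same job here that the explicit predecessor `require` does in the SLS: both encode "002 must not run before 001".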
@@ -0,0 +1,89 @@
postgres:
  enabled:
    description: Whether the PostgreSQL database container is enabled on this grid. Backs the assistant store and the Telegraf metrics database.
    forcedType: bool
    readonly: True
    helpLink: influxdb
  telegraf:
    retention_days:
      description: Number of days of Telegraf metrics to keep in the so_telegraf database. Older partitions are dropped hourly by pg_partman.
      forcedType: int
      helpLink: postgres
  config:
    max_connections:
      description: Maximum number of concurrent PostgreSQL connections.
      forcedType: int
      global: True
      helpLink: postgres
    shared_buffers:
      description: Amount of memory PostgreSQL uses for shared buffers (e.g. 256MB, 1GB). Raising this improves the read cache hit rate at the cost of system RAM.
      global: True
      helpLink: postgres
    log_min_messages:
      description: Minimum severity of server messages written to the PostgreSQL log.
      options:
        - debug1
        - info
        - notice
        - warning
        - error
        - log
        - fatal
      global: True
      helpLink: postgres
    listen_addresses:
      description: Interfaces PostgreSQL listens on. Must remain '*' so clients on the docker bridge network can connect.
      global: True
      advanced: True
      helpLink: postgres
    port:
      description: TCP port PostgreSQL listens on inside the container. Firewall rules and container port mapping assume 5432.
      forcedType: int
      global: True
      advanced: True
      helpLink: postgres
    ssl:
      description: Whether PostgreSQL accepts TLS connections. Must remain 'on'; pg_hba.conf requires hostssl for TCP.
      global: True
      advanced: True
      helpLink: postgres
    ssl_cert_file:
      description: Path (inside the container) to the TLS server certificate. Salt-managed.
      global: True
      advanced: True
      helpLink: postgres
    ssl_key_file:
      description: Path (inside the container) to the TLS server private key. Salt-managed.
      global: True
      advanced: True
      helpLink: postgres
    ssl_ca_file:
      description: Path (inside the container) to the CA bundle PostgreSQL uses to verify client certificates. Salt-managed.
      global: True
      advanced: True
      helpLink: postgres
    hba_file:
      description: Path (inside the container) to the pg_hba.conf authentication file. Salt-managed; edit salt/postgres/files/pg_hba.conf.
      global: True
      advanced: True
      helpLink: postgres
    log_destination:
      description: Where PostgreSQL writes its server log. 'stderr' routes to the container log stream.
      global: True
      advanced: True
      helpLink: postgres
    logging_collector:
      description: Whether to run a separate logging collector process. Disabled because the docker log stream already captures stderr.
      global: True
      advanced: True
      helpLink: postgres
    shared_preload_libraries:
      description: Comma-separated list of extensions loaded at server start. Required for pg_cron, which drives pg_partman maintenance; do not remove.
      global: True
      advanced: True
      helpLink: postgres
    cron.database_name:
      description: Database pg_cron schedules jobs in. Must be so_telegraf so partman maintenance runs in the right database context.
      global: True
      advanced: True
      helpLink: postgres
@@ -0,0 +1,21 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}

append_so-postgres_so-status.conf:
  file.append:
    - name: /opt/so/conf/so-status/so-status.conf
    - text: so-postgres
    - unless: grep -q so-postgres /opt/so/conf/so-status/so-status.conf

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}

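The `file.append` + `unless` pair above is the usual append-only-once idiom: repeated state runs leave exactly one copy of the line. The same guard in plain shell; the temp file and `append_once` helper are illustrative, not the real so-status.conf:

```shell
#!/bin/sh
# Append a line to a config file only if it is not already present, so
# repeated runs are no-ops (the file.append + unless idiom from the state).
CONF=$(mktemp)

append_once() {
  # grep -q succeeds if the pattern is already in the file; only append
  # when it is absent.
  grep -q "$2" "$1" || echo "$2" >> "$1"
}

append_once "$CONF" "so-postgres"
append_once "$CONF" "so-postgres"   # second run changes nothing
```

Note that `grep -q so-postgres` is a substring match, exactly as in the state; a line like `so-postgres-old` would also satisfy the guard, which is acceptable here because so-status.conf holds one container name per line.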
@@ -0,0 +1,55 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}

postgres_key:
  x509.private_key_managed:
    - name: /etc/pki/postgres.key
    - keysize: 4096
    - backup: True
    - new: True
    {% if salt['file.file_exists']('/etc/pki/postgres.key') -%}
    - prereq:
      - x509: /etc/pki/postgres.crt
    {%- endif %}
    - retry:
        attempts: 5
        interval: 30

postgres_crt:
  x509.certificate_managed:
    - name: /etc/pki/postgres.crt
    - ca_server: {{ CA.server }}
    - subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
    - signing_policy: postgres
    - private_key: /etc/pki/postgres.key
    - CN: {{ GLOBALS.hostname }}
    - days_remaining: 7
    - days_valid: 820
    - backup: True
    - timeout: 30
    - retry:
        attempts: 5
        interval: 30

postgresKeyperms:
  file.managed:
    - replace: False
    - name: /etc/pki/postgres.key
    - mode: 400
    - user: 939
    - group: 939

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}

@@ -0,0 +1,157 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'telegraf/map.jinja' import TELEGRAFMERGED %}

{# postgres_wait_ready below requires `docker_container: so-postgres`, which is
   declared in postgres.enabled. Include it here so `state.apply postgres.telegraf_users`
   on its own (e.g. from orch.deploy_newnode) still has that ID in scope. Salt
   de-duplicates the circular include. #}
include:
  - postgres.enabled

{% set TG_OUT = TELEGRAFMERGED.output | upper %}
{% if TG_OUT in ['POSTGRES', 'BOTH'] %}

# docker_container.running returns as soon as the container starts, but on
# first init docker-entrypoint.sh starts a temporary postgres with
# `listen_addresses=''` to run the /docker-entrypoint-initdb.d scripts, then
# shuts it down before exec'ing the real CMD. A default pg_isready check
# (Unix socket) passes during that ephemeral phase and races the shutdown
# with "the database system is shutting down". Checking TCP readiness on
# 127.0.0.1 only succeeds after the final postgres binds the port.
postgres_wait_ready:
  cmd.run:
    - name: |
        for i in $(seq 1 60); do
          if docker exec so-postgres pg_isready -h 127.0.0.1 -U postgres -q 2>/dev/null; then
            exit 0
          fi
          sleep 2
        done
        echo "so-postgres did not accept TCP connections within 120s" >&2
        exit 1
    - require:
      - docker_container: so-postgres

# Ensure the shared Telegraf database exists. init-users.sh only runs on a
# fresh data dir, so hosts upgraded onto an existing /nsm/postgres volume
# would otherwise never get so_telegraf.
postgres_create_telegraf_db:
  cmd.run:
    - name: |
        if ! docker exec so-postgres psql -U postgres -tAc "SELECT 1 FROM pg_database WHERE datname='so_telegraf'" | grep -q 1; then
          docker exec so-postgres psql -v ON_ERROR_STOP=1 -U postgres -c "CREATE DATABASE so_telegraf"
        fi
    - require:
      - cmd: postgres_wait_ready

# Provision the shared group role and schema once. Every per-minion role is a
# member of so_telegraf, and each Telegraf connection does SET ROLE so_telegraf
# (via options='-c role=so_telegraf' in the connection string) so tables created
# on first write are owned by the group role and every member can INSERT/SELECT.
postgres_telegraf_group_role:
  cmd.run:
    - name: |
        docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
        DO $$
        BEGIN
          IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'so_telegraf') THEN
            CREATE ROLE so_telegraf NOLOGIN;
          END IF;
        END
        $$;
        GRANT CONNECT ON DATABASE so_telegraf TO so_telegraf;
        CREATE SCHEMA IF NOT EXISTS telegraf AUTHORIZATION so_telegraf;
        GRANT USAGE, CREATE ON SCHEMA telegraf TO so_telegraf;
        CREATE SCHEMA IF NOT EXISTS partman;
        CREATE EXTENSION IF NOT EXISTS pg_partman SCHEMA partman;
        CREATE EXTENSION IF NOT EXISTS pg_cron;
        -- Telegraf (running as so_telegraf) calls partman.create_parent()
        -- on first write of each metric, which needs USAGE on the partman
        -- schema, EXECUTE on its functions/procedures, and write access to
        -- partman.part_config so it can register new partitioned parents.
        GRANT USAGE, CREATE ON SCHEMA partman TO so_telegraf;
        GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA partman TO so_telegraf;
        GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA partman TO so_telegraf;
        GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA partman TO so_telegraf;
        -- partman creates per-parent template tables (partman.template_*) at
        -- runtime; default privileges extend DML/sequence access to them.
        ALTER DEFAULT PRIVILEGES IN SCHEMA partman
          GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO so_telegraf;
        ALTER DEFAULT PRIVILEGES IN SCHEMA partman
          GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO so_telegraf;
        -- Hourly partman maintenance. cron.schedule is idempotent by jobname.
        SELECT cron.schedule(
          'telegraf-partman-maintenance',
          '17 * * * *',
          'CALL partman.run_maintenance_proc()'
        );
        EOSQL
    - require:
      - cmd: postgres_create_telegraf_db

{% set creds = salt['pillar.get']('telegraf:postgres_creds', {}) %}
{% for mid, entry in creds.items() %}
{% if entry.get('user') and entry.get('pass') %}
{% set u = entry.user %}
{% set p = entry.pass | replace("'", "''") %}

postgres_telegraf_role_{{ u }}:
  cmd.run:
    - name: |
        docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
        DO $$
        BEGIN
          IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ u }}') THEN
            EXECUTE format('CREATE ROLE %I WITH LOGIN PASSWORD %L', '{{ u }}', '{{ p }}');
          ELSE
            EXECUTE format('ALTER ROLE %I WITH PASSWORD %L', '{{ u }}', '{{ p }}');
          END IF;
        END
        $$;
        GRANT CONNECT ON DATABASE so_telegraf TO "{{ u }}";
        GRANT so_telegraf TO "{{ u }}";
        EOSQL
    - require:
      - cmd: postgres_telegraf_group_role

{% endif %}
{% endfor %}

# Reconcile partman retention from pillar. Runs after role/schema setup so
# any partitioned parents Telegraf has already created get their retention
# refreshed whenever postgres.telegraf.retention_days changes.
{% set retention = salt['pillar.get']('postgres:telegraf:retention_days', 14) | int %}
postgres_telegraf_retention_reconcile:
  cmd.run:
    - name: |
        docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
        DO $$
        BEGIN
          IF EXISTS (SELECT 1 FROM pg_catalog.pg_extension WHERE extname = 'pg_partman') THEN
            UPDATE partman.part_config
               SET retention = '{{ retention }} days',
                   retention_keep_table = false
             WHERE parent_table LIKE 'telegraf.%';
          END IF;
        END
        $$;
        EOSQL
    - require:
      - cmd: postgres_telegraf_group_role

{% endif %}

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}

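The states above deliberately mix heredoc styles: the master-key state in schema_pillar uses an unquoted `<<EOSQL` so the shell expands `$KEY` into the SQL, while the bodies here use `<<'EOSQL'` so PL/pgSQL's `$$` delimiters reach psql untouched. A small sketch of the difference, with `cat` standing in for `psql`:

```shell
#!/bin/sh
KEY="s3cret"

# Unquoted delimiter: the shell performs parameter expansion inside the
# heredoc, so $KEY is substituted before the text is consumed.
expanded=$(cat <<EOSQL
ALTER ROLE demo SET app.master_key = '$KEY';
EOSQL
)

# Quoted delimiter: no expansion at all, so PL/pgSQL's $$ function-body
# markers survive verbatim instead of being mangled as shell syntax.
literal=$(cat <<'EOSQL'
DO $$ BEGIN NULL; END $$;
EOSQL
)

echo "$expanded"
echo "$literal"
```

Picking the wrong style either leaks an unexpanded `$KEY` into the role catalog or corrupts every `DO $$ ... $$` block, which is why each state chooses explicitly.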
@@ -0,0 +1,39 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

. /usr/sbin/so-common

# Backups contain role password hashes and full chat data; keep them 0600.
umask 0077

TODAY=$(date '+%Y_%m_%d')
BACKUPDIR=/nsm/backup
BACKUPFILE="$BACKUPDIR/so-postgres-backup-$TODAY.sql.gz"
MAXBACKUPS=7

mkdir -p "$BACKUPDIR"

# Skip if already backed up today
if [ -f "$BACKUPFILE" ]; then
  exit 0
fi

# Skip if the container isn't running
if ! docker ps --format '{{.Names}}' | grep -q '^so-postgres$'; then
  exit 0
fi

# Dump all databases and roles, then compress
docker exec so-postgres pg_dumpall -U postgres | gzip > "$BACKUPFILE"

# Retention cleanup
NUMBACKUPS=$(find "$BACKUPDIR" -type f -name "so-postgres-backup*" | wc -l)
while [ "$NUMBACKUPS" -gt "$MAXBACKUPS" ]; do
  OLDEST=$(find "$BACKUPDIR" -type f -name "so-postgres-backup*" -printf '%T+ %p\n' | sort | head -n 1 | awk '{print $2}')
  rm -f "$OLDEST"
  NUMBACKUPS=$(find "$BACKUPDIR" -type f -name "so-postgres-backup*" | wc -l)
done

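The retention loop deletes the oldest matching file and re-counts until at most `MAXBACKUPS` remain. The same logic exercised against throwaway files in a temp directory, with mtimes forced via GNU `touch -d` so "oldest" is deterministic (`find -printf` is also a GNU extension, as in the script itself):

```shell
#!/bin/sh
# Keep only the newest MAX files matching the backup name pattern,
# using the same find/sort/head pipeline as the backup script.
DIR=$(mktemp -d)
MAX=3

# Create five dated fakes; day N gets mtime 2024-01-0N.
i=1
while [ "$i" -le 5 ]; do
  touch -d "2024-01-0$i" "$DIR/so-postgres-backup-2024_01_0$i.sql.gz"
  i=$((i + 1))
done

count() { find "$DIR" -type f -name "so-postgres-backup*" | wc -l; }

NUM=$(count)
while [ "$NUM" -gt "$MAX" ]; do
  # %T+ sorts lexically as a timestamp, so head -n 1 is the oldest file.
  OLDEST=$(find "$DIR" -type f -name "so-postgres-backup*" -printf '%T+ %p\n' | sort | head -n 1 | awk '{print $2}')
  rm -f "$OLDEST"
  NUM=$(count)
done
```

Re-counting after every delete is slightly wasteful but self-correcting: if another process removes a backup mid-loop, the loop simply stops earlier. The `awk '{print $2}'` split does assume backup paths contain no spaces, which holds for `/nsm/backup`.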
@@ -0,0 +1,80 @@
#!/bin/bash

# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

. /usr/sbin/so-common

usage() {
  echo "Usage: $0 <operation> [args]"
  echo ""
  echo "Supported Operations:"
  echo "  sql       Execute a SQL command, requires: <sql>"
  echo "  sqlfile   Execute a SQL file, requires: <path>"
  echo "  shell     Open an interactive psql shell"
  echo "  dblist    List databases"
  echo "  userlist  List database roles"
  echo ""
  exit 1
}

if [ $# -lt 1 ]; then
  usage
fi

# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
  echo "This script must be run using sudo!"
  exit 1
fi

COMMAND=$(basename "$0")
OP=$1
shift

set -eo pipefail

log() {
  echo -e "$(date) | $COMMAND | $*" >&2
}

so_psql() {
  docker exec so-postgres psql -U postgres -d securityonion "$@"
}

case "$OP" in

  sql)
    [ $# -lt 1 ] && usage
    so_psql -c "$1"
    ;;

  sqlfile)
    [ $# -ne 1 ] && usage
    if [ ! -f "$1" ]; then
      log "File not found: $1"
      exit 1
    fi
    docker cp "$1" so-postgres:/tmp/sqlfile.sql
    docker exec so-postgres psql -U postgres -d securityonion -f /tmp/sqlfile.sql
    docker exec so-postgres rm -f /tmp/sqlfile.sql
    ;;

  shell)
    docker exec -it so-postgres psql -U postgres -d securityonion
    ;;

  dblist)
    so_psql -c "\l"
    ;;

  userlist)
    so_psql -c "\du"
    ;;

  *)
    usage
    ;;
esac

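The wrapper's control flow reduces to a small `case` dispatcher: first argument selects the operation, per-operation argument checks fall back to usage, unknown operations fail. A docker-free skeleton of that shape; `dispatch` and the `RUN:` echoes are stubs for illustration, not part of the script:

```shell
#!/bin/sh
# Minimal operation dispatcher in the same shape as the wrapper script:
# the first argument selects the operation, anything unknown is an error.
dispatch() {
  op=$1; shift
  case "$op" in
    sql)
      # Per-operation argument check before doing any work.
      [ $# -ge 1 ] || { echo "usage: sql <statement>" >&2; return 1; }
      echo "RUN: $1"               # stub for: so_psql -c "$1"
      ;;
    dblist)
      printf 'RUN: %s\n' '\l'     # stub for: so_psql -c "\l"
      ;;
    *)
      echo "unknown op: $op" >&2
      return 1
      ;;
  esac
}

dispatch sql "SELECT 1"
```

Returning nonzero from the `*)` arm (rather than exiting) keeps the dispatcher testable and lets a caller decide whether an unknown operation is fatal.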
@@ -0,0 +1,10 @@
#!/bin/bash

# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

. /usr/sbin/so-common

/usr/sbin/so-restart postgres $1
@@ -0,0 +1,10 @@
#!/bin/bash

# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

. /usr/sbin/so-common

/usr/sbin/so-start postgres $1
@@ -0,0 +1,10 @@
#!/bin/bash

# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.

. /usr/sbin/so-common

/usr/sbin/so-stop postgres $1