Compare commits

...

27 Commits

Author SHA1 Message Date
Josh Patterson 652ac5d61f fix regex 2026-05-05 14:26:04 -04:00
Josh Patterson f888a2ba6b Merge remote-tracking branch 'origin/3/dev' into fixhype 2026-05-05 10:28:49 -04:00
Mike Reeves 8a1ee02335 Merge pull request #15846 from Security-Onion-Solutions/feature/ensure-pyyaml
Ensure python3-pyyaml is installed before continuing setup
2026-05-05 10:24:25 -04:00
Josh Patterson 192f6cfe13 Merge remote-tracking branch 'origin/3/dev' into fixhype 2026-05-05 08:18:26 -04:00
Mike Reeves 5bca81d833 Merge pull request #15858 from Security-Onion-Solutions/security-fix
Fix unsafe PyYAML load in filecheck
2026-05-04 16:16:40 -04:00
Josh Patterson 1c6574c694 ensure minion ids 2026-05-04 14:03:14 -04:00
Mike Reeves b701664e04 Fix unsafe PyYAML load in filecheck 2026-05-04 12:09:35 -04:00
Jorge Reyes bc64f1431d Merge pull request #15857 from Security-Onion-Solutions/reyesj2/package-registry-health
fleet package registry health check
2026-05-04 11:05:23 -05:00
reyesj2 2203037ce7 fleet package registry health check 2026-05-04 10:52:37 -05:00
Jorge Reyes 77a4ad877e Merge pull request #15851 from Security-Onion-Solutions/reyesj2/integration-transforms 2026-05-01 14:11:12 -05:00
reyesj2 702b3585cc excluding additional integration transform job failures 2026-05-01 12:57:59 -05:00
reyesj2 86966d2778 reauthorize unhealthy transform jobs using kibana 9.3.3 auth flow 2026-05-01 12:44:08 -05:00
Jorge Reyes ce3ad3a895 Merge pull request #15844 from Security-Onion-Solutions/reyesj2/elastic-agent-warning
update default elastic agent logging level to warning
2026-04-30 09:46:28 -05:00
Mike Reeves 3a4b7b50de ensure python3-pyyaml is installed before continuing setup 2026-04-30 10:15:09 -04:00
reyesj2 39d0947102 update default elastic agent logging level to warning 2026-04-29 17:38:40 -05:00
Jorge Reyes 0085d9a353 Merge pull request #15842 from Security-Onion-Solutions/reyesj2-patch-1
so-elastic-fleet-outputs-update now checks for cert drift. Remove run…
2026-04-29 12:37:04 -05:00
Jorge Reyes 2f01ce3b23 so-elastic-fleet-outputs-update now checks for cert drift. Remove running --cert arg on cert change to prevent highstate from running outputs-update 2x 2026-04-29 12:33:28 -05:00
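For context, the cert-drift check added by this commit is not among the hunks shown below; conceptually it compares the certificate pinned in the Fleet logstash output policy against the certificate on disk and only pushes an update when they differ. A rough sketch of that idea (the fleet_api endpoint, JSON fields, and cert path here are assumptions, not the actual so-elastic-fleet-outputs-update code):

. /usr/sbin/so-elastic-fleet-common
# Assumed location of the cert the grid currently uses for the logstash output.
disk_cert=$(cat /opt/so/conf/elastic-fleet/certs/elasticfleet-logstash.crt)
# Assumed endpoint/fields: list Fleet outputs and read the cert stored in the logstash output policy.
policy_cert=$(fleet_api "outputs" | jq -r '.items[] | select(.type == "logstash") | .ssl.certificate')
if [ "$policy_cert" != "$disk_cert" ]; then
    echo "Fleet output certificate differs from cert on disk, updating outputs"
    # fall through to the normal output-update logic so agents pick up the rotated cert
fi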
Mike Reeves 71b19c1b5f Merge pull request #15840 from Security-Onion-Solutions/fix/import-postgres-firewall
Open postgres in DOCKER-USER firewall everywhere influxdb is open
2026-04-29 09:20:03 -04:00
Mike Reeves 82e55ae87f Open postgres on every hostgroup that opens influxdb
The static defaults only listed postgres on each role's self-hostgroup,
leaving sensor/searchnode/heavynode/receiver/fleet/idh/desktop/hypervisor
hostgroups unable to reach the manager's so-postgres in distributed
grids. A dynamic block in firewall/map.jinja added postgres to those
hostgroups only when telegraf.output was switched to POSTGRES/BOTH,
which left postgres unreachable by default.

Mirror influxdb statically across manager/managerhype/managersearch/
standalone for every hostgroup that already lists influxdb, and drop
the now-redundant telegraf-gated dynamic block from firewall/map.jinja.
2026-04-29 09:09:50 -04:00
Mike Reeves 3e02001544 Open postgres port for import role in DOCKER-USER firewall
When so-postgres was wired in (868cd1187), the import role's firewall
defaults were missed while every other manager-class role (manager,
managerhype, managersearch, standalone, eval) had postgres added to
their DOCKER-USER manager-hostgroup portgroups. As a result, on a
fresh import install the so-postgres container starts but tcp/5432 is
dropped at DOCKER-USER, so soc/kratos/telegraf can't reach it.

Add postgres alongside the existing influxdb entry so import nodes
match the other roles.
2026-04-29 08:48:45 -04:00
Mike Reeves 82f70bb53a Merge pull request #15839 from Security-Onion-Solutions/fix/drop-postgres-soc-module-injection
drop postgres module from soc defaults injection
2026-04-28 15:48:49 -04:00
Mike Reeves 2dcded6cca drop postgres module from soc defaults injection
The soc binary on 3/dev does not register a postgres module, so injecting
postgres into soc.config.server.modules makes soc abort at launch with
'Module does not exist: postgres'. The soc-side module is staged on
feature/postgres but is not landing this release. Drop the injection
until the module ships; salt/postgres state and pillars are unchanged.
2026-04-28 15:46:56 -04:00
Mike Reeves 8ca59e6f0c Merge pull request #15838 from Security-Onion-Solutions/fix/docker-refresh-multiarch-pull
Fix/docker refresh multiarch pull
2026-04-28 15:14:27 -04:00
Mike Reeves 82dac82d15 drop platform/digest pull resolution
The digest-pull logic was added to make `docker push` work for multi-arch
upstream tags. Now that the push step is `docker buildx imagetools create`
pinned to the gpg-verified RepoDigest, the registry-to-registry copy
handles single- and multi-arch sources without help. Reverts the pull
back to the original line and removes the unused PLATFORM_OS/_ARCH
detection.
2026-04-28 14:54:25 -04:00
Mike Reeves 288a823edf push images via buildx imagetools create
Replaces `docker push` with a registry-to-registry copy. On Docker 29.x
with the containerd image store, `docker push` of a freshly-pulled image
hits a path that wraps single-platform manifests in a synthetic index
and then can't push the layers it claims to reference, producing
`NotFound: content digest ...` even when the image is fully present.

Keep the local `docker tag` so so-image-pull's `docker images | grep :5000`
existence check continues to work.
2026-04-28 14:49:02 -04:00
Jorge Reyes f9e3d30a71 Merge pull request #15837 from Security-Onion-Solutions/reyesj2/elastic-fleet-cert-check
check current fleet policy cert against cert on disk
2026-04-28 13:47:55 -05:00
Mike Reeves c86399327b fix so-docker-refresh push for multi-arch source images
docker pull of a multi-arch tag on Docker 29.x leaves the local tag
pointing at the image index rather than the platform-specific manifest.
The subsequent docker push then tries to push every sub-manifest the
index references and fails on layers we never fetched.

Resolve the local-platform manifest digest from the upstream index via
docker buildx imagetools inspect, pull by that digest, and re-tag locally
to the canonical tag. The signing flow and the existing tag/push to the
embedded registry are unchanged.
2026-04-28 14:27:59 -04:00
17 changed files with 301 additions and 69 deletions
+18 -5
@@ -164,8 +164,8 @@ update_docker_containers() {
# Pull down the trusted docker image
run_check_net_err \
"docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image" \
"Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY" >> "$LOG_FILE" 2>&1
"Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY" >> "$LOG_FILE" 2>&1
# Get signature
run_check_net_err \
"curl --retry 5 --retry-delay 60 -A '$CURLTYPE/$CURRENTVERSION/$OS/$(uname -r)' $sig_url --output $SIGNPATH/$image.sig" \
@@ -189,11 +189,24 @@ update_docker_containers() {
HOSTNAME=$(hostname)
fi
docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1 || {
echo "Unable to tag $image" >> "$LOG_FILE" 2>&1
echo "Unable to tag $image" >> "$LOG_FILE" 2>&1
exit 1
}
docker push $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1 || {
echo "Unable to push $image" >> "$LOG_FILE" 2>&1
# Push to the embedded registry via a registry-to-registry copy. Avoids
# `docker push`, which on Docker 29.x with the containerd image store
# represents freshly-pulled images as an index whose layer content
# isn't reachable through the push path. The local `docker tag` above
# is preserved so so-image-pull's `:5000` existence check still works.
# Pin to the digest already gpg-verified above so we copy exactly the
# bytes we approved.
local VERIFIED_REF
VERIFIED_REF=$(echo "$DOCKERINSPECT" | jq -r ".[0].RepoDigests[] | select(. | contains(\"$CONTAINER_REGISTRY\"))" | head -n 1)
if [ -z "$VERIFIED_REF" ] || [ "$VERIFIED_REF" = "null" ]; then
echo "Unable to determine verified digest for $image" >> "$LOG_FILE" 2>&1
exit 1
fi
docker buildx imagetools create --tag $HOSTNAME:5000/$IMAGEREPO/$image "$VERIFIED_REF" >> "$LOG_FILE" 2>&1 || {
echo "Unable to copy $image to embedded registry" >> "$LOG_FILE" 2>&1
exit 1
}
fi
+1 -1
@@ -227,7 +227,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|from NIC checksum offloading" # zeek reporter.log
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|marked for removal" # docker container getting recycled
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tcp 127.0.0.1:6791: bind: address already in use" # so-elastic-fleet agent restarting. Seen starting w/ 8.18.8 https://github.com/elastic/kibana/issues/201459
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint|armis|o365_metrics|microsoft_sentinel|snyk).*user so_kibana lacks the required permissions \[(logs|metrics)-\1" # Known issue with integrations starting transform jobs that are explicitly not allowed to start as a system user. (installed as so_elastic / so_kibana)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint|armis|o365_metrics|microsoft_sentinel|snyk|cyera|island_browser).*user so_kibana lacks the required permissions \[(logs|metrics)-\1" # Known issue with integrations starting transform jobs that are explicitly not allowed to start as a system user. This error should not be seen on fresh ES 9.3.3 installs or after SO 3.1.0 with soups addition of check_transform_health_and_reauthorize()
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|manifest unknown" # appears in so-dockerregistry log for so-tcpreplay following docker upgrade to 29.2.1-1
fi
@@ -51,6 +51,16 @@ so-elastic-fleet-package-registry:
- {{ ULIMIT.name }}={{ ULIMIT.soft }}:{{ ULIMIT.hard }}
{% endfor %}
{% endif %}
wait_for_so-elastic-fleet-package-registry:
http.wait_for_successful_query:
- name: "http://localhost:8080/health"
- status: 200
- wait_for: 300
- request_interval: 15
- require:
- docker_container: so-elastic-fleet-package-registry
delete_so-elastic-fleet-package-registry_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
-11
@@ -18,17 +18,6 @@ so-elastic-fleet-auto-configure-logstash-outputs:
- retry:
attempts: 4
interval: 30
{# Separate from above in order to catch elasticfleet-logstash.crt changes and force update to fleet output policy #}
so-elastic-fleet-auto-configure-logstash-outputs-force:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update --certs
- retry:
attempts: 4
interval: 30
- onchanges:
- x509: etc_elasticfleet_logstash_crt
- x509: elasticfleet_kafka_crt
{% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection
@@ -240,7 +240,7 @@ elastic_fleet_policy_create() {
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER,"advanced_settings":{"agent_logging_level": "warning"}}'
)
# Create Fleet Policy
if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
+33
@@ -398,6 +398,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -410,6 +411,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -427,6 +429,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -437,6 +440,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -450,6 +454,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -459,6 +464,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -492,6 +498,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -502,6 +509,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -610,6 +618,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -622,6 +631,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -639,6 +649,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -649,6 +660,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -662,6 +674,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -671,6 +684,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -702,6 +716,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -712,6 +727,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -820,6 +836,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -832,6 +849,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -849,6 +867,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -858,6 +877,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -870,6 +890,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -879,6 +900,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -912,6 +934,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -922,6 +945,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1040,6 +1064,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1052,6 +1077,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1063,6 +1089,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1074,6 +1101,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- redis
@@ -1083,6 +1111,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- redis
@@ -1093,6 +1122,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1129,6 +1159,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -1139,6 +1170,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1482,6 +1514,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- elastic_agent_control
-13
@@ -1,6 +1,5 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKERMERGED %}
{% from 'telegraf/map.jinja' import TELEGRAFMERGED %}
{% import_yaml 'firewall/defaults.yaml' as FIREWALL_DEFAULT %}
{# add our ip to self #}
@@ -56,16 +55,4 @@
{% endif %}
{# Open Postgres (5432) to minion hostgroups when Telegraf is configured to write to Postgres #}
{% set TG_OUT = TELEGRAFMERGED.output | upper %}
{% if TG_OUT in ['POSTGRES', 'BOTH'] %}
{% if role.startswith('manager') or role == 'standalone' or role == 'eval' %}
{% for r in ['sensor', 'searchnode', 'heavynode', 'receiver', 'fleet', 'idh', 'desktop', 'import'] %}
{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('postgres') %}
{% endif %}
{% endfor %}
{% endif %}
{% endif %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}
+130
@@ -485,6 +485,130 @@ elasticsearch_backup_index_templates() {
tar -czf /nsm/backup/3.0.0_elasticsearch_index_templates.tar.gz -C /opt/so/conf/elasticsearch/templates/index/ .
}
elasticfleet_set_agent_logging_level_warn() {
. /usr/sbin/so-elastic-fleet-common
local current_agent_policies
if ! current_agent_policies=$(fleet_api "agent_policies?perPage=1000"); then
echo "Warning: unable to retrieve Fleet agent policies"
return 0
fi
# Only updating policies that are within Security Onion defaults and do not already have any user configured advanced_settings.
local policies_to_update
policies_to_update=$(jq -c '
.items[]
| select(has("advanced_settings") | not)
| select(
.id == "so-grid-nodes_general"
or .id == "so-grid-nodes_heavy"
or .id == "endpoints-initial"
or (.id | startswith("FleetServer_"))
)
' <<< "$current_agent_policies")
if [[ -z "$policies_to_update" ]]; then
return 0
fi
while IFS= read -r policy; do
[[ -z "$policy" ]] && continue
local policy_id policy_name policy_namespace
policy_id=$(jq -r '.id' <<< "$policy")
policy_name=$(jq -r '.name' <<< "$policy")
policy_namespace=$(jq -r '.namespace' <<< "$policy")
local update_logging
update_logging=$(jq -n \
--arg name "$policy_name" \
--arg namespace "$policy_namespace" \
'{name: $name, namespace: $namespace, advanced_settings: {agent_logging_level: "warning"}}'
)
echo "Setting elastic agent_logging_level to warning on policy '$policy_name' ($policy_id)."
if ! fleet_api "agent_policies/$policy_id" -XPUT -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$update_logging" >/dev/null; then
echo " warning: failed to update agent policy '$policy_name' ($policy_id)" >&2
fi
done <<< "$policies_to_update"
}
check_transform_health_and_reauthorize() {
. /usr/sbin/so-elastic-fleet-common
echo "Checking integration transform jobs for unhealthy / unauthorized status..."
local transforms_doc stats_doc installed_doc
if ! transforms_doc=$(so-elasticsearch-query "_transform/_all?size=1000" --fail --retry 3 --retry-delay 5 2>/dev/null); then
echo "Unable to query for transform jobs, skipping reauthorization."
return 0
fi
if ! stats_doc=$(so-elasticsearch-query "_transform/_all/_stats?size=1000" --fail --retry 3 --retry-delay 5 2>/dev/null); then
echo "Unable to query for transform job stats, skipping reauthorization."
return 0
fi
if ! installed_doc=$(fleet_api "epm/packages/installed?perPage=500"); then
echo "Unable to list installed Fleet packages, skipping reauthorization."
return 0
fi
# Get all transforms that meet the following
# - unhealthy (any non-green health status)
# - metadata has run_as_kibana_system: false (this fix is specific to transforms started prior to Kibana 9.3.3)
# - are not orphaned (integration is not somehow missing/corrupt/uninstalled)
local unhealthy_transforms
unhealthy_transforms=$(jq -c -n \
--argjson t "$transforms_doc" \
--argjson s "$stats_doc" \
--argjson i "$installed_doc" '
($i.items | map({key: .name, value: .version}) | from_entries) as $pkg_ver
| ($s.transforms | map({key: .id, value: .health.status}) | from_entries) as $health
| [ $t.transforms[]
| select(._meta.run_as_kibana_system == false)
| select(($health[.id] // "unknown") != "green")
| {id, pkg: ._meta.package.name, ver: ($pkg_ver[._meta.package.name])}
]
| if length == 0 then empty else . end
| (map(select(.ver == null)) | map({orphan: .id})[]),
(map(select(.ver != null))
| group_by(.pkg)
| map({pkg: .[0].pkg, ver: .[0].ver, transformIds: map(.id)})[])
')
if [[ -z "$unhealthy_transforms" ]]; then
return 0
fi
local unhealthy_count
unhealthy_count=$(jq -s '[.[].transformIds? // empty | .[]] | length' <<< "$unhealthy_transforms")
echo "Found $unhealthy_count transform(s) needing reauthorization."
local total_failures=0
while IFS= read -r transform; do
[[ -z "$transform" ]] && continue
if jq -e 'has("orphan")' <<< "$transform" >/dev/null 2>&1; then
echo "Skipping transform not owned by any installed Fleet package: $(jq -r '.orphan' <<< "$transform")"
continue
fi
local pkg ver body resp
pkg=$(jq -r '.pkg' <<< "$transform")
ver=$(jq -r '.ver' <<< "$transform")
body=$(jq -c '{transforms: (.transformIds | map({transformId: .}))}' <<< "$transform")
echo "Reauthorizing transform(s) for ${pkg}-${ver}..."
resp=$(fleet_api "epm/packages/${pkg}/${ver}/transforms/authorize" \
-XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
-d "$body") || { echo "Could not reauthorize transform(s) for ${pkg}-${ver}"; continue; }
(( total_failures += $(jq 'map(select(.success != true)) | length' <<< "$resp" 2>/dev/null) ))
done <<< "$unhealthy_transforms"
if [[ "$total_failures" -gt 0 ]]; then
echo "Some transform(s) failed to reauthorize."
fi
}
ensure_postgres_local_pillar() {
# Postgres was added as a service after 3.0.0, so the new pillar/top.sls
# references postgres.soc_postgres / postgres.adv_postgres unconditionally.
@@ -553,6 +677,12 @@ post_to_3.1.0() {
# file_roots of its own and --local would fail with "No matching sls found".
salt-call state.apply postgres.telegraf_users queue=True || true
# Update default agent policies to use logging level warn.
elasticfleet_set_agent_logging_level_warn || true
# Check for unhealthy / unauthorized integration transform jobs and attempt reauthorizations
check_transform_health_and_reauthorize || true
POSTVERSION=3.1.0
}
+10 -1
@@ -3,7 +3,14 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% set hypervisor = pillar.minion_id %}
{% set hypervisor = pillar.get('minion_id', '') %}
{% if not hypervisor|regex_match('^([A-Za-z0-9._-]{1,253})$') %}
{% do salt.log.error('delete_hypervisor_orch: refusing unsafe minion_id=' ~ hypervisor) %}
delete_hypervisor_invalid_minion_id:
test.fail_without_changes:
- name: delete_hypervisor_invalid_minion_id
{% else %}
ensure_hypervisor_mine_deleted:
salt.function:
@@ -20,3 +27,5 @@ update_salt_cloud_profile:
- sls:
- salt.cloud.config
- concurrent: True
{% endif %}
+10 -1
@@ -12,7 +12,14 @@
{% if 'vrt' in salt['pillar.get']('features', []) %}
{% do salt.log.debug('vm_pillar_clean_orch: Running') %}
{% set vm_name = pillar.get('vm_name') %}
{% set vm_name = pillar.get('vm_name', '') %}
{% if not vm_name|regex_match('^([A-Za-z0-9._-]{1,253})$') %}
{% do salt.log.error('vm_pillar_clean_orch: refusing unsafe vm_name=' ~ vm_name) %}
vm_pillar_clean_invalid_name:
test.fail_without_changes:
- name: vm_pillar_clean_invalid_name
{% else %}
delete_adv_{{ vm_name }}_pillar:
module.run:
@@ -24,6 +31,8 @@ delete_{{ vm_name }}_pillar:
- file.remove:
- path: /opt/so/saltstack/local/pillar/minions/{{ vm_name }}.sls
{% endif %}
{% else %}
{% do salt.log.error(
+6 -4
@@ -3,12 +3,15 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% if data['id'].endswith('_hypervisor') and data['result'] == True %}
{% set hid = data['id'] %}
{% if hid|regex_match('^([A-Za-z0-9._-]{1,253})$')
and hid.endswith('_hypervisor')
and data['result'] == True %}
{% if data['act'] == 'accept' %}
check_and_trigger:
runner.setup_hypervisor.setup_environment:
- minion_id: {{ data['id'] }}
- minion_id: {{ hid }}
{% endif %}
{% if data['act'] == 'delete' %}
@@ -17,8 +20,7 @@ delete_hypervisor:
- args:
- mods: orch.delete_hypervisor
- pillar:
minion_id: {{ data['id'] }}
minion_id: {{ hid }}
{% endif %}
{% endif %}
+27 -15
@@ -1,7 +1,7 @@
#!py
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
@@ -9,30 +9,42 @@ import logging
import os
import pwd
import grp
import re
log = logging.getLogger(__name__)
PILLAR_ROOT = '/opt/so/saltstack/local/pillar/minions/'
_VMNAME_RE = re.compile(r'^[A-Za-z0-9._-]{1,253}$')
def run():
vm_name = data['kwargs']['name']
logging.error("createEmptyPillar reactor: vm_name: %s" % vm_name)
pillar_root = '/opt/so/saltstack/local/pillar/minions/'
vm_name = data.get('kwargs', {}).get('name', '')
if not _VMNAME_RE.match(str(vm_name)):
log.error("createEmptyPillar reactor: refusing unsafe vm_name=%r", vm_name)
return {}
log.info("createEmptyPillar reactor: vm_name: %s", vm_name)
pillar_files = ['adv_' + vm_name + '.sls', vm_name + '.sls']
try:
# Get socore user and group IDs
socore_uid = pwd.getpwnam('socore').pw_uid
socore_gid = grp.getgrnam('socore').gr_gid
pillar_root_real = os.path.realpath(PILLAR_ROOT)
for f in pillar_files:
full_path = pillar_root + f
if not os.path.exists(full_path):
# Create empty file
os.mknod(full_path)
# Set ownership to socore:socore
os.chown(full_path, socore_uid, socore_gid)
# Set mode to 644 (rw-r--r--)
os.chmod(full_path, 0o640)
logging.error("createEmptyPillar reactor: created %s with socore:socore ownership and mode 644" % f)
full_path = os.path.join(PILLAR_ROOT, f)
resolved = os.path.realpath(full_path)
if os.path.dirname(resolved) != pillar_root_real:
log.error("createEmptyPillar reactor: refusing path outside pillar root: %s", resolved)
continue
if os.path.exists(resolved):
continue
os.mknod(resolved)
os.chown(resolved, socore_uid, socore_gid)
os.chmod(resolved, 0o640)
log.info("createEmptyPillar reactor: created %s with socore:socore ownership and mode 0640", f)
except (KeyError, OSError) as e:
logging.error("createEmptyPillar reactor: Error setting ownership/permissions: %s" % str(e))
log.error("createEmptyPillar reactor: Error setting ownership/permissions: %s", e)
return {}
+33 -11
@@ -1,18 +1,40 @@
#!py
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
remove_key:
wheel.key.delete:
- args:
- match: {{ data['name'] }}
import logging
import re
{{ data['name'] }}_pillar_clean:
runner.state.orchestrate:
- args:
- mods: orch.vm_pillar_clean
- pillar:
vm_name: {{ data['name'] }}
log = logging.getLogger(__name__)
{% do salt.log.info('deleteKey reactor: deleted minion key: %s' % data['name']) %}
_VMNAME_RE = re.compile(r'^[A-Za-z0-9._-]{1,253}$')
def run():
name = data.get('name', '')
if not _VMNAME_RE.match(str(name)):
log.error("deleteKey reactor: refusing unsafe name=%r", name)
return {}
log.info("deleteKey reactor: deleted minion key: %s", name)
return {
'remove_key': {
'wheel.key.delete': [
{'args': [
{'match': name},
]},
],
},
'%s_pillar_clean' % name: {
'runner.state.orchestrate': [
{'args': [
{'mods': 'orch.vm_pillar_clean'},
{'pillar': {'vm_name': name}},
]},
],
},
}
-5
@@ -24,11 +24,6 @@
{% do SOCDEFAULTS.soc.config.server.modules.elastic.update({'username': GLOBALS.elasticsearch.auth.users.so_elastic_user.user, 'password': GLOBALS.elasticsearch.auth.users.so_elastic_user.pass}) %}
{% if GLOBALS.postgres is defined and GLOBALS.postgres.auth is defined %}
{% set PG_ADMIN_PASS = salt['pillar.get']('secrets:postgres_pass', '') %}
{% do SOCDEFAULTS.soc.config.server.modules.update({'postgres': {'hostUrl': GLOBALS.manager_ip, 'port': 5432, 'username': GLOBALS.postgres.auth.users.so_postgres_user.user, 'password': GLOBALS.postgres.auth.users.so_postgres_user.pass, 'adminUser': 'postgres', 'adminPassword': PG_ADMIN_PASS, 'dbname': 'securityonion', 'sslMode': 'require', 'assistantEnabled': true, 'esHostUrl': 'https://' ~ GLOBALS.manager_ip ~ ':9200', 'esUsername': GLOBALS.elasticsearch.auth.users.so_elastic_user.user, 'esPassword': GLOBALS.elasticsearch.auth.users.so_elastic_user.pass, 'esVerifyCert': false}}) %}
{% endif %}
{% do SOCDEFAULTS.soc.config.server.modules.influxdb.update({'hostUrl': 'https://' ~ GLOBALS.influxdb_host ~ ':8086'}) %}
{% do SOCDEFAULTS.soc.config.server.modules.influxdb.update({'token': INFLUXDB_TOKEN}) %}
{% for tool in SOCDEFAULTS.soc.config.server.client.tools %}
+1 -1
@@ -15,7 +15,7 @@ from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
with open("/opt/so/conf/strelka/filecheck.yaml", "r") as ymlfile:
cfg = yaml.load(ymlfile, Loader=yaml.Loader)
cfg = yaml.safe_load(ymlfile)
extract_path = cfg["filecheck"]["extract_path"]
historypath = cfg["filecheck"]["historypath"]
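For context on why the one-line change above is a security fix: yaml.Loader constructs arbitrary Python objects (including calling functions) from tagged YAML, while yaml.safe_load() only builds plain data types. A minimal, hypothetical demonstration, not code from this repo:

python3 - <<'PY'
import yaml
payload = '!!python/object/apply:os.system ["echo unsafe loader just ran a shell command"]'
yaml.load(payload, Loader=yaml.Loader)      # executes os.system(...) while parsing
try:
    yaml.safe_load(payload)                 # the safe loader rejects the python/* tag
except yaml.constructor.ConstructorError as e:
    print("safe_load refused:", e.problem)
PY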
+18
@@ -1701,6 +1701,24 @@ remove_package() {
fi
}
ensure_pyyaml() {
title "Ensuring python3-pyyaml is installed"
if rpm -q python3-pyyaml >/dev/null 2>&1; then
info "python3-pyyaml already installed"
return 0
fi
info "python3-pyyaml not found, attempting to install"
set -o pipefail
dnf -y install python3-pyyaml 2>&1 | tee -a "$setup_log"
local result=$?
set +o pipefail
if [[ $result -ne 0 ]] || ! rpm -q python3-pyyaml >/dev/null 2>&1; then
error "Failed to install python3-pyyaml (exit=$result)"
fail_setup
fi
info "python3-pyyaml installed successfully"
}
# When updating the salt version, also update the version in securityonion-builds/images/iso-task/Dockerfile and salt/salt/master.defaults.yaml and salt/salt/minion.defaults.yaml
# CAUTION! SALT VERSION UPDATES - READ BELOW
# When updating the salt version, also update the version in:
+3
@@ -66,6 +66,9 @@ set_timezone
# Let's see what OS we are dealing with here
detect_os
# Ensure python3-pyyaml is available before any code that may need so-yaml/PyYAML
ensure_pyyaml
# Check to see if this is the setup type of "desktop".
is_desktop=