Compare commits

233 Commits

Author SHA1 Message Date
Mike Reeves 6bca92da4a fix: stop pip's patchelf 'ERROR' line from polluting sosetup.log
The cmd.run for psycopg2 install was already tolerating pip's
non-zero exit with `|| true`, but pip's stderr — which contains the
literal string "ERROR: Could not install packages due to an OSError:
[Errno 2] No such file or directory: 'patchelf'" — was still being
captured into salt's state-result dict. so-setup logs salt state
output to /root/sosetup.log, and verify_setup() then greps for the
substring "ERROR" to build /root/errors.log. The patchelf line then
shows up at the end of every install as "WARNING: Errors detected
during setup" even though the install is in fact green.

Redirect pip's combined stdout/stderr to
/opt/so/log/so_pillar/psycopg2_install.log so the noise lives in a
dedicated, predictable triage location instead of leaking into salt's
state result. The `unless: import psycopg2` check is still the
actual readiness gate, so a real install failure (rather than just
the patchelf RPATH-rewrite step that has no functional effect on the
wheel) would still surface via the state being re-run on every apply
and `import psycopg2` failing.
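The resulting command shape, as a sketch (exact pip invocation assumed):

  /opt/saltstack/salt/bin/python3 -m pip install psycopg2 \
      >> /opt/so/log/so_pillar/psycopg2_install.log 2>&1 || true
  # readiness gate behind the state's unless:
  /opt/saltstack/salt/bin/python3 -c 'import psycopg2'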
2026-05-05 10:38:57 -04:00
Mike Reeves a7efabd90d fix: tolerate pip's non-zero exit on psycopg2 patchelf step
salt's pip.installed flagged so_pillar_psycopg2_in_salt_python as
failed because pip exits non-zero when it can't find the patchelf
binary to rewrite the psycopg2 wheel's RPATH after extraction. The
wheel is fully installed and importable regardless — the patchelf
step is a cosmetic post-install rewrite, not a build dependency. But
salt's failure cascade then short-circuited so_pillar_initial_import
and the so-yaml mode flip, leaving the install in dual-pillar mode
instead of PG-canonical.

Replaced with cmd.run that runs pip with `|| true` and uses an
`import psycopg2` check as the actual readiness gate — same idea as
how salt's own bootstrap does it. Also fixed the require: ref on
so_pillar_initial_import (was `pip:`, needs to be `cmd:` for the new
state type).
2026-05-04 22:08:31 -04:00
Mike Reeves b25b221076 postsalt: move PG-canonical enable to AFTER the install highstate
Supersedes the pre-install placement (right after secrets_pillar) from
the previous commit, which was broken: salt's ext_pillar overlay
shadowed disk pillar's elasticsearch subtree before so-pillar-import
had populated PG, so elasticsearch.enabled.sls failed rendering on
ELASTICSEARCHMERGED.auth.users.so_elastic_user.pass — that key lives
in elasticsearch/auth.sls, which is on the importer's secrets
allowlist and never makes it into so_pillar.pillar_entry. The install
would then hang forever waiting for the elasticsearch container that
the broken state never deployed.

The new placement is right after the final state.highstate completes:
  1. drop adv_postgres.sls flipping the flag to True
  2. salt-call saltutil.refresh_pillar so the next state sees it
  3. salt-call state.apply postgres.schema_pillar — deploys schema,
     ALTERs role login passwords, installs psycopg2 into salt's
     bundled python, runs so-pillar-import, writes
     /opt/so/conf/so-yaml/mode=postgres
  4. salt-call state.apply salt.master — re-renders engines.conf
     with the pg_notify_pillar engine block, drops master.d
     ext_pillar config, watch_in restarts salt-master and ext_pillar
     takes over

verify_setup runs after this so its final checks see PG-canonical
mode in place. Same end state as the previous commit's intent, just
without the bootstrap chicken-and-egg.
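As shell, the sequence is roughly (the enable helper is the one
described in the next commit down):

  enable_so_pillar_postgres      # writes adv_postgres.sls with enabled: True
  salt-call saltutil.refresh_pillar
  salt-call state.apply postgres.schema_pillar
  salt-call state.apply salt.master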
2026-05-04 21:02:08 -04:00
Mike Reeves 7b9ab2d9d1 postsalt: enable PG-canonical pillar mode by default during so-setup
Drops a local pillar override (postgres.so_pillar.enabled = True) right
after secrets_pillar so the install-time highstate brings up
schema_pillar, ext_pillar_postgres, and the pg_notify_pillar engine
without operator intervention. Without this the whole PG-canonical
stack stays gated off on the default-False flag and the install lands
in legacy disk-pillar mode — which defeats the point of being on the
postsalt branch at all.

The new enable_so_pillar_postgres() function in so-functions is
idempotent (overwrites adv_postgres.sls with a fixed body) and the
generated file is mode 0644 socore:socore so it merges into pillar
under the existing local-pillar directory ownership convention.

Rollback path: edit /opt/so/saltstack/local/pillar/postgres/adv_postgres.sls
to set enabled: False, or delete the file. The schema and engine
config states will tear themselves down on the next highstate via
their existing else-branch absent states.
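As shell, the delete-the-file variant is just:

  rm -f /opt/so/saltstack/local/pillar/postgres/adv_postgres.sls
  salt-call state.highstate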
2026-05-04 19:56:14 -04:00
Mike Reeves 92a7bb3053 fix: get postsalt's PG-canonical pillar actually working end-to-end
Five blockers turned up the first time the so_pillar schema was applied
against a fresh standalone install. Fixing them in order:

1. 006_rls.sql ordering bug
   006 GRANTed on so_pillar.change_queue and its sequence, but the table
   isn't created until 008_change_notify.sql. 006 errored mid-file with
   "relation so_pillar.change_queue does not exist", short-circuiting the
   rest of the pillar staging chain. Moved the three change_queue grants
   into 008 alongside the table creation so each file is self-contained.

2. so_pillar_* roles unable to log in
   006 created the roles as NOLOGIN and set no password. Salt-master's
   ext_pillar (postgres) and the pg_notify_pillar engine both connect as
   so_pillar_master via TCP, so both came up with "password authentication
   failed for user so_pillar_master". Added a templated cmd.run step in
   schema_pillar.sls (so_pillar_role_login_passwords) that ALTERs all three
   roles WITH LOGIN PASSWORD pulling from secrets:pillar_master_pass — the
   same password ext_pillar_postgres.conf.jinja and the engines.conf
   pg_notify_pillar block render with.

3. Missing GRANT CONNECT ON DATABASE securityonion
   USAGE on the schema is granted in 006 but CONNECT on the database isn't.
   Engine + ext_pillar succeeded auth then died with "permission denied
   for database securityonion". Added the explicit GRANT CONNECT in 006.

4. psycopg2 missing from salt's bundled python
   /opt/saltstack/salt/bin/python3 doesn't ship psycopg by default, so
   when salt-master tries to load the pg_notify_pillar engine its
   `import psycopg2` fails inside salt's loader and the engine silently
   doesn't start (no error in the salt log — you only notice when nothing
   ever drains so_pillar.change_queue). Added a pip.installed state in
   schema_pillar.sls bound to that interpreter via bin_env.

5. engines.conf vs pg_notify_pillar_engine.conf list-replace
   Salt's master.d/*.conf merge replaces top-level lists rather than
   concatenating them. The engine config used to live in its own
   master.d/pg_notify_pillar_engine.conf with `engines: [pg_notify_pillar]`
   alongside the legacy `engines.conf` carrying `engines: [checkmine,
   pillarWatch]`. Whichever loaded last won, so the engine never showed
   up in the loaded set even when the file existed. Fold the
   pg_notify_pillar declaration into engines.conf (now jinja-rendered,
   gated on postgres:so_pillar:enabled), drop the standalone state from
   pg_notify_pillar_engine.sls, and delete the now-orphaned conf jinja.

End state validated against a live standalone-net install on the dev rig:
salt-master ext_pillar reads from so_pillar.* with no errors, the
pg_notify_pillar engine LISTENs on so_pillar_change and drains the
change_queue (134-row backlog → 0 within seconds), and a so-yaml replace
on a pillar key flows disk → PG → ext_pillar → salt pillar.get with the
new value visible after a saltutil.refresh_pillar.
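Blockers 2 and 3 reduce to two psql statements; a sketch (connection
flags assumed, password sourced from secrets:pillar_master_pass):

  docker exec so-postgres psql -U postgres -d securityonion \
    -c "GRANT CONNECT ON DATABASE securityonion TO so_pillar_master;" \
    -c "ALTER ROLE so_pillar_master WITH LOGIN PASSWORD '$PILLARMASTERPASS';"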
2026-05-04 19:47:38 -04:00
Mike Reeves 155b5c5d66 fix: consistent allowed_states guard in postgres.schema_pillar
Same `sls.split('.')[0]` pattern as ext_pillar_postgres + pg_notify_pillar_engine.
For sls='postgres.schema_pillar' the split happened to evaluate to
'postgres', which is in manager_states, so the guard worked by accident —
but it would break silently if anyone ever moved the file under a deeper
SLS path. Switch to a literal `{% if 'postgres' in allowed_states %}`,
the same intent-revealing pattern as the master.d guards.
2026-05-04 19:25:14 -04:00
Mike Reeves f1746b0f59 fix: correct allowed_states guard in ext_pillar_postgres + pg_notify_pillar_engine
Both SLS files used `sls.split('.')[0]` to derive what to look up in
allowed_states. For these files (sls='salt.master.ext_pillar_postgres'
and sls='salt.master.pg_notify_pillar_engine') that returns 'salt',
which is never in any role's allowed_states list — only specific keys
like 'salt.master', 'salt.minion', 'salt.cloud' are. The guard's else
branch fired on every highstate, emitting two cosmetic
  ID: <sls>_state_not_allowed
  Function: test.fail_without_changes
  Comment: Failure!
entries that polluted the so-setup error summary even on green installs.

Both states drop config under /etc/salt/master.d/ and watch_in the
salt-master service, so the natural intent is "only run when this node
hosts the salt master". Switching the guard to a literal
  {% if 'salt.master' in allowed_states %}
expresses that directly without string-parsing the SLS path, and
matches the existing membership in manager_states (which is in turn
included in every manager-bearing role: so-eval, so-manager,
so-managerhype, so-managersearch, so-standalone, so-import).
2026-05-04 19:17:30 -04:00
Mike Reeves 2e411625c4 fix: subshell-scope umask 077 in so_pillar key generation
The unscoped `umask 077` on postsalt's secrets_pillar path leaked into
every subsequent file write by so-setup (and the salt-call processes
it spawned) for the rest of the install. Every state-rendered config
file under /opt/so/conf landed at mode 0600 instead of 0644, which
broke any container that bind-mounts its config read-only and runs as
a non-root user after the entrypoint's gosu drop. The first concrete
casualty was the influxdb container, which exits with
  "failed to load config file: open /conf/config.yaml: permission denied"
after init mode completes and re-execs as the influxdb user.

The chmod 0400 immediately after the printf already enforces the
intended file mode, so the umask was redundant for the key file
itself; scoping it to a subshell preserves the defense-in-depth
between the printf and the chmod without polluting the parent shell.
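A sketch of the scoped form (key variable name from the
secrets_pillar work):

  (
    umask 077
    printf '%s' "$SO_PILLAR_KEY" > /opt/so/conf/postgres/so_pillar.key
    chmod 0400 /opt/so/conf/postgres/so_pillar.key
  )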
2026-05-04 18:02:58 -04:00
Mike Reeves e43ad2ff74 Merge remote-tracking branch 'origin/feature/ensure-pyyaml' into postsalt 2026-05-04 16:37:42 -04:00
Mike Reeves b39d259101 Merge remote-tracking branch 'origin/3/dev' into postsalt 2026-05-04 16:19:17 -04:00
Mike Reeves 5bca81d833 Merge pull request #15858 from Security-Onion-Solutions/security-fix
Fix unsafe PyYAML load in filecheck
2026-05-04 16:16:40 -04:00
Mike Reeves b701664e04 Fix unsafe PyYAML load in filecheck 2026-05-04 12:09:35 -04:00
Jorge Reyes bc64f1431d Merge pull request #15857 from Security-Onion-Solutions/reyesj2/package-registry-health
fleet package registry health check
2026-05-04 11:05:23 -05:00
reyesj2 2203037ce7 fleet package registry health check 2026-05-04 10:52:37 -05:00
Jorge Reyes 77a4ad877e Merge pull request #15851 from Security-Onion-Solutions/reyesj2/integration-transforms 2026-05-01 14:11:12 -05:00
reyesj2 702b3585cc excluding additional integration transform job failures 2026-05-01 12:57:59 -05:00
reyesj2 86966d2778 reauthorize unhealthy transform jobs using kibana 9.3.3 auth flow 2026-05-01 12:44:08 -05:00
Mike Reeves 3d11694d51 make so-yaml PG-canonical and add pillar-change reactor stack
Two coupled changes that together let so_pillar.* be the canonical
config store, with config edits driving service reloads automatically:

so-yaml PG-canonical mode
- Adds /opt/so/conf/so-yaml/mode (and SO_YAML_BACKEND env override) with
  three values: dual (legacy), postgres (PG-only for managed paths),
  disk (emergency rollback). Bootstrap files (secrets.sls, ca/init.sls,
  *.nodes.sls, top.sls, ...) stay disk-only regardless via the existing
  SkipPath allowlist in so_yaml_postgres.locate.
- loadYaml/writeYaml/purgeFile now route to so_pillar.* in postgres
  mode: replace/add/get all read+write the database with no disk file
  ever appearing. PG failure is fatal in postgres mode (no silent
  fallback); dual mode preserves the prior best-effort mirror.
- so_yaml_postgres gains read_yaml(path), is_pg_managed(path), and
  is_enabled() so so-yaml can answer "is this path PG-managed and is
  PG up" without reaching into private helpers.
- schema_pillar.sls writes /opt/so/conf/so-yaml/mode = postgres after
  the importer succeeds, so flipping postgres:so_pillar:enabled flips
  so-yaml's behavior in lockstep with the schema being live.

pg_notify-driven change fan-out
- 008_change_notify.sql adds so_pillar.change_queue + an AFTER trigger
  on pillar_entry that enqueues the locator and pg_notifies
  'so_pillar_change'. Queue is drained at-least-once so engine restarts
  don't lose events; pg_notify is just the wakeup signal.
- New salt-master engine pg_notify_pillar.py LISTENs on the channel,
  drains the queue with FOR UPDATE SKIP LOCKED, debounces bursts, and
  fires 'so/pillar/changed' events grouped by (scope, role, minion).
- Reactor so_pillar_changed.sls catches the tag and dispatches to
  orch.so_pillar_reload, which carries a DISPATCH map of pillar-path
  prefix -> (state sls, role grain set) so adding a new service to
  the auto-reload list is a one-line edit instead of a new reactor.
- Engine + reactor wiring is gated on the same postgres:so_pillar:enabled
  flag as the schema and ext_pillar config so the whole stack flips
  on/off together.

Tests: 21 new cases (112 total, all passing) covering mode resolution,
PG-managed detection, and PG-canonical read/write/purge routing with
the PG client stubbed.
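Quick shell checks for the mode plumbing described above (get-verb
argument order assumed):

  cat /opt/so/conf/so-yaml/mode                      # dual | postgres | disk
  SO_YAML_BACKEND=disk so-yaml.py get YAML_FILE KEY  # one-shot override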
2026-05-01 09:31:48 -04:00
Mike Reeves 23255f88e0 add so-yaml dual-write to so_pillar.* + purge verb
Hooks every so-yaml.py write through a new so_yaml_postgres helper that
mirrors disk YAML mutations into so_pillar.pillar_entry via docker exec
psql. Disk remains canonical during the transition; PG mirror failures
are logged only when a real write error occurs (skipped paths and
postgres-unreachable cases stay silent so existing callers don't see
new noise on stderr).

Adds a `purge YAML_FILE` verb on so-yaml that deletes the file from
disk and removes the matching pillar_entry rows. For minion files it
also drops the so_pillar.minion row, which CASCADEs to pillar_entry +
role_member. Designed for so-minion's delete path (replaces rm -f) so
the audit log captures the deletion.
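Usage sketch, with the minion pillar path from the existing
per-minion convention:

  so-yaml.py purge /opt/so/saltstack/local/pillar/minions/MINION_ID.sls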

setup/so-functions::generate_passwords + secrets_pillar generate
secrets:pillar_master_pass and /opt/so/conf/postgres/so_pillar.key on
fresh installs, and append the password to existing secrets.sls files
on upgrade.

- salt/manager/tools/sbin/so_yaml_postgres.py: locate(), write_yaml(),
  purge_yaml(), and a small CLI for diagnostics. Skips bootstrap and
  mine-driven paths via the same allowlist used by so-pillar-import.
- salt/manager/tools/sbin/so-yaml.py: import the helper, hook
  writeYaml() to mirror after every disk write, add purgeFile() and
  the purge verb.
- salt/manager/tools/sbin/so-yaml_test.py: 16 new tests covering the
  purge verb and the path-locator / write contract of so_yaml_postgres
  without contacting Postgres. All 91 tests pass.
- setup/so-functions: generate_passwords adds PILLARMASTERPASS and
  SO_PILLAR_KEY; secrets_pillar writes pillar_master_pass and the
  pgcrypto master key file.
2026-04-30 17:09:58 -04:00
Mike Reeves d30b52b327 add so-pillar-import — seeds so_pillar.* from on-disk pillar tree
Idempotent importer that schema_pillar.sls runs once at end of postgres
state on first install, and that so-minion can call per-minion on add /
delete. UPSERTs into so_pillar.pillar_entry; the audit trigger handles
versioning so re-runs without SLS edits produce no version bumps.

Connects via docker exec so-postgres psql, so no DSN config is required
at first-install time. Skips bootstrap files (secrets.sls, postgres/
auth.sls, etc.), mine-driven nodes.sls files, and any file containing
Jinja templates — those stay disk-authoritative and ext_pillar_first:
False means they render before the PG overlay.

Auto-syncs to /usr/sbin via the existing manager_sbin file.recurse.
2026-04-30 16:34:05 -04:00
Mike Reeves 3fad895d6a add so_pillar schema + ext_pillar wiring (postsalt foundation)
Lays the database-backed pillar foundation for the postsalt branch. Salt
continues to read on-disk SLS first; the new ext_pillar config overlays
values from the so_pillar.* schema in so-postgres.

- salt/postgres/files/schema/pillar/00{1..7}_*.sql: idempotent DDL for
  scope/role/role_member/minion/pillar_entry/pillar_entry_history/
  drift_log, secret pgcrypto helpers, RLS, pg_cron retention.
- salt/postgres/schema_pillar.sls: applies the SQL files inside the
  so-postgres container after it's healthy, configures the master_key
  GUC, and runs so-pillar-import once. Gated on
  postgres:so_pillar:enabled feature flag (default false).
- salt/salt/master/ext_pillar_postgres.{sls,conf.jinja}: drops
  /etc/salt/master.d/ext_pillar_postgres.conf with list-form ext_pillar
  queries (global/role/minion/secrets) and ext_pillar_first: False so
  bootstrap pillars on disk render before the PG overlay.
- salt/postgres/init.sls + salt/salt/master.sls: include the new states.

Both new state branches are guarded so a default install with the flag
off is a no-op.
2026-04-30 16:30:57 -04:00
Jorge Reyes ce3ad3a895 Merge pull request #15844 from Security-Onion-Solutions/reyesj2/elastic-agent-warning
update default elastic agent logging level to warning
2026-04-30 09:46:28 -05:00
Mike Reeves 3a4b7b50de ensure python3-pyyaml is installed before continuing setup 2026-04-30 10:15:09 -04:00
reyesj2 39d0947102 update default elastic agent logging level to warning 2026-04-29 17:38:40 -05:00
Jorge Reyes 0085d9a353 Merge pull request #15842 from Security-Onion-Solutions/reyesj2-patch-1
so-elastic-fleet-outputs-update now checks for cert drift. Remove run…
2026-04-29 12:37:04 -05:00
Jorge Reyes 2f01ce3b23 so-elastic-fleet-outputs-update now checks for cert drift. Remove running --cert arg on cert change to prevent highstate from running outputs-update 2x 2026-04-29 12:33:28 -05:00
Mike Reeves 71b19c1b5f Merge pull request #15840 from Security-Onion-Solutions/fix/import-postgres-firewall
Open postgres in DOCKER-USER firewall everywhere influxdb is open
2026-04-29 09:20:03 -04:00
Mike Reeves 82e55ae87f Open postgres on every hostgroup that opens influxdb
The static defaults only listed postgres on each role's self-hostgroup,
leaving sensor/searchnode/heavynode/receiver/fleet/idh/desktop/hypervisor
hostgroups unable to reach the manager's so-postgres in distributed
grids. A dynamic block in firewall/map.jinja added postgres to those
hostgroups only when telegraf.output was switched to POSTGRES/BOTH,
which left postgres unreachable by default.

Statically add postgres next to influxdb in the manager/managerhype/
managersearch/standalone defaults for every hostgroup that already
lists influxdb, and drop the now-redundant telegraf-gated dynamic
block from firewall/map.jinja.
2026-04-29 09:09:50 -04:00
Mike Reeves 3e02001544 Open postgres port for import role in DOCKER-USER firewall
When so-postgres was wired in (868cd1187), the import role's firewall
defaults were missed while every other manager-class role (manager,
managerhype, managersearch, standalone, eval) had postgres added to
their DOCKER-USER manager-hostgroup portgroups. As a result, on a
fresh import install the so-postgres container starts but tcp/5432 is
dropped at DOCKER-USER, so soc/kratos/telegraf can't reach it.

Add postgres alongside the existing influxdb entry so import nodes
match the other roles.
2026-04-29 08:48:45 -04:00
Mike Reeves 82f70bb53a Merge pull request #15839 from Security-Onion-Solutions/fix/drop-postgres-soc-module-injection
drop postgres module from soc defaults injection
2026-04-28 15:48:49 -04:00
Mike Reeves 2dcded6cca drop postgres module from soc defaults injection
The soc binary on 3/dev does not register a postgres module, so injecting
postgres into soc.config.server.modules makes soc abort at launch with
'Module does not exist: postgres'. The soc-side module is staged on
feature/postgres but is not landing this release. Drop the injection
until the module ships; salt/postgres state and pillars are unchanged.
2026-04-28 15:46:56 -04:00
Mike Reeves 8ca59e6f0c Merge pull request #15838 from Security-Onion-Solutions/fix/docker-refresh-multiarch-pull
Fix/docker refresh multiarch pull
2026-04-28 15:14:27 -04:00
Mike Reeves 82dac82d15 drop platform/digest pull resolution
The digest-pull logic was added to make `docker push` work for multi-arch
upstream tags. Now that the push step is `docker buildx imagetools create`
pinned to the gpg-verified RepoDigest, the registry-to-registry copy
handles single- and multi-arch sources without help. Reverts the pull
back to the original line and removes the unused PLATFORM_OS/_ARCH
detection.
2026-04-28 14:54:25 -04:00
Mike Reeves 288a823edf push images via buildx imagetools create
Replaces `docker push` with a registry-to-registry copy. On Docker 29.x
with the containerd image store, `docker push` of a freshly-pulled image
hits a path that wraps single-platform manifests in a synthetic index
and then can't push the layers it claims to reference, producing
`NotFound: content digest ...` even when the image is fully present.

Keep the local `docker tag` so so-image-pull's `docker images | grep :5000`
existence check continues to work.
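A sketch of the copy (image names hypothetical; :5000 is the embedded
registry):

  docker tag upstream.example/so-soc:1.0 localhost:5000/so-soc:1.0
  docker buildx imagetools create \
    --tag localhost:5000/so-soc:1.0 \
    upstream.example/so-soc@sha256:GPG_VERIFIED_REPODIGEST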
2026-04-28 14:49:02 -04:00
Jorge Reyes f9e3d30a71 Merge pull request #15837 from Security-Onion-Solutions/reyesj2/elastic-fleet-cert-check
check current fleet policy cert against cert on disk
2026-04-28 13:47:55 -05:00
reyesj2 9cec79b299 check current fleet policy cert against cert on disk
Co-authored-by: Copilot <copilot@github.com>
2026-04-28 13:34:39 -05:00
Mike Reeves c86399327b fix so-docker-refresh push for multi-arch source images
docker pull of a multi-arch tag on Docker 29.x leaves the local tag
pointing at the image index rather than the platform-specific manifest.
The subsequent docker push then tries to push every sub-manifest the
index references and fails on layers we never fetched.

Resolve the local-platform manifest digest from the upstream index via
docker buildx imagetools inspect, pull by that digest, and re-tag locally
to the canonical tag. The signing flow and the existing tag/push to the
embedded registry are unchanged.
2026-04-28 14:27:59 -04:00
Mike Reeves fa8162de02 Merge pull request #15749 from Security-Onion-Solutions/feature/postgres
Add so-postgres Salt states and infrastructure
2026-04-28 10:15:47 -04:00
Josh Patterson 33abc429d1 Merge pull request #15835 from Security-Onion-Solutions/fix/reactor/sominon_setup
fix sominion_setup reactor
2026-04-28 08:55:58 -04:00
Jorge Reyes b22585ca90 Merge pull request #15833 from Security-Onion-Solutions/reyesj2-es933
exclude more transform job errors
2026-04-27 15:05:11 -05:00
reyesj2 9f2ca7012f exclude more transform job errors 2026-04-27 15:02:13 -05:00
Josh Patterson 21aeb68188 fix sominion_setup reactor 2026-04-27 14:30:41 -04:00
Josh Patterson 81e60ec5bf Merge pull request #15829 from Security-Onion-Solutions/fix/reinstall2
fix reinstall
2026-04-24 16:20:53 -04:00
Josh Patterson 199c2746f1 stop salt-minion and salt-master regardless of install type. display reinstall on console and save to logfile 2026-04-24 15:24:11 -04:00
Josh Patterson 8eca465ef6 uninstall elastic-agent before stopping dockers on reinstall 2026-04-24 14:35:11 -04:00
Jorge Reyes a45e59239f Merge pull request #15826 from Security-Onion-Solutions/reyesj2-es933
heavynode should run es cluster state
2026-04-24 13:07:48 -05:00
Josh Patterson 2ad0bcab7c Merge pull request #15828 from Security-Onion-Solutions/fix/annotations
readonly soc and kratos enabled
2026-04-24 14:00:02 -04:00
Josh Patterson 070d150420 readonly soc and kratos enabled 2026-04-24 13:56:35 -04:00
reyesj2 90ecbe90d8 allow heavynodes to run elasticsearch/cluster state 2026-04-24 12:56:27 -05:00
Josh Patterson 813fa03dc3 Merge pull request #15824 from Security-Onion-Solutions/fix/reinstall2
fix reinstall issue with salt
2026-04-24 12:22:54 -04:00
Josh Patterson 02381fbbe9 stop salt-cloud, belt-and-suspenders against a broken/incomplete salt RPM 2026-04-24 11:33:21 -04:00
Josh Patterson 0722b681b1 redo service stop on reinstall 2026-04-24 11:04:46 -04:00
Josh Patterson 564815e836 redo how services are stopped during reinstall 2026-04-24 10:46:29 -04:00
Jorge Reyes 88b30adf7f Merge pull request #15823 from Security-Onion-Solutions/reyesj2-es933
typo
2026-04-24 09:27:08 -05:00
reyesj2 b6acf3b522 typo 2026-04-24 09:24:58 -05:00
Jason Ertel ba55468da8 Merge pull request #15822 from Security-Onion-Solutions/jertel/wip
numeric test description
2026-04-24 08:26:55 -04:00
Jason Ertel cdd217283d numeric test description 2026-04-24 08:13:36 -04:00
Jorge Reyes 810a582717 Merge pull request #15813 from Security-Onion-Solutions/reyesj2-es933
split up Elastic Fleet state
2026-04-23 14:51:32 -05:00
Mike Reeves a6948e8dcb Remove helpLink for influxdb in soc_global.yaml
Removed helpLink for influxdb from endgamehost configuration.
2026-04-23 13:56:41 -04:00
Mike Reeves 5f35554fdc Merge pull request #15712 from Security-Onion-Solutions/soupfix
Fix soup
2026-04-23 12:39:50 -04:00
Mike Reeves 0ecc7ae594 soup: drop --local from postgres.telegraf_users reconcile
The manager's /etc/salt/minion (written by so-functions:configure_minion)
has no file_roots, so salt-call --local falls back to Salt's default
/srv/salt and fails with "No matching sls found for 'postgres.telegraf_users'
in env 'base'". || true was silently swallowing the error, which meant the
DB roles for the pillar entries just populated by the so-telegraf-cred
backfill loop never actually got created.

Route through salt-master instead; its file_roots already points at the
default/local salt trees.
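The fixed invocation is the same call without --local (a sketch; any
other flags unchanged):

  salt-call state.apply postgres.telegraf_users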
2026-04-23 11:25:44 -04:00
reyesj2 fdfca469cc prevent non-manager nodes from running elasticsearch.cluster state manually 2026-04-23 09:53:07 -05:00
reyesj2 5f2ec76ba8 prevent fleetnode from being able to run elasticfleet.manager state manually 2026-04-23 09:50:45 -05:00
reyesj2 b015c8ff14 remove docker import 2026-04-23 09:31:30 -05:00
reyesj2 7e70870a9e remove globals import 2026-04-23 09:25:36 -05:00
Mike Reeves eadad6c163 soup: bootstrap postgres pillar stubs and secret on 3.0.0 upgrade
pillar/top.sls now references postgres.soc_postgres / postgres.adv_postgres
unconditionally, but make_some_dirs only runs at install time so managers
upgrading from 3.0.0 have no local/pillar/postgres/ and salt-master fails
pillar render on the first post-upgrade restart. Similarly, secrets_pillar
is a no-op on upgrade (secrets.sls already exists), so secrets:postgres_pass
never gets seeded and the postgres container's POSTGRES_PASSWORD_FILE and
SOC's PG_ADMIN_PASS would land empty after highstate.

Add ensure_postgres_local_pillar and ensure_postgres_secret to up_to_3.1.0
so the stubs and secret exist before masterlock/salt-master restart. Both
are idempotent and safe to re-run.
2026-04-23 10:01:38 -04:00
reyesj2 22b32a16dd include elasticfleet.config 2026-04-23 08:30:47 -05:00
reyesj2 22f869734e add check for files before attempting to use file pattern to load templates 2026-04-22 23:11:31 -05:00
reyesj2 398bc9e4ed update kibana discardCorruptObjects version 2026-04-22 20:38:13 -05:00
reyesj2 72dbb69a1c fix searchnodes running elasticsearch/cluster state 2026-04-22 20:37:48 -05:00
reyesj2 339959d1c0 split up elasticfleet/enabled state 2026-04-22 20:30:40 -05:00
Mike Reeves d5c0ec4404 so-yaml_test: cover loadYaml error paths
Exercises the FileNotFoundError and generic-exception branches added to
loadYaml in the previous commit, restoring 100% coverage required by
the build.
2026-04-22 14:30:51 -04:00
Mike Reeves e616b4c120 so-telegraf-cred: make executable and harden error handling
so-telegraf-cred was committed with mode 644, causing
`so-telegraf-cred add "$MINION_ID"` in so-minion's add_telegraf_to_minion
to fail with "Permission denied" and log "Failed to provision postgres
telegraf cred for <minion>". Mark it executable.

Also bail early in seed_creds_file if mkdir/printf/chmod fail, and in
so-yaml.py loadYaml surface a clear stderr message with the filename
instead of an unhandled FileNotFoundError traceback.
2026-04-22 14:25:19 -04:00
Mike Reeves f240a99e22 so-telegraf-cred: thin bash wrapper around so-yaml.py
Swap the ~150-line Python implementation for a 48-line bash script that
delegates YAML mutation to so-yaml.py — the same helper so-minion and
soup already use. Same semantics: seed the creds pillar on first use,
idempotent add, silent remove.

SO minion ids are dot-free by construction (setup/so-functions:1884
strips everything after the first '.'), so using the raw id as the
so-yaml.py key path is safe.
2026-04-22 11:09:53 -04:00
Mike Reeves 614f32c5e0 Split postgres auth from per-minion telegraf creds
The old flow had two writers for each per-minion Telegraf password
(so-minion wrote the minion pillar; postgres.auth regenerated any
missing aggregate entries). They drifted on first-boot and there was
no trigger to create DB roles when a new minion joined.

Split responsibilities:

- pillar/postgres/auth.sls (manager-scoped) keeps only the so_postgres
  admin cred.
- pillar/telegraf/creds.sls (grid-wide) holds a {minion_id: {user,
  pass}} map, shadowed per-install by the local-pillar copy.
- salt/manager/tools/sbin/so-telegraf-cred is the single writer:
  flock, atomic YAML write, PyYAML safe_dump so passwords never
  round-trip through so-yaml.py's type coercion. Idempotent add, quiet
  remove.
- so-minion's add/remove hooks now shell out to so-telegraf-cred
  instead of editing pillar files directly.
- postgres.telegraf_users iterates the new pillar key and CREATE/ALTERs
  roles from it; telegraf.conf reads its own entry via grains.id.
- orch.deploy_newnode runs postgres.telegraf_users on the manager and
  refreshes the new minion's pillar before the new node highstates,
  so the DB role is in place the first time telegraf tries to connect.
- soup's post_to_3.1.0 backfills the creds pillar from accepted salt
  keys (idempotent) and runs postgres.telegraf_users once to reconcile
  the DB.
2026-04-22 10:55:15 -04:00
Josh Patterson cd6707a566 Merge pull request #15800 from Security-Onion-Solutions/feature/vm-raid-status
monitor raid for vms
2026-04-22 09:42:44 -04:00
Josh Patterson edd207a9d5 soup update socloud.conf 2026-04-22 09:20:53 -04:00
Mike Reeves 724d76965f soup: update postgres backfill comment to reflect reactor removal
The reactor path is gone; so-minion now owns add/delete for new
minions. The backfill itself is unchanged — postgres.auth's up_minions
fallback fills the aggregate, postgres.telegraf_users creates the
roles, and the bash loop fans to per-minion pillar files — so the
pre-feature upgrade story still works end-to-end. Just refresh the
comment so it isn't misleading.
2026-04-21 15:45:05 -04:00
Mike Reeves dbf4fb66a4 Clean up postgres telegraf cred on so-minion delete
Paired with the add path in add_telegraf_to_minion: when a minion is
removed, drop its entry from the aggregate postgres pillar and drop the
matching so_telegraf_<safe> role from the database. Without this, stale
entries and DB roles accumulate over time.

Makes rotate-password and compromise-recovery both a clean delete+add:

  so-minion -o=delete -m=<id>
  so-minion -o=add    -m=<id>

The first call drops the role and clears the aggregate pillar; the
second generates a brand-new password.

The cleanup is best-effort — if so-postgres isn't running or the DROP
ROLE fails (e.g., the role owns unexpected objects), we log a warning
and continue so the minion delete itself never gets blocked by postgres
state. Admins can mop up stray roles manually if that happens.
2026-04-21 15:43:01 -04:00
Mike Reeves 5f28e9b191 Move per-minion telegraf cred provisioning into so-minion
Simpler, race-free replacement for the reactor + orch + fan-out chain.

- salt/manager/tools/sbin/so-minion: expand add_telegraf_to_minion to
  generate a random 72-char password, reuse any existing password from
  the aggregate pillar, write postgres.telegraf.{user,pass} into the
  minion's own pillar file, and update the aggregate pillar so
  postgres.telegraf_users can CREATE ROLE on the next manager apply.
  Every create<ROLE> function already calls this hook, so add / addVM /
  setup dispatches are all covered identically and synchronously.
- salt/postgres/auth.sls: strip the fanout_targets loop and the
  postgres_telegraf_minion_pillar_<safe> cmd.run block — it's now
  redundant. The state still manages the so_postgres admin user and
  writes the aggregate pillar for postgres.telegraf_users to consume.
- salt/reactor/telegraf_user_sync.sls: deleted.
- salt/orch/telegraf_postgres_sync.sls: deleted.
- salt/salt/master.sls: drop the reactor_config_telegraf block that
  registered the reactor on /etc/salt/master.d/reactor_telegraf.conf.
- salt/orch/deploy_newnode.sls: drop the manager_fanout_postgres_telegraf
  step and the require: it added to the newnode highstate. Back to its
  original 3/dev shape.

No more ephemeral postgres_fanout_minion pillar, no more async salt/key
reactor, no more so-minion setupMinionFiles race: the pillar write
happens inline inside setupMinionFiles itself.
2026-04-21 15:34:15 -04:00
Jorge Reyes 01bd3b6e06 Merge pull request #15807 from Security-Onion-Solutions/reyesj2-es933
urlencode elasticsearch version
2026-04-21 14:11:04 -05:00
Mike Reeves 1abfd77351 Hide telegraf password from console and close so-minion race
Two fixes on the postgres telegraf fan-out path:

1. postgres.auth cmd.run leaked the password to the console because
   Salt always prints the Name: field and `show_changes: False` does
   not apply to cmd.run. Move the user and password into the `env:`
   attribute so the shell body still sees them via $PG_USER / $PG_PASS
   but Salt's state reporter never renders them.

2. so-minion's addMinion -> setupMinionFiles sequence removes the
   minion pillar file and rewrites it from scratch, which wipes the
   postgres.telegraf.* entries the reactor may have already written on
   salt-key accept. Add a postgres.auth fan-out step to
   orch.deploy_newnode (the orch so-minion kicks off after
   setupMinionFiles) and require it from the new minion's highstate.
   Idempotent via the existing unless: guard in postgres.auth.
2026-04-21 15:10:57 -04:00
reyesj2 06a555fafb urlencode elasticsearch version 2026-04-21 14:01:31 -05:00
Mike Reeves 81c0f2b464 so-yaml.py: tolerate missing ancestors in removeKey
replace calls removeKey before addKey, so running `so-yaml.py replace`
on a new dotted key whose parent doesn't exist — e.g., postgres.auth
fanning postgres.telegraf.user into a minion pillar file that has
never carried any postgres.* keys — crashed with
    KeyError: 'postgres'
from removeKey recursing into a missing parent dict.

Make removeKey a no-op when an intermediate key is absent so that:
  - `remove` has the natural "remove if exists" semantics, and
  - `replace` works for brand-new nested keys.
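The previously-crashing case, as a sketch (so-yaml.py argument order
assumed):

  so-yaml.py replace /opt/so/saltstack/local/pillar/minions/MINION_ID.sls \
      postgres.telegraf.user so_telegraf_MINION_ID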
2026-04-21 14:43:10 -04:00
Mike Reeves d5dc28e526 Fan postgres telegraf cred for manager on every auth run
The empty-pillar case produced a telegraf.conf with `user= password=`
which libpq misparses ("password=" gets consumed as the user value),
yielding `password authentication failed for user "password="` on
every manager without a prior fan-out (fresh install, not the salt-key
path the reactor handles).

Two fixes:

- salt/postgres/auth.sls: always fan for grains.id in addition to any
  postgres_fanout_minion from the reactor, so the manager's own pillar
  is populated on every postgres.auth run. The existing `unless` guard
  keeps re-runs idempotent.
- salt/telegraf/etc/telegraf.conf: gate the [[outputs.postgresql]]
  block on PG_USER and PG_PASS being non-empty. If a minion hasn't
  received its pillar yet the output block simply isn't rendered — the
  next highstate picks up the creds once the fan-out completes, and in
  the meantime telegraf keeps running the other outputs instead of
  erroring with a malformed connection string.
2026-04-21 14:40:19 -04:00
Jason Ertel 7411031e11 Merge pull request #15803 from Security-Onion-Solutions/jertel/wip
more error handling during image updates
2026-04-21 10:21:56 -04:00
Jason Ertel 247091766c more error handling during image updates 2026-04-21 10:18:05 -04:00
Josh Patterson 7f93110d68 Merge remote-tracking branch 'origin/3/dev' into feature/vm-raid-status 2026-04-21 10:10:38 -04:00
Mike Reeves 05f6503d61 Gate postgres telegraf fan-out on reactor-provided minion id
postgres.auth was running an `unless` shell check per up-minion on every
manager highstate, even when nothing had changed — N fork+python starts
of so-yaml.py add up on large grids. The work is only needed when a
specific minion's key is accepted.

- salt/postgres/auth.sls: fan out only when postgres_fanout_minion
  pillar is set (targets that single minion). Manager highstates with
  no pillar take a zero-N code path.
- salt/reactor/telegraf_user_sync.sls: re-pass the accepted minion id
  as postgres_fanout_minion to the orch.
- salt/orch/telegraf_postgres_sync.sls: forward the pillar to the
  salt.state invocation so the state render sees it.
- salt/manager/tools/sbin/soup: for the one-time 3.1.0 backfill, drop
  the per-minion state.apply and do an in-shell loop over the minion
  pillar files using so-yaml.py directly. Skips minions that already
  have postgres.telegraf.user set.
2026-04-21 10:05:08 -04:00
Mike Reeves a149ea7e8f Skip per-minion pillar fan-out when cred is already in place
Every postgres.auth run was rewriting every minion pillar file via
two so-yaml.py replace calls, even when nothing had changed. Passwords
are only generated on first encounter (see the `if key not in
telegraf_users` guard) and never rotate, so re-writing the same values
on every apply is wasted work and noisy state output.

Add an `unless:` check that compares the already-written
postgres.telegraf.user to the one we'd set. If they match, skip the
fan-out entirely. On first apply for a new minion the key isn't there,
so the replace runs; on subsequent applies it's a no-op.
2026-04-21 09:59:46 -04:00
Mike Reeves bb71e44614 Write per-minion telegraf creds to each minion's own pillar file
pillar/top.sls only distributes postgres.auth to manager-class roles,
so sensors / heavynodes / searchnodes / receivers / fleet / idh /
hypervisor / desktop minions never received the postgres telegraf
password they need to write metrics. Broadcasting the aggregate
postgres.auth pillar to every role would leak the so_postgres admin
password and every other minion's cred.

Fan out per-minion credentials into each minion's own pillar file at
/opt/so/saltstack/local/pillar/minions/<id>.sls. That file is already
distributed by pillar/top.sls exclusively to the matching minion via
`- minions.{{ grains.id }}`, so each minion sees only its own
postgres.telegraf.{user,pass} and nothing else.

- salt/postgres/auth.sls: after writing the manager-scoped aggregate
  pillar, fan the per-minion creds out via so-yaml.py replace for every
  up-minion. Creates the minion pillar file if missing. Requires
  postgres_auth_pillar so the manager pillar lands first.
- salt/telegraf/etc/telegraf.conf: consume postgres:telegraf:user and
  postgres:telegraf:pass directly from the minion's own pillar instead
  of walking postgres:auth:users which isn't visible off the manager.
2026-04-21 09:57:35 -04:00
Mike Reeves 84197fb33b Move postgres backup script and cron to the postgres states
The so-postgres-backup script and its cron were living under
salt/backup/config_backup.sls, which meant they were deployed
regardless of whether postgres was enabled or disabled.

- Relocate salt/backup/tools/sbin/so-postgres-backup to
  salt/postgres/tools/sbin/so-postgres-backup so the existing
  postgres_sbin file.recurse in postgres/config.sls picks it up with
  everything else — no separate file.managed needed.
- Remove postgres_backup_script and so_postgres_backup from
  salt/backup/config_backup.sls.
- Add cron.present for so_postgres_backup to salt/postgres/enabled.sls
  and the matching cron.absent to salt/postgres/disabled.sls so the
  cron follows the container's lifecycle.
2026-04-21 09:42:41 -04:00
Mike Reeves 89a6e7c0dd Tidy config.sls makedirs and postgres helpLinks
- config.sls: postgresconfdir creates /opt/so/conf/postgres, so the
  two subdirectories under it (postgressecretsdir, postgresinitdir)
  don't need their own makedirs — require the parent instead.
- soc_postgres.yaml: helpLink for every annotated key now points to
  'postgres' instead of the carried-over 'influxdb' slug.
2026-04-21 09:39:58 -04:00
Mike Reeves a902f667ba Target manager by role grain in telegraf_postgres_sync orch
The previous MANAGER resolution used pillar.get('setup:manager') with a
fallback to grains.get('master'). Neither works from the reactor:
setup:manager is only populated by the setup workflow (not by reactor
runs), and grains.master returns the minion's master-hostname setting,
not a targetable minion id.

Match the pattern used by orch/delete_hypervisor.sls: compound-target
whichever minion is the manager via role grain.
2026-04-21 09:37:35 -04:00
Mike Reeves f72c30abd0 Have postgres.telegraf_users include postgres.enabled
postgres_wait_ready requires docker_container: so-postgres, which is
declared in postgres.enabled. Running postgres.telegraf_users on its own
— as the reactor orch and the soup post-upgrade step both do — errored
because Salt couldn't resolve the require.

Include postgres.enabled from postgres.telegraf_users so the container
state is always in the render. postgres.enabled already includes
telegraf_users; Salt de-duplicates the circular include and the included
states are all idempotent, so repeated application is a no-op.
2026-04-21 09:35:59 -04:00
Mike Reeves 37e9257698 Change so-postgres final_octet to 47 2026-04-21 09:33:47 -04:00
Mike Reeves 72105f1f2f Drop telegraf push from new-minion orch; highstate covers it
New minions run highstate as part of onboarding, which already applies
the telegraf state with the fresh pillar entry we just wrote. Pushing
telegraf a second time from the reactor is redundant.

- Remove the MINION-scoped salt.state block from the orch; keep only
  the manager-side postgres.auth + postgres.telegraf_users provisioning.
- Stop passing minion_id as pillar in the reactor; the orch doesn't
  reference it anymore.
2026-04-21 09:31:45 -04:00
Mike Reeves ee89b78751 Fire telegraf user sync on salt/key accept, not salt/auth
salt/auth fires on every minion authentication — including every minion
restart and every master restart — so the reactor was re-running the
postgres.auth + postgres.telegraf_users + telegraf orchestration for
every already-accepted minion on every reconnect. The underlying states
are idempotent, so this was wasted work and log noise, not a correctness
issue.

Switch the subscription to salt/key, which fires only when the master
actually changes a key's state (accept / reject / delete). Match the
pattern used by salt/reactor/check_hypervisor.sls (registered in
salt/salt/cloud/reactor_config_hypervisor.sls) and add the result==True
guard so half-failed key operations don't trigger the orchestration.
2026-04-20 19:54:06 -04:00
Jason Ertel 33ef138866 Merge pull request #15797 from Security-Onion-Solutions/jertel/wip
fix template annotation
2026-04-20 17:14:53 -04:00
Jason Ertel 71da27dc8e fix template annotation 2026-04-20 17:02:25 -04:00
Mike Reeves 80bf07ffd8 Flesh out soc_postgres.yaml annotations
Add Configuration-UI annotations for every postgres pillar key defined
in defaults.yaml, not just telegraf.retention_days:

- postgres.enabled          — readonly; admin-visible but toggled via state
- postgres.telegraf.retention_days — drop advanced so user-tunable knobs
  surface in the default view
- postgres.config.max_connections, shared_buffers, log_min_messages —
  user-tunable performance/verbosity knobs, not advanced
- postgres.config.listen_addresses, port, ssl, ssl_cert_file, ssl_key_file,
  ssl_ca_file, hba_file, log_destination, logging_collector,
  shared_preload_libraries, cron.database_name — infra/Salt-managed,
  marked advanced so they're visible but out of the way

No defaults.yaml change; value-side stays the same.
2026-04-20 16:36:37 -04:00
Mike Reeves b69e50542a Use TELEGRAFMERGED for telegraf.output and de-jinja pg_hba.conf
- firewall/map.jinja and postgres/telegraf_users.sls now pull the
  telegraf output selector through TELEGRAFMERGED so the defaults.yaml
  value (BOTH) is the source of truth and pillar overrides merge in
  cleanly. pillar.get with a hardcoded fallback was brittle and would
  disagree with defaults.yaml if the two ever diverged.
- Rename salt/postgres/files/pg_hba.conf.jinja to pg_hba.conf and drop
  template: jinja from config.sls — the file has no jinja besides the
  comment header.
2026-04-20 16:06:01 -04:00
Mike Reeves 3ecd19d085 Move telegraf_output from global pillar to telegraf pillar
The Telegraf backend selector lived at global.telegraf_output but it is
a Telegraf-scoped setting, not a cross-cutting grid global. Move both
the value and the UI annotation under the telegraf pillar so it shows
up alongside the other Telegraf tuning knobs in the Configuration UI.

- salt/telegraf/defaults.yaml:    add telegraf.output: BOTH
- salt/telegraf/soc_telegraf.yaml: add telegraf.output annotation
- salt/global/defaults.yaml:      remove global.telegraf_output
- salt/global/soc_global.yaml:    remove global.telegraf_output annotation
- salt/vars/globals.map.jinja:    drop telegraf_output from GLOBALS
- salt/firewall/map.jinja:        read via pillar.get('telegraf:output')
- salt/postgres/telegraf_users.sls: read via pillar.get('telegraf:output')
- salt/telegraf/etc/telegraf.conf: read via TELEGRAFMERGED.output
- salt/postgres/tools/sbin/so-stats-show: update user-facing docs

No behavioral change — default stays BOTH.
2026-04-20 16:03:02 -04:00
Mike Reeves b6a3d1889c Fix soup state.apply args for postgres provisioning
state.apply takes a single mods argument; space-separated names are not
a list, so `state.apply postgres.auth postgres.telegraf_users` was only
applying postgres.auth and silently dropping the telegraf_users state.

Use comma-separated mods and add queue=True to match the rest of soup.
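The fixed call, for reference:

  salt-call state.apply postgres.auth,postgres.telegraf_users queue=True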
2026-04-20 14:40:32 -04:00
Mike Reeves 1cb34b089c Restore 3/dev soup and add postgres users to post_to_3.1.0
feature/postgres had rewritten the 3.1.0 upgrade block, dropping the
elastic upgrade work 3/dev landed for 9.0.8→9.3.3: elasticsearch_backup_index_templates,
the component template state cleanup, and the /usr/sbin/so-kibana-space-defaults
post-upgrade call. It also carried an older ES upgrade mapping
(8.18.8→9.0.8) that was superseded on 3/dev (9.0.8→9.3.3 for
3.0.0-20260331), and a handful of latent shell-quoting regressions in
verify_es_version_compatibility and the intermediate-upgrade helpers.

Adopt the 3/dev soup verbatim and only add the new Telegraf Postgres
provisioning to post_to_3.1.0 on top of so-kibana-space-defaults.
2026-04-20 14:38:55 -04:00
Mike Reeves 1537ba5031 Merge remote-tracking branch 'origin/3/dev' into feature/postgres 2026-04-20 14:32:05 -04:00
Mike Reeves 8225d41661 Harden postgres secrets, TLS enforcement, and admin tooling
- Deliver postgres super and app passwords via mounted 0600 secret files
  (POSTGRES_PASSWORD_FILE, SO_POSTGRES_PASS_FILE) instead of plaintext env
  vars visible in docker inspect output
- Mount a managed pg_hba.conf that only allows local trust and hostssl
  scram-sha-256 so TCP clients cannot negotiate cleartext sessions
- Restrict postgres.key to 0400 and ensure owner/group 939
- Set umask 0077 on so-postgres-backup output
- Validate host values in so-stats-show against [A-Za-z0-9._-] before SQL
  interpolation so a compromised minion cannot inject SQL via a tag value
- Coerce postgres:telegraf:retention_days to int before rendering into SQL
- Escape single quotes when rendering pillar values into postgresql.conf
- Own postgres tooling in /usr/sbin as root:root so a container escape
  cannot rewrite admin scripts
- Gate ES migration TLS verification on esVerifyCert (default false,
  matching the elastic module's existing pattern)
2026-04-20 12:36:17 -04:00
Josh Patterson ee437265fc monitor raid for vms 2026-04-20 12:00:02 -04:00
Mike Reeves 3f46caaf02 Revoke PUBLIC CONNECT on securityonion database
Per-minion telegraf roles inherit CONNECT via PUBLIC by default and
could open sessions to the SOC database (though they have no readable
grants inside). Close the soft edge by revoking PUBLIC's CONNECT and
re-granting it to so_postgres only.
2026-04-17 19:10:07 -04:00
Mike Reeves f3181b204a Remove so-telegraf-trim and update retention description
pg_partman drops old partitions hourly; row-DELETE retention is
obsolete and a confusing emergency fallback on partitioned tables.
2026-04-17 19:06:16 -04:00
Mike Reeves dd39db4584 Drop so_telegraf_trim cron.absent tombstone
feature/postgres never shipped the original cron.present, so this
cleanup state is a no-op on every fresh install. The script itself
stays on disk for emergency use.
2026-04-17 18:59:39 -04:00
Mike Reeves 759880a800 Wait for TCP-ready postgres, not the init-phase Unix socket
docker-entrypoint.sh runs the init-scripts phase with listen_addresses=''
(Unix socket only). The old pg_isready check passed there and then raced
the docker_temp_server_stop shutdown before the final postgres started.
pg_isready -h 127.0.0.1 only returns success once the real CMD binds
TCP, so downstream psql execs never land during the shutdown window.
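A sketch of the gate (loop form assumed):

  until docker exec so-postgres pg_isready -h 127.0.0.1; do sleep 1; done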
2026-04-17 16:43:41 -04:00
Jorge Reyes f5cd90d139 Merge pull request #15786 from Security-Onion-Solutions/reyesj2-es933
add wait_for_so-elasticsearch state and split elasticsearch cluster c…
2026-04-17 14:47:11 -05:00
Mike Reeves 31383bd9d0 Make Telegraf Postgres templates idempotent
Use CREATE TABLE IF NOT EXISTS and a WHERE-guarded create_parent() so
a Telegraf restart can re-run the templates safely after manual DB
surgery. Add an explicit tag_table_create_templates mirroring the
plugin default with IF NOT EXISTS for the same reason.
2026-04-17 15:43:50 -04:00
reyesj2 ebb93b4fa7 add wait_for_so-elasticsearch state and split elasticsearch cluster configuration out of enabled.sls 2026-04-17 14:43:07 -05:00
Mike Reeves 21076af01e Grant so_telegraf CREATE on partman schema
pg_partman 5.x's create_partition() creates a per-parent template
table inside the partman schema at runtime, which requires CREATE on
that schema. Also extend ALTER DEFAULT PRIVILEGES so the
runtime-created template tables are accessible to so_telegraf.
2026-04-17 15:34:19 -04:00
Mike Reeves f11e9da83a Mark time column NOT NULL before partman.create_parent
pg_partman 5.x requires the control column to be NOT NULL; Telegraf's
generated columns are nullable by default.
2026-04-17 15:27:06 -04:00
Mike Reeves 0fddcd8fe7 Pass unquoted schema.name to partman.create_parent
pg_partman 5.x splits p_parent_table on '.' and looks up the parts as
raw identifiers, so the literal must be 'schema.name' rather than the
double-quoted form quoteLiteral emits for {{ .table }}.
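The accepted form, as psql (table name hypothetical; interval from the
partman adoption commit):

  psql -d so_telegraf -c "SELECT partman.create_parent(
      p_parent_table := 'telegraf.cpu',
      p_control      := 'time',
      p_interval     := '1 day')"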
2026-04-17 15:22:57 -04:00
Mike Reeves 927eba566c Grant so_telegraf access to partman schema
Telegraf calls partman.create_parent() on first write of each metric,
which needs USAGE on the partman schema, EXECUTE on its functions and
procedures, and DML on partman.part_config.
2026-04-17 15:13:08 -04:00
Mike Reeves af9330a9dd Escape Go-template placeholders from Jinja in telegraf.conf 2026-04-17 15:04:37 -04:00
Mike Reeves b3fbd5c7a4 Use Go-template placeholders and shell-guarded CREATE DATABASE
- Telegraf's outputs.postgresql plugin uses Go text/template syntax,
  not uppercase tokens. The {TABLE}/{COLUMNS}/{TABLELITERAL} strings
  were passed through to Postgres literally, producing syntax errors
  on every metric's first write. Switch to {{ .table }}, {{ .columns }},
  and {{ .table|quoteLiteral }} so partitioned parents and the partman
  create_parent() call succeed.
- Replace the \gexec "CREATE DATABASE ... WHERE NOT EXISTS" idiom in
  both init-users.sh and telegraf_users.sls with an explicit shell
  conditional. The prior idiom occasionally fired CREATE DATABASE even
  when so_telegraf already existed, producing duplicate-key failures.
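A sketch of the explicit guard (psql flags assumed):

  exists=$(psql -U postgres -tAc \
    "SELECT 1 FROM pg_database WHERE datname = 'so_telegraf'")
  if [ "$exists" != "1" ]; then
    psql -U postgres -c 'CREATE DATABASE so_telegraf'
  fi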
2026-04-17 14:55:13 -04:00
Mike Reeves 5228668be0 Fix Telegraf→Postgres table creation and state.apply race
- Telegraf's partman template passed p_type:='native', which pg_partman
  5.x (the version shipped by postgresql-17-partman on Debian) rejects.
  Switched to 'range' so partman.create_parent() actually creates
  partitions and Telegraf's INSERTs succeed.
- Added a postgres_wait_ready gate in telegraf_users.sls so psql execs
  don't race the init-time restart that docker-entrypoint.sh performs.
- so-verify now ignores the literal "-v ON_ERROR_STOP=1" token in the
  setup log. Dropped the matching entry from so-log-check, which scans
  container stdout where that token never appears.
2026-04-17 13:00:12 -04:00
Mike Reeves 7d07f3c8fe Create so_telegraf DB from Salt and pin pg_partman schema
init-users.sh only runs on a fresh data dir, so upgrades onto an
existing /nsm/postgres volume never got so_telegraf. Pinning partman's
schema also makes partman.part_config reliably resolvable.
2026-04-17 10:51:08 -04:00
Mike Reeves d9a9029ce5 Adopt pg_partman + pg_cron for Telegraf metric tables
Every telegraf.* metric table is now a daily time-range partitioned
parent managed by pg_partman. Retention drops old partitions instead
of the row-by-row DELETE that so-telegraf-trim used to run nightly,
and dashboards will benefit from partition pruning at query time.

- Load pg_cron at server start via shared_preload_libraries and point
  cron.database_name at so_telegraf so job metadata lives alongside
  the metrics
- Telegraf create_templates override makes every new metric table a
  PARTITION BY RANGE (time) parent registered with partman.create_parent
  in one transaction (1 day interval, 3 premade)
- postgres_telegraf_group_role now also creates pg_partman and pg_cron
  extensions and schedules hourly partman.run_maintenance_proc
- New retention reconcile state updates partman.part_config.retention
  from postgres.telegraf.retention_days on every apply
- so_telegraf_trim cron is now unconditionally absent; script stays on
  disk as a manual fallback
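The scheduling and retention reconcile described above, sketched as
psql (job name and flags assumed):

  psql -d so_telegraf -c "SELECT cron.schedule('partman_maint',
      '0 * * * *', 'CALL partman.run_maintenance_proc()')"
  psql -d so_telegraf -c "UPDATE partman.part_config
      SET retention = '14 days' WHERE parent_table LIKE 'telegraf.%'"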
2026-04-16 17:27:15 -04:00
Mike Reeves 9fe53d9ccc Use JSONB for Telegraf fields/tags to avoid 1600-column limit
High-cardinality inputs (docker, procstat, kafka) trigger ALTER TABLE
ADD COLUMN on every new field name, and with all minions writing into
a shared 'telegraf' schema the metric tables hit Postgres's 1600-column
per-table ceiling quickly. Setting fields_as_jsonb and tags_as_jsonb on
the postgresql output keeps metric tables fixed at (time, tag_id,
fields jsonb) and tag tables at (tag_id, tags jsonb).

- so-stats-show rewritten to use JSONB accessors
  ((fields->>'x')::numeric, tags->>'host', etc.) and cast memory/disk
  sizes to bigint so pg_size_pretty works
- Drop regex/regexFailureMessage from telegraf_output SOC UI entry to
  match the convention upstream used when removing them from
  mdengine/pcapengine/pipeline; options: list drives validation
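A query sketch in the rewritten style (metric and field names
hypothetical; <metric>_tag naming per the earlier so-show-stats fix):

  psql -d so_telegraf -c "SELECT m.time, t.tags->>'host' AS host,
      (m.fields->>'used_percent')::numeric AS used_pct
      FROM telegraf.mem m JOIN telegraf.mem_tag t USING (tag_id)
      ORDER BY m.time DESC LIMIT 1"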
2026-04-16 17:02:21 -04:00
Mike Reeves f7b80f5931 Merge branch '3/dev' into feature/postgres 2026-04-16 16:37:02 -04:00
Mike Reeves f11d315fea Fix soup 2026-04-16 16:35:24 -04:00
Mike Reeves 2013bf9e30 Fix soup 2026-04-16 16:20:25 -04:00
Mike Reeves a2ffb92b8d Fix soup 2026-04-16 16:19:53 -04:00
Jorge Reyes 8b6d11b118 Merge pull request #15780 from Security-Onion-Solutions/reyesj2-es932
suppress noisy warning from ES 9.3.3
2026-04-16 14:42:46 -05:00
reyesj2 ba00ae8a7b suppress noisy warning from ES 9.3.3 2026-04-16 14:41:25 -05:00
Mike Reeves 470b3bd4da Comingle Telegraf metrics into shared schema
Per-minion schemas cause table count to explode (N minions * M metrics)
and the per-minion revocation story isn't worth it when retention is
short. Move all minions to a shared 'telegraf' schema while keeping
per-minion login credentials for audit.

- New so_telegraf NOLOGIN group role owns the telegraf schema; each
  per-minion role is a member and inherits insert/select via role
  inheritance
- Telegraf connection string uses options='-c role=so_telegraf' so
  tables auto-created on first write belong to the group role
- so-telegraf-trim walks the flat telegraf.* table set instead of
  per-minion schemas
- so-stats-show filters by host tag; CLI arg is now the hostname as
  tagged by Telegraf rather than a sanitized schema suffix
- Also renames so-show-stats -> so-stats-show
2026-04-16 15:40:54 -04:00
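The ownership trick is the connection-time SET ROLE. A sketch, with a hypothetical minion name sensor1 and a placeholder password (the real roles and passwords come from postgres/auth.sls):
```
docker exec -i so-postgres psql -U postgres -d so_telegraf <<'SQL'
-- Group role owns the shared schema; it cannot log in itself.
CREATE ROLE so_telegraf NOLOGIN;
CREATE SCHEMA IF NOT EXISTS telegraf AUTHORIZATION so_telegraf;

-- Per-minion login keeps an audit identity but inherits the group's rights.
CREATE ROLE so_telegraf_sensor1 LOGIN PASSWORD 'changeme' IN ROLE so_telegraf;
SQL

# Telegraf's connection string sets the role at connect time, so tables
# auto-created on first write belong to so_telegraf, not the per-minion login:
#   host=manager user=so_telegraf_sensor1 dbname=so_telegraf options='-c role=so_telegraf'
```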
Mike Reeves c124186989 so-log-check: exclude psql ON_ERROR_STOP flag
The psql invocation flag '-v ON_ERROR_STOP=1' used by the so-postgres
init script gets flagged by so-log-check because the token 'ERROR'
matches its error regex. Add it to the exclusion list.
2026-04-15 19:45:42 -04:00
Mike Reeves d24808ff98 Fix so-show-stats tag column resolution
Telegraf's postgresql output stores tag values either as individual
columns on <metric>_tag or as a single JSONB 'tags' column, depending
on plugin version. Introspect information_schema.columns and build the
right accessor per tag instead of assuming one layout.
2026-04-15 19:28:10 -04:00
Jorge Reyes 7d22f7bd58 Merge pull request #15776 from Security-Onion-Solutions/foxtrot
ES 9.3.3
2026-04-15 16:29:34 -05:00
Jorge Reyes 88582c94e8 remove foxtrot version 2026-04-15 15:04:20 -05:00
Mike Reeves cefbe01333 Add telegraf_output selector for InfluxDB/Postgres dual-write
Introduces global.telegraf_output (INFLUXDB|POSTGRES|BOTH, default BOTH)
so Telegraf can write metrics to Postgres alongside or instead of
InfluxDB. Each minion authenticates with its own so_telegraf_<minion>
role and writes to a matching schema inside a shared so_telegraf
database, keeping blast radius per-credential to that minion's data.

- Per-minion credentials auto-generated and persisted in postgres/auth.sls
- postgres/telegraf_users.sls reconciles roles/schemas on every apply
- Firewall opens 5432 only to minion hostgroups when Postgres output is active
- Reactor on salt/auth + orch/telegraf_postgres_sync.sls provision new
  minions automatically on key accept
- soup post_to_3.1.0 backfills users for existing minions on upgrade
- so-show-stats prints latest CPU/mem/disk/load per minion for sanity checks
- so-telegraf-trim + nightly cron prune rows older than
  postgres.telegraf.retention_days (default 14)
2026-04-15 14:32:10 -04:00
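Of the pieces above, so-telegraf-trim is the most mechanical. A sketch of the nightly prune under assumed names — the real script at this point walked the per-minion schemas, and the retention value comes from the postgres.telegraf.retention_days pillar:
```
RETENTION_DAYS=14   # assumed default, per postgres.telegraf.retention_days

for tbl in $(docker exec so-postgres psql -U postgres -d so_telegraf -tAc \
  "SELECT format('%I.%I', schemaname, tablename)
     FROM pg_tables
    WHERE schemaname LIKE 'telegraf%' AND tablename NOT LIKE '%\_tag'"); do
  docker exec so-postgres psql -U postgres -d so_telegraf -c \
    "DELETE FROM $tbl WHERE time < now() - interval '$RETENTION_DAYS days'"
done
```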
Jorge Reyes 76a6997de2 Merge pull request #15775 from Security-Onion-Solutions/reyesj2-es932
check for addon-index templates dir before attempting to load addon i…
2026-04-14 19:27:02 -05:00
reyesj2 16a4a42faf check for addon-index templates dir before attempting to load addon index templates 2026-04-14 19:26:37 -05:00
Jorge Reyes 0e4623c728 Merge pull request #15772 from Security-Onion-Solutions/reyesj2-es932
soup to 3.1.0
2026-04-14 15:04:46 -05:00
reyesj2 d598e20fbb soup 3.1.0 2026-04-14 14:55:33 -05:00
Jason Ertel 8b0d4b2195 Merge pull request #15769 from Security-Onion-Solutions/jertel/wip
Improve test scenario for node descriptions
2026-04-13 18:43:01 -04:00
Jorge Reyes cf414423b1 Merge pull request #15770 from Security-Onion-Solutions/reyesj2-es932
enable elastic agent patch release for 9.3.3
2026-04-13 16:28:20 -05:00
reyesj2 0405a66c72 enable elastic agent patch release for 9.3.3 2026-04-13 16:27:28 -05:00
Jason Ertel da7c2995b0 include trailing numbers as an additional test 2026-04-13 17:09:10 -04:00
Jorge Reyes 696a1a729c Merge pull request #15768 from Security-Onion-Solutions/reyesj2-es932
ES 9.3.3
2026-04-13 15:02:19 -05:00
Jason Ertel 5fa7006f11 Merge pull request #15766 from Security-Onion-Solutions/jertel/wip
support minion node descriptions containing spaces
2026-04-13 15:24:45 -04:00
Jason Ertel 5634aed679 support minion node descriptions containing spaces 2026-04-13 15:19:39 -04:00
reyesj2 a232cd89cc ES 9.3.3 2026-04-13 13:36:51 -05:00
reyesj2 dd40e44530 show when addon integrations are already loaded 2026-04-13 12:36:42 -05:00
Jorge Reyes 47d226e189 Merge pull request #15765 from Security-Onion-Solutions/3/dev
3/dev
2026-04-13 10:40:38 -05:00
Jorge Reyes 440537140b Merge pull request #15764 from Security-Onion-Solutions/reyesj2-es932
elasticsearch ilm policy load script
2026-04-13 10:39:12 -05:00
reyesj2 29e13b2c0b elasticsearch ilm policy load script 2026-04-13 10:00:17 -05:00
Jorge Reyes 2006a07637 Merge pull request #15763 from Security-Onion-Solutions/reyesj2-es932
start loading addon integration index templates
2026-04-12 00:40:18 -05:00
reyesj2 abcad9fde0 addon statefile 2026-04-12 00:36:30 -05:00
reyesj2 a43947cca5 elasticsearch template load script -- for addon index templates 2026-04-12 00:23:26 -05:00
Jorge Reyes f51de6569f Merge pull request #15762 from Security-Onion-Solutions/reyesj2-es932
only append "-mappings" to component template names as needed
2026-04-11 15:42:33 -05:00
reyesj2 b0584a4dc5 only append "-mappings" to component template names as needed 2026-04-11 15:22:50 -05:00
Jorge Reyes 08f34d408f Merge pull request #15761 from Security-Onion-Solutions/reyesj2-es932
rework elasticsearch template load script -- for core templates
2026-04-11 04:42:45 -05:00
reyesj2 6298397534 rework elasticsearch template load script -- for core templates 2026-04-11 04:40:47 -05:00
Mike Reeves 9ccd0acb4f Add ES credentials to postgres module config for migration
The postgres module now queries Elasticsearch directly via HTTP
for the chat migration (bypassing RBAC, which needs a user context).
Pass esHostUrl, esUsername, and esPassword alongside the postgres creds.
2026-04-10 11:41:33 -04:00
Mike Reeves 1ffdcab3be Add postgres adminPassword to SOC module config
Injects the postgres superuser password from secrets pillar so
SOC can run schema migrations as admin before switching to the
app user for normal operations.
2026-04-09 22:21:35 -04:00
Mike Reeves da1045e052 Fix init-users.sh password escaping for special characters
Use format() with %L for SQL literal escaping instead of raw
string interpolation. Also ALTER ROLE if the user already exists
to keep the password in sync with the pillar.
2026-04-09 21:52:20 -04:00
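The difference is between splicing the password into the SQL text and letting Postgres quote it. A sketch of the safe shape — role name and password are placeholders:
```
docker exec -i so-postgres psql -U postgres -v ON_ERROR_STOP=1 <<'SQL'
DO $$
DECLARE
  pw text := 'p@ss''word"with$pecials';  -- placeholder secret
BEGIN
  IF EXISTS (SELECT FROM pg_roles WHERE rolname = 'so_telegraf_sensor1') THEN
    -- Re-assert the password so it tracks the pillar on every run.
    EXECUTE format('ALTER ROLE so_telegraf_sensor1 WITH PASSWORD %L', pw);
  ELSE
    EXECUTE format('CREATE ROLE so_telegraf_sensor1 LOGIN PASSWORD %L', pw);
  END IF;
END
$$;
SQL
```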
Mike Reeves 55be1f1119 Only add postgres module config on manager nodes
Removed postgres from soc/defaults.yaml (shared by all nodes)
and moved it entirely into defaults.map.jinja, which only injects
the config when postgres auth pillar exists (manager-type nodes).
Sensors and other non-manager nodes will not have a postgres module
section in their sensoroni.json, so sensoroni won't try to connect.
2026-04-09 21:09:43 -04:00
Jorge Reyes 9272afa9e5 Merge pull request #15754 from Security-Onion-Solutions/reyesj2-es932
initialize vars
2026-04-09 18:42:14 -05:00
reyesj2 378d1ec81b initialize vars 2026-04-09 18:41:40 -05:00
Mike Reeves c1b1452bd9 Use manager IP for postgres hostUrl instead of container hostname
SOC connects to postgres via the host network, not the Docker
bridge network, so it needs the manager's IP address rather than
the container hostname.
2026-04-09 19:34:14 -04:00
Jorge Reyes cdbacdcd7e Merge pull request #15751 from Security-Onion-Solutions/reyesj2-es932
rework elasticsearch index template generation
2026-04-09 16:46:56 -05:00
reyesj2 6b8a6267da remove unused elasticsearch:index_template pillar references 2026-04-09 16:45:26 -05:00
reyesj2 89e49d0bf3 rework elasticsearch index template generation 2026-04-09 16:44:51 -05:00
Mike Reeves 2dfa83dd7d Wire postgres credentials into SOC module config
- Create vars/postgres.map.jinja for postgres auth globals
- Add POSTGRES_GLOBALS to all manager-type role vars
  (manager, eval, standalone, managersearch, import)
- Add postgres module config to soc/defaults.yaml
- Inject so_postgres credentials from auth pillar into
  soc/defaults.map.jinja (conditional on auth pillar existing)
2026-04-09 14:09:32 -04:00
reyesj2 f0b67a415a more filestream integration policy updates 2026-04-09 12:40:55 -05:00
Mike Reeves b87af8ea3d Add postgres.auth to allowed_states
Matches the elasticsearch.auth pattern where auth states use
the full sls path check and are explicitly listed.
2026-04-09 12:39:46 -04:00
Mike Reeves 46e38d39bb Enable postgres by default
Safe because postgres states are only applied to manager-type
nodes via top.sls and allowed_states.map.jinja.
2026-04-09 12:23:47 -04:00
Matthew Wright 81afbd32d4 Merge pull request #15742 from Security-Onion-Solutions/mwright/ai-query-length
Assistant: charsPerTokenEstimate
2026-04-09 11:28:37 -04:00
Josh Patterson e9c4f40735 Merge pull request #15745 from Security-Onion-Solutions/delta
define options in annotation files
2026-04-09 10:39:13 -04:00
Mike Reeves 61bdfb1a4b Add daily PostgreSQL database backup
- pg_dumpall piped through gzip, stored in /nsm/backup/
- Runs daily at 00:05 (4 minutes after config backup)
- 7-day retention matching existing config backup policy
- Skips gracefully if container isn't running
2026-04-09 10:29:10 -04:00
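A sketch of what the cron-invoked job amounts to (the filename pattern is assumed; the pipeline and retention mirror the commit text):
```
#!/bin/bash
# Daily at 00:05 via cron; skips quietly when the container is down.
set -o pipefail
BACKUP_DIR=/nsm/backup

if docker ps --format '{{.Names}}' | grep -qx so-postgres; then
  docker exec so-postgres pg_dumpall -U postgres |
    gzip > "$BACKUP_DIR/so-postgres-$(date +%Y%m%d).sql.gz"
  # 7-day retention, matching the config backup policy.
  find "$BACKUP_DIR" -name 'so-postgres-*.sql.gz' -mtime +7 -delete
fi
```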
Josh Patterson 9ec4a26f97 define options in annotation files 2026-04-09 10:18:36 -04:00
Mike Reeves 358a2e6d3f Add so-postgres to container image pull list
Add to both the import and default manager container lists so
the image gets downloaded during installation.
2026-04-09 10:02:41 -04:00
Mike Reeves 762e73faf5 Add so-postgres host management scripts
- so-postgres-manage: wraps docker exec for psql operations
  (sql, sqlfile, shell, dblist, userlist)
- so-postgres-start/stop/restart: standard container lifecycle
- Scripts installed to /usr/sbin via file.recurse in config.sls
2026-04-09 09:55:42 -04:00
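The subcommands listed map naturally onto a thin docker-exec dispatcher. A sketch — the argument handling is assumed, only the subcommand names come from the commit:
```
#!/bin/bash
case "$1" in
  sql)      shift; docker exec so-postgres psql -U postgres -c "$*" ;;
  sqlfile)  docker exec -i so-postgres psql -U postgres < "$2" ;;
  shell)    docker exec -it so-postgres psql -U postgres ;;
  dblist)   docker exec so-postgres psql -U postgres -c '\l' ;;
  userlist) docker exec so-postgres psql -U postgres -c '\du' ;;
  *)        echo "usage: $0 {sql|sqlfile|shell|dblist|userlist}" >&2; exit 1 ;;
esac
```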
Josh Patterson ef3cfc8722 Merge pull request #15741 from Security-Onion-Solutions/fix/suricata-pcap-log-max-files
ensure max-files is 1 at minimum
2026-04-08 16:00:26 -04:00
Matthew Wright 28d31f4840 add charsPerTokenEstimate 2026-04-08 15:25:51 -04:00
Josh Patterson 2166bb749a ensure max-files is 1 at minimum 2026-04-08 14:59:05 -04:00
Mike Reeves 868cd11874 Add so-postgres Salt states and integration wiring
Phase 1 of the PostgreSQL central data platform:
- Salt states: init, enabled, disabled, config, ssl, auth, sostatus
- TLS via SO CA-signed certs with postgresql.conf template
- Two-tier auth: postgres superuser + so_postgres application user
- Firewall restricts port 5432 to manager-only (HA-ready)
- Wired into top.sls, pillar/top.sls, allowed_states, firewall
  containers map, docker defaults, CA signing policies, and setup
  scripts for all manager-type roles
2026-04-08 10:58:52 -04:00
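For the TLS piece, the postgresql.conf template presumably sets the standard server-side knobs. A hypothetical excerpt with assumed cert paths — only /etc/pki/ca.crt is confirmed elsewhere in this changeset:
```
ssl = on
ssl_cert_file = '/etc/pki/postgres.crt'   # assumed path for the CA-signed cert
ssl_key_file  = '/etc/pki/postgres.key'   # assumed path for its key
ssl_ca_file   = '/etc/pki/ca.crt'         # SO CA, per the signing policy below
```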
Jorge Reyes 7356f3affd Merge pull request #15733 from Security-Onion-Solutions/reyesj2-es932
filestream integration policy updates
2026-04-07 11:14:10 -05:00
reyesj2 dd56e7f1ac filestream integration policy updates 2026-04-07 11:08:10 -05:00
Jorge Reyes 075b592471 Merge pull request #15728 from Security-Onion-Solutions/reyesj2-es932
foxtrot version
2026-04-06 17:36:08 -05:00
reyesj2 51a3c04c3d foxtrot version 2026-04-06 17:35:08 -05:00
Jorge Reyes 1a8aae3039 Merge pull request #15727 from Security-Onion-Solutions/reyesj2-es932
ES 9.3.2
2026-04-06 15:09:45 -05:00
reyesj2 8101bc4941 ES 9.3.2 2026-04-06 15:08:30 -05:00
Mike Reeves 88de246ce3 Merge pull request #15725 from Security-Onion-Solutions/3/main
License Link to dev
2026-04-06 10:59:22 -04:00
Mike Reeves 3643b57167 Merge pull request #15724 from Security-Onion-Solutions/TOoSmOotH-patch-2
Fix JA4+ license link in soc_zeek.yaml
2026-04-06 10:24:04 -04:00
Mike Reeves 5b3ca98b80 Fix JA4+ license link in soc_zeek.yaml
Updated the license link in the JA4+ fingerprinting description.
2026-04-06 10:12:37 -04:00
reyesj2 51e0ca2602 Merge branch '3/main' of github.com:Security-Onion-Solutions/securityonion into reyesj2-es932 2026-04-01 14:46:05 -05:00
Mike Reeves 664f3fd18a Fix soup 2026-04-01 14:47:05 -04:00
Jason Ertel 76f4ccf8c8 Merge pull request #15705 from Security-Onion-Solutions/3/main
Merge pr/workflow changes back to dev
2026-04-01 10:57:34 -04:00
Jason Ertel 2a37ad82b2 Merge pull request #15704 from Security-Onion-Solutions/jertel/mainpr
pr/workflow changes
2026-04-01 10:55:57 -04:00
Jason Ertel 80540da52f pr/workflow changes 2026-04-01 10:48:47 -04:00
Jason Ertel e4ba3d6a2a pr/workflow changes 2026-04-01 10:47:59 -04:00
Mike Reeves 3dec6986b6 Merge pull request #15702 from Security-Onion-Solutions/3/main
soup fix
2026-03-31 15:12:01 -04:00
Mike Reeves bbfb58ea4e Merge pull request #15701 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update SOUP_BRANCH to use 3/main instead of 2.4/main
2026-03-31 15:09:34 -04:00
Mike Reeves c91deb97b1 Update SOUP_BRANCH to use 3/main instead of 2.4/main 2026-03-31 15:07:23 -04:00
reyesj2 dc2598d5cf Merge branch '3/main' of github.com:Security-Onion-Solutions/securityonion into HEAD 2026-03-31 14:01:58 -05:00
Mike Reeves ff45e5ebc6 Merge pull request #15699 from Security-Onion-Solutions/TOoSmOotH-patch-4
Version Bump
2026-03-31 13:55:55 -04:00
Mike Reeves 1e2b51eae6 Add version 3.1.0 to discussion template options 2026-03-31 13:53:00 -04:00
Mike Reeves 58d332ea94 Bump version from 3.0.0 to 3.1.0 2026-03-31 13:52:07 -04:00
Mike Reeves dcc67b9b8f Merge pull request #15696 from Security-Onion-Solutions/3/dev
3.0.0
2026-03-31 13:47:03 -04:00
Mike Reeves cd886dd0f9 Merge pull request #15698 from Security-Onion-Solutions/merge-main-into-dev
Merge 3/main into 3/dev
2026-03-31 09:49:36 -04:00
Mike Reeves 37a6e28a6c Merge remote-tracking branch 'origin/3/dev' into merge-main-into-dev 2026-03-31 09:48:06 -04:00
Mike Reeves 434a2e7866 Merge pull request #15695 from Security-Onion-Solutions/3.0.0
3.0.0
2026-03-31 09:33:34 -04:00
Mike Reeves 79707db6ee 3.0.0 2026-03-31 09:17:08 -04:00
Josh Brower 0707507412 Merge pull request #15694 from Security-Onion-Solutions/fixpath
Remove hardcoded index
2026-03-30 12:47:55 -04:00
Josh Brower c7e865aa1c Remove hardcoded index 2026-03-30 12:42:48 -04:00
Josh Brower a89db79854 Merge pull request #15691 from Security-Onion-Solutions/jertel/wip
revisit workflows
2026-03-27 16:24:30 -04:00
Jason Ertel 812f65eee8 revisit workflows 2026-03-27 16:11:31 -04:00
Josh Patterson cfa530ba9c Merge pull request #15690 from Security-Onion-Solutions/delta
ensure bool sliders soc
2026-03-27 15:19:30 -04:00
Josh Patterson 922c008b11 ensure bool sliders soc 2026-03-27 15:02:54 -04:00
Mike Reeves ea30749512 Merge pull request #15676 from Security-Onion-Solutions/TOoSmOotH-patch-3
Make AI adapter settings visible
2026-03-26 09:43:58 -04:00
Mike Reeves 0a55592d7e Make AI adapter settings visible
Changed 'advanced' field from True to False for AI adapters and available models.
2026-03-26 09:37:39 -04:00
Josh Brower 115ca2c41d Merge pull request #15672 from Security-Onion-Solutions/yaracomments
update yara template
2026-03-24 15:59:48 -04:00
Josh Brower 9e53bd3f2d update yara template 2026-03-24 15:56:26 -04:00
Josh Brower d4f1078f84 Merge pull request #15669 from Security-Onion-Solutions/lowercasefix
Lowercase network transport
2026-03-24 11:30:13 -04:00
Josh Brower 1f9bf45b66 Lowercase network transport 2026-03-24 11:24:59 -04:00
Mike Reeves 271de757e7 Merge pull request #15667 from Security-Onion-Solutions/TOoSmOotH-patch-1
Enable clean option for Zeek configuration
2026-03-24 09:56:03 -04:00
Mike Reeves d4ac352b5a Enable clean option for Zeek configuration 2026-03-24 09:54:49 -04:00
Jorge Reyes afcef1d0e7 Merge pull request #15661 from Security-Onion-Solutions/reyesj2-361
update stig profile v1r3
2026-03-23 18:09:33 -05:00
Josh Patterson 91b164b728 Merge pull request #15665 from Security-Onion-Solutions/delta
allow negation in suricata address-group vars
2026-03-23 17:34:21 -04:00
Josh Patterson 6a4501241d allow negation in suricata address-group vars 2026-03-23 17:24:12 -04:00
Josh Brower c6978f9037 Merge pull request #15663 from Security-Onion-Solutions/fix/idh-skins
Remove hardcoded path
2026-03-23 16:30:51 -04:00
Jorge Reyes fb7b73c601 Merge pull request #15662 from Security-Onion-Solutions/reyesj2-patch-1
exclude oscap profile from gitleaks
2026-03-23 14:23:24 -05:00
Jorge Reyes f2b6d59c65 exclude oscap profile from gitleaks 2026-03-23 14:17:39 -05:00
reyesj2 67162357a3 update stig profile v1r3 2026-03-23 14:04:48 -05:00
Mike Reeves cd0d88e2c0 Merge pull request #15440 from Security-Onion-Solutions/3/dev
Change version from 2.4.201 to UNRELEASED
2026-01-29 12:56:54 -05:00
175 changed files with 80721 additions and 45144 deletions
-546
@@ -1,546 +0,0 @@
title = "gitleaks config"
# Gitleaks rules are defined by regular expressions and entropy ranges.
# Some secrets have unique signatures which make detecting those secrets easy.
# Examples of those secrets would be GitLab Personal Access Tokens, AWS keys, and GitHub Access Tokens.
# All these examples have defined prefixes like `glpat`, `AKIA`, `ghp_`, etc.
#
# Other secrets might just be a hash which means we need to write more complex rules to verify
# that what we are matching is a secret.
#
# Here is an example of a semi-generic secret
#
# discord_client_secret = "8dyfuiRyq=vVc3RRr_edRk-fK__JItpZ"
#
# We can write a regular expression to capture the variable name (identifier),
# the assignment symbol (like '=' or ':='), and finally the actual secret.
# The structure of a rule to match this example secret is below:
#
# Beginning string
# quotation
# │ End string quotation
# │ │
# ▼ ▼
# (?i)(discord[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9=_\-]{32})['\"]
#
# ▲ ▲ ▲
# │ │ │
# │ │ │
# identifier assignment symbol
# Secret
#
[[rules]]
id = "gitlab-pat"
description = "GitLab Personal Access Token"
regex = '''glpat-[0-9a-zA-Z\-\_]{20}'''
[[rules]]
id = "aws-access-token"
description = "AWS"
regex = '''(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}'''
# Cryptographic keys
[[rules]]
id = "PKCS8-PK"
description = "PKCS8 private key"
regex = '''-----BEGIN PRIVATE KEY-----'''
[[rules]]
id = "RSA-PK"
description = "RSA private key"
regex = '''-----BEGIN RSA PRIVATE KEY-----'''
[[rules]]
id = "OPENSSH-PK"
description = "SSH private key"
regex = '''-----BEGIN OPENSSH PRIVATE KEY-----'''
[[rules]]
id = "PGP-PK"
description = "PGP private key"
regex = '''-----BEGIN PGP PRIVATE KEY BLOCK-----'''
[[rules]]
id = "github-pat"
description = "GitHub Personal Access Token"
regex = '''ghp_[0-9a-zA-Z]{36}'''
[[rules]]
id = "github-oauth"
description = "GitHub OAuth Access Token"
regex = '''gho_[0-9a-zA-Z]{36}'''
[[rules]]
id = "SSH-DSA-PK"
description = "SSH (DSA) private key"
regex = '''-----BEGIN DSA PRIVATE KEY-----'''
[[rules]]
id = "SSH-EC-PK"
description = "SSH (EC) private key"
regex = '''-----BEGIN EC PRIVATE KEY-----'''
[[rules]]
id = "github-app-token"
description = "GitHub App Token"
regex = '''(ghu|ghs)_[0-9a-zA-Z]{36}'''
[[rules]]
id = "github-refresh-token"
description = "GitHub Refresh Token"
regex = '''ghr_[0-9a-zA-Z]{76}'''
[[rules]]
id = "shopify-shared-secret"
description = "Shopify shared secret"
regex = '''shpss_[a-fA-F0-9]{32}'''
[[rules]]
id = "shopify-access-token"
description = "Shopify access token"
regex = '''shpat_[a-fA-F0-9]{32}'''
[[rules]]
id = "shopify-custom-access-token"
description = "Shopify custom app access token"
regex = '''shpca_[a-fA-F0-9]{32}'''
[[rules]]
id = "shopify-private-app-access-token"
description = "Shopify private app access token"
regex = '''shppa_[a-fA-F0-9]{32}'''
[[rules]]
id = "slack-access-token"
description = "Slack token"
regex = '''xox[baprs]-([0-9a-zA-Z]{10,48})?'''
[[rules]]
id = "stripe-access-token"
description = "Stripe"
regex = '''(?i)(sk|pk)_(test|live)_[0-9a-z]{10,32}'''
[[rules]]
id = "pypi-upload-token"
description = "PyPI upload token"
regex = '''pypi-AgEIcHlwaS5vcmc[A-Za-z0-9\-_]{50,1000}'''
[[rules]]
id = "gcp-service-account"
description = "Google (GCP) Service-account"
regex = '''\"type\": \"service_account\"'''
[[rules]]
id = "heroku-api-key"
description = "Heroku API Key"
regex = ''' (?i)(heroku[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12})['\"]'''
secretGroup = 3
[[rules]]
id = "slack-web-hook"
description = "Slack Webhook"
regex = '''https://hooks.slack.com/services/T[a-zA-Z0-9_]{8}/B[a-zA-Z0-9_]{8,12}/[a-zA-Z0-9_]{24}'''
[[rules]]
id = "twilio-api-key"
description = "Twilio API Key"
regex = '''SK[0-9a-fA-F]{32}'''
[[rules]]
id = "age-secret-key"
description = "Age secret key"
regex = '''AGE-SECRET-KEY-1[QPZRY9X8GF2TVDW0S3JN54KHCE6MUA7L]{58}'''
[[rules]]
id = "facebook-token"
description = "Facebook token"
regex = '''(?i)(facebook[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "twitter-token"
description = "Twitter token"
regex = '''(?i)(twitter[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{35,44})['\"]'''
secretGroup = 3
[[rules]]
id = "adobe-client-id"
description = "Adobe Client ID (Oauth Web)"
regex = '''(?i)(adobe[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "adobe-client-secret"
description = "Adobe Client Secret"
regex = '''(p8e-)(?i)[a-z0-9]{32}'''
[[rules]]
id = "alibaba-access-key-id"
description = "Alibaba AccessKey ID"
regex = '''(LTAI)(?i)[a-z0-9]{20}'''
[[rules]]
id = "alibaba-secret-key"
description = "Alibaba Secret Key"
regex = '''(?i)(alibaba[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{30})['\"]'''
secretGroup = 3
[[rules]]
id = "asana-client-id"
description = "Asana Client ID"
regex = '''(?i)(asana[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([0-9]{16})['\"]'''
secretGroup = 3
[[rules]]
id = "asana-client-secret"
description = "Asana Client Secret"
regex = '''(?i)(asana[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "atlassian-api-token"
description = "Atlassian API token"
regex = '''(?i)(atlassian[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{24})['\"]'''
secretGroup = 3
[[rules]]
id = "bitbucket-client-id"
description = "Bitbucket client ID"
regex = '''(?i)(bitbucket[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "bitbucket-client-secret"
description = "Bitbucket client secret"
regex = '''(?i)(bitbucket[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9_\-]{64})['\"]'''
secretGroup = 3
[[rules]]
id = "beamer-api-token"
description = "Beamer API token"
regex = '''(?i)(beamer[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"](b_[a-z0-9=_\-]{44})['\"]'''
secretGroup = 3
[[rules]]
id = "clojars-api-token"
description = "Clojars API token"
regex = '''(CLOJARS_)(?i)[a-z0-9]{60}'''
[[rules]]
id = "contentful-delivery-api-token"
description = "Contentful delivery API token"
regex = '''(?i)(contentful[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9\-=_]{43})['\"]'''
secretGroup = 3
[[rules]]
id = "databricks-api-token"
description = "Databricks API token"
regex = '''dapi[a-h0-9]{32}'''
[[rules]]
id = "discord-api-token"
description = "Discord API key"
regex = '''(?i)(discord[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-h0-9]{64})['\"]'''
secretGroup = 3
[[rules]]
id = "discord-client-id"
description = "Discord client ID"
regex = '''(?i)(discord[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([0-9]{18})['\"]'''
secretGroup = 3
[[rules]]
id = "discord-client-secret"
description = "Discord client secret"
regex = '''(?i)(discord[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9=_\-]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "doppler-api-token"
description = "Doppler API token"
regex = '''['\"](dp\.pt\.)(?i)[a-z0-9]{43}['\"]'''
[[rules]]
id = "dropbox-api-secret"
description = "Dropbox API secret/key"
regex = '''(?i)(dropbox[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{15})['\"]'''
[[rules]]
id = "dropbox--api-key"
description = "Dropbox API secret/key"
regex = '''(?i)(dropbox[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{15})['\"]'''
[[rules]]
id = "dropbox-short-lived-api-token"
description = "Dropbox short lived API token"
regex = '''(?i)(dropbox[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"](sl\.[a-z0-9\-=_]{135})['\"]'''
[[rules]]
id = "dropbox-long-lived-api-token"
description = "Dropbox long lived API token"
regex = '''(?i)(dropbox[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"][a-z0-9]{11}(AAAAAAAAAA)[a-z0-9\-_=]{43}['\"]'''
[[rules]]
id = "duffel-api-token"
description = "Duffel API token"
regex = '''['\"]duffel_(test|live)_(?i)[a-z0-9_-]{43}['\"]'''
[[rules]]
id = "dynatrace-api-token"
description = "Dynatrace API token"
regex = '''['\"]dt0c01\.(?i)[a-z0-9]{24}\.[a-z0-9]{64}['\"]'''
[[rules]]
id = "easypost-api-token"
description = "EasyPost API token"
regex = '''['\"]EZAK(?i)[a-z0-9]{54}['\"]'''
[[rules]]
id = "easypost-test-api-token"
description = "EasyPost test API token"
regex = '''['\"]EZTK(?i)[a-z0-9]{54}['\"]'''
[[rules]]
id = "fastly-api-token"
description = "Fastly API token"
regex = '''(?i)(fastly[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9\-=_]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "finicity-client-secret"
description = "Finicity client secret"
regex = '''(?i)(finicity[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{20})['\"]'''
secretGroup = 3
[[rules]]
id = "finicity-api-token"
description = "Finicity API token"
regex = '''(?i)(finicity[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "flutterwave-public-key"
description = "Flutterwave public key"
regex = '''FLWPUBK_TEST-(?i)[a-h0-9]{32}-X'''
[[rules]]
id = "flutterwave-secret-key"
description = "Flutterwave secret key"
regex = '''FLWSECK_TEST-(?i)[a-h0-9]{32}-X'''
[[rules]]
id = "flutterwave-enc-key"
description = "Flutterwave encrypted key"
regex = '''FLWSECK_TEST[a-h0-9]{12}'''
[[rules]]
id = "frameio-api-token"
description = "Frame.io API token"
regex = '''fio-u-(?i)[a-z0-9\-_=]{64}'''
[[rules]]
id = "gocardless-api-token"
description = "GoCardless API token"
regex = '''['\"]live_(?i)[a-z0-9\-_=]{40}['\"]'''
[[rules]]
id = "grafana-api-token"
description = "Grafana API token"
regex = '''['\"]eyJrIjoi(?i)[a-z0-9\-_=]{72,92}['\"]'''
[[rules]]
id = "hashicorp-tf-api-token"
description = "HashiCorp Terraform user/org API token"
regex = '''['\"](?i)[a-z0-9]{14}\.atlasv1\.[a-z0-9\-_=]{60,70}['\"]'''
[[rules]]
id = "hubspot-api-token"
description = "HubSpot API token"
regex = '''(?i)(hubspot[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-h0-9]{8}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{12})['\"]'''
secretGroup = 3
[[rules]]
id = "intercom-api-token"
description = "Intercom API token"
regex = '''(?i)(intercom[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9=_]{60})['\"]'''
secretGroup = 3
[[rules]]
id = "intercom-client-secret"
description = "Intercom client secret/ID"
regex = '''(?i)(intercom[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-h0-9]{8}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{12})['\"]'''
secretGroup = 3
[[rules]]
id = "ionic-api-token"
description = "Ionic API token"
regex = '''(?i)(ionic[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"](ion_[a-z0-9]{42})['\"]'''
[[rules]]
id = "linear-api-token"
description = "Linear API token"
regex = '''lin_api_(?i)[a-z0-9]{40}'''
[[rules]]
id = "linear-client-secret"
description = "Linear client secret/ID"
regex = '''(?i)(linear[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "lob-api-key"
description = "Lob API Key"
regex = '''(?i)(lob[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]((live|test)_[a-f0-9]{35})['\"]'''
secretGroup = 3
[[rules]]
id = "lob-pub-api-key"
description = "Lob Publishable API Key"
regex = '''(?i)(lob[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]((test|live)_pub_[a-f0-9]{31})['\"]'''
secretGroup = 3
[[rules]]
id = "mailchimp-api-key"
description = "Mailchimp API key"
regex = '''(?i)(mailchimp[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-f0-9]{32}-us20)['\"]'''
secretGroup = 3
[[rules]]
id = "mailgun-private-api-token"
description = "Mailgun private API token"
regex = '''(?i)(mailgun[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"](key-[a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "mailgun-pub-key"
description = "Mailgun public validation key"
regex = '''(?i)(mailgun[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"](pubkey-[a-f0-9]{32})['\"]'''
secretGroup = 3
[[rules]]
id = "mailgun-signing-key"
description = "Mailgun webhook signing key"
regex = '''(?i)(mailgun[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-h0-9]{32}-[a-h0-9]{8}-[a-h0-9]{8})['\"]'''
secretGroup = 3
[[rules]]
id = "mapbox-api-token"
description = "Mapbox API token"
regex = '''(?i)(pk\.[a-z0-9]{60}\.[a-z0-9]{22})'''
[[rules]]
id = "messagebird-api-token"
description = "MessageBird API token"
regex = '''(?i)(messagebird[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{25})['\"]'''
secretGroup = 3
[[rules]]
id = "messagebird-client-id"
description = "MessageBird API client ID"
regex = '''(?i)(messagebird[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-h0-9]{8}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{4}-[a-h0-9]{12})['\"]'''
secretGroup = 3
[[rules]]
id = "new-relic-user-api-key"
description = "New Relic user API Key"
regex = '''['\"](NRAK-[A-Z0-9]{27})['\"]'''
[[rules]]
id = "new-relic-user-api-id"
description = "New Relic user API ID"
regex = '''(?i)(newrelic[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([A-Z0-9]{64})['\"]'''
secretGroup = 3
[[rules]]
id = "new-relic-browser-api-token"
description = "New Relic ingest browser API token"
regex = '''['\"](NRJS-[a-f0-9]{19})['\"]'''
[[rules]]
id = "npm-access-token"
description = "npm access token"
regex = '''['\"](npm_(?i)[a-z0-9]{36})['\"]'''
[[rules]]
id = "planetscale-password"
description = "PlanetScale password"
regex = '''pscale_pw_(?i)[a-z0-9\-_\.]{43}'''
[[rules]]
id = "planetscale-api-token"
description = "PlanetScale API token"
regex = '''pscale_tkn_(?i)[a-z0-9\-_\.]{43}'''
[[rules]]
id = "postman-api-token"
description = "Postman API token"
regex = '''PMAK-(?i)[a-f0-9]{24}\-[a-f0-9]{34}'''
[[rules]]
id = "pulumi-api-token"
description = "Pulumi API token"
regex = '''pul-[a-f0-9]{40}'''
[[rules]]
id = "rubygems-api-token"
description = "Rubygem API token"
regex = '''rubygems_[a-f0-9]{48}'''
[[rules]]
id = "sendgrid-api-token"
description = "SendGrid API token"
regex = '''SG\.(?i)[a-z0-9_\-\.]{66}'''
[[rules]]
id = "sendinblue-api-token"
description = "Sendinblue API token"
regex = '''xkeysib-[a-f0-9]{64}\-(?i)[a-z0-9]{16}'''
[[rules]]
id = "shippo-api-token"
description = "Shippo API token"
regex = '''shippo_(live|test)_[a-f0-9]{40}'''
[[rules]]
id = "linkedin-client-secret"
description = "LinkedIn Client secret"
regex = '''(?i)(linkedin[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z]{16})['\"]'''
secretGroup = 3
[[rules]]
id = "linkedin-client-id"
description = "LinkedIn Client ID"
regex = '''(?i)(linkedin[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{14})['\"]'''
secretGroup = 3
[[rules]]
id = "twitch-api-token"
description = "Twitch API token"
regex = '''(?i)(twitch[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([a-z0-9]{30})['\"]'''
secretGroup = 3
[[rules]]
id = "typeform-api-token"
description = "Typeform API token"
regex = '''(?i)(typeform[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}(tfp_[a-z0-9\-_\.=]{59})'''
secretGroup = 3
[[rules]]
id = "generic-api-key"
description = "Generic API Key"
regex = '''(?i)((key|api[^Version]|token|secret|password)[a-z0-9_ .\-,]{0,25})(=|>|:=|\|\|:|<=|=>|:).{0,5}['\"]([0-9a-zA-Z\-_=]{8,64})['\"]'''
entropy = 3.7
secretGroup = 4
[allowlist]
description = "global allow lists"
regexes = ['''219-09-9999''', '''078-05-1120''', '''(9[0-9]{2}|666)-\d{2}-\d{4}''', '''RPM-GPG-KEY.*''', '''.*:.*StrelkaHexDump.*''', '''.*:.*PLACEHOLDER.*''', '''ssl_.*password''', '''integration_key\s=\s"so-logs-"''']
paths = [
'''gitleaks.toml''',
'''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)$''',
'''(go.mod|go.sum)$''',
'''salt/nginx/files/enterprise-attack.json''',
'''(.*?)whl$'''
]
+1
@@ -10,6 +10,7 @@ body:
options:
-
- 3.0.0
- 3.1.0
- Other (please provide detail below)
validations:
required: true
+22
@@ -0,0 +1,22 @@
## Description
<!--
Explain the purpose of the pull request. Be brief or detailed depending on the scope of the changes.
-->
## Related Issues
<!--
Optionally, list any related issues that this pull request addresses.
-->
## Checklist
- [ ] I have read and followed the [CONTRIBUTING.md](https://github.com/Security-Onion-Solutions/securityonion/blob/3/main/CONTRIBUTING.md) file.
- [ ] I have read and agree to the terms of the [Contributor License Agreement](https://securityonionsolutions.com/cla)
## Questions or Comments
<!--
If you have any questions or comments about this pull request, add them here.
-->
-24
@@ -1,24 +0,0 @@
name: contrib
on:
issue_comment:
types: [created]
pull_request_target:
types: [opened,closed,synchronize]
jobs:
CLAssistant:
runs-on: ubuntu-latest
steps:
- name: "Contributor Check"
if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
uses: cla-assistant/github-action@v2.3.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
with:
path-to-signatures: 'signatures_v1.json'
path-to-document: 'https://securityonionsolutions.com/cla'
allowlist: dependabot[bot],jertel,dougburks,TOoSmOotH,defensivedepth,m0duspwnens
remote-organization-name: Security-Onion-Solutions
remote-repository-name: licensing
-17
@@ -1,17 +0,0 @@
name: leak-test
on: [pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: '0'
- name: Gitleaks
uses: gitleaks/gitleaks-action@v1.6.0
with:
config-path: .github/.gitleaks.toml
+1 -1
@@ -23,7 +23,7 @@
* Link the PR to the related issue, either using [keywords](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in the PR description, or [manually](https://docs.github.com/en/issues/tracking-your-work-with-issues/creating-issues/linking-a-pull-request-to-an-issue#manually-linking-a-pull-request-to-an-issue).
* **Pull requests should be opened against the `dev` branch of this repo**, and should clearly describe the problem and solution.
* **Pull requests should be opened against the current `?/dev` branch of this repo**, and should clearly describe the problem and solution.
* Be sure you have tested your changes and are confident they will not break other parts of the product.
+13 -13
@@ -1,46 +1,46 @@
### 2.4.210-20260302 ISO image released on 2026/03/02
### 3.0.0-20260331 ISO image released on 2026/03/31
### Download and Verify
2.4.210-20260302 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.210-20260302.iso
3.0.0-20260331 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-3.0.0-20260331.iso
MD5: 575F316981891EBED2EE4E1F42A1F016
SHA1: 600945E8823221CBC5F1C056084A71355308227E
SHA256: A6AA6471125F07FA6E2796430E94BEAFDEF728E833E9728FDFA7106351EBC47E
MD5: ECD318A1662A6FDE0EF213F5A9BD4B07
SHA1: E55BE314440CCF3392DC0B06BC5E270B43176D9C
SHA256: 7FC47405E335CBE5C2B6C51FE7AC60248F35CBE504907B8B5A33822B23F8F4D5
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.210-20260302.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/3/main/sigs/securityonion-3.0.0-20260331.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/3/main/KEYS
For example, here are the steps you can use on most Linux distributions to download and verify our Security Onion ISO image.
Download and import the signing key:
```
wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS -O - | gpg --import -
wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/3/main/KEYS -O - | gpg --import -
```
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.210-20260302.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/3/main/sigs/securityonion-3.0.0-20260331.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.210-20260302.iso
wget https://download.securityonion.net/file/securityonion/securityonion-3.0.0-20260331.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.4.210-20260302.iso.sig securityonion-2.4.210-20260302.iso
gpg --verify securityonion-3.0.0-20260331.iso.sig securityonion-3.0.0-20260331.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Mon 02 Mar 2026 11:55:24 AM EST using RSA key ID FE507013
gpg: Signature made Mon 30 Mar 2026 06:22:14 PM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
+1 -1
@@ -1 +1 @@
3.0.0
3.1.0
-2
@@ -1,2 +0,0 @@
elasticsearch:
index_settings:
+12
@@ -0,0 +1,12 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Per-minion Telegraf Postgres credentials. so-telegraf-cred on the manager is
# the single writer; it mutates /opt/so/saltstack/local/pillar/telegraf/creds.sls
# under flock. Pillar_roots order (local before default) means the populated
# copy shadows this default on any real grid; this file exists so the pillar
# key is always defined on fresh installs and when no minions have creds yet.
telegraf:
postgres_creds: {}
+21 -3
@@ -17,6 +17,7 @@ base:
- sensoroni.adv_sensoroni
- telegraf.soc_telegraf
- telegraf.adv_telegraf
- telegraf.creds
- versionlock.soc_versionlock
- versionlock.adv_versionlock
- soc.license
@@ -38,6 +39,9 @@ base:
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
- postgres.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %}
@@ -60,6 +64,8 @@ base:
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- postgres.soc_postgres
- postgres.adv_postgres
- elasticsearch.nodes
- elasticsearch.soc_elasticsearch
- elasticsearch.adv_elasticsearch
@@ -97,10 +103,12 @@ base:
- node_data.ips
- secrets
- healthcheck.eval
- elasticsearch.index_templates
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
- postgres.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %}
@@ -126,6 +134,8 @@ base:
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- postgres.soc_postgres
- postgres.adv_postgres
- backup.soc_backup
- backup.adv_backup
- zeek.soc_zeek
@@ -142,10 +152,12 @@ base:
- logstash.nodes
- logstash.soc_logstash
- logstash.adv_logstash
- elasticsearch.index_templates
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
- postgres.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %}
@@ -160,6 +172,8 @@ base:
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- postgres.soc_postgres
- postgres.adv_postgres
- elasticsearch.nodes
- elasticsearch.soc_elasticsearch
- elasticsearch.adv_elasticsearch
@@ -256,10 +270,12 @@ base:
'*_import':
- node_data.ips
- secrets
- elasticsearch.index_templates
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/postgres/auth.sls') %}
- postgres.auth
{% endif %}
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/kibana/secrets.sls') %}
- kibana.secrets
{% endif %}
@@ -285,6 +301,8 @@ base:
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- postgres.soc_postgres
- postgres.adv_postgres
- zeek.soc_zeek
- zeek.adv_zeek
- bpf.soc_bpf
+5 -1
@@ -29,10 +29,14 @@
'manager',
'nginx',
'influxdb',
'postgres',
'postgres.auth',
'soc',
'kratos',
'hydra',
'elasticfleet',
'elasticfleet.manager',
'elasticsearch.cluster',
'elastic-fleet-package-registry',
'utility'
] %}
@@ -77,7 +81,7 @@
),
'so-heavynode': (
sensor_states +
['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
['elasticagent', 'elasticsearch', 'elasticsearch.cluster', 'logstash', 'redis', 'nginx']
),
'so-idh': (
['idh']
+1
@@ -32,3 +32,4 @@ so_config_backup:
- daymonth: '*'
- month: '*'
- dayweek: '*'
+14
@@ -54,6 +54,20 @@ x509_signing_policies:
- extendedKeyUsage: serverAuth
- days_valid: 820
- copypath: /etc/pki/issued_certs/
postgres:
- minions: '*'
- signing_private_key: /etc/pki/ca.key
- signing_cert: /etc/pki/ca.crt
- C: US
- ST: Utah
- L: Salt Lake City
- basicConstraints: "critical CA:false"
- keyUsage: "critical keyEncipherment"
- subjectKeyIdentifier: hash
- authorityKeyIdentifier: keyid,issuer:always
- extendedKeyUsage: serverAuth
- days_valid: 820
- copypath: /etc/pki/issued_certs/
elasticfleet:
- minions: '*'
- signing_private_key: /etc/pki/ca.key
+25 -4
@@ -31,6 +31,7 @@ container_list() {
"so-hydra"
"so-nginx"
"so-pcaptools"
"so-postgres"
"so-soc"
"so-suricata"
"so-telegraf"
@@ -55,6 +56,7 @@ container_list() {
"so-logstash"
"so-nginx"
"so-pcaptools"
"so-postgres"
"so-redis"
"so-soc"
"so-strelka-backend"
@@ -162,8 +164,8 @@ update_docker_containers() {
# Pull down the trusted docker image
run_check_net_err \
"docker pull $CONTAINER_REGISTRY/$IMAGEREPO/$image" \
"Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY" >> "$LOG_FILE" 2>&1
"Could not pull $image, please ensure connectivity to $CONTAINER_REGISTRY" >> "$LOG_FILE" 2>&1
# Get signature
run_check_net_err \
"curl --retry 5 --retry-delay 60 -A '$CURLTYPE/$CURRENTVERSION/$OS/$(uname -r)' $sig_url --output $SIGNPATH/$image.sig" \
@@ -186,8 +188,27 @@ update_docker_containers() {
if [ -z "$HOSTNAME" ]; then
HOSTNAME=$(hostname)
fi
docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
docker push $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1
docker tag $CONTAINER_REGISTRY/$IMAGEREPO/$image $HOSTNAME:5000/$IMAGEREPO/$image >> "$LOG_FILE" 2>&1 || {
echo "Unable to tag $image" >> "$LOG_FILE" 2>&1
exit 1
}
# Push to the embedded registry via a registry-to-registry copy. Avoids
# `docker push`, which on Docker 29.x with the containerd image store
# represents freshly-pulled images as an index whose layer content
# isn't reachable through the push path. The local `docker tag` above
# is preserved so so-image-pull's `:5000` existence check still works.
# Pin to the digest already gpg-verified above so we copy exactly the
# bytes we approved.
local VERIFIED_REF
VERIFIED_REF=$(echo "$DOCKERINSPECT" | jq -r ".[0].RepoDigests[] | select(. | contains(\"$CONTAINER_REGISTRY\"))" | head -n 1)
if [ -z "$VERIFIED_REF" ] || [ "$VERIFIED_REF" = "null" ]; then
echo "Unable to determine verified digest for $image" >> "$LOG_FILE" 2>&1
exit 1
fi
docker buildx imagetools create --tag $HOSTNAME:5000/$IMAGEREPO/$image "$VERIFIED_REF" >> "$LOG_FILE" 2>&1 || {
echo "Unable to copy $image to embedded registry" >> "$LOG_FILE" 2>&1
exit 1
}
fi
else
echo "There is a problem downloading the $image image. Details: " >> "$LOG_FILE" 2>&1
+1 -1
@@ -227,7 +227,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|from NIC checksum offloading" # zeek reporter.log
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|marked for removal" # docker container getting recycled
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tcp 127.0.0.1:6791: bind: address already in use" # so-elastic-fleet agent restarting. Seen starting w/ 8.18.8 https://github.com/elastic/kibana/issues/201459
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint).*user so_kibana lacks the required permissions \[logs-\1" # Known issue with 3 integrations using kibana_system role vs creating unique api creds with proper permissions.
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|TransformTask\] \[logs-(tychon|aws_billing|microsoft_defender_endpoint|armis|o365_metrics|microsoft_sentinel|snyk|cyera|island_browser).*user so_kibana lacks the required permissions \[(logs|metrics)-\1" # Known issue with integrations starting transform jobs that are explicitly not allowed to start as a system user. This error should not be seen on fresh ES 9.3.3 installs or after SO 3.1.0 with soups addition of check_transform_health_and_reauthorize()
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|manifest unknown" # appears in so-dockerregistry log for so-tcpreplay following docker upgrade to 29.2.1-1
fi
+9 -2
@@ -9,7 +9,7 @@
. /usr/sbin/so-common
software_raid=("SOSMN" "SOSMN-DE02" "SOSSNNV" "SOSSNNV-DE02" "SOS10k-DE02" "SOS10KNV" "SOS10KNV-DE02" "SOS10KNV-DE02" "SOS2000-DE02" "SOS-GOFAST-LT-DE02" "SOS-GOFAST-MD-DE02" "SOS-GOFAST-HV-DE02")
software_raid=("SOSMN" "SOSMN-DE02" "SOSSNNV" "SOSSNNV-DE02" "SOS10k-DE02" "SOS10KNV" "SOS10KNV-DE02" "SOS10KNV-DE02" "SOS2000-DE02" "SOS-GOFAST-LT-DE02" "SOS-GOFAST-MD-DE02" "SOS-GOFAST-HV-DE02" "HVGUEST")
hardware_raid=("SOS1000" "SOS1000F" "SOSSN7200" "SOS5000" "SOS4000")
{%- if salt['grains.get']('sosmodel', '') %}
@@ -87,6 +87,11 @@ check_boss_raid() {
}
check_software_raid() {
if [[ ! -f /proc/mdstat ]]; then
SWRAID=0
return
fi
SWRC=$(grep "_" /proc/mdstat)
if [[ -n $SWRC ]]; then
# RAID is failed in some way
@@ -107,7 +112,9 @@ if [[ "$is_hwraid" == "true" ]]; then
fi
if [[ "$is_softwareraid" == "true" ]]; then
check_software_raid
check_boss_raid
if [ "$model" != "HVGUEST" ]; then
check_boss_raid
fi
fi
sum=$(($SWRAID + $BOSSRAID + $HWRAID))
+8
@@ -237,3 +237,11 @@ docker:
extra_hosts: []
extra_env: []
ulimits: []
'so-postgres':
final_octet: 47
port_bindings:
- 0.0.0.0:5432:5432
custom_bind_mounts: []
extra_hosts: []
extra_env: []
ulimits: []
@@ -51,6 +51,16 @@ so-elastic-fleet-package-registry:
- {{ ULIMIT.name }}={{ ULIMIT.soft }}:{{ ULIMIT.hard }}
{% endfor %}
{% endif %}
wait_for_so-elastic-fleet-package-registry:
http.wait_for_successful_query:
- name: "http://localhost:8080/health"
- status: 200
- wait_for: 300
- request_interval: 15
- require:
- docker_container: so-elastic-fleet-package-registry
delete_so-elastic-fleet-package-registry_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
@@ -0,0 +1,123 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
this file except in compliance with the Elastic License 2.0. #}
{% import_json '/opt/so/state/esfleet_content_package_components.json' as ADDON_CONTENT_PACKAGE_COMPONENTS %}
{% import_json '/opt/so/state/esfleet_component_templates.json' as INSTALLED_COMPONENT_TEMPLATES %}
{% import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% set CORE_ESFLEET_PACKAGES = ELASTICFLEETDEFAULTS.get('elasticfleet', {}).get('packages', {}) %}
{% set ADDON_CONTENT_INTEGRATION_DEFAULTS = {} %}
{% set DEBUG_STUFF = {} %}
{% for pkg in ADDON_CONTENT_PACKAGE_COMPONENTS %}
{% if pkg.name in CORE_ESFLEET_PACKAGES %}
{# skip core content packages #}
{% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
{# generate defaults for each content package #}
{% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0%}
{% for pattern in pkg.dataStreams %}
{# in ES 9.3.2, 'input'-type integrations no longer create default component templates; instead they wait for user input during 'integration' setup (fleet ui config).
title: generic is an artifact of that and is not in use #}
{% if pattern.title == "generic" %}
{% continue %}
{% endif %}
{% if "metrics-" in pattern.name %}
{% set integration_type = "metrics-" %}
{% elif "logs-" in pattern.name %}
{% set integration_type = "logs-" %}
{% else %}
{% set integration_type = "" %}
{% endif %}
{# on content integrations the component name is user defined at the time it is added to an agent policy #}
{% set component_name = pattern.title %}
{% set index_pattern = pattern.name %}
{# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
{% set component_name_x = component_name.replace(".","_x_") %}
{# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml eg. so-logs-1password_x_item_usages . The _x_ is replaced later on in elasticsearch/template.map.jinja #}
{% set integration_key = "so-" ~ integration_type ~ pkg.name + '_x_' ~ component_name_x %}
{# Default integration settings #}
{% set integration_defaults = {
"index_sorting": false,
"index_template": {
"composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"data_stream": {
"allow_custom_routing": false,
"hidden": false
},
"ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
"index_patterns": [index_pattern],
"priority": 501,
"template": {
"settings": {
"index": {
"lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
"number_of_replicas": 0
}
}
}
},
"policy": {
"phases": {
"cold": {
"actions": {
"allocate":{
"number_of_replicas": ""
},
"set_priority": {"priority": 0}
},
"min_age": "60d"
},
"delete": {
"actions": {
"delete": {}
},
"min_age": "365d"
},
"hot": {
"actions": {
"rollover": {
"max_age": "30d",
"max_primary_shard_size": "50gb"
},
"forcemerge":{
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 100}
},
"min_age": "0ms"
},
"warm": {
"actions": {
"allocate": {
"number_of_replicas": ""
},
"forcemerge": {
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 50}
},
"min_age": "30d"
}
}
}
} %}
{% do ADDON_CONTENT_INTEGRATION_DEFAULTS.update({integration_key: integration_defaults}) %}
{% endfor %}
{% else %}
{% endif %}
{% endif %}
{% endfor %}
+1
View File
@@ -1,5 +1,6 @@
elasticfleet:
enabled: False
patch_version: 9.3.3+build202604082258 # Elastic Agent specific patch release.
enable_manager_output: True
config:
server:
+4 -103
@@ -17,65 +17,17 @@ include:
- logstash.ssl
- elasticfleet.config
- elasticfleet.sostatus
{%- if GLOBALS.role != "so-fleet" %}
- elasticfleet.manager
{%- endif %}
{% if grains.role not in ['so-fleet'] %}
{% if GLOBALS.role != "so-fleet" %}
# Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
wait_for_elasticsearch_elasticfleet:
cmd.run:
- name: so-elasticsearch-wait
{% endif %}
# If enabled, automatically update Fleet Logstash Outputs
{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-import', 'so-eval', 'so-fleet'] %}
so-elastic-fleet-auto-configure-logstash-outputs:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update
- retry:
attempts: 4
interval: 30
{# Separate from above in order to catch elasticfleet-logstash.crt changes and force update to fleet output policy #}
so-elastic-fleet-auto-configure-logstash-outputs-force:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update --certs
- retry:
attempts: 4
interval: 30
- onchanges:
- x509: etc_elasticfleet_logstash_crt
- x509: elasticfleet_kafka_crt
{% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection
{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-fleet'] %}
so-elastic-fleet-auto-configure-server-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-urls-update
- retry:
attempts: 4
interval: 30
{% endif %}
# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
{% if grains.role not in ['so-fleet'] %}
so-elastic-fleet-auto-configure-elasticsearch-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-es-url-update
- retry:
attempts: 4
interval: 30
so-elastic-fleet-auto-configure-artifact-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-artifacts-url-update
- retry:
attempts: 4
interval: 30
{% endif %}
# Sync Elastic Agent artifacts to Fleet Node
{% if grains.role in ['so-fleet'] %}
elasticagent_syncartifacts:
file.recurse:
- name: /nsm/elastic-fleet/artifacts/beats
@@ -149,57 +101,6 @@ so-elastic-fleet:
- x509: etc_elasticfleet_crt
{% endif %}
{% if GLOBALS.role != "so-fleet" %}
so-elastic-fleet-package-statefile:
file.managed:
- name: /opt/so/state/elastic_fleet_packages.txt
- contents: {{ELASTICFLEETMERGED.packages}}
so-elastic-fleet-package-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-package-upgrade
- retry:
attempts: 3
interval: 10
- onchanges:
- file: /opt/so/state/elastic_fleet_packages.txt
so-elastic-fleet-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-policy-load
- retry:
attempts: 3
interval: 10
so-elastic-agent-grid-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-agent-grid-upgrade
- retry:
attempts: 12
interval: 5
so-elastic-fleet-integration-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-upgrade
- retry:
attempts: 3
interval: 10
{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
so-elastic-fleet-addon-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-optional-integrations-load
{% if ELASTICFLEETMERGED.config.defend_filters.enable_auto_configuration %}
so-elastic-defend-manage-filters-file-watch:
cmd.run:
- name: python3 /sbin/so-elastic-defend-manage-filters.py -c /opt/so/conf/elasticsearch/curl.config -d /opt/so/conf/elastic-fleet/defend-exclusions/disabled-filters.yaml -i /nsm/securityonion-resources/event_filters/ -i /opt/so/conf/elastic-fleet/defend-exclusions/rulesets/custom-filters/ &>> /opt/so/log/elasticfleet/elastic-defend-manage-filters.log
- onchanges:
- file: elasticdefendcustom
- file: elasticdefenddisabled
{% endif %}
{% endif %}
delete_so-elastic-fleet_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
@@ -9,16 +9,22 @@
"namespace": "so",
"description": "Zeek Import logs",
"policy_id": "so-grid-nodes_general",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/zeek/logs/*.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "import",
"pipeline": "",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -34,7 +40,8 @@
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -15,19 +15,25 @@
"version": ""
},
"name": "kratos-logs",
"namespace": "so",
"description": "Kratos logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/kratos/kratos.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "kratos",
"pipeline": "kratos",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -48,10 +54,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -9,16 +9,22 @@
"namespace": "so",
"description": "Zeek logs",
"policy_id": "so-grid-nodes_general",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/zeek/logs/current/*.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "zeek",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%})(\\..+)?\\.log$"],
@@ -30,10 +36,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -5,7 +5,7 @@
"package": {
"name": "endpoint",
"title": "Elastic Defend",
"version": "9.0.2",
"version": "9.3.0",
"requires_root": true
},
"enabled": true,
@@ -6,21 +6,23 @@
"name": "agent-monitor",
"namespace": "",
"description": "",
"policy_id": "so-grid-nodes_general",
"policy_ids": [
"so-grid-nodes_general"
],
"output_id": null,
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/agents/agent-monitor.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "agentmonitor",
"pipeline": "elasticagent.monitor",
"parsers": "",
@@ -34,15 +36,16 @@
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": true,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": 64,
"file_identity_native": false,
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
}
}
},
"force": true
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "hydra-logs",
"namespace": "so",
"description": "Hydra logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/hydra/hydra.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "hydra",
"pipeline": "hydra",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -34,10 +40,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "idh-logs",
"namespace": "so",
"description": "IDH integration",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/idh/opencanary.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "idh",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,26 +4,32 @@
"version": ""
},
"name": "import-evtx-logs",
"namespace": "so",
"description": "Import Windows EVTX logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/evtx/*.json"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "import",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.6.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.6.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.6.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.15.0\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.8.0\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.15.0\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.15.0\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.8.0\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"tags": [
"import"
],
@@ -33,10 +39,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "import-suricata-logs",
"namespace": "so",
"description": "Import Suricata logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/suricata/eve*.json"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "import",
"pipeline": "suricata.common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -32,10 +38,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,14 +4,18 @@
"version": ""
},
"name": "rita-logs",
"namespace": "so",
"description": "RITA Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
@@ -19,6 +23,8 @@
"/nsm/rita/exploded-dns.csv",
"/nsm/rita/long-connections.csv"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "rita",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
@@ -33,10 +39,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "so-ip-mappings",
"namespace": "so",
"description": "IP Description mappings",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/custom-mappings/ip-descriptions.csv"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "hostnamemappings",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
@@ -32,10 +38,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "soc-auth-sync-logs",
"namespace": "so",
"description": "Security Onion - Elastic Auth Sync - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/sync.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,20 +4,26 @@
"version": ""
},
"name": "soc-detections-logs",
"namespace": "so",
"description": "Security Onion Console - Detections Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/detections_runtime-status_sigma.log",
"/opt/so/log/soc/detections_runtime-status_yara.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -35,10 +41,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "soc-salt-relay-logs",
"namespace": "so",
"description": "Security Onion - Salt Relay - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/salt-relay.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -33,10 +39,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "soc-sensoroni-logs",
"namespace": "so",
"description": "Security Onion - Sensoroni - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/sensoroni/sensoroni.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "soc-server-logs",
"namespace": "so",
"description": "Security Onion Console Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/sensoroni-server.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -33,10 +39,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "strelka-logs",
"namespace": "so",
"description": "Strelka Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/strelka/log/strelka.log"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "strelka",
"pipeline": "strelka.file",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
@@ -4,19 +4,25 @@
"version": ""
},
"name": "suricata-logs",
"namespace": "so",
"description": "Suricata integration",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"policy_ids": [
"so-grid-nodes_general"
],
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"filestream.filestream": {
"enabled": true,
"vars": {
"paths": [
"/nsm/suricata/eve*.json"
],
"compression_gzip": false,
"use_logs_stream": false,
"data_stream.dataset": "suricata",
"pipeline": "suricata.common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
@@ -31,10 +37,10 @@
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
"include_lines": [],
"delete_enabled": false
}
}
}
+123
@@ -0,0 +1,123 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
this file except in compliance with the Elastic License 2.0. #}
{% import_json '/opt/so/state/esfleet_input_package_components.json' as ADDON_INPUT_PACKAGE_COMPONENTS %}
{% import_json '/opt/so/state/esfleet_component_templates.json' as INSTALLED_COMPONENT_TEMPLATES %}
{% import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% set CORE_ESFLEET_PACKAGES = ELASTICFLEETDEFAULTS.get('elasticfleet', {}).get('packages', {}) %}
{% set ADDON_INPUT_INTEGRATION_DEFAULTS = {} %}
{% set DEBUG_STUFF = {} %}
{% for pkg in ADDON_INPUT_PACKAGE_COMPONENTS %}
{% if pkg.name in CORE_ESFLEET_PACKAGES %}
{# skip core input packages #}
{% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
{# generate defaults for each input package #}
{% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0 %}
{% for pattern in pkg.dataStreams %}
{# As of ES 9.3.2, 'input' type integrations no longer create default component templates; instead they wait for user input during 'integration' setup (Fleet UI config).
A title of "generic" is an artifact of that and is not in use #}
{% if pattern.title == "generic" %}
{% continue %}
{% endif %}
{% if "metrics-" in pattern.name %}
{% set integration_type = "metrics-" %}
{% elif "logs-" in pattern.name %}
{% set integration_type = "logs-" %}
{% else %}
{% set integration_type = "" %}
{% endif %}
{# on input integrations the component name is user defined at the time it is added to an agent policy #}
{% set component_name = pattern.title %}
{% set index_pattern = pattern.name %}
{# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
{% set component_name_x = component_name.replace(".","_x_") %}
{# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml, e.g. so-logs-1password_x_item_usages. The _x_ is replaced later in elasticsearch/template.map.jinja #}
{% set integration_key = "so-" ~ integration_type ~ pkg.name + '_x_' ~ component_name_x %}
{# Default integration settings #}
{% set integration_defaults = {
"index_sorting": false,
"index_template": {
"composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"data_stream": {
"allow_custom_routing": false,
"hidden": false
},
"ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
"index_patterns": [index_pattern],
"priority": 501,
"template": {
"settings": {
"index": {
"lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
"number_of_replicas": 0
}
}
}
},
"policy": {
"phases": {
"cold": {
"actions": {
"allocate":{
"number_of_replicas": ""
},
"set_priority": {"priority": 0}
},
"min_age": "60d"
},
"delete": {
"actions": {
"delete": {}
},
"min_age": "365d"
},
"hot": {
"actions": {
"rollover": {
"max_age": "30d",
"max_primary_shard_size": "50gb"
},
"forcemerge":{
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 100}
},
"min_age": "0ms"
},
"warm": {
"actions": {
"allocate": {
"number_of_replicas": ""
},
"forcemerge": {
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 50}
},
"min_age": "30d"
}
}
}
} %}
{% do ADDON_INPUT_INTEGRATION_DEFAULTS.update({integration_key: integration_defaults}) %}
{% do DEBUG_STUFF.update({integration_key: "Generating defaults for " ~ pkg.name}) %}
{% endfor %}
{% endif %}
{% endif %}
{% endfor %}
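Note: to preview the keys this loop would emit on a given manager, the state file it reads can be inspected directly. A sketch mirroring the naming rule above (the CORE_ESFLEET_PACKAGES skip is not reproduced here):
# Sketch: preview generated input-integration keys from the package state file.
jq -r '.[] | select(.dataStreams != null) | .name as $pkg
       | .dataStreams[] | select(.title != "generic")
       | "so-"
         + (if (.name | contains("metrics-")) then "metrics-"
            elif (.name | contains("logs-")) then "logs-"
            else "" end)
         + $pkg + "_x_" + (.title | gsub("\\."; "_x_"))' \
  /opt/so/state/esfleet_input_package_components.json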
@@ -59,8 +59,8 @@
{# skip core integrations #}
{% elif pkg.name not in CORE_ESFLEET_PACKAGES %}
{# generate defaults for each integration #}
{% if pkg.es_index_patterns is defined and pkg.es_index_patterns is not none %}
{% for pattern in pkg.es_index_patterns %}
{% if pkg.dataStreams is defined and pkg.dataStreams is not none and pkg.dataStreams | length > 0 %}
{% for pattern in pkg.dataStreams %}
{% if "metrics-" in pattern.name %}
{% set integration_type = "metrics-" %}
{% elif "logs-" in pattern.name %}
@@ -75,44 +75,27 @@
{% if component_name in WEIRD_INTEGRATIONS %}
{% set component_name = WEIRD_INTEGRATIONS[component_name] %}
{% endif %}
{# create a duplicate of component_name so we can split generics from @custom component templates in the index template below and overwrite the default @package when needed,
e.g. having to replace unifiedlogs.generic@package with filestream.generic@package while keeping the ability to customize unifiedlogs.generic@custom and its ILM policy #}
{% set custom_component_name = component_name %}
{# duplicate integration_type to assist with sometimes needing to overwrite component templates with 'logs-filestream.generic@package' (there is no metrics-filestream.generic@package) #}
{% set generic_integration_type = integration_type %}
{# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
{% set component_name_x = component_name.replace(".","_x_") %}
{# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml, e.g. so-logs-1password_x_item_usages. The _x_ is replaced later in elasticsearch/template.map.jinja #}
{% set integration_key = "so-" ~ integration_type ~ component_name_x %}
{# if it's a .generic template, make sure a .generic@package for the integration exists; else default to logs-filestream.generic@package #}
{% if ".generic" in component_name and integration_type ~ component_name ~ "@package" not in INSTALLED_COMPONENT_TEMPLATES %}
{# these generic templates are directed to an index_pattern of 'logs-generic-*' by default; overwrite that here to point to e.g. gcp_pubsub.generic-* #}
{% set index_pattern = integration_type ~ component_name ~ "-*" %}
{# the package references a .generic component template that doesn't exist in the installed component templates; redirect it to filestream.generic@package #}
{% set component_name = "filestream.generic" %}
{% set generic_integration_type = "logs-" %}
{% endif %}
{# Default integration settings #}
{% set integration_defaults = {
"index_sorting": false,
"index_template": {
"composed_of": [generic_integration_type ~ component_name ~ "@package", integration_type ~ custom_component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"data_stream": {
"allow_custom_routing": false,
"hidden": false
},
"ignore_missing_component_templates": [integration_type ~ custom_component_name ~ "@custom"],
"ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
"index_patterns": [index_pattern],
"priority": 501,
"template": {
"settings": {
"index": {
"lifecycle": {"name": "so-" ~ integration_type ~ custom_component_name ~ "-logs"},
"lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
"number_of_replicas": 0
}
}
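Note: the net effect of the fallback above, for a hypothetical unifiedlogs package whose logs-unifiedlogs.generic@package component template was never installed: the index pattern is rewritten to logs-unifiedlogs.generic-*, composed_of references logs-filestream.generic@package in place of the missing template, and the @custom template plus ILM policy keep the package's own unifiedlogs.generic name so user overrides still apply. A quick existence check (so-elasticsearch-query is SO's query wrapper; the template names are illustrative):
# Sketch: the fallback target should exist while the package-specific one does not.
so-elasticsearch-query _component_template/logs-filestream.generic@package | jq '.component_templates[].name'
so-elasticsearch-query _component_template/logs-unifiedlogs.generic@package | jq '.error.type // "installed"'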
+101
@@ -0,0 +1,101 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
include:
- elasticfleet.config
# If enabled, automatically update Fleet Logstash Outputs
{% if ELASTICFLEETMERGED.config.server.enable_auto_configuration and grains.role not in ['so-import', 'so-eval'] %}
so-elastic-fleet-auto-configure-logstash-outputs:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update
- retry:
attempts: 4
interval: 30
{% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection
so-elastic-fleet-auto-configure-server-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-urls-update
- retry:
attempts: 4
interval: 30
# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
so-elastic-fleet-auto-configure-elasticsearch-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-es-url-update
- retry:
attempts: 4
interval: 30
so-elastic-fleet-auto-configure-artifact-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-artifacts-url-update
- retry:
attempts: 4
interval: 30
so-elastic-fleet-package-statefile:
file.managed:
- name: /opt/so/state/elastic_fleet_packages.txt
- contents: {{ELASTICFLEETMERGED.packages}}
so-elastic-fleet-package-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-package-upgrade
- retry:
attempts: 3
interval: 10
- onchanges:
- file: /opt/so/state/elastic_fleet_packages.txt
so-elastic-fleet-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-policy-load
- retry:
attempts: 3
interval: 10
so-elastic-agent-grid-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-agent-grid-upgrade
- retry:
attempts: 12
interval: 5
so-elastic-fleet-integration-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-upgrade
- retry:
attempts: 3
interval: 10
{# The optional-integrations script doesn't need retries like so-elastic-fleet-integration-upgrade, which loads the default integrations #}
so-elastic-fleet-addon-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-optional-integrations-load
{% if ELASTICFLEETMERGED.config.defend_filters.enable_auto_configuration %}
so-elastic-defend-manage-filters-file-watch:
cmd.run:
- name: python3 /sbin/so-elastic-defend-manage-filters.py -c /opt/so/conf/elasticsearch/curl.config -d /opt/so/conf/elastic-fleet/defend-exclusions/disabled-filters.yaml -i /nsm/securityonion-resources/event_filters/ -i /opt/so/conf/elastic-fleet/defend-exclusions/rulesets/custom-filters/ &>> /opt/so/log/elasticfleet/elastic-defend-manage-filters.log
- onchanges:
- file: elasticdefendcustom
- file: elasticdefenddisabled
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
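Note: so-elastic-fleet-package-upgrade is gated by onchanges on the rendered package list, so it only runs when /opt/so/state/elastic_fleet_packages.txt actually changes. A quick way to see what would trigger it (the pillar path is an assumption based on ELASTICFLEETMERGED.packages; the merged value may include defaults beyond raw pillar):
# Sketch: compare the last-written package list against the current pillar value.
cat /opt/so/state/elastic_fleet_packages.txt
salt-call pillar.get elasticfleet:packages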
@@ -135,9 +135,33 @@ elastic_fleet_bulk_package_install() {
fi
}
elastic_fleet_installed_packages() {
if ! fleet_api "epm/packages/installed?perPage=500"; then
elastic_fleet_get_package_list_by_type() {
if ! output=$(fleet_api "epm/packages"); then
return 1
else
is_integration=$(jq '[.items[] | select(.type=="integration") | .name ]' <<< "$output")
is_input=$(jq '[.items[] | select(.type=="input") | .name ]' <<< "$output")
is_content=$(jq '[.items[] | select(.type=="content") | .name ]' <<< "$output")
jq -n --argjson is_integration "${is_integration:-[]}" \
--argjson is_input "${is_input:-[]}" \
--argjson is_content "${is_content:-[]}" \
'{"integration": $is_integration,"input": $is_input, "content": $is_content}'
fi
}
elastic_fleet_installed_packages_components() {
package_type=${1,,}
if [[ "$package_type" != "integration" && "$package_type" != "input" && "$package_type" != "content" ]]; then
echo "Error: Invalid package type ${package_type}. Valid types are 'integration', 'input', or 'content'."
return 1
fi
packages_by_type=$(elastic_fleet_get_package_list_by_type)
packages=$(jq --arg package_type "$package_type" '.[$package_type]' <<< "$packages_by_type")
if ! output=$(fleet_api "epm/packages/installed?perPage=500"); then
return 1
else
jq -c --argjson packages "$packages" '[.items[] | select(.name | IN($packages[])) | {name: .name, dataStreams: .dataStreams}]' <<< "$output"
fi
}
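Note: usage sketch for the new helper (assumes so-elastic-fleet-common is sourced and Kibana is reachable). The ${1,,} lowercasing is what lets callers pass "INTEGRATION"/"INPUT"/"CONTENT" in upper case, as the loop further down does:
. /usr/sbin/so-elastic-fleet-common
# List installed input packages and their data streams:
elastic_fleet_installed_packages_components "input" | jq -r '.[].name'
# Invalid types are rejected before any API call:
elastic_fleet_installed_packages_components "bundle" || echo "rejected"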
@@ -216,7 +240,7 @@ elastic_fleet_policy_create() {
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER,"advanced_settings":{"agent_logging_level": "warning"}}'
)
# Create Fleet Policy
if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
@@ -5,7 +5,13 @@
# this file except in compliance with the Elastic License 2.0.
. /usr/sbin/so-common
. /usr/sbin/so-elastic-fleet-common
{%- import_yaml 'elasticsearch/defaults.yaml' as ELASTICSEARCHDEFAULTS %}
{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{# Optionally override Elasticsearch version for Elastic Agent patch releases #}
{%- if ELASTICFLEETDEFAULTS.elasticfleet.patch_version is defined %}
{%- do ELASTICSEARCHDEFAULTS.elasticsearch.update({'version': ELASTICFLEETDEFAULTS.elasticfleet.patch_version}) %}
{%- endif %}
# Only run on Managers
if ! is_manager_node; then
@@ -14,13 +20,10 @@ if ! is_manager_node; then
fi
# Get current list of Grid Node Agents that need to be upgraded
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%3A%20{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%20AND%20policy_id%3A%20so-grid-nodes_%2A&showInactive=false&getStatusSummary=true" --retry 3 --retry-delay 30 --fail 2>/dev/null)
if ! RAW_JSON=$(fleet_api "agents?perPage=20&page=1&kuery=NOT%20agent.version%3A%20{{ELASTICSEARCHDEFAULTS.elasticsearch.version | urlencode }}%20AND%20policy_id%3A%20so-grid-nodes_%2A&showInactive=false&getStatusSummary=true" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON")
if [ "$CHECKSUM" -ne 1 ]; then
printf "Failed to query for current Grid Agents...\n"
exit 1
printf "Failed to query for current Grid Agents...\n"
exit 1
fi
# Generate list of Node Agents that need updates
@@ -31,10 +34,12 @@ if [ "$OUTDATED_LIST" != '[]' ]; then
printf "Initiating upgrades for $AGENTNUMBERS Agents to Elastic {{ELASTICSEARCHDEFAULTS.elasticsearch.version}}...\n\n"
# Generate updated JSON payload
JSON_STRING=$(jq -n --arg ELASTICVERSION {{ELASTICSEARCHDEFAULTS.elasticsearch.version}} --arg UPDATELIST $OUTDATED_LIST '{"version": $ELASTICVERSION,"agents": $UPDATELIST }')
JSON_STRING=$(jq -n --arg ELASTICVERSION "{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}" --argjson UPDATELIST "$OUTDATED_LIST" '{"version": $ELASTICVERSION,"agents": $UPDATELIST }')
# Update Node Agents
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "http://localhost:5601/api/fleet/agents/bulk_upgrade" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "agents/bulk_upgrade" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
printf "Failed to initiate Agent upgrades...\n"
fi
else
printf "No Agents need updates... Exiting\n\n"
exit 0
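Note: the --argjson change above is the substantive fix. With --arg, jq binds the value as a string (and the old unquoted $OUTDATED_LIST was also subject to word splitting), so the bulk_upgrade payload carried a quoted blob instead of a real array. A minimal repro with hypothetical agent IDs:
OUTDATED_LIST='["agent-a","agent-b"]'
jq -cn --arg v 9.3.3 --arg agents "$OUTDATED_LIST" '{version: $v, agents: $agents}'
# {"version":"9.3.3","agents":"[\"agent-a\",\"agent-b\"]"}   <- string, rejected by the API
jq -cn --arg v 9.3.3 --argjson agents "$OUTDATED_LIST" '{version: $v, agents: $agents}'
# {"version":"9.3.3","agents":["agent-a","agent-b"]}         <- real array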
@@ -18,7 +18,9 @@ INSTALLED_PACKAGE_LIST=/tmp/esfleet_installed_packages.json
BULK_INSTALL_PACKAGE_LIST=/tmp/esfleet_bulk_install.json
BULK_INSTALL_PACKAGE_TMP=/tmp/esfleet_bulk_install_tmp.json
BULK_INSTALL_OUTPUT=/opt/so/state/esfleet_bulk_install_results.json
PACKAGE_COMPONENTS=/opt/so/state/esfleet_package_components.json
INTEGRATION_PACKAGE_COMPONENTS=/opt/so/state/esfleet_package_components.json
INPUT_PACKAGE_COMPONENTS=/opt/so/state/esfleet_input_package_components.json
CONTENT_PACKAGE_COMPONENTS=/opt/so/state/esfleet_content_package_components.json
COMPONENT_TEMPLATES=/opt/so/state/esfleet_component_templates.json
PENDING_UPDATE=false
@@ -179,10 +181,13 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
else
echo "Elastic integrations don't appear to need installation/updating..."
fi
# Write out file for generating index/component/ilm templates
if latest_installed_package_list=$(elastic_fleet_installed_packages); then
echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
fi
# Write out file for generating index/component/ilm templates, keeping each package type separate
for package_type in "INTEGRATION" "INPUT" "CONTENT"; do
if latest_installed_package_list=$(elastic_fleet_installed_packages_components "$package_type"); then
outfile="${package_type}_PACKAGE_COMPONENTS"
echo $latest_installed_package_list > "${!outfile}"
fi
done
if retry 3 1 "so-elasticsearch-query / --fail --output /dev/null"; then
# Refresh installed component template list
latest_component_templates_list=$(so-elasticsearch-query _component_template | jq '.component_templates[] | .name' | jq -s '.')
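Note: ${!outfile} in the loop above is bash indirect expansion: outfile holds the name of the variable whose value is the destination path, so each package type lands in its own state file. In isolation:
# Sketch: how the indirection resolves for the INPUT iteration.
INPUT_PACKAGE_COMPONENTS=/opt/so/state/esfleet_input_package_components.json
package_type="INPUT"
outfile="${package_type}_PACKAGE_COMPONENTS"
echo "${!outfile}"   # /opt/so/state/esfleet_input_package_components.json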
@@ -235,6 +235,16 @@ function update_kafka_outputs() {
{% endif %}
# Compare the current Elastic Fleet certificate against what is on disk
POLICY_CERT_SHA=$(jq -r '.item.ssl.certificate' <<< $RAW_JSON | openssl x509 -noout -sha256 -fingerprint)
DISK_CERT_SHA=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt -noout -sha256 -fingerprint)
if [[ "$POLICY_CERT_SHA" != "$DISK_CERT_SHA" ]]; then
printf "Certificate on disk doesn't match certificate in policy - forcing update\n"
UPDATE_CERTS=true
FORCE_UPDATE=true
fi
# Sort & hash the new list of Logstash Outputs
NEW_LIST_JSON=$(jq --compact-output --null-input '$ARGS.positional' --args -- "${NEW_LIST[@]}")
NEW_HASH=$(sha256sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
+164
@@ -0,0 +1,164 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS, SO_MANAGED_INDICES %}
{% if GLOBALS.role != 'so-heavynode' %}
{% from 'elasticsearch/template.map.jinja' import ALL_ADDON_SETTINGS %}
{% endif %}
escomponenttemplates:
file.recurse:
- name: /opt/so/conf/elasticsearch/templates/component
- source: salt://elasticsearch/templates/component
- user: 930
- group: 939
- clean: True
- onchanges_in:
- file: so-elasticsearch-templates-reload
- show_changes: False
# Clean up legacy and non-SO managed templates from the elasticsearch/templates/index/ directory
so_index_template_dir:
file.directory:
- name: /opt/so/conf/elasticsearch/templates/index
- clean: True
{%- if SO_MANAGED_INDICES %}
- require:
{%- for index in SO_MANAGED_INDICES %}
- file: so_index_template_{{index}}
{%- endfor %}
{%- endif %}
# Auto-generate index templates for SO managed indices (directly defined in elasticsearch/defaults.yaml)
# These index templates are for the core SO datasets and are always required
{% for index, settings in ES_INDEX_SETTINGS.items() %}
{% if settings.index_template is defined %}
so_index_template_{{index}}:
file.managed:
- name: /opt/so/conf/elasticsearch/templates/index/{{ index }}-template.json
- source: salt://elasticsearch/base-template.json.jinja
- defaults:
TEMPLATE_CONFIG: {{ settings.index_template }}
- template: jinja
- onchanges_in:
- file: so-elasticsearch-templates-reload
{% endif %}
{% endfor %}
{% if GLOBALS.role != "so-heavynode" %}
# Auto-generate optional index templates for integration | input | content packages
# These index templates are not used by default (until a user adds the package to an agent policy).
# Pre-configured with standard defaults and incorporated into the SOC configuration for user customization.
{% for index,settings in ALL_ADDON_SETTINGS.items() %}
{% if settings.index_template is defined %}
addon_index_template_{{index}}:
file.managed:
- name: /opt/so/conf/elasticsearch/templates/addon-index/{{ index }}-template.json
- source: salt://elasticsearch/base-template.json.jinja
- defaults:
TEMPLATE_CONFIG: {{ settings.index_template }}
- template: jinja
- show_changes: False
- onchanges_in:
- file: addon-elasticsearch-templates-reload
{% endif %}
{% endfor %}
{% endif %}
{% if GLOBALS.role in GLOBALS.manager_roles %}
so-es-cluster-settings:
cmd.run:
- name: /usr/sbin/so-elasticsearch-cluster-settings
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
- http: wait_for_so-elasticsearch
{% endif %}
# heavynodes will only load ILM policies for SO-managed indices (indices defined in elasticsearch/defaults.yaml)
so-elasticsearch-ilm-policy-load:
cmd.run:
- name: /usr/sbin/so-elasticsearch-ilm-policy-load
- cwd: /opt/so
- require:
- docker_container: so-elasticsearch
- file: so-elasticsearch-ilm-policy-load-script
- onchanges:
- file: so-elasticsearch-ilm-policy-load-script
so-elasticsearch-templates-reload:
file.absent:
- name: /opt/so/state/estemplates.txt
addon-elasticsearch-templates-reload:
file.absent:
- name: /opt/so/state/addon_estemplates.txt
# so-elasticsearch-templates-load will have its first successful run during the 'so-elastic-fleet-setup' script
so-elasticsearch-templates:
cmd.run:
{%- if GLOBALS.role == "so-heavynode" %}
- name: /usr/sbin/so-elasticsearch-templates-load --heavynode
{%- else %}
- name: /usr/sbin/so-elasticsearch-templates-load
{%- endif %}
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
so-elasticsearch-pipelines:
cmd.run:
- name: /usr/sbin/so-elasticsearch-pipelines {{ GLOBALS.hostname }}
- require:
- docker_container: so-elasticsearch
- file: so-elasticsearch-pipelines-script
so-elasticsearch-roles-load:
cmd.run:
- name: /usr/sbin/so-elasticsearch-roles-load
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
{% if grains.role in ['so-managersearch', 'so-manager', 'so-managerhype'] %}
{% set ap = "absent" %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-heavynode'] %}
{% if ELASTICSEARCHMERGED.index_clean %}
{% set ap = "present" %}
{% else %}
{% set ap = "absent" %}
{% endif %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
so-elasticsearch-indices-delete:
cron.{{ap}}:
- name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
- identifier: so-elasticsearch-indices-delete
- user: root
- minute: '*/5'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
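Note: the cron.{{ap}} pattern above installs or removes the index-delete job depending on role and the index_clean setting. Salt writes an identifier comment into root's crontab, so the toggle is easy to verify on a node:
# Sketch: confirm the cron toggle landed; no output means the job is absent.
crontab -u root -l | grep -F 'SALT_CRON_IDENTIFIER:so-elasticsearch-indices-delete'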
+9
@@ -66,6 +66,8 @@ so-elasticsearch-ilm-policy-load-script:
- group: 939
- mode: 754
- template: jinja
- defaults:
GLOBALS: {{ GLOBALS }}
- show_changes: False
so-elasticsearch-pipelines-script:
@@ -91,6 +93,13 @@ estemplatedir:
- group: 939
- makedirs: True
esaddontemplatedir:
file.directory:
- name: /opt/so/conf/elasticsearch/templates/addon-index
- user: 930
- group: 939
- makedirs: True
esrolesdir:
file.directory:
- name: /opt/so/conf/elasticsearch/roles
+1 -1
@@ -1,6 +1,6 @@
elasticsearch:
enabled: false
version: 9.0.8
version: 9.3.3
index_clean: true
vm:
max_map_count: 1048576
+16 -125
@@ -10,8 +10,6 @@
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_NODES %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_SEED_HOSTS %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
include:
- ca
@@ -19,6 +17,9 @@ include:
- elasticsearch.ssl
- elasticsearch.config
- elasticsearch.sostatus
{%- if GLOBALS.role != "so-searchnode" %}
- elasticsearch.cluster
{%- endif%}
so-elasticsearch:
docker_container.running:
@@ -101,134 +102,24 @@ so-elasticsearch:
- cmd: auth_users_roles_inode
- cmd: auth_users_inode
wait_for_so-elasticsearch:
http.wait_for_successful_query:
- name: "https://localhost:9200/"
- username: 'so_elastic'
- password: '{{ ELASTICSEARCHMERGED.auth.users.so_elastic_user.pass }}'
- ssl: True
- verify_ssl: False
- status: 200
- wait_for: 300
- request_interval: 15
- require:
- docker_container: so-elasticsearch
delete_so-elasticsearch_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-elasticsearch$
{% if GLOBALS.role != "so-searchnode" %}
escomponenttemplates:
file.recurse:
- name: /opt/so/conf/elasticsearch/templates/component
- source: salt://elasticsearch/templates/component
- user: 930
- group: 939
- clean: True
- onchanges_in:
- file: so-elasticsearch-templates-reload
- show_changes: False
# Auto-generate templates from defaults file
{% for index, settings in ES_INDEX_SETTINGS.items() %}
{% if settings.index_template is defined %}
es_index_template_{{index}}:
file.managed:
- name: /opt/so/conf/elasticsearch/templates/index/{{ index }}-template.json
- source: salt://elasticsearch/base-template.json.jinja
- defaults:
TEMPLATE_CONFIG: {{ settings.index_template }}
- template: jinja
- show_changes: False
- onchanges_in:
- file: so-elasticsearch-templates-reload
{% endif %}
{% endfor %}
{% if TEMPLATES %}
# Sync custom templates to /opt/so/conf/elasticsearch/templates
{% for TEMPLATE in TEMPLATES %}
es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
file.managed:
- source: salt://elasticsearch/templates/index/{{TEMPLATE}}
{% if 'jinja' in TEMPLATE.split('.')[-1] %}
- name: /opt/so/conf/elasticsearch/templates/index/{{TEMPLATE.split('/')[1] | replace(".jinja", "")}}
- template: jinja
{% else %}
- name: /opt/so/conf/elasticsearch/templates/index/{{TEMPLATE.split('/')[1]}}
{% endif %}
- user: 930
- group: 939
- show_changes: False
- onchanges_in:
- file: so-elasticsearch-templates-reload
{% endfor %}
{% endif %}
{% if GLOBALS.role in GLOBALS.manager_roles %}
so-es-cluster-settings:
cmd.run:
- name: /usr/sbin/so-elasticsearch-cluster-settings
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
{% endif %}
so-elasticsearch-ilm-policy-load:
cmd.run:
- name: /usr/sbin/so-elasticsearch-ilm-policy-load
- cwd: /opt/so
- require:
- docker_container: so-elasticsearch
- file: so-elasticsearch-ilm-policy-load-script
- onchanges:
- file: so-elasticsearch-ilm-policy-load-script
so-elasticsearch-templates-reload:
file.absent:
- name: /opt/so/state/estemplates.txt
so-elasticsearch-templates:
cmd.run:
- name: /usr/sbin/so-elasticsearch-templates-load
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
so-elasticsearch-pipelines:
cmd.run:
- name: /usr/sbin/so-elasticsearch-pipelines {{ GLOBALS.hostname }}
- require:
- docker_container: so-elasticsearch
- file: so-elasticsearch-pipelines-script
so-elasticsearch-roles-load:
cmd.run:
- name: /usr/sbin/so-elasticsearch-roles-load
- cwd: /opt/so
- template: jinja
- require:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
{% if grains.role in ['so-managersearch', 'so-manager', 'so-managerhype'] %}
{% set ap = "absent" %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-heavynode'] %}
{% if ELASTICSEARCHMERGED.index_clean %}
{% set ap = "present" %}
{% else %}
{% set ap = "absent" %}
{% endif %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
so-elasticsearch-indices-delete:
cron.{{ap}}:
- name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
- identifier: so-elasticsearch-indices-delete
- user: root
- minute: '*/5'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
{% endif %}
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
@@ -10,24 +10,28 @@
"processors": [
{
"set": {
"tag": "set_ecs_version_f5923549",
"field": "ecs.version",
"value": "8.17.0"
}
},
{
"set": {
"tag": "set_observer_vendor_ad9d35cc",
"field": "observer.vendor",
"value": "netgate"
}
},
{
"set": {
"tag": "set_observer_type_5dddf3ba",
"field": "observer.type",
"value": "firewall"
}
},
{
"rename": {
"tag": "rename_message_to_event_original_56a77271",
"field": "message",
"target_field": "event.original",
"ignore_missing": true,
@@ -36,12 +40,14 @@
},
{
"set": {
"tag": "set_event_kind_de80643c",
"field": "event.kind",
"value": "event"
}
},
{
"set": {
"tag": "set_event_timezone_4ca44cac",
"field": "event.timezone",
"value": "{{{_tmp.tz_offset}}}",
"if": "ctx._tmp?.tz_offset != null && ctx._tmp?.tz_offset != 'local'"
@@ -49,6 +55,7 @@
},
{
"grok": {
"tag": "grok_event_original_27d9c8c7",
"description": "Parse syslog header",
"field": "event.original",
"patterns": [
@@ -72,6 +79,7 @@
},
{
"date": {
"tag": "date__tmp_timestamp8601_to_timestamp_6ac9d3ce",
"if": "ctx._tmp.timestamp8601 != null",
"field": "_tmp.timestamp8601",
"target_field": "@timestamp",
@@ -82,6 +90,7 @@
},
{
"date": {
"tag": "date__tmp_timestamp_to_timestamp_f21e536e",
"if": "ctx.event?.timezone != null && ctx._tmp?.timestamp != null",
"field": "_tmp.timestamp",
"target_field": "@timestamp",
@@ -95,6 +104,7 @@
},
{
"grok": {
"tag": "grok_process_name_cef3d489",
"description": "Set Event Provider",
"field": "process.name",
"patterns": [
@@ -107,71 +117,83 @@
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-firewall",
"tag": "pipeline_e16851a7",
"name": "logs-pfsense.log-1.25.2-firewall",
"if": "ctx.event.provider == 'filterlog'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-openvpn",
"tag": "pipeline_828590b5",
"name": "logs-pfsense.log-1.25.2-openvpn",
"if": "ctx.event.provider == 'openvpn'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-ipsec",
"tag": "pipeline_9d37039c",
"name": "logs-pfsense.log-1.25.2-ipsec",
"if": "ctx.event.provider == 'charon'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-dhcp",
"if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\"].contains(ctx.event.provider)"
"tag": "pipeline_ad56bbca",
"name": "logs-pfsense.log-1.25.2-dhcp",
"if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\", \"dnsmasq-dhcp\"].contains(ctx.event.provider)"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-unbound",
"tag": "pipeline_dd85553d",
"name": "logs-pfsense.log-1.25.2-unbound",
"if": "ctx.event.provider == 'unbound'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-haproxy",
"tag": "pipeline_720ed255",
"name": "logs-pfsense.log-1.25.2-haproxy",
"if": "ctx.event.provider == 'haproxy'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-php-fpm",
"tag": "pipeline_456beba5",
"name": "logs-pfsense.log-1.25.2-php-fpm",
"if": "ctx.event.provider == 'php-fpm'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-squid",
"tag": "pipeline_a0d89375",
"name": "logs-pfsense.log-1.25.2-squid",
"if": "ctx.event.provider == 'squid'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-snort",
"tag": "pipeline_c2f1ed55",
"name": "logs-pfsense.log-1.25.2-snort",
"if": "ctx.event.provider == 'snort'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.1-suricata",
"tag":"pipeline_33db1c9e",
"name": "logs-pfsense.log-1.25.2-suricata",
"if": "ctx.event.provider == 'suricata'"
}
},
{
"drop": {
"if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"suricata\"].contains(ctx.event?.provider)"
"tag": "drop_9d7c46f8",
"if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dnsmasq-dhcp\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"snort\", \"suricata\"].contains(ctx.event?.provider)"
}
},
{
"append": {
"tag": "append_event_category_4780a983",
"field": "event.category",
"value": "network",
"if": "ctx.network != null"
@@ -179,6 +201,7 @@
},
{
"convert": {
"tag": "convert_source_address_to_source_ip_f5632a20",
"field": "source.address",
"target_field": "source.ip",
"type": "ip",
@@ -188,6 +211,7 @@
},
{
"convert": {
"tag": "convert_destination_address_to_destination_ip_f1388f0c",
"field": "destination.address",
"target_field": "destination.ip",
"type": "ip",
@@ -197,6 +221,7 @@
},
{
"set": {
"tag": "set_network_type_1f1d940a",
"field": "network.type",
"value": "ipv6",
"if": "ctx.source?.ip != null && ctx.source.ip.contains(\":\")"
@@ -204,6 +229,7 @@
},
{
"set": {
"tag": "set_network_type_69deca38",
"field": "network.type",
"value": "ipv4",
"if": "ctx.source?.ip != null && ctx.source.ip.contains(\".\")"
@@ -211,6 +237,7 @@
},
{
"geoip": {
"tag": "geoip_source_ip_to_source_geo_da2e41b2",
"field": "source.ip",
"target_field": "source.geo",
"ignore_missing": true
@@ -218,6 +245,7 @@
},
{
"geoip": {
"tag": "geoip_destination_ip_to_destination_geo_ab5e2968",
"field": "destination.ip",
"target_field": "destination.geo",
"ignore_missing": true
@@ -225,6 +253,7 @@
},
{
"geoip": {
"tag": "geoip_source_ip_to_source_as_28d69883",
"ignore_missing": true,
"database_file": "GeoLite2-ASN.mmdb",
"field": "source.ip",
@@ -237,6 +266,7 @@
},
{
"geoip": {
"tag": "geoip_destination_ip_to_destination_as_8a007787",
"database_file": "GeoLite2-ASN.mmdb",
"field": "destination.ip",
"target_field": "destination.as",
@@ -249,6 +279,7 @@
},
{
"rename": {
"tag": "rename_source_as_asn_to_source_as_number_a917047d",
"field": "source.as.asn",
"target_field": "source.as.number",
"ignore_missing": true
@@ -256,6 +287,7 @@
},
{
"rename": {
"tag": "rename_source_as_organization_name_to_source_as_organization_name_f1362d0b",
"field": "source.as.organization_name",
"target_field": "source.as.organization.name",
"ignore_missing": true
@@ -263,6 +295,7 @@
},
{
"rename": {
"tag": "rename_destination_as_asn_to_destination_as_number_3b459fcd",
"field": "destination.as.asn",
"target_field": "destination.as.number",
"ignore_missing": true
@@ -270,6 +303,7 @@
},
{
"rename": {
"tag": "rename_destination_as_organization_name_to_destination_as_organization_name_814bd459",
"field": "destination.as.organization_name",
"target_field": "destination.as.organization.name",
"ignore_missing": true
@@ -277,12 +311,14 @@
},
{
"community_id": {
"tag": "community_id_d2308e7a",
"target_field": "network.community_id",
"ignore_failure": true
}
},
{
"grok": {
"tag": "grok_observer_ingress_interface_name_968018d3",
"field": "observer.ingress.interface.name",
"patterns": [
"%{DATA}.%{NONNEGINT:observer.ingress.vlan.id}"
@@ -293,6 +329,7 @@
},
{
"set": {
"tag": "set_network_vlan_id_efd4d96a",
"field": "network.vlan.id",
"copy_from": "observer.ingress.vlan.id",
"ignore_empty_value": true
@@ -300,6 +337,7 @@
},
{
"append": {
"tag": "append_related_ip_c1a6356b",
"field": "related.ip",
"value": "{{{destination.ip}}}",
"allow_duplicates": false,
@@ -308,6 +346,7 @@
},
{
"append": {
"tag": "append_related_ip_8121c591",
"field": "related.ip",
"value": "{{{source.ip}}}",
"allow_duplicates": false,
@@ -316,6 +355,7 @@
},
{
"append": {
"tag": "append_related_ip_53b62ed8",
"field": "related.ip",
"value": "{{{source.nat.ip}}}",
"allow_duplicates": false,
@@ -324,6 +364,7 @@
},
{
"append": {
"tag": "append_related_hosts_6f162628",
"field": "related.hosts",
"value": "{{{destination.domain}}}",
"if": "ctx.destination?.domain != null"
@@ -331,6 +372,7 @@
},
{
"append": {
"tag": "append_related_user_c036eec2",
"field": "related.user",
"value": "{{{user.name}}}",
"if": "ctx.user?.name != null"
@@ -338,6 +380,7 @@
},
{
"set": {
"tag": "set_network_direction_cb1e3125",
"field": "network.direction",
"value": "{{{network.direction}}}bound",
"if": "ctx.network?.direction != null && ctx.network?.direction =~ /^(in|out)$/"
@@ -345,6 +388,7 @@
},
{
"remove": {
"tag": "remove_a82e20f2",
"field": [
"_tmp"
],
@@ -353,11 +397,21 @@
},
{
"script": {
"tag": "script_a7f2c062",
"lang": "painless",
"description": "This script processor iterates over the whole document to remove fields with null values.",
"source": "void handleMap(Map map) {\n for (def x : map.values()) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n map.values().removeIf(v -> v == null || (v instanceof String && v == \"-\"));\n}\nvoid handleList(List list) {\n for (def x : list) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n}\nhandleMap(ctx);\n"
}
},
{
"append": {
"tag": "append_preserve_original_event_on_error",
"field": "tags",
"value": "preserve_original_event",
"allow_duplicates": false,
"if": "ctx.error?.message != null"
}
},
{
"pipeline": {
"name": "global@custom",
@@ -405,7 +459,14 @@
{
"append": {
"field": "error.message",
"value": "{{{ _ingest.on_failure_message }}}"
"value": "Processor '{{{ _ingest.on_failure_processor_type }}}' {{#_ingest.on_failure_processor_tag}}with tag '{{{ _ingest.on_failure_processor_tag }}}' {{/_ingest.on_failure_processor_tag}}in pipeline '{{{ _ingest.pipeline }}}' failed with message '{{{ _ingest.on_failure_message }}}'"
}
},
{
"append": {
"field": "tags",
"value": "preserve_original_event",
"allow_duplicates": false
}
}
]
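Note: with the enriched on_failure handler above, a failure now records which processor broke and tags the event for triage. A sketch that exercises it with a deliberately unparseable document (the top-level pipeline id is inferred from the sub-pipeline names in this diff; so-elasticsearch-query is SO's wrapper and passes extra args through to curl):
# Sketch: simulate the pipeline with a bad document to see the enriched error.
so-elasticsearch-query "_ingest/pipeline/logs-pfsense.log-1.25.2/_simulate" -XPOST \
  -H 'Content-Type: application/json' \
  -d '{"docs":[{"_source":{"message":"not a pfsense syslog line"}}]}' \
  | jq '.docs[0].doc._source | {error: .error.message, tags: .tags}'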
@@ -22,6 +22,12 @@
"ignore_failure": true
}
},
{
"lowercase": {
"field": "network.transport",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.in_iface",
@@ -45,3 +45,7 @@ appender.rolling_json.strategy.action.condition.nested_condition.age = 1D
rootLogger.level = info
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_json.ref = rolling_json
# Suppress NotEntitledException WARNs (ES 9.3.3 bug)
logger.entitlement_security.name = org.elasticsearch.entitlement.runtime.policy.PolicyManager.x-pack-security.org.elasticsearch.security.org.elasticsearch.xpack.security
logger.entitlement_security.level = error
+62 -18
@@ -14,15 +14,42 @@
{% set ES_INDEX_SETTINGS_ORIG = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings %}
{% set ALL_ADDON_INTEGRATION_DEFAULTS = {} %}
{% set ALL_ADDON_SETTINGS_ORIG = {} %}
{% set ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES = {} %}
{% set ALL_ADDON_SETTINGS = {} %}
{# start generation of integration default index_settings #}
{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') and salt['file.file_exists']('/opt/so/state/esfleet_component_templates.json') %}
{% set check_package_components = salt['file.stats']('/opt/so/state/esfleet_package_components.json') %}
{% if check_package_components.size > 1 %}
{% from 'elasticfleet/integration-defaults.map.jinja' import ADDON_INTEGRATION_DEFAULTS %}
{% for index, settings in ADDON_INTEGRATION_DEFAULTS.items() %}
{% do ES_INDEX_SETTINGS_ORIG.update({index: settings}) %}
{% endfor %}
{% endif%}
{% if salt['file.file_exists']('/opt/so/state/esfleet_component_templates.json') %}
{# import integration type defaults #}
{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') %}
{% set check_integration_package_components = salt['file.stats']('/opt/so/state/esfleet_package_components.json') %}
{% if check_integration_package_components.size > 1 %}
{% from 'elasticfleet/integration-defaults.map.jinja' import ADDON_INTEGRATION_DEFAULTS %}
{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_INTEGRATION_DEFAULTS) %}
{% endif %}
{% endif %}
{# import input type defaults #}
{% if salt['file.file_exists']('/opt/so/state/esfleet_input_package_components.json') %}
{% set check_input_package_components = salt['file.stats']('/opt/so/state/esfleet_input_package_components.json') %}
{% if check_input_package_components.size > 1 %}
{% from 'elasticfleet/input-defaults.map.jinja' import ADDON_INPUT_INTEGRATION_DEFAULTS %}
{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_INPUT_INTEGRATION_DEFAULTS) %}
{% endif %}
{% endif %}
{# import content type defaults #}
{% if salt['file.file_exists']('/opt/so/state/esfleet_content_package_components.json') %}
{% set check_content_package_components = salt['file.stats']('/opt/so/state/esfleet_content_package_components.json') %}
{% if check_content_package_components.size > 1 %}
{% from 'elasticfleet/content-defaults.map.jinja' import ADDON_CONTENT_INTEGRATION_DEFAULTS %}
{% do ALL_ADDON_INTEGRATION_DEFAULTS.update(ADDON_CONTENT_INTEGRATION_DEFAULTS) %}
{% endif %}
{% endif %}
{% for index, settings in ALL_ADDON_INTEGRATION_DEFAULTS.items() %}
{% do ALL_ADDON_SETTINGS_ORIG.update({index: settings}) %}
{% endfor %}
{% endif %}
{# end generation of integration default index_settings #}
@@ -31,25 +58,33 @@
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update({index: salt['defaults.merge'](ELASTICSEARCHDEFAULTS.elasticsearch.index_settings[index], PILLAR_GLOBAL_OVERRIDES, in_place=False)}) %}
{% endfor %}
{% if ALL_ADDON_SETTINGS_ORIG.keys() | length > 0 %}
{% for index in ALL_ADDON_SETTINGS_ORIG.keys() %}
{% do ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES.update({index: salt['defaults.merge'](ALL_ADDON_SETTINGS_ORIG[index], PILLAR_GLOBAL_OVERRIDES, in_place=False)}) %}
{% endfor %}
{% endif %}
{% set ES_INDEX_SETTINGS = {} %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update(salt['defaults.merge'](ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
{% for index, settings in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.items() %}
{% macro create_final_index_template(DEFINED_SETTINGS, GLOBAL_OVERRIDES, FINAL_INDEX_SETTINGS) %}
{% do GLOBAL_OVERRIDES.update(salt['defaults.merge'](GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
{% for index, settings in GLOBAL_OVERRIDES.items() %}
{# prevent this action from being performed on custom defined indices. #}
{# a custom defined index is not present in either dictionary and would fail to render. #}
{% if index in ES_INDEX_SETTINGS_ORIG and index in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES %}
{% if index in DEFINED_SETTINGS and index in GLOBAL_OVERRIDES %}
{# don't merge policy from the global_overrides if policy isn't defined in the original index settings #}
{# this will prevent so-elasticsearch-ilm-policy-load from trying to load policy on non-ILM managed indices #}
{% if not ES_INDEX_SETTINGS_ORIG[index].policy is defined and ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].pop('policy') %}
{% if not DEFINED_SETTINGS[index].policy is defined and GLOBAL_OVERRIDES[index].policy is defined %}
{% do GLOBAL_OVERRIDES[index].pop('policy') %}
{% endif %}
{# this prevents an index from inheriting a policy phase from global overrides if it wasn't defined in the defaults. #}
{% if ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
{% for phase in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.copy() %}
{% if ES_INDEX_SETTINGS_ORIG[index].policy.phases[phase] is not defined %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.pop(phase) %}
{% if GLOBAL_OVERRIDES[index].policy is defined %}
{% for phase in GLOBAL_OVERRIDES[index].policy.phases.copy() %}
{% if DEFINED_SETTINGS[index].policy.phases[phase] is not defined %}
{% do GLOBAL_OVERRIDES[index].policy.phases.pop(phase) %}
{% endif %}
{% endfor %}
{% endif %}
@@ -111,5 +146,14 @@
{% endfor %}
{% endif %}
{% do ES_INDEX_SETTINGS.update({index | replace("_x_", "."): ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index]}) %}
{% do FINAL_INDEX_SETTINGS.update({index | replace("_x_", "."): GLOBAL_OVERRIDES[index]}) %}
{% endfor %}
{% endmacro %}
{{ create_final_index_template(ES_INDEX_SETTINGS_ORIG, ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_SETTINGS) }}
{{ create_final_index_template(ALL_ADDON_SETTINGS_ORIG, ALL_ADDON_SETTINGS_GLOBAL_OVERRIDES, ALL_ADDON_SETTINGS) }}
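{# create_final_index_template mutates its third argument in place: the first call builds the core ES_INDEX_SETTINGS, the second builds ALL_ADDON_SETTINGS from the integration/input/content defaults gathered above. #}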
{% set SO_MANAGED_INDICES = [] %}
{% for index, settings in ES_INDEX_SETTINGS.items() %}
{% do SO_MANAGED_INDICES.append(index) %}
{% endfor %}
@@ -6,8 +6,19 @@
# Elastic License 2.0.
. /usr/sbin/so-common
if [ "$1" == "" ]; then
curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L https://localhost:9200/_component_template | jq '.component_templates[] |.name'| sort
if [[ -z "$1" ]]; then
if output=$(so-elasticsearch-query "_component_template" --retry 3 --retry-delay 1 --fail); then
jq '[.component_templates[] | .name] | sort' <<< "$output"
else
echo "Failed to retrieve component templates from Elasticsearch."
exit 1
fi
else
curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L https://localhost:9200/_component_template/$1 | jq
fi
if output=$(so-elasticsearch-query "_component_template/$1" --retry 3 --retry-delay 1 --fail); then
jq <<< "$output"
else
echo "Failed to retrieve component template '$1' from Elasticsearch."
exit 1
fi
fi
@@ -0,0 +1,276 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
SO_STATEFILE_SUCCESS=/opt/so/state/estemplates.txt
ADDON_STATEFILE_SUCCESS=/opt/so/state/addon_estemplates.txt
ELASTICSEARCH_TEMPLATES_DIR="/opt/so/conf/elasticsearch/templates"
SO_TEMPLATES_DIR="${ELASTICSEARCH_TEMPLATES_DIR}/index"
ADDON_TEMPLATES_DIR="${ELASTICSEARCH_TEMPLATES_DIR}/addon-index"
SO_LOAD_FAILURES=0
ADDON_LOAD_FAILURES=0
SO_LOAD_FAILURES_NAMES=()
ADDON_LOAD_FAILURES_NAMES=()
IS_HEAVYNODE="false"
FORCE="false"
VERBOSE="false"
SHOULD_EXIT_ON_FAILURE="true"
# If soup is running, ignore errors
pgrep soup >/dev/null && SHOULD_EXIT_ON_FAILURE="false"
while [[ $# -gt 0 ]]; do
case "$1" in
--heavynode)
IS_HEAVYNODE="true"
;;
--force)
FORCE="true"
;;
--verbose)
VERBOSE="true"
;;
*)
echo "Usage: $0 [options]"
echo "Options:"
echo " --heavynode Only loads index templates specific to heavynodes"
echo " --force Force reload all templates regardless of statefiles (default: false)"
echo " --verbose Enable verbose output"
exit 1
;;
esac
shift
done
load_template() {
local uri="$1"
local file="$2"
echo "Loading template file $file"
if ! output=$(retry 3 3 "so-elasticsearch-query $uri -d@$file -XPUT" "{\"acknowledged\":true}"); then
echo "$output"
return 1
elif [[ "$VERBOSE" == "true" ]]; then
echo "$output"
fi
}
check_required_component_template_exists() {
local required
local missing
local file=$1
required=$(jq '[((.composed_of //[]) - (.ignore_missing_component_templates // []))[]]' "$file")
missing=$(jq -n --argjson required "$required" --argjson component_templates "$component_templates" '(($required) - ($component_templates))')
if [[ $(jq length <<<"$missing") -gt 0 ]]; then
return 1
fi
}
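# Sketch of the jq set difference above, with hypothetical values:
#   required='["ecs-base-mappings","so-fleet_integrations.events-mappings"]'
#   component_templates='["ecs-base-mappings"]'
#   missing='["so-fleet_integrations.events-mappings"]' -> length 1, so return 1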
check_heavynode_compatible_index_template() {
# The only templates that are relevant to heavynodes are from datasets defined in elasticagent/files/elastic-agent.yml.jinja.
# Heavynodes do not have fleet server packages installed and do not support elastic agents reporting directly to them.
local -A heavynode_index_templates=(
["so-import"]=1
["so-syslog"]=1
["so-logs-soc"]=1
["so-suricata"]=1
["so-suricata.alerts"]=1
["so-zeek"]=1
["so-strelka"]=1
)
local template_name="$1"
if [[ ! -v heavynode_index_templates["$template_name"] ]]; then
return 1
fi
}
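# [[ -v map["$key"] ]] tests associative-array key existence (bash 4.3+), so any
# template outside the allowlist above returns 1 and is skipped by the caller.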
load_component_templates() {
local printed_name="$1"
local pattern="${ELASTICSEARCH_TEMPLATES_DIR}/component/$2"
local append_mappings="${3:-"false"}"
echo -e "\nLoading $printed_name component templates...\n"
if ! compgen -G "${pattern}/*.json" > /dev/null; then
echo "No $printed_name component templates found in ${pattern}, skipping."
return
fi
for component in "$pattern"/*.json; do
tmpl_name=$(basename "${component%.json}")
if [[ "$append_mappings" == "true" ]]; then
# avoid duplicating "-mappings" if it already exists in the component template filename
tmpl_name="${tmpl_name%-mappings}-mappings"
fi
if ! load_template "_component_template/${tmpl_name}" "$component"; then
SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
SO_LOAD_FAILURES_NAMES+=("$component")
fi
done
}
check_elasticsearch_responsive() {
# Cannot load templates if Elasticsearch is not responding.
# NOTE: Slightly faster exit with failure than the previous "retry 240 1"; if there is a problem with Elasticsearch, the
# script should exit sooner rather than hang at the 'so-elasticsearch-templates' salt state.
retry 3 15 "so-elasticsearch-query / --output /dev/null --fail" ||
fail "Elasticsearch is not responding. Please review Elasticsearch logs /opt/so/log/elasticsearch/securityonion.log for more details. Additionally, consider running so-elasticsearch-troubleshoot."
}
index_templates_exist() {
local templates_dir="$1"
if [[ ! -d "$templates_dir" ]]; then
return 1
fi
compgen -G "${templates_dir}/*.json" > /dev/null
}
should_load_addon_templates() {
if [[ "$IS_HEAVYNODE" == "true" ]]; then
return 1
fi
# Skip statefile checks when forcing template load
if [[ "$FORCE" != "true" ]]; then
if [[ ! -f "$SO_STATEFILE_SUCCESS" || -f "$ADDON_STATEFILE_SUCCESS" ]]; then
return 1
fi
fi
index_templates_exist "$ADDON_TEMPLATES_DIR"
}
if [[ "$FORCE" == "true" || ! -f "$SO_STATEFILE_SUCCESS" ]] && index_templates_exist "$SO_TEMPLATES_DIR"; then
check_elasticsearch_responsive
if [[ "$IS_HEAVYNODE" == "false" ]]; then
# TODO: Better way to check if fleet server is installed vs checking for Elastic Defend component template.
fleet_check="logs-endpoint.alerts@package"
if ! so-elasticsearch-query "_component_template/$fleet_check" --output /dev/null --retry 5 --retry-delay 3 --fail; then
# This check prevents so-elasticsearch-templates-load from running before so-elastic-fleet-setup has run.
echo -e "\nPackage $fleet_check not yet installed. Fleet Server may not be fully configured yet."
# Fleet Server is required because some SO index templates depend on components installed via
# specific integrations eg Elastic Defend. These are components that we do not manually create / manage
# via /opt/so/saltstack/salt/elasticsearch/templates/component/
exit 0
fi
fi
# load_component_templates "Name" "directory" "append '-mappings'?"
load_component_templates "ECS" "ecs" "true"
load_component_templates "Elastic Agent" "elastic-agent"
load_component_templates "Security Onion" "so"
component_templates=$(so-elasticsearch-component-templates-list)
echo -e "Loading Security Onion index templates...\n"
for so_idx_tmpl in "${SO_TEMPLATES_DIR}"/*.json; do
tmpl_name=$(basename "${so_idx_tmpl%-template.json}")
if [[ "$IS_HEAVYNODE" == "true" ]]; then
# TODO: Better way to load only heavynode specific templates
if ! check_heavynode_compatible_index_template "$tmpl_name"; then
if [[ "$VERBOSE" == "true" ]]; then
echo "Skipping over $so_idx_tmpl, template is not a heavynode specific index template."
fi
continue
fi
fi
if check_required_component_template_exists "$so_idx_tmpl"; then
if ! load_template "_index_template/$tmpl_name" "$so_idx_tmpl"; then
SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
SO_LOAD_FAILURES_NAMES+=("$so_idx_tmpl")
fi
else
echo "Skipping over $so_idx_tmpl due to missing required component template(s)."
SO_LOAD_FAILURES=$((SO_LOAD_FAILURES + 1))
SO_LOAD_FAILURES_NAMES+=("$so_idx_tmpl")
continue
fi
done
if [[ $SO_LOAD_FAILURES -eq 0 ]]; then
echo "All Security Onion core templates loaded successfully."
touch "$SO_STATEFILE_SUCCESS"
else
echo "Encountered $SO_LOAD_FAILURES failure(s) loading templates:"
for failed_template in "${SO_LOAD_FAILURES_NAMES[@]}"; do
echo " - $failed_template"
done
if [[ "$SHOULD_EXIT_ON_FAILURE" == "true" ]]; then
fail "Failed to load all Security Onion core templates successfully."
fi
fi
elif ! index_templates_exist "$SO_TEMPLATES_DIR"; then
echo "No Security Onion core index templates found in ${SO_TEMPLATES_DIR}, skipping."
elif [[ -f "$SO_STATEFILE_SUCCESS" ]]; then
echo "Security Onion core templates already loaded"
fi
# Start loading addon templates
if should_load_addon_templates; then
check_elasticsearch_responsive
echo -e "\nLoading addon integration index templates...\n"
component_templates=$(so-elasticsearch-component-templates-list)
for addon_idx_tmpl in "${ADDON_TEMPLATES_DIR}"/*.json; do
tmpl_name=$(basename "${addon_idx_tmpl%-template.json}")
if check_required_component_template_exists "$addon_idx_tmpl"; then
if ! load_template "_index_template/${tmpl_name}" "$addon_idx_tmpl"; then
ADDON_LOAD_FAILURES=$((ADDON_LOAD_FAILURES + 1))
ADDON_LOAD_FAILURES_NAMES+=("$addon_idx_tmpl")
fi
else
echo "Skipping over $addon_idx_tmpl due to missing required component template(s)."
ADDON_LOAD_FAILURES=$((ADDON_LOAD_FAILURES + 1))
ADDON_LOAD_FAILURES_NAMES+=("$addon_idx_tmpl")
continue
fi
done
if [[ $ADDON_LOAD_FAILURES -eq 0 ]]; then
echo "All addon integration templates loaded successfully."
touch "$ADDON_STATEFILE_SUCCESS"
else
echo "Encountered $ADDON_LOAD_FAILURES failure(s) loading addon integration templates:"
for failed_template in "${ADDON_LOAD_FAILURES_NAMES[@]}"; do
echo " - $failed_template"
done
if [[ "$SHOULD_EXIT_ON_FAILURE" == "true" ]]; then
fail "Failed to load all addon integration templates successfully."
fi
fi
elif [[ ! -f "$SO_STATEFILE_SUCCESS" && "$IS_HEAVYNODE" == "false" ]]; then
echo "Skipping loading addon integration templates until Security Onion core templates have been loaded."
elif [[ -f "$ADDON_STATEFILE_SUCCESS" && "$IS_HEAVYNODE" == "false" && "$FORCE" == "false" ]]; then
echo "Addon integration templates already loaded"
fi
@@ -7,6 +7,9 @@
. /usr/sbin/so-common
{%- from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
{%- if GLOBALS.role != "so-heavynode" %}
{%- from 'elasticsearch/template.map.jinja' import ALL_ADDON_SETTINGS %}
{%- endif %}
{%- for index, settings in ES_INDEX_SETTINGS.items() %}
{%- if settings.policy is defined %}
@@ -33,3 +36,13 @@
{%- endif %}
{%- endfor %}
echo
{%- if GLOBALS.role != "so-heavynode" %}
{%- for index, settings in ALL_ADDON_SETTINGS.items() %}
{%- if settings.policy is defined %}
echo
echo "Setting up {{ index }}-logs policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
{%- endif %}
{%- endfor %}
{%- endif %}
@@ -1,165 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
STATE_FILE_INITIAL=/opt/so/state/estemplates_initial_load_attempt.txt
STATE_FILE_SUCCESS=/opt/so/state/estemplates.txt
if [[ -f $STATE_FILE_INITIAL ]]; then
# The initial template load has already run. As this is a subsequent load, all dependencies should
# already be satisified. Therefore, immediately exit/abort this script upon any template load failure
# since this is an unrecoverable failure.
should_exit_on_failure=1
else
# This is the initial template load, and there likely are some components not yet setup in Elasticsearch.
# Therefore load as many templates as possible at this time and if an error occurs proceed to the next
# template. But if at least one template fails to load do not mark the templates as having been loaded.
# This will allow the next load to resume the load of the templates that failed to load initially.
should_exit_on_failure=0
echo "This is the initial template load"
fi
# If soup is running, ignore errors
pgrep soup > /dev/null && should_exit_on_failure=0
load_failures=0
load_template() {
uri=$1
file=$2
echo "Loading template file $i"
if ! retry 3 1 "so-elasticsearch-query $uri -d@$file -XPUT" "{\"acknowledged\":true}"; then
if [[ $should_exit_on_failure -eq 1 ]]; then
fail "Could not load template file: $file"
else
load_failures=$((load_failures+1))
echo "Incremented load failure counter: $load_failures"
fi
fi
}
if [ ! -f $STATE_FILE_SUCCESS ]; then
echo "State file $STATE_FILE_SUCCESS not found. Running so-elasticsearch-templates-load."
. /usr/sbin/so-common
{% if GLOBALS.role != 'so-heavynode' %}
if [ -f /usr/sbin/so-elastic-fleet-common ]; then
. /usr/sbin/so-elastic-fleet-common
fi
{% endif %}
default_conf_dir=/opt/so/conf
# Define a default directory to load pipelines from
ELASTICSEARCH_TEMPLATES="$default_conf_dir/elasticsearch/templates/"
{% if GLOBALS.role == 'so-heavynode' %}
file="/opt/so/conf/elasticsearch/templates/index/so-common-template.json"
{% else %}
file="/usr/sbin/so-elastic-fleet-common"
{% endif %}
if [ -f "$file" ]; then
# Wait for ElasticSearch to initialize
echo -n "Waiting for ElasticSearch..."
retry 240 1 "so-elasticsearch-query / -k --output /dev/null --silent --head --fail" || fail "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
{% if GLOBALS.role != 'so-heavynode' %}
TEMPLATE="logs-endpoint.alerts@package"
INSTALLED=$(so-elasticsearch-query _component_template/$TEMPLATE | jq -r .component_templates[0].name)
if [ "$INSTALLED" != "$TEMPLATE" ]; then
echo
echo "Packages not yet installed."
echo
exit 0
fi
{% endif %}
touch $STATE_FILE_INITIAL
cd ${ELASTICSEARCH_TEMPLATES}/component/ecs
echo "Loading ECS component templates..."
for i in *; do
TEMPLATE=$(echo $i | cut -d '.' -f1)
load_template "_component_template/${TEMPLATE}-mappings" "$i"
done
echo
cd ${ELASTICSEARCH_TEMPLATES}/component/elastic-agent
echo "Loading Elastic Agent component templates..."
{% if GLOBALS.role == 'so-heavynode' %}
component_pattern="so-*"
{% else %}
component_pattern="*"
{% endif %}
for i in $component_pattern; do
TEMPLATE=${i::-5}
load_template "_component_template/$TEMPLATE" "$i"
done
echo
# Load SO-specific component templates
cd ${ELASTICSEARCH_TEMPLATES}/component/so
echo "Loading Security Onion component templates..."
for i in *; do
TEMPLATE=$(echo $i | cut -d '.' -f1);
load_template "_component_template/$TEMPLATE" "$i"
done
echo
# Load SO index templates
cd ${ELASTICSEARCH_TEMPLATES}/index
echo "Loading Security Onion index templates..."
shopt -s extglob
{% if GLOBALS.role == 'so-heavynode' %}
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*|*endpoint*|*elasticsearch*|*generic*|*fleet_server*|*soc*)"
{% else %}
pattern="*"
{% endif %}
# Index templates will be skipped if the following conditions are met:
# 1. The template is part of the "so-logs-" template group
# 2. The template name does not correlate to at least one existing component template
# In this situation, the script will treat the skipped template as a temporary failure
# and allow the templates to be loaded again on the next run or highstate, whichever
# comes first.
COMPONENT_LIST=$(so-elasticsearch-component-templates-list)
for i in $pattern; do
TEMPLATE=${i::-14}
COMPONENT_PATTERN=${TEMPLATE:3}
MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -vE "detections|osquery")
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" && ! "$COMPONENT_PATTERN" =~ \.generic|logs-winlog\.winlog ]]; then
load_failures=$((load_failures+1))
echo "Component template does not exist for $COMPONENT_PATTERN. The index template will not be loaded. Load failures: $load_failures"
else
load_template "_index_template/$TEMPLATE" "$i"
fi
done
else
{% if GLOBALS.role == 'so-heavynode' %}
echo "Common template does not exist. Exiting..."
{% else %}
echo "Elastic Fleet not configured. Exiting..."
{% endif %}
exit 0
fi
cd - >/dev/null
if [[ $load_failures -eq 0 ]]; then
echo "All templates loaded successfully"
touch $STATE_FILE_SUCCESS
else
echo "Encountered $load_failures templates that were unable to load, likely due to missing dependencies that will be available later; will retry on next highstate"
fi
else
echo "Templates already loaded"
fi
+3
@@ -11,6 +11,7 @@
'so-kratos',
'so-hydra',
'so-nginx',
'so-postgres',
'so-redis',
'so-soc',
'so-strelka-coordinator',
@@ -34,6 +35,7 @@
'so-hydra',
'so-logstash',
'so-nginx',
'so-postgres',
'so-redis',
'so-soc',
'so-strelka-coordinator',
@@ -77,6 +79,7 @@
'so-kratos',
'so-hydra',
'so-nginx',
'so-postgres',
'so-soc'
] %}
+42
@@ -98,6 +98,10 @@ firewall:
tcp:
- 8086
udp: []
postgres:
tcp:
- 5432
udp: []
kafka_controller:
tcp:
- 9093
@@ -193,6 +197,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- localrules
@@ -379,6 +384,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- docker_registry
@@ -392,6 +398,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -404,6 +411,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -421,6 +429,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -431,6 +440,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -444,6 +454,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -453,6 +464,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -486,6 +498,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -496,6 +509,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -590,6 +604,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- docker_registry
@@ -603,6 +618,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -615,6 +631,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -632,6 +649,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -642,6 +660,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -655,6 +674,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -664,6 +684,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -695,6 +716,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -705,6 +727,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -799,6 +822,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- docker_registry
@@ -812,6 +836,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -824,6 +849,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -841,6 +867,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- sensoroni
searchnode:
portgroups:
@@ -850,6 +877,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -862,6 +890,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -871,6 +900,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -904,6 +934,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -914,6 +945,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1011,6 +1043,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- docker_registry
@@ -1031,6 +1064,7 @@ firewall:
- elasticsearch_rest
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1043,6 +1077,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1054,6 +1089,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- beats_5044
@@ -1065,6 +1101,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- redis
@@ -1074,6 +1111,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- redis
@@ -1084,6 +1122,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1120,6 +1159,7 @@ firewall:
portgroups:
- docker_registry
- influxdb
- postgres
- sensoroni
- yum
- elastic_agent_control
@@ -1130,6 +1170,7 @@ firewall:
- yum
- docker_registry
- influxdb
- postgres
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
@@ -1473,6 +1514,7 @@ firewall:
- kibana
- redis
- influxdb
- postgres
- elasticsearch_rest
- elasticsearch_node
- elastic_agent_control
-6
@@ -11,18 +11,14 @@ global:
regexFailureMessage: You must enter a valid IP address or CIDR.
mdengine:
description: Which engine to use for meta data generation. Options are ZEEK and SURICATA.
regex: ^(ZEEK|SURICATA)$
options:
- ZEEK
- SURICATA
regexFailureMessage: You must enter either ZEEK or SURICATA.
global: True
pcapengine:
description: Which engine to use for generating pcap. Currently only SURICATA is supported.
regex: ^(SURICATA)$
options:
- SURICATA
regexFailureMessage: You must enter either SURICATA.
global: True
ids:
description: Which IDS engine to use. Currently only Suricata is supported.
@@ -42,11 +38,9 @@ global:
advanced: True
pipeline:
description: Sets which pipeline technology for events to use. The use of Kafka requires a Security Onion Pro license.
regex: ^(REDIS|KAFKA)$
options:
- REDIS
- KAFKA
regexFailureMessage: You must enter either REDIS or KAFKA.
global: True
advanced: True
repo_host:
+10 -3
@@ -85,7 +85,10 @@ influxdb:
description: The log level to use for outputting log statements. Allowed values are debug, info, or error.
global: True
advanced: false
regex: ^(info|debug|error)$
options:
- info
- debug
- error
helpLink: influxdb
metrics-disabled:
description: If true, the HTTP endpoint that exposes internal InfluxDB metrics will be inaccessible.
@@ -140,7 +143,9 @@ influxdb:
description: Determines the type of storage used for secrets. Allowed values are bolt or vault.
global: True
advanced: True
regex: ^(bolt|vault)$
options:
- bolt
- vault
helpLink: influxdb
session-length:
description: Number of minutes that a user login session can remain authenticated.
@@ -260,7 +265,9 @@ influxdb:
description: The type of data store to use for HTTP resources. Allowed values are disk or memory. Memory should not be used for production Security Onion installations.
global: True
advanced: True
regex: ^(disk|memory)$
options:
- disk
- memory
helpLink: influxdb
tls-cert:
description: The container path to the certificate to use for TLS encryption of the HTTP requests and responses.
+14 -4
@@ -128,10 +128,13 @@ kafka:
title: ssl.keystore.password
sensitive: True
helpLink: kafka
ssl_x_keystore_x_type:
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
options:
- JKS
- PKCS12
- PEM
helpLink: kafka
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
@@ -160,7 +163,11 @@ kafka:
security_x_protocol:
description: 'Broker communication protocol. Options are: SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT'
title: security.protocol
regex: ^(SASL_SSL|PLAINTEXT|SSL|SASL_PLAINTEXT)
options:
- SASL_SSL
- PLAINTEXT
- SSL
- SASL_PLAINTEXT
helpLink: kafka
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
@@ -174,7 +181,10 @@ kafka:
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
options:
- JKS
- PKCS12
- PEM
helpLink: kafka
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
+1 -1
@@ -22,7 +22,7 @@ kibana:
- default
- file
migrations:
discardCorruptObjects: "8.18.8"
discardCorruptObjects: "9.3.3"
telemetry:
enabled: False
xpack:
@@ -9,5 +9,5 @@ SESSIONCOOKIE=$(curl -K /opt/so/conf/elasticsearch/curl.config -c - -X GET http:
# Disable certain Features from showing up in the Kibana UI
echo
echo "Setting up default Kibana Space:"
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","securitySolutionCasesV3","inventory","dataQuality","searchSynonyms","enterpriseSearchApplications","enterpriseSearchAnalytics","securitySolutionTimeline","securitySolutionNotes","entityManager"]} ' >> /opt/so/log/kibana/misc.log
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","securitySolutionCasesV3","inventory","dataQuality","searchSynonyms","searchQueryRules","enterpriseSearchApplications","enterpriseSearchAnalytics","securitySolutionTimeline","securitySolutionNotes","securitySolutionRulesV1","entityManager","streams","cloudConnect","slo"]} ' >> /opt/so/log/kibana/misc.log
echo
+10 -5
@@ -3,8 +3,8 @@ kratos:
description: Enables or disables the Kratos authentication system. WARNING - Disabling this process will cause the grid to malfunction. Re-enabling this setting will require manual effort via SSH.
forcedType: bool
advanced: True
readonly: True
helpLink: kratos
oidc:
enabled:
description: Set to True to enable OIDC / Single Sign-On (SSO) to SOC. Requires a valid Security Onion license key.
@@ -21,8 +21,12 @@ kratos:
description: "Specify the provider type. Required. Valid values are: auth0, generic, github, google, microsoft"
global: True
forcedType: string
regex: "auth0|generic|github|google|microsoft"
regexFailureMessage: "Valid values are: auth0, generic, github, google, microsoft"
options:
- auth0
- generic
- github
- google
- microsoft
helpLink: oidc
client_id:
description: Specify the client ID, also referenced as the application ID. Required.
@@ -43,8 +47,9 @@ kratos:
description: The source of the subject identifier. Typically 'userinfo'. Only used when provider is 'microsoft'.
global: True
forcedType: string
regex: me|userinfo
regexFailureMessage: "Valid values are: me, userinfo"
options:
- me
- userinfo
helpLink: oidc
auth_url:
description: Provider's auth URL. Required when provider is 'generic'.
+48 -3
@@ -132,8 +132,8 @@ function getinstallinfo() {
log "ERROR" "Failed to get install info from $MINION_ID"
return 1
fi
export $(echo "$INSTALLVARS" | xargs)
while read -r var; do export "$var"; done <<< "$INSTALLVARS"
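# Exporting line-by-line avoids the word splitting and glob expansion that the old
# `export $(... | xargs)` applied to values containing spaces or shell metacharacters.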
if [ $? -ne 0 ]; then
log "ERROR" "Failed to source install variables"
return 1
@@ -273,7 +273,7 @@ function deleteMinionFiles () {
log "ERROR" "Failed to delete $PILLARFILE"
return 1
fi
rm -f $ADVPILLARFILE
if [ $? -ne 0 ]; then
log "ERROR" "Failed to delete $ADVPILLARFILE"
@@ -281,6 +281,39 @@ function deleteMinionFiles () {
fi
}
# Remove this minion's postgres Telegraf credential from the shared creds
# pillar and drop the matching role in Postgres. Always returns 0 so a dead
# or unreachable so-postgres doesn't block minion deletion — in that case we
# log a warning and leave the role behind for manual cleanup.
function remove_postgres_telegraf_from_minion() {
local MINION_SAFE
MINION_SAFE=$(echo "$MINION_ID" | tr '.-' '__' | tr '[:upper:]' '[:lower:]')
local PG_USER="so_telegraf_${MINION_SAFE}"
log "INFO" "Removing postgres telegraf cred for $MINION_ID"
so-telegraf-cred remove "$MINION_ID" >/dev/null 2>&1 || true
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^so-postgres$'; then
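# The unquoted heredoc lets $PG_USER interpolate; \$\$ keeps the DO block's dollar-quoting
# out of shell expansion, and format('%I') quotes the role name as a SQL identifier.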
if ! docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf >/dev/null 2>&1 <<EOSQL
DO \$\$
BEGIN
IF EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$PG_USER') THEN
EXECUTE format('REASSIGN OWNED BY %I TO so_telegraf', '$PG_USER');
EXECUTE format('DROP OWNED BY %I', '$PG_USER');
EXECUTE format('DROP ROLE %I', '$PG_USER');
END IF;
END
\$\$;
EOSQL
then
log "WARN" "Failed to drop postgres role $PG_USER; pillar entry was removed — drop manually if the role persists"
fi
else
log "WARN" "so-postgres container is not running; skipping DB role cleanup for $PG_USER"
fi
}
# Create the minion file
function ensure_socore_ownership() {
log "INFO" "Setting socore ownership on minion files"
@@ -542,6 +575,17 @@ function add_telegraf_to_minion() {
log "ERROR" "Failed to add telegraf configuration to $PILLARFILE"
return 1
fi
# Provision the per-minion postgres Telegraf credential in the shared
# telegraf/creds.sls pillar. so-telegraf-cred is the only writer; it
# generates a password on first add and is a no-op on re-add so the cred
# is stable across repeated so-minion runs. postgres.telegraf_users on the
# manager creates/updates the DB role from the same pillar.
so-telegraf-cred add "$MINION_ID"
if [ $? -ne 0 ]; then
log "ERROR" "Failed to provision postgres telegraf cred for $MINION_ID"
return 1
fi
}
function add_influxdb_to_minion() {
@@ -1069,6 +1113,7 @@ case "$OPERATION" in
"delete")
log "INFO" "Removing minion $MINION_ID"
remove_postgres_telegraf_from_minion
deleteMinionFiles || {
log "ERROR" "Failed to delete minion files for $MINION_ID"
exit 1
+329
@@ -0,0 +1,329 @@
#!/usr/bin/env python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
"""
so-pillar-import — populate the so_pillar.* schema in so-postgres from the
on-disk Salt pillar tree.
Reads /opt/so/saltstack/local/pillar/, decomposes each .sls file into a
(scope, role|minion_id, pillar_path, data) tuple, and UPSERTs it into
so_pillar.pillar_entry. Idempotent — re-running with no SLS edits produces
no version bumps because the audit trigger only writes a row when data
actually changes.
Bootstrap and mine-driven files are skipped (see EXCLUDE_BASENAMES /
EXCLUDE_PREFIXES below). Files containing Jinja templates ({% or {{) are
also skipped — those stay disk-authoritative and ext_pillar_first: False
means they render before the PG overlay anyway.
All SQL goes through `docker exec so-postgres psql` so no separate DSN
config is required at first-install time. Designed to be called by
salt/postgres/schema_pillar.sls (initial seed) and by salt/manager/tools/
sbin/so-minion (per-minion sync on add/delete).
"""
import argparse
import json
import os
import shlex
import subprocess
import sys
from pathlib import Path
import yaml
PILLAR_LOCAL_ROOT = Path("/opt/so/saltstack/local/pillar")
PILLAR_DEFAULT_ROOT = Path("/opt/so/saltstack/default/pillar")
DOCKER_CONTAINER = "so-postgres"
PG_SUPERUSER = "postgres"
PG_DATABASE = "securityonion"
# Files that must NEVER move to Postgres. These are read by Salt before
# Postgres is reachable, or contain renderer-time computed values (mine, etc.).
EXCLUDE_BASENAMES = {
"secrets.sls",
"auth.sls", # postgres/auth.sls bootstrap
"top.sls",
}
# Filename prefixes to skip — these are renderer-time computed pillars
# (Salt mine, file_exists guards, etc.) that have to stay on disk.
EXCLUDE_PATH_FRAGMENTS = (
"/elasticsearch/nodes.sls",
"/redis/nodes.sls",
"/kafka/nodes.sls",
"/hypervisor/nodes.sls",
"/logstash/nodes.sls",
"/node_data/ips.sls",
"/postgres/auth.sls",
"/elasticsearch/auth.sls",
"/kibana/secrets.sls",
)
def log(level, msg):
print(f"[{level}] {msg}", file=sys.stderr)
def is_jinja_templated(content_bytes):
return b"{%" in content_bytes or b"{{" in content_bytes
def classify(path):
"""Return (scope, role_name, minion_id, pillar_path) for a pillar file
or None to skip it. role_name is None for now — the importer leaves role
membership to the so_pillar.minion trigger and the salt/auth reactor."""
rel_str = str(path)
if path.name in EXCLUDE_BASENAMES:
return None
for frag in EXCLUDE_PATH_FRAGMENTS:
if frag in rel_str:
return None
# /local/pillar/minions/<id>.sls or adv_<id>.sls
if path.parent.name == "minions":
stem = path.stem # filename without .sls
if stem.startswith("adv_"):
mid = stem[4:]
return ("minion", None, mid, f"minions.adv_{mid}")
return ("minion", None, stem, f"minions.{stem}")
# /local/pillar/<section>/<file>.sls
if path.parent.parent == PILLAR_LOCAL_ROOT or path.parent.parent == PILLAR_DEFAULT_ROOT:
section = path.parent.name
stem = path.stem
# Only soc_<section>.sls and adv_<section>.sls are SOC-managed pillar
# surfaces. Other files (e.g. nodes.sls, auth.sls, *.token) are
# either covered by EXCLUDE_PATH_FRAGMENTS or are bootstrap surfaces
# we leave alone for now.
if stem.startswith("soc_") or stem.startswith("adv_"):
return ("global", None, None, f"{section}.{stem}")
return None
return None
def parse_yaml_file(path):
with open(path, "rb") as f:
content = f.read()
if not content.strip():
return {}
if is_jinja_templated(content):
return None
data = yaml.safe_load(content)
if data is None:
return {}
if not isinstance(data, dict):
return {"_raw": data}
return data
def derive_node_type(minion_id):
"""Conventional Security Onion minion ids are <host>_<role>. Take the
last underscore-delimited token as the canonical role suffix."""
parts = minion_id.rsplit("_", 1)
if len(parts) == 2:
return parts[1]
return None
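# e.g. derive_node_type("h1_sensor") -> "sensor"; an id without an underscore yields None.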
def docker_psql(sql, *, db=PG_DATABASE, user=PG_SUPERUSER, on_error_stop=True, capture=True):
"""Run sql via docker exec ... psql. Returns stdout as str."""
args = [
"docker", "exec", "-i", DOCKER_CONTAINER,
"psql", "-U", user, "-d", db, "-tA", "-q",
]
if on_error_stop:
args += ["-v", "ON_ERROR_STOP=1"]
proc = subprocess.run(
args, input=sql.encode(),
capture_output=capture, check=False,
)
if proc.returncode != 0:
sys.stderr.write(proc.stderr.decode(errors="replace"))
raise RuntimeError(f"docker exec psql failed (rc={proc.returncode})")
return proc.stdout.decode(errors="replace")
def upsert_minion(minion_id, node_type):
sql = (
"INSERT INTO so_pillar.minion (minion_id, node_type) "
f"VALUES ({pg_str(minion_id)}, {pg_str(node_type) if node_type else 'NULL'}) "
"ON CONFLICT (minion_id) DO UPDATE SET node_type = EXCLUDED.node_type;"
)
docker_psql(sql)
def delete_minion(minion_id):
"""CASCADE removes pillar_entry + role_member rows."""
sql = f"DELETE FROM so_pillar.minion WHERE minion_id = {pg_str(minion_id)};"
docker_psql(sql)
def upsert_pillar_entry(scope, role_name, minion_id, pillar_path, data, reason):
"""Insert or update the row keyed by the partial unique index that
matches scope. Audit trigger handles history; versioning trigger bumps
version only when data changes."""
data_json = json.dumps(data)
role_sql = pg_str(role_name) if role_name else "NULL"
minion_sql = pg_str(minion_id) if minion_id else "NULL"
reason_sql = pg_str(reason)
if scope == "global":
conflict = "(pillar_path) WHERE scope='global'"
elif scope == "role":
conflict = "(role_name, pillar_path) WHERE scope='role'"
elif scope == "minion":
conflict = "(minion_id, pillar_path) WHERE scope='minion'"
else:
raise ValueError(f"unknown scope {scope!r}")
sql = (
"BEGIN;\n"
f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);\n"
f"INSERT INTO so_pillar.pillar_entry "
f"(scope, role_name, minion_id, pillar_path, data, change_reason) "
f"VALUES ({pg_str(scope)}, {role_sql}, {minion_sql}, {pg_str(pillar_path)}, {pg_jsonb(data_json)}, {reason_sql}) "
f"ON CONFLICT {conflict} DO UPDATE "
f"SET data = EXCLUDED.data, change_reason = EXCLUDED.change_reason;\n"
"COMMIT;\n"
)
docker_psql(sql)
def pg_str(s):
"""Escape a Python str for inclusion in literal SQL. Pillar content has
already been validated as YAML; we just need standard SQL escaping."""
if s is None:
return "NULL"
return "'" + str(s).replace("'", "''") + "'"
def pg_jsonb(json_str):
return pg_str(json_str) + "::jsonb"
def walk_pillar_root(root, paths):
if not root.is_dir():
return
for path in root.rglob("*.sls"):
if path.is_file():
paths.append(path)
def import_minion(minion_id, node_type, dry_run, reason):
"""Re-import every pillar file for a single minion."""
if not minion_id:
raise ValueError("minion_id required for --scope minion")
upsert_minion(minion_id, node_type)
log("INFO", f"Upserted minion row {minion_id} (node_type={node_type})")
targets = [
PILLAR_LOCAL_ROOT / "minions" / f"{minion_id}.sls",
PILLAR_LOCAL_ROOT / "minions" / f"adv_{minion_id}.sls",
]
for path in targets:
if not path.exists():
log("INFO", f" (no file at {path})")
continue
klass = classify(path)
if not klass:
log("INFO", f" skip {path} (excluded)")
continue
scope, role, mid, pillar_path = klass
data = parse_yaml_file(path)
if data is None:
log("WARN", f" skip {path} (Jinja-templated; stays disk-only)")
continue
if dry_run:
log("DRY", f" would upsert {scope}/{pillar_path} = {len(json.dumps(data))} bytes")
continue
upsert_pillar_entry(scope, role, mid, pillar_path, data, reason)
log("INFO", f" imported {scope}/{pillar_path}")
def import_all(dry_run, reason):
"""Walk the entire local pillar tree and import every eligible file."""
paths = []
walk_pillar_root(PILLAR_LOCAL_ROOT, paths)
imported = 0
skipped = 0
minions_seen = set()
for path in sorted(paths):
klass = classify(path)
if not klass:
skipped += 1
continue
scope, role, minion_id, pillar_path = klass
data = parse_yaml_file(path)
if data is None:
log("WARN", f"skip {path} (Jinja-templated; stays disk-only)")
skipped += 1
continue
if scope == "minion" and minion_id not in minions_seen:
node_type = derive_node_type(minion_id)
if not dry_run:
upsert_minion(minion_id, node_type)
minions_seen.add(minion_id)
if dry_run:
log("DRY", f"would upsert {scope}/{pillar_path} ({len(json.dumps(data))} bytes)")
else:
upsert_pillar_entry(scope, role, minion_id, pillar_path, data, reason)
log("INFO", f"imported {scope}/{pillar_path}")
imported += 1
log("INFO", f"done: {imported} imported, {skipped} skipped")
def main():
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--scope", choices=("global", "role", "minion", "all"), default="all")
ap.add_argument("--minion-id")
ap.add_argument("--node-type", help="override node_type for --scope minion (default: derived from minion_id)")
ap.add_argument("--delete", action="store_true",
help="With --scope minion, remove the minion row (and its pillar rows via CASCADE)")
ap.add_argument("--dry-run", action="store_true")
ap.add_argument("--diff", action="store_true",
help="(reserved) print structural diffs vs current DB content")
ap.add_argument("--yes", action="store_true",
help="Skip confirmation prompts (currently unused; reserved)")
ap.add_argument("--reason", default="so-pillar-import",
help="change_reason recorded in pillar_entry_history")
args = ap.parse_args()
try:
if args.scope == "minion":
if not args.minion_id:
ap.error("--minion-id required when --scope minion")
if args.delete:
if args.dry_run:
log("DRY", f"would delete {args.minion_id}")
else:
delete_minion(args.minion_id)
log("INFO", f"deleted {args.minion_id}")
else:
node_type = args.node_type or derive_node_type(args.minion_id)
import_minion(args.minion_id, node_type, args.dry_run, args.reason)
elif args.scope == "all":
import_all(args.dry_run, args.reason)
else:
log("ERROR", f"--scope {args.scope} not yet implemented; use --scope all or --scope minion")
return 2
except Exception as e:
log("ERROR", str(e))
return 1
return 0
if __name__ == "__main__":
sys.exit(main())
+54
@@ -0,0 +1,54 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Single writer for the Telegraf Postgres credentials pillar. Thin wrapper
# around so-yaml.py that generates a password on first add and no-ops on
# re-add so the cred is stable across repeated so-minion runs.
#
# Note: so-yaml.py splits keys on '.' with no escape. SO minion ids are
# dot-free by construction (setup/so-functions:1884 takes the short_name
# before the first '.'), so using the raw minion id as the key is safe.
CREDS=/opt/so/saltstack/local/pillar/telegraf/creds.sls
usage() {
echo "Usage: $0 <add|remove> <minion_id>" >&2
exit 2
}
seed_creds_file() {
mkdir -p "$(dirname "$CREDS")" || return 1
if [[ ! -f "$CREDS" ]]; then
(umask 027 && printf 'telegraf:\n postgres_creds: {}\n' > "$CREDS") || return 1
chown socore:socore "$CREDS" 2>/dev/null || true
chmod 640 "$CREDS" || return 1
fi
}
OP=$1
MID=$2
[[ -z "$OP" || -z "$MID" ]] && usage
case "$OP" in
add)
SAFE=$(echo "$MID" | tr '.-' '__' | tr '[:upper:]' '[:lower:]')
seed_creds_file || exit 1
if so-yaml.py get -r "$CREDS" "telegraf.postgres_creds.${MID}.user" >/dev/null 2>&1; then
exit 0
fi
PASS=$(tr -dc 'A-Za-z0-9~!@#^&*()_=+[]|;:,.<>?-' < /dev/urandom | head -c 72)
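# 72 characters drawn from /dev/urandom; head closes the pipe once it has enough and,
# without pipefail, tr's resulting SIGPIPE does not affect the pipeline's exit status.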
so-yaml.py replace "$CREDS" "telegraf.postgres_creds.${MID}.user" "so_telegraf_${SAFE}" >/dev/null
so-yaml.py replace "$CREDS" "telegraf.postgres_creds.${MID}.pass" "$PASS" >/dev/null
;;
remove)
[[ -f "$CREDS" ]] || exit 0
so-yaml.py remove "$CREDS" "telegraf.postgres_creds.${MID}" >/dev/null 2>&1 || true
;;
*)
usage
;;
esac
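# Example: `so-telegraf-cred add h1_sensor` writes telegraf.postgres_creds.h1_sensor
# (user so_telegraf_h1_sensor plus a generated pass) on first run and is a no-op on
# re-add; `so-telegraf-cred remove h1_sensor` deletes the entry if present.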
+197 -5
@@ -13,6 +13,64 @@ import json
lockFile = "/tmp/so-yaml.lock"
# postsalt: so-yaml supports three backend modes for PG-managed pillar paths:
#
# dual — write disk + mirror to so_pillar.*. Reads from disk.
# Used during the migration transition when disk is still
# canonical and PG runs as a shadow.
# postgres — write to so_pillar.* only. Reads from so_pillar.*. No disk
# file is touched. The end state once cutover is complete.
# disk — disk only, no PG. Emergency rollback escape hatch.
#
# Bootstrap and mine-driven files (secrets.sls, ca/init.sls, */nodes.sls,
# top.sls, etc.) are always handled on disk regardless of mode — those paths
# are explicitly excluded by so_yaml_postgres.locate() raising SkipPath.
#
# Mode resolution: SO_YAML_BACKEND env var, then /opt/so/conf/so-yaml/mode,
# then default 'dual' (safe upgrade behavior — flipping to 'postgres' is
# done by schema_pillar.sls after the schema is in place and the importer
# has run at least once).
MODE_FILE = "/opt/so/conf/so-yaml/mode"
VALID_MODES = ("dual", "postgres", "disk")
DEFAULT_MODE = "dual"
try:
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import so_yaml_postgres
_SO_YAML_PG_AVAILABLE = True
except Exception as _exc:
_SO_YAML_PG_AVAILABLE = False
def _resolveBackendMode():
env = os.environ.get("SO_YAML_BACKEND")
if env and env in VALID_MODES:
return env
try:
with open(MODE_FILE, "r") as fh:
value = fh.read().strip()
if value in VALID_MODES:
return value
except (IOError, OSError):
pass
return DEFAULT_MODE
_BACKEND_MODE = _resolveBackendMode()
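# Example (hypothetical invocation): `SO_YAML_BACKEND=disk so-yaml.py get <file> <key>`
# forces a one-off disk read regardless of the mode file's contents.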
def _isPgManaged(filename):
"""True when so-yaml should route this file's reads/writes through
so_pillar.*. False for bootstrap/mine-driven files that always live on
disk, and for arbitrary YAML paths outside the pillar tree."""
if not _SO_YAML_PG_AVAILABLE:
return False
try:
return so_yaml_postgres.is_pg_managed(filename)
except Exception:
return False
def showUsage(args):
print('Usage: {} <COMMAND> <YAML_FILE> [ARGS...]'.format(sys.argv[0]), file=sys.stderr)
@@ -25,8 +83,14 @@ def showUsage(args):
print(' get [-r] - Displays (to stdout) the value stored in the given key. Requires KEY arg. Use -r for raw output without YAML formatting.', file=sys.stderr)
print(' remove - Removes a yaml key, if it exists. Requires KEY arg.', file=sys.stderr)
print(' replace - Replaces (or adds) a new key and set its value. Requires KEY and VALUE args.', file=sys.stderr)
print(' purge - Delete the YAML file from disk and remove its rows from so_pillar.* (no KEY arg).', file=sys.stderr)
print(' help - Prints this usage information.', file=sys.stderr)
print('', file=sys.stderr)
print(' Backend mode:', file=sys.stderr)
print(' Resolved from $SO_YAML_BACKEND, then /opt/so/conf/so-yaml/mode, default "dual".', file=sys.stderr)
print(' Valid values: dual | postgres | disk. Bootstrap pillar files (secrets, ca, *.nodes.sls)', file=sys.stderr)
print(' are always handled on disk regardless of mode.', file=sys.stderr)
print('', file=sys.stderr)
print(' Where:', file=sys.stderr)
print(' YAML_FILE - Path to the file that will be modified. Ex: /opt/so/conf/service/conf.yaml', file=sys.stderr)
print(' KEY - YAML key, does not support \' or " characters at this time. Ex: level1.level2', file=sys.stderr)
@@ -39,14 +103,128 @@ def showUsage(args):
def loadYaml(filename):
file = open(filename, "r")
content = file.read()
return yaml.safe_load(content)
"""Load a YAML file's content as a dict.
PG-canonical mode (`postgres`): for PG-managed paths, read from
so_pillar.pillar_entry. A missing row is treated as an empty dict so
that `replace`/`add` on a fresh path can populate it from scratch.
Other modes / non-PG-managed paths: read from disk as today.
"""
if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
try:
data = so_yaml_postgres.read_yaml(filename)
except so_yaml_postgres.SkipPath:
data = None
except Exception as e:
print(f"so-yaml: pg read failed for {filename}: {e}", file=sys.stderr)
sys.exit(1)
return data if data is not None else {}
try:
with open(filename, "r") as file:
content = file.read()
return yaml.safe_load(content)
except FileNotFoundError:
print(f"File not found: {filename}", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f"Error reading file {filename}: {e}", file=sys.stderr)
sys.exit(1)
def writeYaml(filename, content):
"""Persist `content` for `filename`.
PG-canonical mode + PG-managed path: write only to so_pillar.*. A PG
failure is fatal (no disk fallback) — caller must retry.
Dual mode: write disk, then mirror to PG (failures are warnings).
Disk mode or non-PG-managed path: write disk only.
"""
if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
if not _SO_YAML_PG_AVAILABLE:
print("so-yaml: PG-canonical mode requires so_yaml_postgres module", file=sys.stderr)
sys.exit(1)
ok, msg = so_yaml_postgres.write_yaml(
filename, content,
reason="so-yaml " + " ".join(sys.argv[1:2]))
if not ok:
print(f"so-yaml: pg write failed for {filename}: {msg}", file=sys.stderr)
sys.exit(1)
return None
file = open(filename, "w")
return yaml.safe_dump(content, file)
result = yaml.safe_dump(content, file)
file.close()
if _BACKEND_MODE == "dual":
_mirrorToPostgres(filename, content)
return result
def _mirrorToPostgres(filename, content):
"""Best-effort dual-write of a YAML mutation into so_pillar.*. Skips
files outside the PG-managed pillar surface (secrets.sls,
elasticsearch/nodes.sls, etc.) and silently degrades when so-postgres
is unreachable. Disk write is canonical in dual mode; this never
raises.
Only real PG failures (`pg write failed: ...`) are logged so the
common cases (skipped path, postgres not running) don't pollute
stderr."""
if not _SO_YAML_PG_AVAILABLE:
return
try:
ok, msg = so_yaml_postgres.write_yaml(filename, content,
reason="so-yaml " + " ".join(sys.argv[1:2]))
if not ok and msg.startswith("pg write failed"):
print(f"so-yaml: {msg}", file=sys.stderr)
except Exception as e: # pragma: no cover — defensive: never break disk write
print(f"so-yaml: pg mirror exception: {e}", file=sys.stderr)
def purgeFile(filename):
"""Delete a YAML file from disk and remove the matching rows from
so_pillar.*. Idempotent — missing file/row counts as success.
PG-canonical mode + PG-managed path: PG delete is canonical. If a stale
disk file from the dual-write era happens to still exist, it's removed
too as a cleanup courtesy. PG failure is fatal in this mode.
Dual / disk modes: remove disk first; PG cleanup is best-effort."""
if _BACKEND_MODE == "postgres" and _isPgManaged(filename):
if not _SO_YAML_PG_AVAILABLE:
print("so-yaml: PG-canonical mode requires so_yaml_postgres module", file=sys.stderr)
return 1
ok, msg = so_yaml_postgres.purge_yaml(filename, reason="so-yaml purge")
if not ok:
print(f"so-yaml: pg purge failed for {filename}: {msg}", file=sys.stderr)
return 1
if os.path.exists(filename):
try:
os.remove(filename)
except Exception as e:
print(f"so-yaml: warn — could not remove stale disk file {filename}: {e}", file=sys.stderr)
return 0
if os.path.exists(filename):
try:
os.remove(filename)
except Exception as e:
print(f"Failed to remove {filename}: {e}", file=sys.stderr)
return 1
if _BACKEND_MODE == "dual" and _SO_YAML_PG_AVAILABLE:
try:
ok, msg = so_yaml_postgres.purge_yaml(filename,
reason="so-yaml purge")
if not ok and msg.startswith("pg purge failed"):
print(f"so-yaml: {msg}", file=sys.stderr)
except Exception as e:
print(f"so-yaml: pg purge exception: {e}", file=sys.stderr)
return 0
def appendItem(content, key, listItem):
@@ -285,7 +463,8 @@ def add(args):
def removeKey(content, key):
pieces = key.split(".", 1)
if len(pieces) > 1:
removeKey(content[pieces[0]], pieces[1])
if pieces[0] in content:
removeKey(content[pieces[0]], pieces[1])
else:
content.pop(key, None)
@@ -363,6 +542,18 @@ def get(args):
return 0
def purge(args):
"""purge YAML_FILE — delete the file from disk and remove the matching
rows from so_pillar.* in so-postgres. Used by so-minion's delete path
(in place of `rm -f`) so the audit log captures the deletion and
role_member rows get cleaned up via FK CASCADE on so_pillar.minion."""
if len(args) != 1:
print('Missing filename arg', file=sys.stderr)
showUsage(None)
return 1
return purgeFile(args[0])
def main():
args = sys.argv[1:]
@@ -380,6 +571,7 @@ def main():
"get": get,
"remove": remove,
"replace": replace,
"purge": purge,
}
code = 1
+344
@@ -973,3 +973,347 @@ class TestReplaceListObject(unittest.TestCase):
expected = "key1:\n- id: '1'\n status: updated\n- id: '2'\n status: inactive\n"
self.assertEqual(actual, expected)
class TestLoadYaml(unittest.TestCase):
def test_load_yaml_missing_file(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stderr:
soyaml.loadYaml("/tmp/so-yaml_test-does-not-exist.yaml")
sysmock.assert_called_with(1)
self.assertIn("File not found:", mock_stderr.getvalue())
def test_load_yaml_read_error(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stderr:
with patch('builtins.open', side_effect=PermissionError("denied")):
soyaml.loadYaml("/tmp/so-yaml_test-unreadable.yaml")
sysmock.assert_called_with(1)
self.assertIn("Error reading file", mock_stderr.getvalue())
class TestPurge(unittest.TestCase):
def test_purge_missing_arg(self):
# showUsage calls sys.exit(1); patch it like the other tests do.
with patch('sys.exit', new=MagicMock()):
with patch('sys.stderr', new=StringIO()) as mock_stderr:
rc = soyaml.purge([])
self.assertEqual(rc, 1)
self.assertIn("Missing filename", mock_stderr.getvalue())
def test_purge_existing_file(self):
filename = "/tmp/so-yaml_test_purge.yaml"
with open(filename, "w") as f:
f.write("key: value\n")
# Disable PG mirror so the test doesn't shell out to docker.
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
rc = soyaml.purge([filename])
self.assertEqual(rc, 0)
import os as _os
self.assertFalse(_os.path.exists(filename))
def test_purge_missing_file_idempotent(self):
filename = "/tmp/so-yaml_test_purge_missing.yaml"
import os as _os
if _os.path.exists(filename):
_os.remove(filename)
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
rc = soyaml.purge([filename])
self.assertEqual(rc, 0)
class TestSoYamlPostgres(unittest.TestCase):
"""Tests the path-locator and write/purge contract of the dual-write
backend module without actually contacting Postgres."""
def setUp(self):
import importlib
self.mod = importlib.import_module("so_yaml_postgres")
def test_locate_global_soc(self):
scope, role, mid, path = self.mod.locate(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertEqual(scope, "global")
self.assertIsNone(role)
self.assertIsNone(mid)
self.assertEqual(path, "soc.soc_soc")
def test_locate_global_advanced(self):
scope, role, mid, path = self.mod.locate(
"/opt/so/saltstack/local/pillar/soc/adv_soc.sls")
self.assertEqual(scope, "global")
self.assertEqual(path, "soc.adv_soc")
def test_locate_minion(self):
scope, role, mid, path = self.mod.locate(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
self.assertEqual(scope, "minion")
self.assertEqual(mid, "h1_sensor")
self.assertEqual(path, "minions.h1_sensor")
def test_locate_minion_advanced(self):
scope, role, mid, path = self.mod.locate(
"/opt/so/saltstack/local/pillar/minions/adv_h1_sensor.sls")
self.assertEqual(scope, "minion")
self.assertEqual(mid, "h1_sensor")
self.assertEqual(path, "minions.adv_h1_sensor")
def test_locate_skip_secrets(self):
with self.assertRaises(self.mod.SkipPath):
self.mod.locate("/opt/so/saltstack/local/pillar/secrets.sls")
def test_locate_skip_postgres_auth(self):
with self.assertRaises(self.mod.SkipPath):
self.mod.locate("/opt/so/saltstack/local/pillar/postgres/auth.sls")
def test_locate_skip_mine_driven(self):
with self.assertRaises(self.mod.SkipPath):
self.mod.locate("/opt/so/saltstack/local/pillar/elasticsearch/nodes.sls")
def test_locate_skip_top(self):
with self.assertRaises(self.mod.SkipPath):
self.mod.locate("/opt/so/saltstack/local/pillar/top.sls")
def test_locate_skip_unrelated(self):
with self.assertRaises(self.mod.SkipPath):
self.mod.locate("/etc/hostname")
def test_pg_str_escapes(self):
self.assertEqual(self.mod._pg_str("a'b"), "'a''b'")
self.assertEqual(self.mod._pg_str(None), "NULL")
def test_conflict_target(self):
self.assertIn("scope='global'", self.mod._conflict_target("global"))
self.assertIn("scope='role'", self.mod._conflict_target("role"))
self.assertIn("scope='minion'", self.mod._conflict_target("minion"))
with self.assertRaises(ValueError):
self.mod._conflict_target("bogus")
def test_write_yaml_skips_disk_only_path(self):
with patch.object(self.mod, '_is_enabled', return_value=True):
ok, msg = self.mod.write_yaml(
"/opt/so/saltstack/local/pillar/secrets.sls",
{"secrets": {"foo": "bar"}})
self.assertFalse(ok)
self.assertIn("disk-only", msg)
def test_write_yaml_unreachable(self):
with patch.object(self.mod, '_is_enabled', return_value=False):
ok, msg = self.mod.write_yaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls",
{"soc": {"foo": "bar"}})
self.assertFalse(ok)
self.assertEqual(msg, "postgres unreachable")
def test_is_pg_managed_true(self):
self.assertTrue(self.mod.is_pg_managed(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))
self.assertTrue(self.mod.is_pg_managed(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls"))
def test_is_pg_managed_false_for_bootstrap(self):
self.assertFalse(self.mod.is_pg_managed(
"/opt/so/saltstack/local/pillar/secrets.sls"))
self.assertFalse(self.mod.is_pg_managed(
"/opt/so/saltstack/local/pillar/postgres/auth.sls"))
self.assertFalse(self.mod.is_pg_managed(
"/opt/so/saltstack/local/pillar/elasticsearch/nodes.sls"))
def test_read_yaml_unreachable(self):
with patch.object(self.mod, '_is_enabled', return_value=False):
self.assertIsNone(self.mod.read_yaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls"))
def test_read_yaml_skips_disk_only(self):
with patch.object(self.mod, '_is_enabled', return_value=True):
with self.assertRaises(self.mod.SkipPath):
self.mod.read_yaml(
"/opt/so/saltstack/local/pillar/secrets.sls")
def test_read_yaml_returns_data(self):
with patch.object(self.mod, '_is_enabled', return_value=True):
with patch.object(self.mod, '_docker_psql',
return_value='{"soc": {"foo": "bar"}}\n'):
data = self.mod.read_yaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertEqual(data, {"soc": {"foo": "bar"}})
def test_read_yaml_returns_none_when_no_row(self):
with patch.object(self.mod, '_is_enabled', return_value=True):
with patch.object(self.mod, '_docker_psql', return_value=''):
data = self.mod.read_yaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertIsNone(data)
def test_read_yaml_minion_query_shape(self):
captured = {}
def fake_psql(sql):
captured['sql'] = sql
return '{"host": {"mainip": "10.0.0.1"}}'
with patch.object(self.mod, '_is_enabled', return_value=True):
with patch.object(self.mod, '_docker_psql', side_effect=fake_psql):
data = self.mod.read_yaml(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
self.assertEqual(data, {"host": {"mainip": "10.0.0.1"}})
self.assertIn("scope='minion'", captured['sql'])
self.assertIn("'h1_sensor'", captured['sql'])
self.assertIn("'minions.h1_sensor'", captured['sql'])
def test_is_enabled_public_alias(self):
with patch.object(self.mod, '_is_enabled', return_value=True):
self.assertTrue(self.mod.is_enabled())
with patch.object(self.mod, '_is_enabled', return_value=False):
self.assertFalse(self.mod.is_enabled())
class TestSoYamlBackendMode(unittest.TestCase):
"""Tests so-yaml's backend-mode resolution and PG-canonical routing
for read/write/purge. The PG calls themselves are stubbed; what we're
asserting is that the right backend is chosen for each (mode, path)
combination."""
def test_resolve_mode_env_overrides_file(self):
with patch.dict('os.environ', {'SO_YAML_BACKEND': 'postgres'}):
self.assertEqual(soyaml._resolveBackendMode(), 'postgres')
with patch.dict('os.environ', {'SO_YAML_BACKEND': 'disk'}):
self.assertEqual(soyaml._resolveBackendMode(), 'disk')
def test_resolve_mode_invalid_env_falls_back(self):
with patch.dict('os.environ', {'SO_YAML_BACKEND': 'garbage'}, clear=False):
with patch('builtins.open', side_effect=IOError):
self.assertEqual(soyaml._resolveBackendMode(), 'dual')
def test_resolve_mode_default_dual(self):
import os as _os
env = {k: v for k, v in _os.environ.items()
if k != 'SO_YAML_BACKEND'}
with patch.dict('os.environ', env, clear=True):
with patch('builtins.open', side_effect=IOError):
self.assertEqual(soyaml._resolveBackendMode(), 'dual')
def test_is_pg_managed_proxies(self):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
self.assertTrue(soyaml._isPgManaged(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))
self.assertFalse(soyaml._isPgManaged(
"/opt/so/saltstack/local/pillar/secrets.sls"))
def test_is_pg_managed_false_when_module_unavailable(self):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', False):
self.assertFalse(soyaml._isPgManaged(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls"))
def test_load_yaml_postgres_mode_reads_pg(self):
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'read_yaml',
return_value={"a": 1}):
result = soyaml.loadYaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertEqual(result, {"a": 1})
def test_load_yaml_postgres_mode_returns_empty_when_no_row(self):
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'read_yaml',
return_value=None):
result = soyaml.loadYaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertEqual(result, {})
def test_load_yaml_postgres_mode_reads_disk_for_bootstrap(self):
import tempfile, os as _os
with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
f.write("foo: bar\n")
tmp = f.name
try:
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres,
'is_pg_managed', return_value=False):
result = soyaml.loadYaml(tmp)
self.assertEqual(result, {"foo": "bar"})
finally:
_os.unlink(tmp)
def test_write_yaml_postgres_mode_skips_disk(self):
import tempfile, os as _os
with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
tmp = f.name
_os.unlink(tmp)
try:
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'write_yaml',
return_value=(True, 'ok')) as mock_w:
soyaml.writeYaml(tmp, {"x": 1})
self.assertFalse(_os.path.exists(tmp))
mock_w.assert_called_once()
finally:
if _os.path.exists(tmp):
_os.unlink(tmp)
def test_write_yaml_postgres_mode_failure_is_fatal(self):
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'write_yaml',
return_value=(False, 'pg write failed: connection refused')):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_err:
soyaml.writeYaml(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls",
{"x": 1})
sysmock.assert_called_with(1)
def test_write_yaml_disk_mode_skips_pg(self):
import tempfile, os as _os
with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
tmp = f.name
try:
with patch.object(soyaml, '_BACKEND_MODE', 'disk'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'write_yaml') as mock_w:
soyaml.writeYaml(tmp, {"x": 1})
mock_w.assert_not_called()
with open(tmp) as f:
self.assertIn('x: 1', f.read())
finally:
_os.unlink(tmp)
def test_purge_postgres_mode_calls_pg_only(self):
import tempfile, os as _os
with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as f:
tmp = f.name
_os.unlink(tmp)
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'purge_yaml',
return_value=(True, 'ok')) as mock_p:
rc = soyaml.purgeFile(tmp)
self.assertEqual(rc, 0)
mock_p.assert_called_once()
def test_purge_postgres_mode_failure_returns_nonzero(self):
with patch.object(soyaml, '_BACKEND_MODE', 'postgres'):
with patch.object(soyaml, '_SO_YAML_PG_AVAILABLE', True):
with patch.object(soyaml.so_yaml_postgres, 'is_pg_managed',
return_value=True):
with patch.object(soyaml.so_yaml_postgres, 'purge_yaml',
return_value=(False, 'pg purge failed: x')):
with patch('sys.stderr', new=StringIO()):
rc = soyaml.purgeFile(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
self.assertEqual(rc, 1)
+320
View File
@@ -0,0 +1,320 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
"""
so_yaml_postgres — Postgres-backed dual-write helpers for so-yaml.py.
so-yaml.py writes YAML pillar files on disk; this module mirrors those
writes into so_pillar.* in so-postgres so ext_pillar and the SOC
PostgresConfigstore see the same data. During the postsalt transition
disk is canonical; PG writes are best-effort and never fail the disk
operation.
Connection: shells out to `docker exec so-postgres psql -U postgres -d
securityonion`. Same pattern so-pillar-import uses; avoids needing a
separate DSN config at install time. Performance is fine because so-yaml
is invoked from infrequent code paths (setup scripts, so-minion,
so-firewall); SOC's hot path uses the in-process pgxpool in
PostgresConfigstore, not so-yaml.
Path-to-row mapping mirrors PostgresConfigstore.locateSetting in
securityonion-soc:
/opt/so/saltstack/local/pillar/<section>/soc_<section>.sls
-> scope=global, pillar_path=<section>.soc_<section>
/opt/so/saltstack/local/pillar/<section>/adv_<section>.sls
-> scope=global, pillar_path=<section>.adv_<section>
/opt/so/saltstack/local/pillar/minions/<id>.sls
-> scope=minion, minion_id=<id>, pillar_path=minions.<id>
/opt/so/saltstack/local/pillar/minions/adv_<id>.sls
-> scope=minion, minion_id=<id>, pillar_path=minions.adv_<id>
Files outside that mapping (notably secrets.sls, postgres/auth.sls,
elasticsearch/nodes.sls, etc.) are skipped — they stay disk-only forever
or render dynamically and don't belong in PG.
"""
import json
import os
import shlex
import subprocess
import sys
DOCKER_CONTAINER = os.environ.get("SO_PILLAR_PG_CONTAINER", "so-postgres")
PG_DATABASE = os.environ.get("SO_PILLAR_PG_DATABASE", "securityonion")
PG_USER = os.environ.get("SO_PILLAR_PG_USER", "postgres")
# File paths whose mutations stay disk-only forever. Mirrors EXCLUDE_*
# in so-pillar-import.
DISK_ONLY_PATHS = (
"/opt/so/saltstack/local/pillar/secrets.sls",
"/opt/so/saltstack/local/pillar/postgres/auth.sls",
"/opt/so/saltstack/local/pillar/elasticsearch/auth.sls",
"/opt/so/saltstack/local/pillar/kibana/secrets.sls",
)
DISK_ONLY_FRAGMENTS = (
"/elasticsearch/nodes.sls",
"/redis/nodes.sls",
"/kafka/nodes.sls",
"/hypervisor/nodes.sls",
"/logstash/nodes.sls",
"/node_data/ips.sls",
"/top.sls",
)
class SkipPath(Exception):
"""Raised when a file path is intentionally not mirrored to PG."""
def is_enabled():
"""Public alias for callers that want to probe PG reachability without
relying on a leading-underscore private name."""
return _is_enabled()
def _is_enabled():
"""PG dual-write only fires if so-postgres is reachable. Cheap probe.
Returns True when docker exec succeeds, False otherwise. We never
want a PG hiccup to fail a disk write on a manager whose Postgres is
momentarily unreachable."""
try:
proc = subprocess.run(
["docker", "exec", DOCKER_CONTAINER,
"pg_isready", "-h", "127.0.0.1", "-U", PG_USER, "-q"],
capture_output=True, timeout=5, check=False,
)
return proc.returncode == 0
except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
return False
def locate(path):
"""Translate a so-yaml file path to (scope, role_name, minion_id, pillar_path).
Raises SkipPath when the file is not part of the PG-managed surface."""
norm = os.path.normpath(path)
if norm in DISK_ONLY_PATHS:
raise SkipPath(f"{path}: explicit disk-only allowlist")
for frag in DISK_ONLY_FRAGMENTS:
if frag in norm:
raise SkipPath(f"{path}: matches disk-only fragment {frag}")
parent = os.path.basename(os.path.dirname(norm))
grandparent = os.path.basename(os.path.dirname(os.path.dirname(norm)))
name = os.path.basename(norm)
if not name.endswith(".sls"):
raise SkipPath(f"{path}: not a .sls file")
stem = name[:-4]
if parent == "minions":
if stem.startswith("adv_"):
mid = stem[4:]
return ("minion", None, mid, f"minions.adv_{mid}")
return ("minion", None, stem, f"minions.{stem}")
# /local/pillar/<section>/<file>.sls
if grandparent == "pillar" and parent and parent != "":
if stem.startswith("soc_") or stem.startswith("adv_"):
return ("global", None, None, f"{parent}.{stem}")
raise SkipPath(f"{path}: <section>/{stem}.sls is not a soc_/adv_ file")
raise SkipPath(f"{path}: unrecognised pillar layout")
def _pg_str(s):
if s is None:
return "NULL"
return "'" + str(s).replace("'", "''") + "'"
def _docker_psql(sql):
"""Run sql via docker exec ... psql. Returns stdout. Caller catches
exceptions and downgrades to a warning."""
proc = subprocess.run(
["docker", "exec", "-i", DOCKER_CONTAINER,
"psql", "-U", PG_USER, "-d", PG_DATABASE,
"-tA", "-q", "-v", "ON_ERROR_STOP=1"],
input=sql.encode(), capture_output=True, check=False, timeout=30,
)
if proc.returncode != 0:
raise RuntimeError(proc.stderr.decode(errors="replace") or
f"docker exec psql exit {proc.returncode}")
return proc.stdout.decode(errors="replace")
def _conflict_target(scope):
if scope == "global":
return "(pillar_path) WHERE scope='global'"
if scope == "role":
return "(role_name, pillar_path) WHERE scope='role'"
if scope == "minion":
return "(minion_id, pillar_path) WHERE scope='minion'"
raise ValueError(f"unknown scope {scope!r}")
def is_pg_managed(path):
"""True if this path maps to a so_pillar.* row (locate() succeeds).
Bootstrap and mine-driven files return False — they always live on
disk regardless of so-yaml's backend mode."""
try:
locate(path)
return True
except SkipPath:
return False
def read_yaml(path):
"""Return the content dict stored in so_pillar.pillar_entry for `path`,
or None when no row exists. Raises SkipPath when `path` is not part of
the PG-managed surface (caller should read disk in that case).
Used by so-yaml.py PG-canonical mode so `replace`, `get`, etc. resolve
against the database rather than a stale (or absent) disk file."""
if not _is_enabled():
return None
scope, role, minion_id, pillar_path = locate(path)
if scope == "minion":
sql = ("SELECT data FROM so_pillar.pillar_entry "
"WHERE scope='minion' "
f"AND minion_id={_pg_str(minion_id)} "
f"AND pillar_path={_pg_str(pillar_path)}")
elif scope == "role":
sql = ("SELECT data FROM so_pillar.pillar_entry "
"WHERE scope='role' "
f"AND role_name={_pg_str(role)} "
f"AND pillar_path={_pg_str(pillar_path)}")
else:
sql = ("SELECT data FROM so_pillar.pillar_entry "
"WHERE scope='global' "
f"AND pillar_path={_pg_str(pillar_path)}")
try:
out = _docker_psql(sql).strip()
except Exception:
return None
if not out:
return None
try:
return json.loads(out)
except (ValueError, TypeError):
return None
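# For example, read_yaml("/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
# issues roughly:
#   SELECT data FROM so_pillar.pillar_entry
#   WHERE scope='global' AND pillar_path='soc.soc_soc'
# and json-decodes the single-row output; a missing row, unreachable PG, or
# unparsable payload all come back as None.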
def write_yaml(path, content_dict, *, reason="so-yaml dual-write"):
"""Mirror the disk write at `path` (whose content was just rendered as
`content_dict`) into so_pillar.pillar_entry. Best-effort: any failure
is swallowed so the caller (so-yaml.py) does not see it as a fatal."""
if not _is_enabled():
return False, "postgres unreachable"
try:
scope, role, minion_id, pillar_path = locate(path)
except SkipPath as e:
return False, str(e)
data_json = json.dumps(content_dict if content_dict is not None else {})
role_sql = _pg_str(role)
minion_sql = _pg_str(minion_id)
reason_sql = _pg_str(reason)
conflict = _conflict_target(scope)
sql_parts = []
if scope == "minion":
# FK requires the minion row before pillar_entry can reference it.
sql_parts.append(
f"INSERT INTO so_pillar.minion (minion_id) VALUES ({minion_sql}) "
"ON CONFLICT (minion_id) DO NOTHING;"
)
sql_parts.append(
"BEGIN;\n"
f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);\n"
"INSERT INTO so_pillar.pillar_entry "
"(scope, role_name, minion_id, pillar_path, data, change_reason) "
f"VALUES ({_pg_str(scope)}, {role_sql}, {minion_sql}, "
f"{_pg_str(pillar_path)}, {_pg_str(data_json)}::jsonb, {reason_sql}) "
f"ON CONFLICT {conflict} DO UPDATE "
"SET data = EXCLUDED.data, change_reason = EXCLUDED.change_reason;\n"
"COMMIT;\n"
)
try:
_docker_psql("\n".join(sql_parts))
except Exception as e:
return False, f"pg write failed: {e}"
return True, "ok"
def purge_yaml(path, *, reason="so-yaml purge"):
"""Mirror the disk file deletion at `path` by deleting the matching
pillar_entry rows. For minion files also deletes the so_pillar.minion
row (CASCADE removes pillar_entry + role_member rows)."""
if not _is_enabled():
return False, "postgres unreachable"
try:
scope, role, minion_id, pillar_path = locate(path)
except SkipPath as e:
return False, str(e)
reason_sql = _pg_str(reason)
parts = ["BEGIN;",
f"SELECT set_config('so_pillar.change_reason', {reason_sql}, true);"]
if scope == "minion":
# If both <id>.sls and adv_<id>.sls are gone the trigger / CASCADE
# cleans up role_member; otherwise we just remove this one row.
parts.append(
f"DELETE FROM so_pillar.pillar_entry "
f"WHERE scope='minion' AND minion_id={_pg_str(minion_id)} "
f"AND pillar_path={_pg_str(pillar_path)};"
)
parts.append(
f"DELETE FROM so_pillar.minion WHERE minion_id={_pg_str(minion_id)} "
"AND NOT EXISTS (SELECT 1 FROM so_pillar.pillar_entry "
f"WHERE minion_id={_pg_str(minion_id)});"
)
else:
parts.append(
f"DELETE FROM so_pillar.pillar_entry "
f"WHERE scope={_pg_str(scope)} AND pillar_path={_pg_str(pillar_path)};"
)
parts.append("COMMIT;")
try:
_docker_psql("\n".join(parts))
except Exception as e:
return False, f"pg purge failed: {e}"
return True, "ok"
# CLI for diagnostics. Not exercised by so-yaml.py itself.
def _main(argv):
import argparse
ap = argparse.ArgumentParser()
ap.add_argument("op", choices=("locate", "ping"))
ap.add_argument("path", nargs="?")
args = ap.parse_args(argv)
if args.op == "ping":
ok = _is_enabled()
print("ok" if ok else "unreachable")
return 0 if ok else 1
if args.op == "locate":
if not args.path:
ap.error("locate requires PATH")
try:
scope, role, minion_id, pillar_path = locate(args.path)
print(f"scope={scope} role={role} minion_id={minion_id} pillar_path={pillar_path}")
return 0
except SkipPath as e:
print(f"SKIP: {e}", file=sys.stderr)
return 2
return 1
if __name__ == "__main__":
sys.exit(_main(sys.argv[1:]))
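# Diagnostic usage (run directly; so-yaml.py never calls _main):
#   python3 so_yaml_postgres.py ping
#       -> prints "ok" or "unreachable", exits 0/1
#   python3 so_yaml_postgres.py locate /opt/so/saltstack/local/pillar/minions/h1_sensor.sls
#       -> prints "scope=minion role=None minion_id=h1_sensor pillar_path=minions.h1_sensor"
#   paths outside the managed surface print "SKIP: ..." and exit 2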
+251 -47
View File
@@ -24,6 +24,14 @@ BACKUPTOPFILE=/opt/so/saltstack/default/salt/top.sls.backup
SALTUPGRADED=false
SALT_CLOUD_INSTALLED=false
SALT_CLOUD_CONFIGURED=false
# Check if salt-cloud is installed
if rpm -q salt-cloud &>/dev/null; then
SALT_CLOUD_INSTALLED=true
fi
# Check if salt-cloud is configured
if [[ -f /etc/salt/cloud.profiles.d/socloud.conf ]]; then
SALT_CLOUD_CONFIGURED=true
fi
# used to display messages to the user at the end of soup
declare -a FINAL_MESSAGE_QUEUE=()
@@ -305,7 +313,7 @@ clone_to_tmp() {
# Make a temp location for the files
mkdir -p /tmp/sogh
cd /tmp/sogh
SOUP_BRANCH="-b 2.4/main"
SOUP_BRANCH="-b 3/main"
if [ -n "$BRANCH" ]; then
SOUP_BRANCH="-b $BRANCH"
fi
@@ -363,6 +371,7 @@ preupgrade_changes() {
echo "Checking to see if changes are needed."
[[ "$INSTALLEDVERSION" =~ ^2\.4\.21[0-9]+$ ]] && up_to_3.0.0
[[ "$INSTALLEDVERSION" == "3.0.0" ]] && up_to_3.1.0
true
}
@@ -371,6 +380,7 @@ postupgrade_changes() {
echo "Running post upgrade processes."
[[ "$POSTVERSION" =~ ^2\.4\.21[0-9]+$ ]] && post_to_3.0.0
[[ "$POSTVERSION" == "3.0.0" ]] && post_to_3.1.0
true
}
@@ -445,7 +455,6 @@ migrate_pcap_to_suricata() {
}
up_to_3.0.0() {
determine_elastic_agent_upgrade
migrate_pcap_to_suricata
INSTALLEDVERSION=3.0.0
@@ -469,6 +478,217 @@ post_to_3.0.0() {
### 3.0.0 End ###
### 3.1.0 Scripts ###
elasticsearch_backup_index_templates() {
echo "Backing up current elasticsearch index templates in /opt/so/conf/elasticsearch/templates/index/ to /nsm/backup/3.0.0_elasticsearch_index_templates.tar.gz"
tar -czf /nsm/backup/3.0.0_elasticsearch_index_templates.tar.gz -C /opt/so/conf/elasticsearch/templates/index/ .
}
elasticfleet_set_agent_logging_level_warn() {
. /usr/sbin/so-elastic-fleet-common
local current_agent_policies
if ! current_agent_policies=$(fleet_api "agent_policies?perPage=1000"); then
echo "Warning: unable to retrieve Fleet agent policies"
return 0
fi
# Only update policies that ship as Security Onion defaults and do not already have any user-configured advanced_settings.
local policies_to_update
policies_to_update=$(jq -c '
.items[]
| select(has("advanced_settings") | not)
| select(
.id == "so-grid-nodes_general"
or .id == "so-grid-nodes_heavy"
or .id == "endpoints-initial"
or (.id | startswith("FleetServer_"))
)
' <<< "$current_agent_policies")
if [[ -z "$policies_to_update" ]]; then
return 0
fi
while IFS= read -r policy; do
[[ -z "$policy" ]] && continue
local policy_id policy_name policy_namespace
policy_id=$(jq -r '.id' <<< "$policy")
policy_name=$(jq -r '.name' <<< "$policy")
policy_namespace=$(jq -r '.namespace' <<< "$policy")
local update_logging
update_logging=$(jq -n \
--arg name "$policy_name" \
--arg namespace "$policy_namespace" \
'{name: $name, namespace: $namespace, advanced_settings: {agent_logging_level: "warning"}}'
)
echo "Setting elastic agent_logging_level to warning on policy '$policy_name' ($policy_id)."
if ! fleet_api "agent_policies/$policy_id" -XPUT -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$update_logging" >/dev/null; then
echo " warning: failed to update agent policy '$policy_name' ($policy_id)" >&2
fi
done <<< "$policies_to_update"
}
check_transform_health_and_reauthorize() {
. /usr/sbin/so-elastic-fleet-common
echo "Checking integration transform jobs for unhealthy / unauthorized status..."
local transforms_doc stats_doc installed_doc
if ! transforms_doc=$(so-elasticsearch-query "_transform/_all?size=1000" --fail --retry 3 --retry-delay 5 2>/dev/null); then
echo "Unable to query for transform jobs, skipping reauthorization."
return 0
fi
if ! stats_doc=$(so-elasticsearch-query "_transform/_all/_stats?size=1000" --fail --retry 3 --retry-delay 5 2>/dev/null); then
echo "Unable to query for transform job stats, skipping reauthorization."
return 0
fi
if ! installed_doc=$(fleet_api "epm/packages/installed?perPage=500"); then
echo "Unable to list installed Fleet packages, skipping reauthorization."
return 0
fi
# Get all transforms that meet the following
# - unhealthy (any non-green health status)
# - metadata has run_as_kibana_system: false (this fix is specific to transforms started prior to Kibana 9.3.3)
# - are not orphaned (integration is not somehow missing/corrupt/uninstalled)
local unhealthy_transforms
unhealthy_transforms=$(jq -c -n \
--argjson t "$transforms_doc" \
--argjson s "$stats_doc" \
--argjson i "$installed_doc" '
($i.items | map({key: .name, value: .version}) | from_entries) as $pkg_ver
| ($s.transforms | map({key: .id, value: .health.status}) | from_entries) as $health
| [ $t.transforms[]
| select(._meta.run_as_kibana_system == false)
| select(($health[.id] // "unknown") != "green")
| {id, pkg: ._meta.package.name, ver: ($pkg_ver[._meta.package.name])}
]
| if length == 0 then empty else . end
| (map(select(.ver == null)) | map({orphan: .id})[]),
(map(select(.ver != null))
| group_by(.pkg)
| map({pkg: .[0].pkg, ver: .[0].ver, transformIds: map(.id)})[])
')
if [[ -z "$unhealthy_transforms" ]]; then
return 0
fi
local unhealthy_count
unhealthy_count=$(jq -s '[.[].transformIds? // empty | .[]] | length' <<< "$unhealthy_transforms")
echo "Found $unhealthy_count transform(s) needing reauthorization."
local total_failures=0
while IFS= read -r transform; do
[[ -z "$transform" ]] && continue
if jq -e 'has("orphan")' <<< "$transform" >/dev/null 2>&1; then
echo "Skipping transform not owned by any installed Fleet package: $(jq -r '.orphan' <<< "$transform")"
continue
fi
local pkg ver body resp
pkg=$(jq -r '.pkg' <<< "$transform")
ver=$(jq -r '.ver' <<< "$transform")
body=$(jq -c '{transforms: (.transformIds | map({transformId: .}))}' <<< "$transform")
echo "Reauthorizing transform(s) for ${pkg}-${ver}..."
resp=$(fleet_api "epm/packages/${pkg}/${ver}/transforms/authorize" \
-XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
-d "$body") || { echo "Could not reauthorize transform(s) for ${pkg}-${ver}"; continue; }
(( total_failures += $(jq 'map(select(.success != true)) | length' <<< "$resp" 2>/dev/null) ))
done <<< "$unhealthy_transforms"
if [[ "$total_failures" -gt 0 ]]; then
echo "Some transform(s) failed to reauthorize."
fi
}
ensure_postgres_local_pillar() {
# Postgres was added as a service after 3.0.0, so the new pillar/top.sls
# references postgres.soc_postgres / postgres.adv_postgres unconditionally.
# Managers upgrading from 3.0.0 have no /opt/so/saltstack/local/pillar/postgres/
# (make_some_dirs only runs at install time), so the stubs must be created
# here before salt-master restarts against the new top.sls.
echo "Ensuring postgres local pillar stubs exist."
local dir=/opt/so/saltstack/local/pillar/postgres
mkdir -p "$dir"
[[ -f "$dir/soc_postgres.sls" ]] || touch "$dir/soc_postgres.sls"
[[ -f "$dir/adv_postgres.sls" ]] || touch "$dir/adv_postgres.sls"
chown -R socore:socore "$dir"
}
ensure_postgres_secret() {
# On a fresh install, generate_passwords + secrets_pillar seed
# secrets:postgres_pass in /opt/so/saltstack/local/pillar/secrets.sls. That
# code path is skipped on upgrade (secrets.sls already exists from 3.0.0
# with import_pass/influx_pass but no postgres_pass), so the postgres
# container's POSTGRES_PASSWORD_FILE and SOC's PG_ADMIN_PASS would be empty
# after highstate. Generate one now if missing.
local secrets_file=/opt/so/saltstack/local/pillar/secrets.sls
if [[ ! -f "$secrets_file" ]]; then
echo "WARNING: $secrets_file missing; skipping postgres_pass backfill."
return 0
fi
if so-yaml.py get -r "$secrets_file" secrets.postgres_pass >/dev/null 2>&1; then
echo "secrets.postgres_pass already set; leaving as-is."
return 0
fi
echo "Seeding secrets.postgres_pass in $secrets_file."
so-yaml.py add "$secrets_file" secrets.postgres_pass "$(get_random_value)"
chown socore:socore "$secrets_file"
}
up_to_3.1.0() {
ensure_postgres_local_pillar
ensure_postgres_secret
determine_elastic_agent_upgrade
elasticsearch_backup_index_templates
# Clear existing component template state file.
rm -f /opt/so/state/esfleet_component_templates.json
INSTALLEDVERSION=3.1.0
}
post_to_3.1.0() {
/usr/sbin/so-kibana-space-defaults
# ensure manager has new version of socloud.conf
if [[ $SALT_CLOUD_CONFIGURED == true ]]; then
salt-call state.apply salt.cloud.config concurrent=True
fi
# Backfill the Telegraf creds pillar for every accepted minion. so-telegraf-cred
# add is idempotent — it no-ops when an entry already exists — so this is safe
# to run on every soup. The subsequent state.apply creates/updates the matching
# Postgres roles from the reconciled pillar.
echo "Reconciling Telegraf Postgres creds for accepted minions."
for mid in $(salt-key --out=json --list=accepted 2>/dev/null | jq -r '.minions[]?' 2>/dev/null); do
[[ -n "$mid" ]] || continue
/usr/sbin/so-telegraf-cred add "$mid" || echo " warning: so-telegraf-cred add $mid failed" >&2
done
# Run through the master (not --local) so state compilation uses the
# master's configured file_roots; the manager's /etc/salt/minion has no
# file_roots of its own and --local would fail with "No matching sls found".
salt-call state.apply postgres.telegraf_users queue=True || true
# Update default agent policies to use logging level warn.
elasticfleet_set_agent_logging_level_warn || true
# Check for unhealthy / unauthorized integration transform jobs and attempt reauthorizations
check_transform_health_and_reauthorize || true
POSTVERSION=3.1.0
}
### 3.1.0 End ###
repo_sync() {
echo "Sync the local repo."
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -636,15 +856,6 @@ upgrade_check_salt() {
upgrade_salt() {
echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
echo ""
# Check if salt-cloud is installed
if rpm -q salt-cloud &>/dev/null; then
SALT_CLOUD_INSTALLED=true
fi
# Check if salt-cloud is configured
if [[ -f /etc/salt/cloud.profiles.d/socloud.conf ]]; then
SALT_CLOUD_CONFIGURED=true
fi
echo "Removing yum versionlock for Salt."
echo ""
yum versionlock delete "salt"
@@ -728,12 +939,12 @@ verify_es_version_compatibility() {
local is_active_intermediate_upgrade=1
# supported upgrade paths for SO-ES versions
declare -A es_upgrade_map=(
["8.18.8"]="9.0.8"
["9.0.8"]="9.3.3"
)
# Elasticsearch MUST upgrade through these versions
declare -A es_to_so_version=(
["8.18.8"]="2.4.190-20251024"
["9.0.8"]="3.0.0-20260331"
)
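# Reading the two maps together (illustrative): a grid still on ES 8.18.8
# that targets ES 9.3.3 cannot jump straight there; it must first soup to
# the SO version that pins ES 9.0.8 (3.0.0-20260331), then soup again to
# reach 9.3.3.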
# Get current Elasticsearch version
@@ -745,26 +956,17 @@ verify_es_version_compatibility() {
exit 160
fi
if ! target_es_version_raw=$(so-yaml.py get $UPDATE_DIR/salt/elasticsearch/defaults.yaml elasticsearch.version); then
# so-yaml.py failed to get the ES version from upgrade versions elasticsearch/defaults.yaml file. Likely they are upgrading to an SO version older than 2.4.110 prior to the ES version pinning and should be OKAY to continue with the upgrade.
if ! target_es_version=$(so-yaml.py get -r $UPDATE_DIR/salt/elasticsearch/defaults.yaml elasticsearch.version); then
echo "Couldn't determine the target Elasticsearch version (post soup version) to ensure compatibility with current Elasticsearch version. Exiting"
# if so-yaml.py failed to get the ES version AND the version we are upgrading to is newer than 2.4.110 then we should bail
if [[ $(cat $UPDATE_DIR/VERSION | cut -d'.' -f3) > 110 ]]; then
echo "Couldn't determine the target Elasticsearch version (post soup version) to ensure compatibility with current Elasticsearch version. Exiting"
exit 160
fi
# allow upgrade to version < 2.4.110 without checking ES version compatibility
return 0
else
target_es_version=$(sed -n '1p' <<< "$target_es_version_raw")
exit 160
fi
for statefile in "${es_required_version_statefile_base}"-*; do
[[ -f $statefile ]] || continue
local es_required_version_statefile_value=$(cat "$statefile")
local es_required_version_statefile_value
es_required_version_statefile_value=$(cat "$statefile")
if [[ "$es_required_version_statefile_value" == "$target_es_version" ]]; then
echo "Intermediate upgrade to ES $target_es_version is in progress. Skipping Elasticsearch version compatibility check."
@@ -773,7 +975,7 @@ verify_es_version_compatibility() {
fi
# use sort to check if es_required_statefile_value is < the current es_version.
if [[ "$(printf '%s\n' $es_required_version_statefile_value $es_version | sort -V | head -n1)" == "$es_required_version_statefile_value" ]]; then
if [[ "$(printf '%s\n' "$es_required_version_statefile_value" "$es_version" | sort -V | head -n1)" == "$es_required_version_statefile_value" ]]; then
rm -f "$statefile"
continue
fi
@@ -784,8 +986,7 @@ verify_es_version_compatibility() {
echo -e "\n##############################################################################################################################\n"
echo "A previously required intermediate Elasticsearch upgrade was detected. Verifying that all Searchnodes/Heavynodes have successfully upgraded Elasticsearch to $es_required_version_statefile_value before proceeding with soup to avoid potential data loss! This command can take up to an hour to complete."
timeout --foreground 4000 bash "$es_verification_script" "$es_required_version_statefile_value" "$statefile"
if [[ $? -ne 0 ]]; then
if ! timeout --foreground 4000 bash "$es_verification_script" "$es_required_version_statefile_value" "$statefile"; then
echo -e "\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n"
echo "A previous required intermediate Elasticsearch upgrade to $es_required_version_statefile_value has yet to successfully complete across the grid. Please allow time for all Searchnodes/Heavynodes to have upgraded Elasticsearch to $es_required_version_statefile_value before running soup again to avoid potential data loss!"
@@ -802,6 +1003,7 @@ verify_es_version_compatibility() {
return 0
fi
# shellcheck disable=SC2076 # Do not want a regex here eg usage " 8.18.8 9.0.8 " =~ " 9.0.8 "
if [[ " ${es_upgrade_map[$es_version]} " =~ " $target_es_version " || "$es_version" == "$target_es_version" ]]; then
# supported upgrade
return 0
@@ -810,7 +1012,7 @@ verify_es_version_compatibility() {
if [[ -z "$compatible_versions" ]]; then
# If current ES version is not explicitly defined in the upgrade map, we know they have an intermediate upgrade to do.
# We default to the lowest ES version defined in es_to_so_version as $first_es_required_version
local first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
first_es_required_version=$(printf '%s\n' "${!es_to_so_version[@]}" | sort -V | head -n1)
next_step_so_version=${es_to_so_version[$first_es_required_version]}
required_es_upgrade_version="$first_es_required_version"
else
@@ -829,7 +1031,7 @@ verify_es_version_compatibility() {
if [[ $is_airgap -eq 0 ]]; then
run_airgap_intermediate_upgrade
else
if [[ ! -z $ISOLOC ]]; then
if [[ -n $ISOLOC ]]; then
originally_requested_iso_location="$ISOLOC"
fi
# Make sure ISOLOC is not set. Network installs that used soup -f would have ISOLOC set.
@@ -861,7 +1063,8 @@ wait_for_salt_minion_with_restart() {
}
run_airgap_intermediate_upgrade() {
local originally_requested_so_version=$(cat $UPDATE_DIR/VERSION)
local originally_requested_so_version
originally_requested_so_version=$(cat "$UPDATE_DIR/VERSION")
# preserve ISOLOC value, so we can try to use it post intermediate upgrade
local originally_requested_iso_location="$ISOLOC"
@@ -873,7 +1076,8 @@ run_airgap_intermediate_upgrade() {
while [[ -z "$next_iso_location" ]] || [[ ! -f "$next_iso_location" && ! -b "$next_iso_location" ]]; do
# List removable devices if any are present
local removable_devices=$(lsblk -no PATH,SIZE,TYPE,MOUNTPOINTS,RM | awk '$NF==1')
local removable_devices
removable_devices=$(lsblk -no PATH,SIZE,TYPE,MOUNTPOINTS,RM | awk '$NF==1')
if [[ -n "$removable_devices" ]]; then
echo "PATH SIZE TYPE MOUNTPOINTS RM"
echo "$removable_devices"
@@ -894,21 +1098,21 @@ run_airgap_intermediate_upgrade() {
echo "Using $next_iso_location for required intermediary upgrade."
exec bash <<EOF
ISOLOC=$next_iso_location soup -y && \
ISOLOC=$next_iso_location soup -y && \
ISOLOC="$next_iso_location" soup -y && \
ISOLOC="$next_iso_location" soup -y && \
echo -e "\n##############################################################################################################################\n" && \
echo -e "Verifying Elasticsearch was successfully upgraded to $required_es_upgrade_version across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n" && \
timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh $required_es_upgrade_version $es_required_version_statefile && \
timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh "$required_es_upgrade_version" "$es_required_version_statefile" && \
echo -e "\n##############################################################################################################################\n" && \
# automatically start the next soup if the original ISO isn't using the same block device we just used
if [[ -n "$originally_requested_iso_location" ]] && [[ "$originally_requested_iso_location" != "$next_iso_location" ]]; then
umount /tmp/soagupdate
ISOLOC=$originally_requested_iso_location soup -y && \
ISOLOC=$originally_requested_iso_location soup -y
ISOLOC="$originally_requested_iso_location" soup -y && \
ISOLOC="$originally_requested_iso_location" soup -y
else
echo "Could not automatically start next soup to $originally_requested_so_version. Soup will now exit here at $(cat /etc/soversion)" && \
@@ -924,29 +1128,29 @@ run_network_intermediate_upgrade() {
if [[ -n "$BRANCH" ]]; then
local originally_requested_so_branch="$BRANCH"
else
local originally_requested_so_branch="2.4/main"
local originally_requested_so_branch="3/main"
fi
echo "Starting automated intermediate upgrade to $next_step_so_version."
echo "After completion, the system will automatically attempt to upgrade to the latest version."
echo -e "\n##############################################################################################################################\n"
exec bash << EOF
BRANCH=$next_step_so_version soup -y && \
BRANCH=$next_step_so_version soup -y && \
BRANCH="$next_step_so_version" soup -y && \
BRANCH="$next_step_so_version" soup -y && \
echo -e "\n##############################################################################################################################\n" && \
echo -e "Verifying Elasticsearch was successfully upgraded to $required_es_upgrade_version across the grid. This part can take a while as Searchnodes/Heavynodes sync up with the Manager! \n\nOnce verification completes the next soup will begin automatically. If verification takes longer than 1 hour it will stop waiting and your grid will remain at $next_step_so_version. Allowing for all Searchnodes/Heavynodes to upgrade Elasticsearch to the required version on their own time.\n" && \
timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh $required_es_upgrade_version $es_required_version_statefile && \
timeout --foreground 4000 bash /tmp/so_intermediate_upgrade_verification.sh "$required_es_upgrade_version" "$es_required_version_statefile" && \
echo -e "\n##############################################################################################################################\n" && \
if [[ -n "$originally_requested_iso_location" ]]; then
# nonairgap soup that used -f originally, runs intermediate upgrade using network + BRANCH, later coming back to the original ISO for the last soup
ISOLOC=$originally_requested_iso_location soup -y && \
ISOLOC=$originally_requested_iso_location soup -y
ISOLOC="$originally_requested_iso_location" soup -y && \
ISOLOC="$originally_requested_iso_location" soup -y
else
BRANCH=$originally_requested_so_branch soup -y && \
BRANCH=$originally_requested_so_branch soup -y
BRANCH="$originally_requested_so_branch" soup -y && \
BRANCH="$originally_requested_so_branch" soup -y
fi
echo -e "\n##############################################################################################################################\n"
EOF
+25
View File
@@ -25,8 +25,33 @@ manager_run_es_soc:
- salt: {{NEWNODE}}_update_mine
{% endif %}
# so-minion has already added the new minion's entry to telegraf/creds.sls
# via so-telegraf-cred before this orch fires. Reconcile the Postgres role
# on the manager so the new minion can authenticate on its first highstate,
# then refresh the minion's pillar so its telegraf.conf renders with the
# freshly-written cred.
manager_create_postgres_telegraf_role:
salt.state:
- tgt: {{ MANAGER }}
- sls:
- postgres.telegraf_users
- queue: True
- require:
- salt: {{NEWNODE}}_update_mine
{{NEWNODE}}_refresh_pillar:
salt.function:
- name: saltutil.refresh_pillar
- tgt: {{ NEWNODE }}
- kwarg:
wait: True
- require:
- salt: manager_create_postgres_telegraf_role
{{NEWNODE}}_run_highstate:
salt.state:
- tgt: {{ NEWNODE }}
- highstate: True
- queue: True
- require:
- salt: {{NEWNODE}}_refresh_pillar
+112
View File
@@ -0,0 +1,112 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Driven by the so_pillar_changed reactor. Translates a so_pillar.pillar_entry
# change into (cache.clear_pillar -> saltutil.refresh_pillar -> state.apply)
# on the appropriate target.
#
# Routing rules live in the DISPATCH map below — one entry per
# (pillar_path prefix) -> (state sls, role grain). Add new services here
# rather than wiring more reactors.
#
# Safe to replay: state.apply is idempotent, so if the pillar value didn't
# actually change anything observable the affected state runs as a no-op.
# Bulk imports and replays are safe.
{% set change = salt['pillar.get']('so_pillar_change', {}) %}
{% set scope = change.get('scope') %}
{% set role = change.get('role_name') %}
{% set minion = change.get('minion_id') %}
{% set changes = change.get('changes', []) %}
{# (pillar_path prefix) -> {sls: <state to apply>, roles: <role grains that run it>}
roles is a list of role grain values (e.g. 'so-sensor'), used to compute
compound targets when the change is global or role-scoped. #}
{% set DISPATCH = {
'suricata.': {'sls': 'suricata.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
'sensor.': {'sls': 'suricata.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
'zeek.': {'sls': 'zeek.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
'stenographer.': {'sls': 'stenographer.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
'pcap.': {'sls': 'pcap.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
'logstash.': {'sls': 'logstash.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-receiver']},
'redis.': {'sls': 'redis.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
'kafka.': {'sls': 'kafka.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-receiver', 'so-searchnode']},
'elasticsearch.': {'sls': 'elasticsearch.config','roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-searchnode', 'so-heavynode', 'so-standalone']},
'kibana.': {'sls': 'kibana.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
'soc.': {'sls': 'soc.config', 'roles': ['so-manager', 'so-managersearch', 'so-managerhype', 'so-standalone']},
'telegraf.': {'sls': 'telegraf.config', 'roles': ['*']},
'fleet.': {'sls': 'fleet.config', 'roles': ['so-fleet']},
'strelka.': {'sls': 'strelka.config', 'roles': ['so-sensor', 'so-heavynode', 'so-standalone']},
} %}
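{# Worked example (hypothetical event): a global-scope change with
   pillar_path 'suricata.config.af-packet.threads' matches the 'suricata.'
   prefix, so the loop below applies sls suricata.config to the compound
   target 'I@role:so-sensor or I@role:so-heavynode or I@role:so-standalone'. #}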
{# Collect a deduplicated set of (sls, target_kind) actions. target_kind is
either 'minion:<id>' (scope=minion) or 'roles:so-x,so-y' (scope=role/global). #}
{% set actions = {} %}
{% for c in changes %}
{% set path = c.get('pillar_path', '') %}
{% for prefix, action in DISPATCH.items() %}
{% if path.startswith(prefix) %}
{% set sls = action['sls'] %}
{% if scope == 'minion' and minion %}
{% set key = sls ~ '|minion|' ~ minion %}
{% set _ = actions.update({key: {'sls': sls, 'tgt': minion, 'tgt_type': 'glob'}}) %}
{% else %}
{% set role_targets = action['roles'] %}
{% if '*' in role_targets %}
{% set tgt = '*' %}
{% set tgt_type = 'glob' %}
{% else %}
{% set tgt = ('I@role:' ~ role_targets|join(' or I@role:')) %}
{% set tgt_type = 'compound' %}
{% endif %}
{% set key = sls ~ '|' ~ tgt %}
{% set _ = actions.update({key: {'sls': sls, 'tgt': tgt, 'tgt_type': tgt_type}}) %}
{% endif %}
{% endif %}
{% endfor %}
{% endfor %}
{% if actions %}
{% for key, action in actions.items() %}
{% set safe_id = loop.index0 | string %}
so_pillar_reload_clear_cache_{{ safe_id }}:
salt.runner:
- name: cache.clear_pillar
- tgt: '{{ action.tgt }}'
- tgt_type: '{{ action.tgt_type }}'
so_pillar_reload_refresh_pillar_{{ safe_id }}:
salt.function:
- name: saltutil.refresh_pillar
- tgt: '{{ action.tgt }}'
- tgt_type: '{{ action.tgt_type }}'
- kwarg:
wait: True
- require:
- salt: so_pillar_reload_clear_cache_{{ safe_id }}
so_pillar_reload_apply_state_{{ safe_id }}:
salt.state:
- tgt: '{{ action.tgt }}'
- tgt_type: '{{ action.tgt_type }}'
- sls:
- {{ action.sls }}
- queue: True
- require:
- salt: so_pillar_reload_refresh_pillar_{{ safe_id }}
{% endfor %}
{% else %}
{# No DISPATCH entry matched. Pillar still gets refreshed so any other states
read fresh values, but no service-specific reload is invoked. #}
so_pillar_reload_unmapped_path_noop:
test.nop
{% do salt.log.info('orch.so_pillar_reload: no dispatch match for %s' % changes) %}
{% endif %}
+37
View File
@@ -0,0 +1,37 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% set DIGITS = "1234567890" %}
{% set LOWERCASE = "qwertyuiopasdfghjklzxcvbnm" %}
{% set UPPERCASE = "QWERTYUIOPASDFGHJKLZXCVBNM" %}
{% set SYMBOLS = "~!@#^&*()-_=+[]|;:,.<>?" %}
{% set CHARS = DIGITS~LOWERCASE~UPPERCASE~SYMBOLS %}
{% set so_postgres_user_pass = salt['pillar.get']('postgres:auth:users:so_postgres_user:pass', salt['random.get_str'](72, chars=CHARS)) %}
# Admin cred only. Per-minion Telegraf creds live in telegraf/creds.sls,
# managed by /usr/sbin/so-telegraf-cred (called from so-minion).
postgres_auth_pillar:
file.managed:
- name: /opt/so/saltstack/local/pillar/postgres/auth.sls
- mode: 640
- reload_pillar: True
- contents: |
postgres:
auth:
users:
so_postgres_user:
user: so_postgres
pass: "{{ so_postgres_user_pass }}"
- show_changes: False
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
+111
View File
@@ -0,0 +1,111 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'postgres/map.jinja' import PGMERGED %}
# Postgres Setup
postgresconfdir:
file.directory:
- name: /opt/so/conf/postgres
- user: 939
- group: 939
- makedirs: True
postgressecretsdir:
file.directory:
- name: /opt/so/conf/postgres/secrets
- user: 939
- group: 939
- mode: 700
- require:
- file: postgresconfdir
postgresdatadir:
file.directory:
- name: /nsm/postgres
- user: 939
- group: 939
- makedirs: True
postgreslogdir:
file.directory:
- name: /opt/so/log/postgres
- user: 939
- group: 939
- makedirs: True
postgresinitdir:
file.directory:
- name: /opt/so/conf/postgres/init
- user: 939
- group: 939
- require:
- file: postgresconfdir
postgresinitusers:
file.managed:
- name: /opt/so/conf/postgres/init/init-users.sh
- source: salt://postgres/files/init-users.sh
- user: 939
- group: 939
- mode: 755
postgresconf:
file.managed:
- name: /opt/so/conf/postgres/postgresql.conf
- source: salt://postgres/files/postgresql.conf.jinja
- user: 939
- group: 939
- template: jinja
- defaults:
PGMERGED: {{ PGMERGED }}
postgreshba:
file.managed:
- name: /opt/so/conf/postgres/pg_hba.conf
- source: salt://postgres/files/pg_hba.conf
- user: 939
- group: 939
- mode: 640
postgres_super_secret:
file.managed:
- name: /opt/so/conf/postgres/secrets/postgres_password
- user: 939
- group: 939
- mode: 600
- contents_pillar: 'secrets:postgres_pass'
- show_changes: False
- require:
- file: postgressecretsdir
postgres_app_secret:
file.managed:
- name: /opt/so/conf/postgres/secrets/so_postgres_pass
- user: 939
- group: 939
- mode: 600
- contents_pillar: 'postgres:auth:users:so_postgres_user:pass'
- show_changes: False
- require:
- file: postgressecretsdir
postgres_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://postgres/tools/sbin
- user: root
- group: root
- file_mode: 755
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
+19
View File
@@ -0,0 +1,19 @@
postgres:
enabled: True
telegraf:
retention_days: 14
config:
listen_addresses: '*'
port: 5432
max_connections: 100
shared_buffers: 256MB
ssl: 'on'
ssl_cert_file: '/conf/postgres.crt'
ssl_key_file: '/conf/postgres.key'
ssl_ca_file: '/conf/ca.crt'
hba_file: '/conf/pg_hba.conf'
log_destination: 'stderr'
logging_collector: 'off'
log_min_messages: 'warning'
shared_preload_libraries: pg_cron
cron.database_name: so_telegraf
+33
View File
@@ -0,0 +1,33 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- postgres.sostatus
so-postgres:
docker_container.absent:
- force: True
so-postgres_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-postgres$
so_postgres_backup:
cron.absent:
- name: /usr/sbin/so-postgres-backup > /dev/null 2>&1
- identifier: so_postgres_backup
- user: root
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
+109
View File
@@ -0,0 +1,109 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKERMERGED %}
{% set SO_POSTGRES_USER = salt['pillar.get']('postgres:auth:users:so_postgres_user:user', 'so_postgres') %}
include:
- postgres.auth
- postgres.ssl
- postgres.config
- postgres.sostatus
- postgres.telegraf_users
so-postgres:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-postgres:{{ GLOBALS.so_version }}
- hostname: so-postgres
- networks:
- sobridge:
- ipv4_address: {{ DOCKERMERGED.containers['so-postgres'].ip }}
- port_bindings:
{% for BINDING in DOCKERMERGED.containers['so-postgres'].port_bindings %}
- {{ BINDING }}
{% endfor %}
- environment:
- POSTGRES_DB=securityonion
# Passwords are delivered via mounted 0600 secret files, not plaintext env vars.
# The upstream postgres image resolves POSTGRES_PASSWORD_FILE; entrypoint.sh and
# init-users.sh resolve SO_POSTGRES_PASS_FILE the same way.
- POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
- SO_POSTGRES_USER={{ SO_POSTGRES_USER }}
- SO_POSTGRES_PASS_FILE=/run/secrets/so_postgres_pass
{% if DOCKERMERGED.containers['so-postgres'].extra_env %}
{% for XTRAENV in DOCKERMERGED.containers['so-postgres'].extra_env %}
- {{ XTRAENV }}
{% endfor %}
{% endif %}
- binds:
- /opt/so/log/postgres/:/log:rw
- /nsm/postgres:/var/lib/postgresql/data:rw
- /opt/so/conf/postgres/postgresql.conf:/conf/postgresql.conf:ro
- /opt/so/conf/postgres/pg_hba.conf:/conf/pg_hba.conf:ro
- /opt/so/conf/postgres/secrets:/run/secrets:ro
- /opt/so/conf/postgres/init/init-users.sh:/docker-entrypoint-initdb.d/init-users.sh:ro
- /etc/pki/postgres.crt:/conf/postgres.crt:ro
- /etc/pki/postgres.key:/conf/postgres.key:ro
- /etc/pki/tls/certs/intca.crt:/conf/ca.crt:ro
{% if DOCKERMERGED.containers['so-postgres'].custom_bind_mounts %}
{% for BIND in DOCKERMERGED.containers['so-postgres'].custom_bind_mounts %}
- {{ BIND }}
{% endfor %}
{% endif %}
{% if DOCKERMERGED.containers['so-postgres'].extra_hosts %}
- extra_hosts:
{% for XTRAHOST in DOCKERMERGED.containers['so-postgres'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
{% if DOCKERMERGED.containers['so-postgres'].ulimits %}
- ulimits:
{% for ULIMIT in DOCKERMERGED.containers['so-postgres'].ulimits %}
- {{ ULIMIT.name }}={{ ULIMIT.soft }}:{{ ULIMIT.hard }}
{% endfor %}
{% endif %}
- watch:
- file: postgresconf
- file: postgreshba
- file: postgresinitusers
- file: postgres_super_secret
- file: postgres_app_secret
- x509: postgres_crt
- x509: postgres_key
- require:
- file: postgresconf
- file: postgreshba
- file: postgresinitusers
- file: postgres_super_secret
- file: postgres_app_secret
- x509: postgres_crt
- x509: postgres_key
delete_so-postgres_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-postgres$
so_postgres_backup:
cron.present:
- name: /usr/sbin/so-postgres-backup > /dev/null 2>&1
- identifier: so_postgres_backup
- user: root
- minute: '5'
- hour: '0'
- daymonth: '*'
- month: '*'
- dayweek: '*'
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
+34
View File
@@ -0,0 +1,34 @@
#!/bin/bash
set -e
# Create or update the application user for SOC platform access.
# This script runs on first database initialization via docker-entrypoint-initdb.d.
# The identifier and password are quoted with format %I/%L inside the DO block,
# so special characters in the generated password are handled safely.
if [ -z "${SO_POSTGRES_PASS:-}" ] && [ -n "${SO_POSTGRES_PASS_FILE:-}" ] && [ -r "$SO_POSTGRES_PASS_FILE" ]; then
SO_POSTGRES_PASS="$(< "$SO_POSTGRES_PASS_FILE")"
fi
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
DO \$\$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '${SO_POSTGRES_USER}') THEN
EXECUTE format('CREATE ROLE %I WITH LOGIN PASSWORD %L', '${SO_POSTGRES_USER}', '${SO_POSTGRES_PASS}');
ELSE
EXECUTE format('ALTER ROLE %I WITH PASSWORD %L', '${SO_POSTGRES_USER}', '${SO_POSTGRES_PASS}');
END IF;
END
\$\$;
GRANT ALL PRIVILEGES ON DATABASE "$POSTGRES_DB" TO "$SO_POSTGRES_USER";
-- Lock the SOC database down at the connect layer; PUBLIC gets CONNECT
-- by default, which would let per-minion telegraf roles open sessions
-- here. They have no schema/table grants inside so reads fail, but
-- revoking CONNECT closes the soft edge entirely.
REVOKE CONNECT ON DATABASE "$POSTGRES_DB" FROM PUBLIC;
GRANT CONNECT ON DATABASE "$POSTGRES_DB" TO "$SO_POSTGRES_USER";
EOSQL
# Bootstrap the Telegraf metrics database. Per-minion roles + schemas are
# reconciled on every state.apply by postgres/telegraf_users.sls; this block
# only ensures the shared database exists on first initialization.
if ! psql -U "$POSTGRES_USER" -tAc "SELECT 1 FROM pg_database WHERE datname='so_telegraf'" | grep -q 1; then
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -c "CREATE DATABASE so_telegraf"
fi
@@ -0,0 +1,16 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Managed by Salt — do not edit by hand.
# Client authentication config: only local (Unix socket) connections and TLS-wrapped TCP
# connections are accepted. Plain-text `host ...` lines are intentionally omitted so a
# misconfigured client with sslmode=disable cannot negotiate a cleartext session.
# Local connections (Unix socket, container-internal) use peer/trust.
local all all trust
# TCP connections MUST use TLS (hostssl) and authenticate with SCRAM.
hostssl all all 0.0.0.0/0 scram-sha-256
hostssl all all ::/0 scram-sha-256
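# Illustrative, not part of the managed config: with only hostssl lines, a
# client that forces cleartext (sslmode=disable) is refused at the HBA layer
# with a "no pg_hba.conf entry ... SSL off" error before authentication even
# starts, while sslmode=require or stricter negotiates TLS first and then
# authenticates via SCRAM, e.g. (hostname below is a hypothetical example):
#   psql "host=so-manager dbname=securityonion user=so_postgres_user sslmode=require"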
@@ -0,0 +1,8 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% for key, value in PGMERGED.config.items() %}
{{ key }} = '{{ value | string | replace("'", "''") }}'
{% endfor %}
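{# Illustrative rendering under hypothetical pillar values — every value is
   emitted single-quoted (postgresql.conf accepts quoted scalars for numeric
   GUCs too) and any embedded single quote is doubled by the replace filter:
     max_connections = '100'
     shared_buffers = '256MB'
     ssl = 'on' #}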
@@ -0,0 +1,124 @@
-- so_pillar schema: queryable, versioned, audited pillar config store.
-- Replaces flat-file Salt pillar consumed via salt.pillar.postgres ext_pillar.
-- Idempotent. Run via salt/postgres/schema_pillar.sls inside the so-postgres container.
CREATE SCHEMA IF NOT EXISTS so_pillar;
CREATE TABLE IF NOT EXISTS so_pillar.scope (
scope_kind text PRIMARY KEY,
precedence int NOT NULL,
description text
);
INSERT INTO so_pillar.scope(scope_kind, precedence, description) VALUES
('global', 100, 'Applies to every minion'),
('role', 200, 'Applies to minions whose minion_id matches a top.sls compound role match'),
('minion', 300, 'Applies only to a single minion (per-minion overlay)')
ON CONFLICT (scope_kind) DO NOTHING;
CREATE TABLE IF NOT EXISTS so_pillar.role (
role_name text PRIMARY KEY,
match_kind text NOT NULL CHECK (match_kind IN ('compound','grain','glob','list')),
match_expr text NOT NULL,
description text
);
CREATE TABLE IF NOT EXISTS so_pillar.minion (
minion_id text PRIMARY KEY,
node_type text,
hostname text,
extra_roles text[] NOT NULL DEFAULT '{}',
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now()
);
CREATE TABLE IF NOT EXISTS so_pillar.role_member (
role_name text NOT NULL REFERENCES so_pillar.role(role_name) ON DELETE CASCADE,
minion_id text NOT NULL REFERENCES so_pillar.minion(minion_id) ON DELETE CASCADE,
source text NOT NULL DEFAULT 'computed' CHECK (source IN ('computed','manual','imported')),
PRIMARY KEY (role_name, minion_id)
);
CREATE INDEX IF NOT EXISTS ix_role_member_minion ON so_pillar.role_member(minion_id);
-- pillar_entry holds the actual data. as_json=True ext_pillar reads `data` directly.
CREATE TABLE IF NOT EXISTS so_pillar.pillar_entry (
id bigserial PRIMARY KEY,
scope text NOT NULL REFERENCES so_pillar.scope(scope_kind),
role_name text REFERENCES so_pillar.role(role_name) ON DELETE CASCADE,
minion_id text REFERENCES so_pillar.minion(minion_id) ON DELETE CASCADE,
pillar_path text NOT NULL,
data jsonb NOT NULL,
is_secret boolean NOT NULL DEFAULT false,
sort_key int NOT NULL DEFAULT 0,
version int NOT NULL DEFAULT 1,
updated_at timestamptz NOT NULL DEFAULT now(),
updated_by text NOT NULL DEFAULT current_user,
change_reason text,
CONSTRAINT pillar_entry_scope_target CHECK (
(scope='global' AND role_name IS NULL AND minion_id IS NULL)
OR (scope='role' AND role_name IS NOT NULL AND minion_id IS NULL)
OR (scope='minion' AND role_name IS NULL AND minion_id IS NOT NULL)
),
-- Reserved namespaces that MUST stay rendered from SLS (mine-driven). Nothing
-- under these prefixes is allowed in the database; the merge logic relies on
-- ext_pillar leaving these subtrees alone.
CONSTRAINT pillar_entry_reserved_paths CHECK (
pillar_path NOT LIKE 'elasticsearch.nodes%'
AND pillar_path NOT LIKE 'redis.nodes%'
AND pillar_path NOT LIKE 'kafka.nodes%'
AND pillar_path NOT LIKE 'hypervisor.nodes%'
AND pillar_path NOT LIKE 'logstash.nodes%'
AND pillar_path NOT LIKE 'node_data.ips%'
)
);
CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_global ON so_pillar.pillar_entry(pillar_path)
WHERE scope = 'global';
CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_role ON so_pillar.pillar_entry(role_name, pillar_path)
WHERE scope = 'role';
CREATE UNIQUE INDEX IF NOT EXISTS ux_pillar_entry_minion ON so_pillar.pillar_entry(minion_id, pillar_path)
WHERE scope = 'minion';
CREATE INDEX IF NOT EXISTS ix_pillar_entry_minion_hot ON so_pillar.pillar_entry(minion_id)
WHERE scope = 'minion';
CREATE INDEX IF NOT EXISTS ix_pillar_entry_role_hot ON so_pillar.pillar_entry(role_name)
WHERE scope = 'role';
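-- Illustrative upsert (hypothetical path and value): the scope CHECK rejects
-- a row naming both role_name and minion_id, and the partial unique index
-- above is the arbiter for global-scope writes.
--   INSERT INTO so_pillar.pillar_entry (scope, pillar_path, data)
--   VALUES ('global', 'soc.config.maxUploadSizeBytes', '26214400'::jsonb)
--   ON CONFLICT (pillar_path) WHERE scope = 'global'
--   DO UPDATE SET data = EXCLUDED.data, change_reason = 'example bump';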
-- Append-only audit log for every change to pillar_entry. No FK to entry so DELETE
-- history survives the row removal.
CREATE TABLE IF NOT EXISTS so_pillar.pillar_entry_history (
history_id bigserial PRIMARY KEY,
entry_id bigint,
op text NOT NULL CHECK (op IN ('INSERT','UPDATE','DELETE')),
scope text NOT NULL,
role_name text,
minion_id text,
pillar_path text NOT NULL,
old_data jsonb,
new_data jsonb,
is_secret boolean,
version int,
changed_at timestamptz NOT NULL DEFAULT now(),
changed_by text NOT NULL DEFAULT current_user,
change_reason text
);
CREATE INDEX IF NOT EXISTS ix_pillar_history_entry ON so_pillar.pillar_entry_history(entry_id, changed_at DESC);
CREATE INDEX IF NOT EXISTS ix_pillar_history_minion ON so_pillar.pillar_entry_history(minion_id, changed_at DESC);
CREATE INDEX IF NOT EXISTS ix_pillar_history_role ON so_pillar.pillar_entry_history(role_name, changed_at DESC);
-- Drift watch — populated by a pg_cron job that re-renders the on-disk SLS files
-- and compares them to pillar_entry. Cleared once cutover completes.
CREATE TABLE IF NOT EXISTS so_pillar.drift_log (
id bigserial PRIMARY KEY,
scope text NOT NULL,
role_name text,
minion_id text,
pillar_path text NOT NULL,
disk_data jsonb,
db_data jsonb,
detected_at timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX IF NOT EXISTS ix_drift_log_detected ON so_pillar.drift_log(detected_at DESC);
@@ -0,0 +1,49 @@
-- Views consumed by the Salt master's salt.pillar.postgres ext_pillar with
-- as_json=True. Each view exposes data ordered by (sort_key, pillar_path) so
-- the deep-merge in ext_pillar resolves precedence deterministically.
--
-- ext_pillar always binds exactly one parameter to the query: (minion_id,).
-- Master-config queries reference these views and add WHERE clauses, e.g.:
-- SELECT data FROM so_pillar.v_pillar_role WHERE minion_id = %s
-- SELECT data FROM so_pillar.v_pillar_minion WHERE minion_id = %s
-- For v_pillar_global the binding is satisfied with `WHERE %s IS NOT NULL`.
CREATE OR REPLACE VIEW so_pillar.v_pillar_global AS
SELECT pillar_path, sort_key, data
FROM so_pillar.pillar_entry
WHERE scope = 'global'
AND is_secret = false
ORDER BY sort_key, pillar_path;
-- Role view exposes minion_id so the master-config WHERE clause can filter to
-- the rows that apply to the requesting minion. JOIN to role_member fans out
-- one row per (role assignment, pillar entry) tuple.
CREATE OR REPLACE VIEW so_pillar.v_pillar_role AS
SELECT rm.minion_id,
pe.role_name,
pe.pillar_path,
pe.sort_key,
pe.data
FROM so_pillar.pillar_entry pe
JOIN so_pillar.role_member rm ON rm.role_name = pe.role_name
WHERE pe.scope = 'role'
AND pe.is_secret = false;
CREATE OR REPLACE VIEW so_pillar.v_pillar_minion AS
SELECT minion_id,
pillar_path,
sort_key,
data
FROM so_pillar.pillar_entry
WHERE scope = 'minion'
AND is_secret = false;
-- Secrets are served by fn_pillar_secrets() in 004_secrets.sql once pgcrypto
-- is available; the v_pillar_secrets placeholder here returns no rows so the
-- initial schema deploy succeeds even on a container that has not yet loaded
-- pgcrypto.
CREATE OR REPLACE VIEW so_pillar.v_pillar_secrets AS
SELECT NULL::text AS minion_id,
NULL::text AS pillar_path,
NULL::int AS sort_key,
'{}'::jsonb AS data
WHERE false;
@@ -0,0 +1,120 @@
-- Audit trigger: every INSERT/UPDATE/DELETE on so_pillar.pillar_entry writes a
-- row to pillar_entry_history. Captures the actor (current_user), reason
-- (passed via SET LOCAL so_pillar.change_reason), and full before/after data.
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_audit() RETURNS trigger
LANGUAGE plpgsql AS $fn$
DECLARE
v_reason text := current_setting('so_pillar.change_reason', true);
BEGIN
IF (TG_OP = 'INSERT') THEN
INSERT INTO so_pillar.pillar_entry_history(
entry_id, op, scope, role_name, minion_id, pillar_path,
old_data, new_data, is_secret, version, changed_by, change_reason)
VALUES (NEW.id, 'INSERT', NEW.scope, NEW.role_name, NEW.minion_id, NEW.pillar_path,
NULL, NEW.data, NEW.is_secret, NEW.version, NEW.updated_by, v_reason);
RETURN NEW;
ELSIF (TG_OP = 'UPDATE') THEN
IF OLD.data IS DISTINCT FROM NEW.data
OR OLD.is_secret IS DISTINCT FROM NEW.is_secret THEN
INSERT INTO so_pillar.pillar_entry_history(
entry_id, op, scope, role_name, minion_id, pillar_path,
old_data, new_data, is_secret, version, changed_by, change_reason)
VALUES (NEW.id, 'UPDATE', NEW.scope, NEW.role_name, NEW.minion_id, NEW.pillar_path,
OLD.data, NEW.data, NEW.is_secret, NEW.version, NEW.updated_by, v_reason);
END IF;
RETURN NEW;
ELSIF (TG_OP = 'DELETE') THEN
INSERT INTO so_pillar.pillar_entry_history(
entry_id, op, scope, role_name, minion_id, pillar_path,
old_data, new_data, is_secret, version, changed_by, change_reason)
VALUES (OLD.id, 'DELETE', OLD.scope, OLD.role_name, OLD.minion_id, OLD.pillar_path,
OLD.data, NULL, OLD.is_secret, OLD.version, current_user, v_reason);
RETURN OLD;
END IF;
RETURN NULL;
END
$fn$;
DROP TRIGGER IF EXISTS pillar_entry_audit ON so_pillar.pillar_entry;
CREATE TRIGGER pillar_entry_audit
AFTER INSERT OR UPDATE OR DELETE ON so_pillar.pillar_entry
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_pillar_entry_audit();
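-- Illustrative usage (hypothetical path and reason): attach a reason to one
-- transaction so the audit row records it alongside the before/after data.
--   BEGIN;
--   SET LOCAL so_pillar.change_reason = 'ticket-1234: raise suricata threads';
--   UPDATE so_pillar.pillar_entry
--      SET data = '8'::jsonb
--    WHERE scope = 'global' AND pillar_path = 'suricata.config.threads';
--   COMMIT;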
-- updated_at + version maintenance: bump version on every UPDATE that changes data.
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_versioning() RETURNS trigger
LANGUAGE plpgsql AS $fn$
BEGIN
IF (TG_OP = 'UPDATE') THEN
IF OLD.data IS DISTINCT FROM NEW.data
OR OLD.is_secret IS DISTINCT FROM NEW.is_secret THEN
NEW.version := OLD.version + 1;
NEW.updated_at := now();
ELSE
NEW.version := OLD.version;
NEW.updated_at := OLD.updated_at;
END IF;
END IF;
RETURN NEW;
END
$fn$;
DROP TRIGGER IF EXISTS pillar_entry_versioning ON so_pillar.pillar_entry;
CREATE TRIGGER pillar_entry_versioning
BEFORE UPDATE ON so_pillar.pillar_entry
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_pillar_entry_versioning();
-- Recompute role_member rows for a minion based on node_type.
-- Compound matchers in pillar/top.sls are pure suffix patterns of the form
-- '*_<rolename>' plus the special multi-role 'manager/managersearch/managerhype'
-- bucket. node_type is lowercased and matched directly against role_name;
-- any additional buckets arrive via the minion's extra_roles array.
CREATE OR REPLACE FUNCTION so_pillar.fn_recompute_role_members(p_minion_id text)
RETURNS void LANGUAGE plpgsql AS $fn$
DECLARE
v_node_type text;
v_extra text[];
v_role text;
BEGIN
SELECT node_type, extra_roles INTO v_node_type, v_extra
FROM so_pillar.minion WHERE minion_id = p_minion_id;
IF v_node_type IS NULL THEN
RETURN;
END IF;
DELETE FROM so_pillar.role_member
WHERE minion_id = p_minion_id AND source = 'computed';
-- Main role from node_type.
IF EXISTS (SELECT 1 FROM so_pillar.role WHERE role_name = lower(v_node_type)) THEN
INSERT INTO so_pillar.role_member(role_name, minion_id, source)
VALUES (lower(v_node_type), p_minion_id, 'computed')
ON CONFLICT DO NOTHING;
END IF;
-- Extra roles supplied by the importer / reactor for compound matchers
-- that need to apply multiple buckets (e.g. managersearch also gets the
-- 'manager' bucket per top.sls line 36 grouping).
FOREACH v_role IN ARRAY COALESCE(v_extra, '{}'::text[]) LOOP
IF EXISTS (SELECT 1 FROM so_pillar.role WHERE role_name = v_role) THEN
INSERT INTO so_pillar.role_member(role_name, minion_id, source)
VALUES (v_role, p_minion_id, 'computed')
ON CONFLICT DO NOTHING;
END IF;
END LOOP;
END
$fn$;
CREATE OR REPLACE FUNCTION so_pillar.fn_minion_after_change() RETURNS trigger
LANGUAGE plpgsql AS $fn$
BEGIN
PERFORM so_pillar.fn_recompute_role_members(COALESCE(NEW.minion_id, OLD.minion_id));
RETURN COALESCE(NEW, OLD);
END
$fn$;
DROP TRIGGER IF EXISTS minion_role_sync ON so_pillar.minion;
CREATE TRIGGER minion_role_sync
AFTER INSERT OR UPDATE OF node_type, extra_roles ON so_pillar.minion
FOR EACH ROW EXECUTE FUNCTION so_pillar.fn_minion_after_change();
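-- Illustrative (hypothetical minion id): a managersearch minion whose
-- importer also supplies the 'manager' bucket ends up in both role buckets
-- via the trigger above.
--   INSERT INTO so_pillar.minion (minion_id, node_type, extra_roles)
--   VALUES ('sosms_managersearch', 'managersearch', '{manager}');
--   SELECT role_name FROM so_pillar.role_member
--    WHERE minion_id = 'sosms_managersearch';
--   -- => two rows: managersearch, manager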
@@ -0,0 +1,130 @@
-- pgcrypto-backed secret storage for pillar_entry rows where is_secret = true.
-- The plaintext value is encrypted with a symmetric key held in a server-side
-- GUC (so_pillar.master_key) which is set per-role via ALTER ROLE so the key
-- never touches a flat file readable by Salt itself.
CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA public;
-- Encrypt a JSONB value using the configured master key. Stored as a JSONB
-- envelope {"_enc": "<armored ciphertext>"} so the same column type is reused.
CREATE OR REPLACE FUNCTION so_pillar.fn_encrypt_jsonb(p_value jsonb)
RETURNS jsonb LANGUAGE plpgsql AS $fn$
DECLARE
v_key text := current_setting('so_pillar.master_key', true);
BEGIN
IF v_key IS NULL OR v_key = '' THEN
RAISE EXCEPTION 'so_pillar.master_key GUC not configured';
END IF;
RETURN jsonb_build_object(
'_enc',
encode(pgp_sym_encrypt(p_value::text, v_key), 'base64')
);
END
$fn$;
-- Decrypt the envelope produced by fn_encrypt_jsonb. SECURITY DEFINER so callers
-- with no direct access to pgcrypto/master_key can still pull plaintext via the
-- v_pillar_secrets view.
CREATE OR REPLACE FUNCTION so_pillar.fn_decrypt_jsonb(p_envelope jsonb)
RETURNS jsonb LANGUAGE plpgsql SECURITY DEFINER AS $fn$
DECLARE
v_key text := current_setting('so_pillar.master_key', true);
v_ct text;
BEGIN
IF v_key IS NULL OR v_key = '' THEN
RAISE EXCEPTION 'so_pillar.master_key GUC not configured';
END IF;
v_ct := p_envelope->>'_enc';
IF v_ct IS NULL THEN
RETURN p_envelope; -- not encrypted; pass through
END IF;
RETURN pgp_sym_decrypt(decode(v_ct, 'base64'), v_key)::jsonb;
END
$fn$;
REVOKE ALL ON FUNCTION so_pillar.fn_decrypt_jsonb(jsonb) FROM PUBLIC;
-- Secrets reader consumed by ext_pillar. Decrypts at the boundary so Salt
-- sees plaintext JSONB, and filters rows to those that apply to the
-- requesting minion. A table function stands in for a view here: views
-- can't accept parameters, and ext_pillar binds exactly one parameter
-- (minion_id) per query, so that parameter goes straight into
-- fn_pillar_secrets.
--
-- Master-config query: SELECT data FROM so_pillar.fn_pillar_secrets(%s) AS s
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_secrets(p_minion_id text)
RETURNS TABLE(data jsonb)
LANGUAGE sql STABLE SECURITY DEFINER AS $fn$
SELECT so_pillar.fn_decrypt_jsonb(pe.data)
FROM so_pillar.pillar_entry pe
WHERE pe.is_secret = true
AND ( pe.scope = 'global'
OR (pe.scope = 'role'
AND pe.role_name IN (
SELECT role_name FROM so_pillar.role_member
WHERE minion_id = p_minion_id))
OR (pe.scope = 'minion' AND pe.minion_id = p_minion_id))
ORDER BY pe.sort_key, pe.pillar_path;
$fn$;
-- The placeholder view from 002 can't be parameterised, so it is recreated
-- only as a deprecated stub; fn_pillar_secrets above is the real interface.
DROP VIEW IF EXISTS so_pillar.v_pillar_secrets;
CREATE OR REPLACE VIEW so_pillar.v_pillar_secrets AS
SELECT NULL::text AS minion_id,
NULL::text AS pillar_path,
NULL::int AS sort_key,
'{}'::jsonb AS data
WHERE false;
COMMENT ON VIEW so_pillar.v_pillar_secrets IS
'Deprecated placeholder; use SELECT data FROM so_pillar.fn_pillar_secrets(minion_id) instead';
-- Convenience helper for so-yaml.py and the importer to set a secret without
-- ever exposing the master_key to the caller. SECURITY DEFINER means the
-- caller does not need read access to so_pillar.master_key.
CREATE OR REPLACE FUNCTION so_pillar.fn_set_secret(
p_scope text,
p_role_name text,
p_minion_id text,
p_pillar_path text,
p_value jsonb,
p_change_reason text DEFAULT NULL
) RETURNS bigint LANGUAGE plpgsql SECURITY DEFINER AS $fn$
DECLARE
v_envelope jsonb := so_pillar.fn_encrypt_jsonb(p_value);
v_id bigint;
BEGIN
PERFORM set_config('so_pillar.change_reason',
COALESCE(p_change_reason, 'fn_set_secret'),
true);
-- Update-then-insert: ON CONFLICT can only name one arbiter, and the three
-- scopes are backed by three different partial unique indexes, so a plain
-- upsert would raise unique_violation for an existing role/minion-scope
-- secret instead of falling through to an UPDATE.
UPDATE so_pillar.pillar_entry
SET data = v_envelope, is_secret = true, change_reason = p_change_reason
WHERE scope = p_scope
AND COALESCE(role_name,'') = COALESCE(p_role_name,'')
AND COALESCE(minion_id,'') = COALESCE(p_minion_id,'')
AND pillar_path = p_pillar_path
RETURNING id INTO v_id;
IF v_id IS NULL THEN
INSERT INTO so_pillar.pillar_entry(
scope, role_name, minion_id, pillar_path, data, is_secret, change_reason)
VALUES (p_scope, p_role_name, p_minion_id, p_pillar_path, v_envelope, true, p_change_reason)
RETURNING id INTO v_id;
END IF;
RETURN v_id;
END
$fn$;
REVOKE ALL ON FUNCTION so_pillar.fn_set_secret(text,text,text,text,jsonb,text) FROM PUBLIC;
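-- Illustrative round-trip (hypothetical path, value, and minion id; assumes
-- the session's login role has so_pillar.master_key set via ALTER ROLE):
-- write an encrypted global secret, then read it back decrypted the way
-- ext_pillar would.
--   SELECT so_pillar.fn_set_secret('global', NULL, NULL,
--          'soc.config.serviceToken', '"s3cr3t"'::jsonb, 'example rotate');
--   SELECT data FROM so_pillar.fn_pillar_secrets('sosms_manager') AS s;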
@@ -0,0 +1,39 @@
-- Seed the so_pillar.role table with the role buckets defined in pillar/top.sls.
-- The match_expr column preserves the original Salt compound expression purely
-- as documentation; PG-side membership is materialised in role_member.
-- Idempotent: ON CONFLICT lets re-application leave existing rows untouched.
INSERT INTO so_pillar.role(role_name, match_kind, match_expr, description) VALUES
('manager', 'compound', '*_manager or *_managersearch or *_managerhype',
'Manager-class node. Includes managersearch and managerhype subtypes.'),
('managersearch', 'compound', '*_managersearch',
'Combined manager + searchnode role.'),
('managerhype', 'compound', '*_managerhype',
'Combined manager + hypervisor role.'),
('sensor', 'compound', '*_sensor',
'Sensor node running zeek/suricata/strelka.'),
('eval', 'compound', '*_eval',
'Single-node evaluation install (manager + sensor + storage on one host).'),
('standalone', 'compound', '*_standalone',
'Single-node production install (no distributed cluster).'),
('heavynode', 'compound', '*_heavynode',
'Distributed manager node carrying logstash + ES.'),
('idh', 'compound', '*_idh',
'Intrusion-detection-honeypot node.'),
('searchnode', 'compound', '*_searchnode',
'Distributed Elasticsearch search node.'),
('receiver', 'compound', '*_receiver',
'Kafka receiver node.'),
('import', 'compound', '*_import',
'Single-node import-only install.'),
('fleet', 'compound', '*_fleet',
'Elastic Fleet server node.'),
('hypervisor', 'compound', '*_hypervisor',
'Hypervisor host (libvirt). Hosts VM minions.'),
('desktop', 'compound', '*_desktop',
'Desktop minion (no firewall/nginx pillars apply).'),
('not_desktop', 'compound', '* and not *_desktop',
'Pseudo-role; matches every minion that is not a desktop. Used for global firewall/nginx.'),
('libvirt', 'grain', 'salt-cloud:driver:libvirt',
'Pseudo-role; matches any minion with grain salt-cloud.driver = libvirt.')
ON CONFLICT (role_name) DO NOTHING;
@@ -0,0 +1,107 @@
-- Roles + Row-Level Security policies for the so_pillar schema.
-- Three roles:
-- so_pillar_master — connected by salt-master ext_pillar. Read-only.
-- RLS forces it to skip is_secret rows; reads
-- encrypted secrets only via fn_pillar_secrets().
-- so_pillar_writer — connected by so-yaml dual-write and the SOC
-- PostgresConfigstore. Read+write on pillar_entry,
-- minion, role_member.
-- so_pillar_secret_owner — owns the master encryption key GUC. Both it and
-- so_pillar_writer hold EXECUTE on fn_set_secret
-- (granted in this file); every other role is
-- locked out by the REVOKE FROM PUBLIC in 004.
--
-- The existing app role so_postgres_user (created by init-users.sh) is granted
-- membership in so_pillar_writer so SOC keeps using its existing connection but
-- inherits pillar-write capability.
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_master') THEN
CREATE ROLE so_pillar_master NOLOGIN;
END IF;
IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_writer') THEN
CREATE ROLE so_pillar_writer NOLOGIN;
END IF;
IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_pillar_secret_owner') THEN
CREATE ROLE so_pillar_secret_owner NOLOGIN;
END IF;
END
$$;
-- USAGE on the schema is the bare minimum needed to reference its tables.
-- CONNECT on the database is needed before the role can establish a session
-- at all (default privileges on a new DB grant CONNECT to PUBLIC, but if the
-- securityonion database is restricted that grant has to be explicit).
-- Password + LOGIN privileges are set later in schema_pillar.sls because
-- the password lives in pillar (secrets:pillar_master_pass) and plain SQL
-- can't substitute pillar values.
GRANT CONNECT ON DATABASE securityonion TO so_pillar_master, so_pillar_writer, so_pillar_secret_owner;
GRANT USAGE ON SCHEMA so_pillar TO so_pillar_master, so_pillar_writer, so_pillar_secret_owner;
-- Read access for ext_pillar through the views only.
GRANT SELECT ON so_pillar.v_pillar_global,
so_pillar.v_pillar_role,
so_pillar.v_pillar_minion
TO so_pillar_master;
GRANT EXECUTE ON FUNCTION so_pillar.fn_pillar_secrets(text) TO so_pillar_master;
-- (change_queue grants live in 008_change_notify.sql alongside the table itself,
-- since the table doesn't exist until 008 runs.)
-- Writer needs CRUD on pillar_entry/minion/role_member plus access to seed tables.
GRANT SELECT, INSERT, UPDATE, DELETE
ON so_pillar.pillar_entry,
so_pillar.minion,
so_pillar.role_member
TO so_pillar_writer;
GRANT SELECT ON so_pillar.role, so_pillar.scope TO so_pillar_writer;
GRANT SELECT, INSERT, UPDATE, DELETE ON so_pillar.drift_log TO so_pillar_writer;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA so_pillar TO so_pillar_writer;
GRANT SELECT ON so_pillar.pillar_entry_history TO so_pillar_writer;
-- Writer and secret owner may both call fn_set_secret directly; its
-- SECURITY DEFINER attribute makes it execute as the function owner.
GRANT EXECUTE ON FUNCTION so_pillar.fn_set_secret(text,text,text,text,jsonb,text)
TO so_pillar_writer, so_pillar_secret_owner;
-- so_postgres_user (SOC's existing app user, created by init-users.sh) inherits
-- writer privilege so the PostgresConfigstore in SOC can mutate pillars without
-- a second connection pool. Role inheritance is PostgreSQL's default
-- (NOINHERIT must be requested explicitly), so this just works.
DO $$
BEGIN
IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = current_setting('so_pillar.app_role', true))
THEN
EXECUTE format('GRANT so_pillar_writer TO %I',
current_setting('so_pillar.app_role', true));
ELSIF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'so_postgres_user') THEN
GRANT so_pillar_writer TO so_postgres_user;
END IF;
END
$$;
-- RLS on pillar_entry: master sees only non-secret rows. Writer sees all
-- (it must, to UPDATE secret rows when so-yaml replaces them). Secret rows
-- still require fn_decrypt_jsonb to read plaintext.
ALTER TABLE so_pillar.pillar_entry ENABLE ROW LEVEL SECURITY;
ALTER TABLE so_pillar.pillar_entry FORCE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS pillar_entry_master_read ON so_pillar.pillar_entry;
DROP POLICY IF EXISTS pillar_entry_writer_all ON so_pillar.pillar_entry;
DROP POLICY IF EXISTS pillar_entry_owner_all ON so_pillar.pillar_entry;
CREATE POLICY pillar_entry_master_read ON so_pillar.pillar_entry
FOR SELECT TO so_pillar_master
USING (NOT is_secret);
CREATE POLICY pillar_entry_writer_all ON so_pillar.pillar_entry
FOR ALL TO so_pillar_writer
USING (true)
WITH CHECK (true);
CREATE POLICY pillar_entry_owner_all ON so_pillar.pillar_entry
FOR ALL TO so_pillar_secret_owner
USING (true)
WITH CHECK (true);
-- minion / role_member do not need RLS — they hold no secrets.
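-- Illustrative check: a session running as so_pillar_master reads only
-- through the granted views, which expose non-secret rows; FORCE RLS applies
-- the policies even to the table owner (superusers still bypass).
--   SET ROLE so_pillar_master;
--   SELECT count(*) FROM so_pillar.v_pillar_global;  -- non-secret global rows
--   RESET ROLE;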
@@ -0,0 +1,43 @@
-- Drift detection + retention via pg_cron. Optional — the schema_pillar.sls
-- state guards this file behind the postgres:so_pillar:drift_check_enabled
-- pillar flag because pg_cron may not be loaded on every install.
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Retention: trim pillar_entry_history older than a year. Adjustable via the
-- so_pillar.history_retention_days GUC (default 365 if unset).
CREATE OR REPLACE FUNCTION so_pillar.fn_history_retain()
RETURNS void LANGUAGE plpgsql AS $fn$
DECLARE
v_days int := COALESCE(current_setting('so_pillar.history_retention_days', true)::int, 365);
BEGIN
DELETE FROM so_pillar.pillar_entry_history
WHERE changed_at < (now() - (v_days::text || ' days')::interval);
END
$fn$;
-- Drift retention: keep two weeks of drift_log.
CREATE OR REPLACE FUNCTION so_pillar.fn_drift_retain()
RETURNS void LANGUAGE plpgsql AS $fn$
BEGIN
DELETE FROM so_pillar.drift_log
WHERE detected_at < (now() - interval '14 days');
END
$fn$;
-- pg_cron schedules (idempotent — unschedule any existing same-named job first).
DO $$
DECLARE
v_jobid bigint;
BEGIN
SELECT jobid INTO v_jobid FROM cron.job WHERE jobname = 'so_pillar_history_retain';
IF v_jobid IS NOT NULL THEN PERFORM cron.unschedule(v_jobid); END IF;
PERFORM cron.schedule('so_pillar_history_retain', '15 3 * * *',
'SELECT so_pillar.fn_history_retain();');
SELECT jobid INTO v_jobid FROM cron.job WHERE jobname = 'so_pillar_drift_retain';
IF v_jobid IS NOT NULL THEN PERFORM cron.unschedule(v_jobid); END IF;
PERFORM cron.schedule('so_pillar_drift_retain', '20 3 * * *',
'SELECT so_pillar.fn_drift_retain();');
END
$$;
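-- Illustrative override (hypothetical value): keep two years of history by
-- setting the GUC that fn_history_retain reads at run time.
--   ALTER DATABASE securityonion SET so_pillar.history_retention_days = '730';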
@@ -0,0 +1,89 @@
-- pg_notify-driven change fan-out for so_pillar.pillar_entry.
--
-- Two layers:
-- 1. so_pillar.change_queue — durable, drained by the salt-master
-- engine. Survives engine downtime,
-- de-duplicated by id, processed once.
-- 2. pg_notify('so_pillar_change') — wakeup signal. Payload is the
-- change_queue row id and locator
-- (no secret data — channels are
-- snoopable by anyone with LISTEN).
--
-- The salt-master engine LISTENs on the channel for low-latency wakeup,
-- then SELECTs unprocessed change_queue rows so a missed notification
-- (engine restart, network blip) self-heals on the next event.
CREATE TABLE IF NOT EXISTS so_pillar.change_queue (
id bigserial PRIMARY KEY,
scope text NOT NULL,
role_name text,
minion_id text,
pillar_path text NOT NULL,
op text NOT NULL CHECK (op IN ('INSERT','UPDATE','DELETE')),
enqueued_at timestamptz NOT NULL DEFAULT now(),
processed_at timestamptz
);
-- Hot index for the engine's drain query.
CREATE INDEX IF NOT EXISTS ix_change_queue_unprocessed
ON so_pillar.change_queue (id)
WHERE processed_at IS NULL;
-- Retention index: pg_cron job in 007 sweeps processed rows older than 7d.
CREATE INDEX IF NOT EXISTS ix_change_queue_processed_at
ON so_pillar.change_queue (processed_at)
WHERE processed_at IS NOT NULL;
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_notify()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
v_row record;
v_id bigint;
BEGIN
IF TG_OP = 'DELETE' THEN
v_row := OLD;
ELSE
v_row := NEW;
END IF;
INSERT INTO so_pillar.change_queue
(scope, role_name, minion_id, pillar_path, op)
VALUES
(v_row.scope, v_row.role_name, v_row.minion_id, v_row.pillar_path, TG_OP)
RETURNING id INTO v_id;
-- Payload is the queue id + locator only. Engine joins back to
-- pillar_entry if it needs the data — keeps secrets off the wire.
PERFORM pg_notify('so_pillar_change', json_build_object(
'queue_id', v_id,
'scope', v_row.scope,
'role_name', v_row.role_name,
'minion_id', v_row.minion_id,
'pillar_path', v_row.pillar_path,
'op', TG_OP
)::text);
RETURN NULL;
END;
$$;
DROP TRIGGER IF EXISTS tg_pillar_entry_notify ON so_pillar.pillar_entry;
CREATE TRIGGER tg_pillar_entry_notify
AFTER INSERT OR UPDATE OR DELETE
ON so_pillar.pillar_entry
FOR EACH ROW
EXECUTE FUNCTION so_pillar.fn_pillar_entry_notify();
-- Role grants on the change_queue table. Lived in 006_rls.sql historically but
-- moved here so the GRANT references resolve — 006 runs before this file does.
-- Engine reads + drains the change queue from the salt-master process. It
-- needs SELECT to find unprocessed rows and UPDATE to mark them processed.
-- The queue contains only locator metadata (no pillar data), so the master
-- role's existing privilege footprint is unchanged in practice.
GRANT SELECT, UPDATE ON so_pillar.change_queue TO so_pillar_master;
GRANT USAGE ON SEQUENCE so_pillar.change_queue_id_seq TO so_pillar_master;
-- Writer needs INSERT (the trigger runs as table owner, so this is just for
-- direct testing / manual replays from psql).
GRANT INSERT ON so_pillar.change_queue TO so_pillar_writer;
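-- Illustrative drain cycle, approximating what the pg_notify_pillar engine
-- is described as doing (the engine code itself lives outside this file):
--   SELECT id, scope, role_name, minion_id, pillar_path, op
--     FROM so_pillar.change_queue
--    WHERE processed_at IS NULL
--    ORDER BY id
--      FOR UPDATE SKIP LOCKED;
--   UPDATE so_pillar.change_queue
--      SET processed_at = now()
--    WHERE id = 42;  -- hypothetical drained row id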
@@ -0,0 +1,14 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'postgres/map.jinja' import PGMERGED %}
include:
{% if PGMERGED.enabled %}
- postgres.enabled
- postgres.schema_pillar
{% else %}
- postgres.disabled
{% endif %}
@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'postgres/defaults.yaml' as PGDEFAULTS %}
{% set PGMERGED = salt['pillar.get']('postgres', PGDEFAULTS.postgres, merge=True) %}
@@ -0,0 +1,187 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if 'postgres' in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
# Deploys the so_pillar schema (tables, views, audit triggers, secrets,
# RLS, pg_cron retention) inside the so-postgres container. Idempotent —
# every CREATE / GRANT is wrapped in IF NOT EXISTS / ON CONFLICT or DO
# blocks so re-running the state is a no-op when the schema is current.
#
# Gated on the postgres:so_pillar:enabled feature flag (default false).
# Flip to true once the postsalt branch is ready to bring ext_pillar live.
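# Example pillar (hypothetical location, e.g. the adv_postgres.sls dropped by
# so-setup postsalt) enabling the feature:
#   postgres:
#     so_pillar:
#       enabled: true
#       drift_check_enabled: false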
include:
- postgres.enabled
{% set so_pillar_enabled = salt['pillar.get']('postgres:so_pillar:enabled', False) %}
{% if so_pillar_enabled %}
{% set drift_enabled = salt['pillar.get']('postgres:so_pillar:drift_check_enabled', False) %}
{% set schema_dir = '/opt/so/saltstack/default/salt/postgres/files/schema/pillar' %}
# Wait for postgres to actually accept TCP connections. Same idiom as
# telegraf_users.sls: the docker_container.running state returns before
# the database is ready on first init.
so_pillar_postgres_wait_ready:
cmd.run:
- name: |
for i in $(seq 1 60); do
if docker exec so-postgres pg_isready -h 127.0.0.1 -U postgres -q 2>/dev/null; then
exit 0
fi
sleep 2
done
echo "so-postgres did not accept TCP connections within 120s" >&2
exit 1
- require:
- docker_container: so-postgres
{% set sql_files = [
'001_schema.sql',
'002_views.sql',
'003_history_trigger.sql',
'004_secrets.sql',
'005_seed_roles.sql',
'006_rls.sql',
] %}
{% if drift_enabled %}
{% do sql_files.append('007_drift_pgcron.sql') %}
{% endif %}
# 008 always applies — pg_notify-driven change fan-out is what the salt-master
# pg_notify_pillar engine consumes. Without it reactor wiring sees no events.
{% do sql_files.append('008_change_notify.sql') %}
{% for sql_file in sql_files %}
so_pillar_apply_{{ sql_file | replace('.', '_') }}:
cmd.run:
- name: |
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion \
< {{ schema_dir }}/{{ sql_file }}
- require:
- cmd: so_pillar_postgres_wait_ready
{% if not loop.first %}
- cmd: so_pillar_apply_{{ sql_files[loop.index0 - 1] | replace('.', '_') }}
{% endif %}
{% endfor %}
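# Manual re-apply for triage mirrors the state's command exactly, e.g.:
#   docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion \
#     < /opt/so/saltstack/default/salt/postgres/files/schema/pillar/001_schema.sql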
# Set the master encryption key GUC on the so_pillar_* roles. The key itself
# is generated by setup/so-functions::secrets_pillar() (extended for postsalt)
# and lives in /opt/so/conf/postgres/so_pillar.key (mode 0400) — never
# rendered into pillar; the value flows into PG via ALTER ROLE so it sits
# only in the server's role catalog.
so_pillar_master_key_configure:
cmd.run:
- name: |
if [ -r /opt/so/conf/postgres/so_pillar.key ]; then
KEY="$(< /opt/so/conf/postgres/so_pillar.key)"
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion <<EOSQL
ALTER ROLE so_pillar_secret_owner SET so_pillar.master_key = '$KEY';
ALTER ROLE so_pillar_master SET so_pillar.master_key = '$KEY';
ALTER ROLE so_pillar_writer SET so_pillar.master_key = '$KEY';
EOSQL
else
echo "so_pillar.key not present yet; setup/so-functions must generate it before schema_pillar.sls" >&2
exit 1
fi
- require:
- cmd: so_pillar_apply_{{ sql_files[-1] | replace('.', '_') }}
# Set login passwords on the so_pillar_* roles. 006_rls.sql creates the roles
# as NOLOGIN with no password (plain SQL can't substitute pillar values), so
# the salt-master ext_pillar and the pg_notify_pillar engine — both of which
# connect as so_pillar_master via TCP — would fail with "password
# authentication failed" without this step. The password lives in pillar
# under secrets:pillar_master_pass (generated by setup/so-functions::secrets_pillar)
# and is the same one rendered into ext_pillar_postgres.conf.jinja and the
# engines.conf pg_notify_pillar block, so all three sides agree.
so_pillar_role_login_passwords:
cmd.run:
- name: |
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion <<EOSQL
ALTER ROLE so_pillar_master WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
ALTER ROLE so_pillar_writer WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
ALTER ROLE so_pillar_secret_owner WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
EOSQL
- require:
- cmd: so_pillar_master_key_configure
# Install psycopg2 into salt-master's bundled python so the pg_notify_pillar
# engine module can `import psycopg2`. Without this the engine's import fails
# silently in salt's loader and the engine just never starts. salt's bundled
# python at /opt/saltstack/salt/bin/python3 doesn't ship psycopg2 by default.
#
# Uses cmd.run with an `unless` import-test rather than pip.installed because
# pip exits non-zero if patchelf isn't on PATH (it tries to rewrite the
# psycopg2 wheel's RPATH after extraction), even though the wheel is fully
# installed and importable. salt's pip.installed surfaces the non-zero exit
# as a state failure and the cascade kills schema_pillar's downstream work.
# `import psycopg2` succeeds either way, so that's the actual readiness gate.
#
# Pip's stdout/stderr is redirected to /opt/so/log/so_pillar/psycopg2_install.log
# so the literal "ERROR: ... patchelf" line doesn't get hoovered up into
# /root/sosetup.log and then into /root/errors.log by verify_setup's
# substring-grep for "ERROR". The redirect target is preserved for
# triage if `import psycopg2` ever does fail.
so_pillar_psycopg2_in_salt_python:
cmd.run:
- name: |
mkdir -p /opt/so/log/so_pillar
/opt/saltstack/salt/bin/pip3 install --quiet psycopg2-binary \
>/opt/so/log/so_pillar/psycopg2_install.log 2>&1 \
|| true
- unless: /opt/saltstack/salt/bin/python3 -c "import psycopg2"
- require:
- cmd: so_pillar_role_login_passwords
# Run the importer once after the schema is in place. Idempotent — re-runs
# with no SLS edits produce zero row changes.
so_pillar_initial_import:
cmd.run:
- name: /usr/sbin/so-pillar-import --yes --reason 'schema_pillar.sls initial import'
- require:
- cmd: so_pillar_psycopg2_in_salt_python
# Flip so-yaml from dual-write to PG-canonical for managed paths now that
# the schema and importer are both in place. Bootstrap files (secrets.sls,
# postgres/auth.sls, ca/init.sls, *.nodes.sls, top.sls, ...) remain on disk
# regardless because so_yaml_postgres.locate() raises SkipPath for them.
so_pillar_so_yaml_mode_dir:
file.directory:
- name: /opt/so/conf/so-yaml
- user: socore
- group: socore
- mode: '0755'
- makedirs: True
so_pillar_so_yaml_mode_postgres:
file.managed:
- name: /opt/so/conf/so-yaml/mode
- contents: postgres
- user: socore
- group: socore
- mode: '0644'
- require:
- file: so_pillar_so_yaml_mode_dir
- cmd: so_pillar_initial_import
{% else %}
so_pillar_disabled_noop:
test.nop
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
@@ -0,0 +1,89 @@
postgres:
enabled:
description: Whether the PostgreSQL database container is enabled on this grid. Backs the assistant store and the Telegraf metrics database.
forcedType: bool
readonly: True
helpLink: influxdb
telegraf:
retention_days:
description: Number of days of Telegraf metrics to keep in the so_telegraf database. Older partitions are dropped hourly by pg_partman.
forcedType: int
helpLink: postgres
config:
max_connections:
description: Maximum number of concurrent PostgreSQL connections.
forcedType: int
global: True
helpLink: postgres
shared_buffers:
description: Amount of memory PostgreSQL uses for shared buffers (e.g. 256MB, 1GB). Raising this improves read cache hit rate at the cost of system RAM.
global: True
helpLink: postgres
log_min_messages:
description: Minimum severity of server messages written to the PostgreSQL log.
options:
- debug1
- info
- notice
- warning
- error
- log
- fatal
global: True
helpLink: postgres
listen_addresses:
description: Interfaces PostgreSQL listens on. Must remain '*' so clients on the docker bridge network can connect.
global: True
advanced: True
helpLink: postgres
port:
description: TCP port PostgreSQL listens on inside the container. Firewall rules and container port mapping assume 5432.
forcedType: int
global: True
advanced: True
helpLink: postgres
ssl:
description: Whether PostgreSQL accepts TLS connections. Must remain 'on' — pg_hba.conf requires hostssl for TCP.
global: True
advanced: True
helpLink: postgres
ssl_cert_file:
description: Path (inside the container) to the TLS server certificate. Salt-managed.
global: True
advanced: True
helpLink: postgres
ssl_key_file:
description: Path (inside the container) to the TLS server private key. Salt-managed.
global: True
advanced: True
helpLink: postgres
ssl_ca_file:
description: Path (inside the container) to the CA bundle PostgreSQL uses to verify client certificates. Salt-managed.
global: True
advanced: True
helpLink: postgres
hba_file:
description: Path (inside the container) to the pg_hba.conf authentication file. Salt-managed — edit salt/postgres/files/pg_hba.conf.
global: True
advanced: True
helpLink: postgres
log_destination:
description: Where PostgreSQL writes its server log. 'stderr' routes to the container log stream.
global: True
advanced: True
helpLink: postgres
logging_collector:
description: Whether to run a separate logging collector process. Disabled because the docker log stream already captures stderr.
global: True
advanced: True
helpLink: postgres
shared_preload_libraries:
description: Comma-separated list of extensions loaded at server start. Required for pg_cron which drives pg_partman maintenance — do not remove.
global: True
advanced: True
helpLink: postgres
cron.database_name:
description: Database pg_cron schedules jobs in. Must be so_telegraf so partman maintenance runs in the right database context.
global: True
advanced: True
helpLink: postgres
@@ -0,0 +1,21 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
append_so-postgres_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-postgres
- unless: grep -q so-postgres /opt/so/conf/so-status/so-status.conf
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
@@ -0,0 +1,55 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
postgres_key:
x509.private_key_managed:
- name: /etc/pki/postgres.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/postgres.key') -%}
- prereq:
- x509: /etc/pki/postgres.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
postgres_crt:
x509.certificate_managed:
- name: /etc/pki/postgres.crt
- ca_server: {{ CA.server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: postgres
- private_key: /etc/pki/postgres.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
postgresKeyperms:
file.managed:
- replace: False
- name: /etc/pki/postgres.key
- mode: 400
- user: 939
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
@@ -0,0 +1,157 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'telegraf/map.jinja' import TELEGRAFMERGED %}
{# postgres_wait_ready below requires `docker_container: so-postgres`, which is
declared in postgres.enabled. Include it here so state.apply postgres.telegraf_users
on its own (e.g. from orch.deploy_newnode) still has that ID in scope. Salt
de-duplicates the circular include. #}
include:
- postgres.enabled
{% set TG_OUT = TELEGRAFMERGED.output | upper %}
{% if TG_OUT in ['POSTGRES', 'BOTH'] %}
# docker_container.running returns as soon as the container starts, but on
# first-init docker-entrypoint.sh starts a temporary postgres with
# `listen_addresses=''` to run /docker-entrypoint-initdb.d scripts, then
# shuts it down before exec'ing the real CMD. A default pg_isready check
# (Unix socket) passes during that ephemeral phase and races the shutdown
# with "the database system is shutting down". Checking TCP readiness on
# 127.0.0.1 only succeeds after the final postgres binds the port.
postgres_wait_ready:
cmd.run:
- name: |
for i in $(seq 1 60); do
if docker exec so-postgres pg_isready -h 127.0.0.1 -U postgres -q 2>/dev/null; then
exit 0
fi
sleep 2
done
echo "so-postgres did not accept TCP connections within 120s" >&2
exit 1
- require:
- docker_container: so-postgres
# Ensure the shared Telegraf database exists. init-users.sh only runs on a
# fresh data dir, so hosts upgraded onto an existing /nsm/postgres volume
# would otherwise never get so_telegraf.
postgres_create_telegraf_db:
cmd.run:
- name: |
if ! docker exec so-postgres psql -U postgres -tAc "SELECT 1 FROM pg_database WHERE datname='so_telegraf'" | grep -q 1; then
docker exec so-postgres psql -v ON_ERROR_STOP=1 -U postgres -c "CREATE DATABASE so_telegraf"
fi
- require:
- cmd: postgres_wait_ready
# Provision the shared group role and schema once. Every per-minion role is a
# member of so_telegraf, and each Telegraf connection does SET ROLE so_telegraf
# (via options='-c role=so_telegraf' in the connection string) so tables created
# on first write are owned by the group role and every member can INSERT/SELECT.
postgres_telegraf_group_role:
cmd.run:
- name: |
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'so_telegraf') THEN
CREATE ROLE so_telegraf NOLOGIN;
END IF;
END
$$;
GRANT CONNECT ON DATABASE so_telegraf TO so_telegraf;
CREATE SCHEMA IF NOT EXISTS telegraf AUTHORIZATION so_telegraf;
GRANT USAGE, CREATE ON SCHEMA telegraf TO so_telegraf;
CREATE SCHEMA IF NOT EXISTS partman;
CREATE EXTENSION IF NOT EXISTS pg_partman SCHEMA partman;
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Telegraf (running as so_telegraf) calls partman.create_parent()
-- on first write of each metric, which needs USAGE on the partman
-- schema, EXECUTE on its functions/procedures, and write access to
-- partman.part_config so it can register new partitioned parents.
GRANT USAGE, CREATE ON SCHEMA partman TO so_telegraf;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA partman TO so_telegraf;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA partman TO so_telegraf;
GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA partman TO so_telegraf;
-- partman creates per-parent template tables (partman.template_*) at
-- runtime; default privileges extend DML/sequence access to them.
ALTER DEFAULT PRIVILEGES IN SCHEMA partman
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO so_telegraf;
ALTER DEFAULT PRIVILEGES IN SCHEMA partman
GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO so_telegraf;
-- Hourly partman maintenance. cron.schedule is idempotent by jobname.
SELECT cron.schedule(
'telegraf-partman-maintenance',
'17 * * * *',
'CALL partman.run_maintenance_proc()'
);
EOSQL
- require:
- cmd: postgres_create_telegraf_db
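# Illustrative Telegraf side (hypothetical values — the real output config is
# rendered by the telegraf state): the connection string carries the SET ROLE
# described above via libpq options, so tables created on first write are
# owned by the group role:
#   connection = "host=so-manager dbname=so_telegraf user=tg_example password=example options='-c role=so_telegraf' sslmode=require"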
{% set creds = salt['pillar.get']('telegraf:postgres_creds', {}) %}
{% for mid, entry in creds.items() %}
{% if entry.get('user') and entry.get('pass') %}
{% set u = entry.user %}
{% set p = entry.pass | replace("'", "''") %}
postgres_telegraf_role_{{ u }}:
cmd.run:
- name: |
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ u }}') THEN
EXECUTE format('CREATE ROLE %I WITH LOGIN PASSWORD %L', '{{ u }}', '{{ p }}');
ELSE
EXECUTE format('ALTER ROLE %I WITH PASSWORD %L', '{{ u }}', '{{ p }}');
END IF;
END
$$;
GRANT CONNECT ON DATABASE so_telegraf TO "{{ u }}";
GRANT so_telegraf TO "{{ u }}";
EOSQL
- require:
- cmd: postgres_telegraf_group_role
{% endif %}
{% endfor %}
# Reconcile partman retention from pillar. Runs after role/schema setup so
# any partitioned parents Telegraf has already created get their retention
# refreshed whenever postgres.telegraf.retention_days changes.
{% set retention = salt['pillar.get']('postgres:telegraf:retention_days', 14) | int %}
postgres_telegraf_retention_reconcile:
cmd.run:
- name: |
docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d so_telegraf <<'EOSQL'
DO $$
BEGIN
IF EXISTS (SELECT 1 FROM pg_catalog.pg_extension WHERE extname = 'pg_partman') THEN
UPDATE partman.part_config
SET retention = '{{ retention }} days',
retention_keep_table = false
WHERE parent_table LIKE 'telegraf.%';
END IF;
END
$$;
EOSQL
- require:
- cmd: postgres_telegraf_group_role
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
