Compare commits

...

5 Commits

Author SHA1 Message Date
Mike Reeves a433e9524d Move onionconfig writes out of so-yaml 2026-05-12 16:05:55 -04:00
Mike Reeves 3d11694d51 make so-yaml PG-canonical and add pillar-change reactor stack
Two coupled changes that together let so_pillar.* be the canonical
config store, with config edits driving service reloads automatically:

so-yaml PG-canonical mode
- Adds /opt/so/conf/so-yaml/mode (and SO_YAML_BACKEND env override) with
  three values: dual (legacy), postgres (PG-only for managed paths),
  disk (emergency rollback). Bootstrap files (secrets.sls, ca/init.sls,
  *.nodes.sls, top.sls, ...) stay disk-only regardless via the existing
  SkipPath allowlist in so_yaml_postgres.locate.
- loadYaml/writeYaml/purgeFile now route to so_pillar.* in postgres
  mode: replace/add/get all read+write the database with no disk file
  ever appearing. PG failure is fatal in postgres mode (no silent
  fallback); dual mode preserves the prior best-effort mirror.
- so_yaml_postgres gains read_yaml(path), is_pg_managed(path), and
  is_enabled() so so-yaml can answer "is this path PG-managed and is
  PG up" without reaching into private helpers.
- schema_pillar.sls writes /opt/so/conf/so-yaml/mode = postgres after
  the importer succeeds, so flipping postgres:so_pillar:enabled flips
  so-yaml's behavior in lockstep with the schema being live.
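The mode-resolution rules above can be sketched as a small helper. This is an illustrative sketch, not the actual so-yaml code: the precedence (SO_YAML_BACKEND env override, then the mode file, falling back to legacy `dual`) is from the commit message, but the function name and unknown-value handling are assumptions.

```python
import os

MODE_FILE = "/opt/so/conf/so-yaml/mode"
VALID_MODES = {"dual", "postgres", "disk"}

def resolve_mode(mode_file=MODE_FILE, env=os.environ):
    """Resolve the so-yaml backend mode.

    Precedence: SO_YAML_BACKEND env override, then the mode file,
    then the legacy 'dual' behavior when neither is set.
    """
    override = env.get("SO_YAML_BACKEND", "").strip()
    if override in VALID_MODES:
        return override
    try:
        with open(mode_file) as fh:
            mode = fh.read().strip()
    except OSError:
        return "dual"
    # Unknown values fall back to the safe legacy mode (an assumption).
    return mode if mode in VALID_MODES else "dual"
```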

pg_notify-driven change fan-out
- 008_change_notify.sql adds so_pillar.change_queue + an AFTER trigger
  on pillar_entry that enqueues the locator and pg_notifies
  'so_pillar_change'. Queue is drained at-least-once so engine restarts
  don't lose events; pg_notify is just the wakeup signal.
- New salt-master engine pg_notify_pillar.py LISTENs on the channel,
  drains the queue with FOR UPDATE SKIP LOCKED, debounces bursts, and
  fires 'so/pillar/changed' events grouped by (scope, role, minion).
- Reactor so_pillar_changed.sls catches the tag and dispatches to
  orch.so_pillar_reload, which carries a DISPATCH map of pillar-path
  prefix -> (state sls, role grain set) so adding a new service to
  the auto-reload list is a one-line edit instead of a new reactor.
- Engine + reactor wiring is gated on the same postgres:so_pillar:enabled
  flag as the schema and ext_pillar config so the whole stack flips
  on/off together.
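The fan-out step above groups drained queue rows before firing events. A minimal sketch of that grouping, in isolation: the row shape (dicts with scope/role/minion/path keys) and the function name are assumptions, not the engine's actual API.

```python
from collections import defaultdict

def group_changes(rows):
    """Group drained change_queue rows by (scope, role, minion) so the
    engine fires one 'so/pillar/changed' event per affected target
    rather than one per queued row.

    Each row is assumed to be a dict with scope/role/minion/path keys.
    """
    grouped = defaultdict(list)
    for row in rows:
        key = (row.get("scope"), row.get("role"), row.get("minion"))
        grouped[key].append(row["path"])
    return dict(grouped)
```

In the real engine this would sit between the `FOR UPDATE SKIP LOCKED` drain and the event bus, with debouncing collapsing bursts into a single drain pass.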

Tests: 21 new cases (112 total, all passing) covering mode resolution,
PG-managed detection, and PG-canonical read/write/purge routing with
the PG client stubbed.
2026-05-01 09:31:48 -04:00
Mike Reeves 23255f88e0 add so-yaml dual-write to so_pillar.* + purge verb
Hooks every so-yaml.py write through a new so_yaml_postgres helper that
mirrors disk YAML mutations into so_pillar.pillar_entry via docker exec
psql. Disk remains canonical during the transition; PG mirror failures
are logged only when a real write error occurs (skipped paths and
postgres-unreachable cases stay silent so existing callers don't see
new noise on stderr).

Adds a `purge YAML_FILE` verb on so-yaml that deletes the file from
disk and removes the matching pillar_entry rows. For minion files it
also drops the so_pillar.minion row, which CASCADEs to pillar_entry +
role_member. Designed for so-minion's delete path (replaces rm -f) so
the audit log captures the deletion.

setup/so-functions::generate_passwords + secrets_pillar generate
secrets:pillar_master_pass and /opt/so/conf/postgres/so_pillar.key on
fresh installs, and append the password to existing secrets.sls files
on upgrade.

- salt/manager/tools/sbin/so_yaml_postgres.py: locate(), write_yaml(),
  purge_yaml(), and a small CLI for diagnostics. Skips bootstrap and
  mine-driven paths via the same allowlist used by so-pillar-import.
- salt/manager/tools/sbin/so-yaml.py: import the helper, hook
  writeYaml() to mirror after every disk write, add purgeFile() and
  the purge verb.
- salt/manager/tools/sbin/so-yaml_test.py: 16 new tests covering the
  purge verb and the path-locator / write contract of so_yaml_postgres
  without contacting Postgres. All 91 tests pass.
- setup/so-functions: generate_passwords adds PILLARMASTERPASS and
  SO_PILLAR_KEY; secrets_pillar writes pillar_master_pass and the
  pgcrypto master key file.
2026-04-30 17:09:58 -04:00
Mike Reeves d30b52b327 add so-pillar-import — seeds so_pillar.* from on-disk pillar tree
Idempotent importer that schema_pillar.sls runs once at end of postgres
state on first install, and that so-minion can call per-minion on add /
delete. UPSERTs into so_pillar.pillar_entry; the audit trigger handles
versioning so re-runs without SLS edits produce no version bumps.

Connects via docker exec so-postgres psql, so no DSN config is required
at first-install time. Skips bootstrap files (secrets.sls, postgres/
auth.sls, etc.), mine-driven nodes.sls files, and any file containing
Jinja templates — those stay disk-authoritative and ext_pillar_first:
False means they render before the PG overlay.
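The skip rules above reduce to a small predicate. This sketch mirrors the described behavior (bootstrap basenames, mine-driven nodes.sls files, and anything containing Jinja delimiters stay disk-authoritative); the function name and exact basename set are illustrative, not the importer's code.

```python
BOOTSTRAP_BASENAMES = {"secrets.sls", "auth.sls", "top.sls"}

def should_skip(path, raw_bytes):
    """Return True for files the importer leaves disk-authoritative:
    bootstrap files, mine-driven nodes.sls files, and Jinja-templated
    files that must render before the PG overlay."""
    name = path.rsplit("/", 1)[-1]
    if name in BOOTSTRAP_BASENAMES or name == "nodes.sls":
        return True
    # Any Jinja delimiters mean the file renders at pillar-compile time.
    return b"{%" in raw_bytes or b"{{" in raw_bytes
```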

Auto-syncs to /usr/sbin via the existing manager_sbin file.recurse.
2026-04-30 16:34:05 -04:00
Mike Reeves 3fad895d6a add so_pillar schema + ext_pillar wiring (postsalt foundation)
Lays the database-backed pillar foundation for the postsalt branch. Salt
continues to read on-disk SLS first; the new ext_pillar config overlays
values from the so_pillar.* schema in so-postgres.

- salt/postgres/files/schema/pillar/00{1..7}_*.sql: idempotent DDL for
  scope/role/role_member/minion/pillar_entry/pillar_entry_history/
  drift_log, secret pgcrypto helpers, RLS, pg_cron retention.
- salt/postgres/schema_pillar.sls: applies the SQL files inside the
  so-postgres container after it's healthy, configures the master_key
  GUC, and runs so-pillar-import once. Gated on
  postgres:so_pillar:enabled feature flag (default false).
- salt/salt/master/ext_pillar_postgres.{sls,conf.jinja}: drops
  /etc/salt/master.d/ext_pillar_postgres.conf with list-form ext_pillar
  queries (global/role/minion/secrets) and ext_pillar_first: False so
  bootstrap pillars on disk render before the PG overlay.
- salt/postgres/init.sls + salt/salt/master.sls: include the new states.

Both new state branches are guarded so a default install with the flag
off is a no-op.
2026-04-30 16:30:57 -04:00
16 changed files with 874 additions and 6 deletions
+14
@@ -48,6 +48,13 @@ copy_so-yaml_manager_tools_sbin:
- force: True
- preserve: True
copy_so-config_manager_tools_sbin:
file.copy:
- name: /opt/so/saltstack/default/salt/manager/tools/sbin/so-config.py
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-config.py
- force: True
- preserve: True
copy_so-repo-sync_manager_tools_sbin:
file.copy:
- name: /opt/so/saltstack/default/salt/manager/tools/sbin/so-repo-sync
@@ -97,6 +104,13 @@ copy_so-yaml_sbin:
- force: True
- preserve: True
copy_so-config_sbin:
file.copy:
- name: /usr/sbin/so-config.py
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-config.py
- force: True
- preserve: True
copy_so-repo-sync_sbin:
file.copy:
- name: /usr/sbin/so-repo-sync
@@ -232,6 +232,7 @@ printf '%s\n'\
" grid_enrollment_general: '$GRIDNODESENROLLMENTOKENGENERAL'"\
" grid_enrollment_heavy: '$GRIDNODESENROLLMENTOKENHEAVY'"\
"" >> "$pillar_file"
/usr/sbin/so-config.py import-file "$pillar_file" --note "so-elastic-fleet-setup"
#Store Grid Nodes Enrollment token in Global pillar
global_pillar_file=/opt/so/saltstack/local/pillar/global/soc_global.sls
@@ -239,6 +240,7 @@ printf '%s\n'\
" fleet_grid_enrollment_token_general: '$GRIDNODESENROLLMENTOKENGENERAL'"\
" fleet_grid_enrollment_token_heavy: '$GRIDNODESENROLLMENTOKENHEAVY'"\
"" >> "$global_pillar_file"
/usr/sbin/so-config.py import-file "$global_pillar_file" --note "so-elastic-fleet-setup"
# Call Elastic-Fleet Salt State
printf "\nApplying elasticfleet state"
+5 -2
@@ -20,8 +20,11 @@ so-kafka_so-status.disabled:
ensure_default_pipeline:
cmd.run:
- name: |
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False;
set -e
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False
/usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls replace kafka.enabled False --note "kafka.disabled"
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/global/soc_global.sls global.pipeline REDIS
/usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/global/soc_global.sls replace global.pipeline REDIS --note "kafka.disabled"
{% endif %}
{# If Kafka has never been manually enabled, the 'Kafka' user does not exist. In this case certs for Kafka should not exist since they'll be owned by uid 960 #}
@@ -31,4 +34,4 @@ check_kafka_cert_{{cert}}:
- name: /etc/pki/{{cert}}
- onlyif: stat -c %U /etc/pki/{{cert}} | grep -q UNKNOWN
- show_changes: False
{% endfor %}
{% endfor %}
+448
@@ -0,0 +1,448 @@
#!/usr/bin/env python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
"""
so-config.py writes SOC/onionconfig settings to Postgres.
so-yaml.py remains a YAML file editor. Call this tool when a pillar-backed
setting also needs to be reflected in the onionconfig database.
"""
import argparse
import json
import os
from pathlib import Path
import subprocess
import sys
import yaml
PILLAR_ROOT = Path(os.environ.get("SO_CONFIG_PILLAR_ROOT", "/opt/so/saltstack/local/pillar"))
DOCKER_CONTAINER = os.environ.get("SO_CONFIG_PG_CONTAINER", "so-postgres")
PG_DATABASE = os.environ.get("SO_CONFIG_PG_DATABASE", "securityonion")
PG_USER = os.environ.get("SO_CONFIG_PG_USER", "postgres")
DEFAULT_USER_ID = os.environ.get("SO_CONFIG_USER_ID", "so-config")
EXCLUDE_BASENAMES = {
"secrets.sls",
"auth.sls",
"top.sls",
}
EXCLUDE_PATH_FRAGMENTS = (
"/elasticsearch/nodes.sls",
"/redis/nodes.sls",
"/kafka/nodes.sls",
"/hypervisor/nodes.sls",
"/logstash/nodes.sls",
"/node_data/ips.sls",
"/postgres/auth.sls",
"/elasticsearch/auth.sls",
"/kibana/secrets.sls",
)
class SkipPath(Exception):
pass
def pg_str(value):
if value is None:
return "NULL"
return "'" + str(value).replace("'", "''") + "'"
def pg_jsonb(value):
return pg_str(json.dumps(value)) + "::jsonb"
def docker_psql(sql):
proc = subprocess.run(
["docker", "exec", "-i", DOCKER_CONTAINER,
"psql", "-U", PG_USER, "-d", PG_DATABASE,
"-tA", "-q", "-v", "ON_ERROR_STOP=1"],
input=sql.encode(),
capture_output=True,
check=False,
timeout=60,
)
if proc.returncode != 0:
sys.stderr.write(proc.stderr.decode(errors="replace"))
raise RuntimeError(f"docker exec psql failed with rc={proc.returncode}")
return proc.stdout.decode(errors="replace")
def schema_ready():
sql = """
SELECT to_regclass('public.settings') IS NOT NULL
AND to_regclass('public.audit_settings') IS NOT NULL;
"""
return docker_psql(sql).strip() == "t"
def cmd_wait_schema(args):
import time
deadline = time.time() + args.timeout
while time.time() <= deadline:
if schema_ready():
return 0
time.sleep(args.interval)
print("so-config: onionconfig schema is not ready", file=sys.stderr)
return 1
def upsert_setting(setting_id, value, *, node_id="", duplicated_from_id=None,
user_id=DEFAULT_USER_ID, note=None):
note = note or "so-config upsert"
sql = f"""
BEGIN;
WITH old_row AS (
SELECT value
FROM settings
WHERE setting_id = {pg_str(setting_id)}
AND node_id = {pg_str(node_id)}
FOR UPDATE
),
upserted AS (
INSERT INTO settings (setting_id, value, duplicated_from_id, node_id)
VALUES ({pg_str(setting_id)}, {pg_jsonb(value)}, {pg_str(duplicated_from_id)}, {pg_str(node_id)})
ON CONFLICT (setting_id, node_id) DO UPDATE
SET value = EXCLUDED.value,
duplicated_from_id = EXCLUDED.duplicated_from_id
RETURNING value
)
INSERT INTO audit_settings (setting_id, node_id, user_id, old_value, new_value, note)
SELECT {pg_str(setting_id)},
{pg_str(node_id)},
{pg_str(user_id)},
(SELECT value FROM old_row),
(SELECT value FROM upserted),
{pg_str(note)}
WHERE NOT EXISTS (SELECT 1 FROM old_row)
OR (SELECT value FROM old_row) IS DISTINCT FROM (SELECT value FROM upserted);
COMMIT;
"""
docker_psql(sql)
def delete_setting(setting_id, *, node_id="", user_id=DEFAULT_USER_ID, note=None):
note = note or "so-config delete"
sql = f"""
BEGIN;
WITH deleted AS (
DELETE FROM settings
WHERE setting_id = {pg_str(setting_id)}
AND node_id = {pg_str(node_id)}
RETURNING value
)
INSERT INTO audit_settings (setting_id, node_id, user_id, old_value, new_value, note)
SELECT {pg_str(setting_id)}, {pg_str(node_id)}, {pg_str(user_id)}, value, NULL::jsonb, {pg_str(note)}
FROM deleted;
COMMIT;
"""
docker_psql(sql)
def delete_setting_prefix(setting_id, *, node_id="", user_id=DEFAULT_USER_ID, note=None):
if not setting_id:
raise ValueError("setting_id prefix cannot be empty")
note = note or "so-config delete-prefix"
sql = f"""
BEGIN;
WITH deleted AS (
DELETE FROM settings
WHERE node_id = {pg_str(node_id)}
AND (
setting_id = {pg_str(setting_id)}
OR substring(setting_id from 1 for char_length({pg_str(setting_id)}) + 1) = {pg_str(setting_id + ".")}
)
RETURNING setting_id, value
)
INSERT INTO audit_settings (setting_id, node_id, user_id, old_value, new_value, note)
SELECT setting_id, {pg_str(node_id)}, {pg_str(user_id)}, value, NULL::jsonb, {pg_str(note)}
FROM deleted;
COMMIT;
"""
docker_psql(sql)
def purge_node(node_id, *, user_id=DEFAULT_USER_ID, note=None):
note = note or "so-config purge-node"
sql = f"""
BEGIN;
WITH deleted AS (
DELETE FROM settings
WHERE node_id = {pg_str(node_id)}
RETURNING setting_id, value
)
INSERT INTO audit_settings (setting_id, node_id, user_id, old_value, new_value, note)
SELECT setting_id, {pg_str(node_id)}, {pg_str(user_id)}, value, NULL::jsonb, {pg_str(note)}
FROM deleted;
COMMIT;
"""
docker_psql(sql)
def parse_value(value, value_file=None):
if value_file:
with open(value_file, "r") as fh:
value = fh.read()
parsed = yaml.safe_load(value)
if parsed is None and value == "":
return ""
return parsed
def parse_yaml_file(path):
with open(path, "rb") as fh:
raw = fh.read()
if b"{%" in raw or b"{{" in raw:
raise SkipPath(f"{path}: Jinja-templated files stay disk-only")
if not raw.strip():
return {}
parsed = yaml.safe_load(raw)
return parsed if parsed is not None else {}
def flatten(prefix, value):
if isinstance(value, dict):
for key, child in value.items():
child_id = f"{prefix}.{key}" if prefix else str(key)
yield from flatten(child_id, child)
else:
yield prefix, value
def classify_pillar_path(path):
norm = Path(path).resolve()
norm_str = str(norm)
if norm.name in EXCLUDE_BASENAMES:
raise SkipPath(f"{path}: excluded basename")
for fragment in EXCLUDE_PATH_FRAGMENTS:
if fragment in norm_str:
raise SkipPath(f"{path}: excluded path fragment {fragment}")
if norm.suffix != ".sls":
raise SkipPath(f"{path}: not an .sls file")
parent = norm.parent.name
stem = norm.stem
if parent == "minions":
if stem.startswith("adv_"):
return {"kind": "advanced", "setting_id": "advanced", "node_id": stem[4:]}
return {"kind": "normal", "node_id": stem}
section = parent
if stem == f"soc_{section}":
return {"kind": "normal", "node_id": ""}
if stem == f"adv_{section}":
return {"kind": "advanced", "setting_id": f"{section}.advanced", "node_id": ""}
raise SkipPath(f"{path}: not a SOC-managed pillar file")
def import_pillar_file(path, *, user_id=DEFAULT_USER_ID, note=None):
meta = classify_pillar_path(path)
note = note or f"so-config import-file {path}"
if meta["kind"] == "advanced":
with open(path, "r") as fh:
upsert_setting(meta["setting_id"], fh.read(), node_id=meta["node_id"],
user_id=user_id, note=note)
return 1
data = parse_yaml_file(path)
if not isinstance(data, dict):
raise SkipPath(f"{path}: top-level YAML is not a map")
count = 0
for setting_id, value in flatten("", data):
upsert_setting(setting_id, value, node_id=meta["node_id"],
user_id=user_id, note=note)
count += 1
return count
def iter_pillar_files(root):
root = Path(root)
if not root.is_dir():
return
for path in sorted(root.rglob("*.sls")):
if path.is_file():
yield path
def cmd_set(args):
upsert_setting(args.setting_id, parse_value(args.value, args.value_file),
node_id=args.node_id,
duplicated_from_id=args.duplicated_from_id,
user_id=args.user_id,
note=args.note)
return 0
def cmd_delete(args):
delete_setting(args.setting_id, node_id=args.node_id,
user_id=args.user_id, note=args.note)
return 0
def cmd_delete_prefix(args):
delete_setting_prefix(args.setting_id, node_id=args.node_id,
user_id=args.user_id, note=args.note)
return 0
def cmd_purge_node(args):
purge_node(args.node_id, user_id=args.user_id, note=args.note)
return 0
def cmd_import_file(args):
count = import_pillar_file(args.path, user_id=args.user_id, note=args.note)
print(f"imported {count} settings from {args.path}")
return 0
def cmd_import_minion(args):
count = 0
for name in (f"{args.node_id}.sls", f"adv_{args.node_id}.sls"):
path = PILLAR_ROOT / "minions" / name
if path.exists():
count += import_pillar_file(path, user_id=args.user_id, note=args.note)
print(f"imported {count} settings for node {args.node_id}")
return 0
def cmd_import_all(args):
count = 0
skipped = 0
for path in iter_pillar_files(args.root):
try:
count += import_pillar_file(path, user_id=args.user_id, note=args.note)
except SkipPath as exc:
skipped += 1
if args.verbose:
print(f"skip: {exc}", file=sys.stderr)
print(f"imported {count} settings, skipped {skipped} files")
if args.state_file:
with open(args.state_file, "w") as fh:
fh.write("ok\n")
return 0
def cmd_sync_yaml_mutation(args):
meta = classify_pillar_path(args.path)
note = args.note or f"so-config sync-yaml-mutation {args.operation} {args.path}"
if meta["kind"] == "advanced":
import_pillar_file(args.path, user_id=args.user_id, note=note)
return 0
if args.operation in ("add", "replace"):
upsert_setting(args.key, parse_value(args.value, args.value_file),
node_id=meta["node_id"],
user_id=args.user_id,
note=note)
elif args.operation == "remove":
delete_setting_prefix(args.key, node_id=meta["node_id"],
user_id=args.user_id, note=note)
else:
raise ValueError(f"unsupported operation: {args.operation}")
return 0
def build_parser():
parser = argparse.ArgumentParser(description=__doc__)
sub = parser.add_subparsers(dest="command", required=True)
p = sub.add_parser("wait-schema", help="wait for SOC-created onionconfig tables")
p.add_argument("--timeout", type=int, default=120)
p.add_argument("--interval", type=int, default=2)
p.set_defaults(func=cmd_wait_schema)
p = sub.add_parser("set", help="upsert one setting")
p.add_argument("setting_id")
p.add_argument("value", nargs="?", default="")
p.add_argument("--value-file")
p.add_argument("--node-id", default="")
p.add_argument("--duplicated-from-id")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_set)
p = sub.add_parser("delete", help="delete one setting")
p.add_argument("setting_id")
p.add_argument("--node-id", default="")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_delete)
p = sub.add_parser("delete-prefix", help="delete one setting and all child settings")
p.add_argument("setting_id")
p.add_argument("--node-id", default="")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_delete_prefix)
p = sub.add_parser("purge-node", help="delete all settings for one node")
p.add_argument("node_id")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_purge_node)
p = sub.add_parser("import-file", help="import one SOC-managed pillar file")
p.add_argument("path")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_import_file)
p = sub.add_parser("import-minion", help="import one minion's pillar files")
p.add_argument("node_id")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_import_minion)
p = sub.add_parser("import-all", help="import all SOC-managed local pillar files")
p.add_argument("--root", default=str(PILLAR_ROOT))
p.add_argument("--state-file")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note", default="so-config initial import")
p.add_argument("--verbose", action="store_true")
p.set_defaults(func=cmd_import_all)
p = sub.add_parser("sync-yaml-mutation",
help="mirror one so-yaml add/replace/remove mutation to onionconfig")
p.add_argument("path")
p.add_argument("operation", choices=("add", "replace", "remove"))
p.add_argument("key")
p.add_argument("value", nargs="?", default="")
p.add_argument("--value-file")
p.add_argument("--user-id", default=DEFAULT_USER_ID)
p.add_argument("--note")
p.set_defaults(func=cmd_sync_yaml_mutation)
return parser
def main(argv):
parser = build_parser()
args = parser.parse_args(argv)
try:
return args.func(args)
except SkipPath as exc:
print(f"skip: {exc}", file=sys.stderr)
return 2
except Exception as exc:
print(f"so-config: {exc}", file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
+178
@@ -0,0 +1,178 @@
import importlib
import os
import tempfile
import unittest
from unittest.mock import patch
soconfig = importlib.import_module("so-config")
class TestSoConfigPathMapping(unittest.TestCase):
def test_classify_global_soc(self):
meta = soconfig.classify_pillar_path(
"/opt/so/saltstack/local/pillar/soc/soc_soc.sls")
self.assertEqual(meta["kind"], "normal")
self.assertEqual(meta["node_id"], "")
def test_classify_global_advanced(self):
meta = soconfig.classify_pillar_path(
"/opt/so/saltstack/local/pillar/soc/adv_soc.sls")
self.assertEqual(meta["kind"], "advanced")
self.assertEqual(meta["setting_id"], "soc.advanced")
self.assertEqual(meta["node_id"], "")
def test_classify_minion(self):
meta = soconfig.classify_pillar_path(
"/opt/so/saltstack/local/pillar/minions/h1_sensor.sls")
self.assertEqual(meta["kind"], "normal")
self.assertEqual(meta["node_id"], "h1_sensor")
def test_classify_minion_advanced(self):
meta = soconfig.classify_pillar_path(
"/opt/so/saltstack/local/pillar/minions/adv_h1_sensor.sls")
self.assertEqual(meta["kind"], "advanced")
self.assertEqual(meta["setting_id"], "advanced")
self.assertEqual(meta["node_id"], "h1_sensor")
def test_classify_skips_bootstrap(self):
with self.assertRaises(soconfig.SkipPath):
soconfig.classify_pillar_path(
"/opt/so/saltstack/local/pillar/secrets.sls")
class TestSoConfigImport(unittest.TestCase):
def test_flatten_keeps_lists_as_values(self):
flattened = dict(soconfig.flatten("", {
"host": {"mainip": "10.0.0.1"},
"suricata": {"pcap": {"enabled": True}},
"items": ["a", "b"],
}))
self.assertEqual(flattened["host.mainip"], "10.0.0.1")
self.assertEqual(flattened["suricata.pcap.enabled"], True)
self.assertEqual(flattened["items"], ["a", "b"])
def test_import_file_upserts_flattened_settings(self):
with tempfile.TemporaryDirectory() as tmp:
minions = os.path.join(tmp, "minions")
os.mkdir(minions)
path = os.path.join(minions, "h1_sensor.sls")
with open(path, "w") as fh:
fh.write("host:\n mainip: 10.0.0.1\nsuricata:\n enabled: true\n")
calls = []
with patch.object(soconfig, "upsert_setting",
side_effect=lambda *args, **kwargs: calls.append((args, kwargs))):
count = soconfig.import_pillar_file(path)
self.assertEqual(count, 2)
self.assertIn((("host.mainip", "10.0.0.1"), {"node_id": "h1_sensor", "user_id": "so-config", "note": f"so-config import-file {path}"}), calls)
self.assertIn((("suricata.enabled", True), {"node_id": "h1_sensor", "user_id": "so-config", "note": f"so-config import-file {path}"}), calls)
def test_import_advanced_file_upserts_raw_content(self):
with tempfile.TemporaryDirectory() as tmp:
minions = os.path.join(tmp, "minions")
os.mkdir(minions)
path = os.path.join(minions, "adv_h1_sensor.sls")
with open(path, "w") as fh:
fh.write("custom:\n raw: true\n")
calls = []
with patch.object(soconfig, "upsert_setting",
side_effect=lambda *args, **kwargs: calls.append((args, kwargs))):
count = soconfig.import_pillar_file(path)
self.assertEqual(count, 1)
self.assertEqual(calls[0][0], ("advanced", "custom:\n raw: true\n"))
self.assertEqual(calls[0][1]["node_id"], "h1_sensor")
class TestSoConfigSql(unittest.TestCase):
def test_schema_ready_checks_soc_tables(self):
captured = {}
with patch.object(soconfig, "docker_psql",
side_effect=lambda sql: captured.update({"sql": sql}) or "t\n"):
ready = soconfig.schema_ready()
self.assertTrue(ready)
self.assertIn("to_regclass('public.settings')", captured["sql"])
self.assertIn("to_regclass('public.audit_settings')", captured["sql"])
def test_set_writes_settings_and_audit(self):
captured = {}
with patch.object(soconfig, "docker_psql",
side_effect=lambda sql: captured.setdefault("sql", sql)):
soconfig.upsert_setting("host.mainip", "10.0.0.1",
node_id="h1_sensor", user_id="tester", note="unit")
self.assertIn("INSERT INTO settings", captured["sql"])
self.assertIn("INSERT INTO audit_settings", captured["sql"])
self.assertIn("'host.mainip'", captured["sql"])
self.assertIn("'h1_sensor'", captured["sql"])
self.assertIn("'tester'", captured["sql"])
def test_purge_node_audits_deleted_rows(self):
captured = {}
with patch.object(soconfig, "docker_psql",
side_effect=lambda sql: captured.setdefault("sql", sql)):
soconfig.purge_node("h1_sensor", user_id="tester", note="unit")
self.assertIn("DELETE FROM settings", captured["sql"])
self.assertIn("WHERE node_id = 'h1_sensor'", captured["sql"])
self.assertIn("INSERT INTO audit_settings", captured["sql"])
def test_delete_prefix_removes_children_and_audits(self):
captured = {}
with patch.object(soconfig, "docker_psql",
side_effect=lambda sql: captured.setdefault("sql", sql)):
soconfig.delete_setting_prefix("elasticfleet", node_id="h1_sensor",
user_id="tester", note="unit")
self.assertIn("DELETE FROM settings", captured["sql"])
self.assertIn("setting_id = 'elasticfleet'", captured["sql"])
self.assertIn("'elasticfleet.'", captured["sql"])
self.assertIn("INSERT INTO audit_settings", captured["sql"])
def test_sync_yaml_replace_uses_path_node_id(self):
with tempfile.TemporaryDirectory() as tmp:
minions = os.path.join(tmp, "minions")
os.mkdir(minions)
path = os.path.join(minions, "h1_sensor.sls")
open(path, "w").close()
calls = []
args = soconfig.build_parser().parse_args([
"sync-yaml-mutation", path, "replace", "suricata.enabled", "true"
])
with patch.object(soconfig, "upsert_setting",
side_effect=lambda *a, **kw: calls.append((a, kw))):
soconfig.cmd_sync_yaml_mutation(args)
self.assertEqual(calls[0][0], ("suricata.enabled", True))
self.assertEqual(calls[0][1]["node_id"], "h1_sensor")
def test_sync_yaml_remove_deletes_prefix(self):
with tempfile.TemporaryDirectory() as tmp:
minions = os.path.join(tmp, "minions")
os.mkdir(minions)
path = os.path.join(minions, "h1_sensor.sls")
open(path, "w").close()
calls = []
args = soconfig.build_parser().parse_args([
"sync-yaml-mutation", path, "remove", "elasticfleet"
])
with patch.object(soconfig, "delete_setting_prefix",
side_effect=lambda *a, **kw: calls.append((a, kw))):
soconfig.cmd_sync_yaml_mutation(args)
self.assertEqual(calls[0][0], ("elasticfleet",))
self.assertEqual(calls[0][1]["node_id"], "h1_sensor")
if __name__ == "__main__":
unittest.main()
+30
@@ -314,6 +314,24 @@ EOSQL
fi
}
function sync_minion_config_to_db() {
log "INFO" "Syncing minion config to onionconfig for $MINION_ID"
/usr/sbin/so-config.py import-minion "$MINION_ID" --note "so-minion $OPERATION"
if [ $? -ne 0 ]; then
log "ERROR" "Failed to sync minion config to onionconfig for $MINION_ID"
return 1
fi
}
function purge_minion_config_from_db() {
log "INFO" "Purging minion config from onionconfig for $MINION_ID"
/usr/sbin/so-config.py purge-node "$MINION_ID" --note "so-minion delete"
if [ $? -ne 0 ]; then
log "ERROR" "Failed to purge minion config from onionconfig for $MINION_ID"
return 1
fi
}
# Create the minion file
function ensure_socore_ownership() {
log "INFO" "Setting socore ownership on minion files"
@@ -1088,6 +1106,10 @@ case "$OPERATION" in
log "ERROR" "Failed to setup minion files for $MINION_ID"
exit 1
}
sync_minion_config_to_db || {
log "ERROR" "Failed to sync minion config to onionconfig for $MINION_ID"
exit 1
}
updateMineAndApplyStates || {
log "ERROR" "Failed to update mine and apply states for $MINION_ID"
exit 1
@@ -1108,12 +1130,20 @@ case "$OPERATION" in
log "ERROR" "Failed to setup VM minion files for $MINION_ID"
exit 1
}
sync_minion_config_to_db || {
log "ERROR" "Failed to sync VM minion config to onionconfig for $MINION_ID"
exit 1
}
log "INFO" "Successfully added VM minion $MINION_ID"
;;
"delete")
log "INFO" "Removing minion $MINION_ID"
remove_postgres_telegraf_from_minion
purge_minion_config_from_db || {
log "ERROR" "Failed to purge minion config from onionconfig for $MINION_ID"
exit 1
}
deleteMinionFiles || {
log "ERROR" "Failed to delete minion files for $MINION_ID"
exit 1
+25 -1
@@ -25,6 +25,7 @@ def showUsage(args):
print(' get [-r] - Displays (to stdout) the value stored in the given key. Requires KEY arg. Use -r for raw output without YAML formatting.', file=sys.stderr)
print(' remove - Removes a yaml key, if it exists. Requires KEY arg.', file=sys.stderr)
print(' replace - Replaces (or adds) a new key and set its value. Requires KEY and VALUE args.', file=sys.stderr)
print('    purge          - Deletes the YAML file from disk, if it exists. No KEY arg.', file=sys.stderr)
print(' help - Prints this usage information.', file=sys.stderr)
print('', file=sys.stderr)
print(' Where:', file=sys.stderr)
@@ -53,7 +54,20 @@ def loadYaml(filename):
def writeYaml(filename, content):
file = open(filename, "w")
return yaml.safe_dump(content, file)
result = yaml.safe_dump(content, file)
file.close()
return result
def purgeFile(filename):
"""Delete a YAML file from disk. Idempotent; missing files are success."""
if os.path.exists(filename):
try:
os.remove(filename)
except Exception as e:
print(f"Failed to remove {filename}: {e}", file=sys.stderr)
return 1
return 0
def appendItem(content, key, listItem):
@@ -371,6 +385,15 @@ def get(args):
return 0
def purge(args):
"""purge YAML_FILE - delete the file from disk."""
if len(args) != 1:
print('Missing filename arg', file=sys.stderr)
showUsage(None)
return 1
return purgeFile(args[0])
def main():
args = sys.argv[1:]
@@ -388,6 +411,7 @@ def main():
"get": get,
"remove": remove,
"replace": replace,
"purge": purge,
}
code = 1
+28
@@ -991,3 +991,31 @@ class TestLoadYaml(unittest.TestCase):
soyaml.loadYaml("/tmp/so-yaml_test-unreadable.yaml")
sysmock.assert_called_with(1)
self.assertIn("Error reading file", mock_stderr.getvalue())
class TestPurge(unittest.TestCase):
def test_purge_missing_arg(self):
# showUsage calls sys.exit(1); patch it like the other tests do.
with patch('sys.exit', new=MagicMock()):
with patch('sys.stderr', new=StringIO()) as mock_stderr:
rc = soyaml.purge([])
self.assertEqual(rc, 1)
self.assertIn("Missing filename", mock_stderr.getvalue())
def test_purge_existing_file(self):
filename = "/tmp/so-yaml_test_purge.yaml"
with open(filename, "w") as f:
f.write("key: value\n")
rc = soyaml.purge([filename])
self.assertEqual(rc, 0)
import os as _os
self.assertFalse(_os.path.exists(filename))
def test_purge_missing_file_idempotent(self):
filename = "/tmp/so-yaml_test_purge_missing.yaml"
import os as _os
if _os.path.exists(filename):
_os.remove(filename)
rc = soyaml.purge([filename])
self.assertEqual(rc, 0)
@@ -33,8 +33,11 @@ so-elastic-fleet-stop --force
status "Deleting Fleet Data from Pillars..."
so-yaml.py remove /opt/so/saltstack/local/pillar/minions/{{ GLOBALS.minion_id }}.sls elasticfleet
/usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/minions/{{ GLOBALS.minion_id }}.sls remove elasticfleet --note "so-elastic-fleet-reset"
so-yaml.py remove /opt/so/saltstack/local/pillar/global/soc_global.sls global.fleet_grid_enrollment_token_general
/usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/global/soc_global.sls remove global.fleet_grid_enrollment_token_general --note "so-elastic-fleet-reset"
so-yaml.py remove /opt/so/saltstack/local/pillar/global/soc_global.sls global.fleet_grid_enrollment_token_heavy
/usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/global/soc_global.sls remove global.fleet_grid_enrollment_token_heavy --note "so-elastic-fleet-reset"
status "Restarting Kibana..."
so-kibana-restart --force
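Each `so-yaml.py` mutation in the reset script above is followed by a matching `so-config.py sync-yaml-mutation` call so the onionconfig store stays in step with the disk edit. A hypothetical Python helper showing the same pairing (the helper name and argument handling are assumptions, not part of the PR; only the two CLI invocations come from the diff):

```python
import subprocess

def mutate_and_sync(path, op, *args, note):
    """Apply a so-yaml.py mutation, then mirror it into onionconfig via
    so-config.py sync-yaml-mutation, as the shell does with `cmd1 && cmd2`."""
    mutate = ["/usr/sbin/so-yaml.py", op, path, *args]
    sync = ["/usr/sbin/so-config.py", "sync-yaml-mutation", path, op, *args,
            "--note", note]
    # Only mirror if the disk mutation succeeded, matching `&&` semantics.
    if subprocess.run(mutate).returncode != 0:
        return 1
    return subprocess.run(sync).returncode
```

Chaining with `&&` (or this guard) keeps the database from recording a mutation that never landed on disk.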
@@ -0,0 +1,20 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
# Deprecated: the old so_pillar schema has been replaced by SOC-owned
# onionconfig tables. SOC creates its schema on first startup.
postgres_schema_pillar_deprecated:
  test.nop

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}
@@ -17,7 +17,7 @@ engines:
          to:
            'KAFKA':
              - cmd.run:
-                 cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled True
+                 cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled True && /usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls replace kafka.enabled True --note "pillarWatch global.pipeline"
              - cmd.run:
                  cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver or G@role:so-searchnode' saltutil.kill_all_jobs
              - cmd.run:
@@ -28,7 +28,7 @@ engines:
          to:
            'REDIS':
              - cmd.run:
-                 cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False
+                 cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False && /usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls replace kafka.enabled False --note "pillarWatch global.pipeline"
              - cmd.run:
                  cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver or G@role:so-searchnode' saltutil.kill_all_jobs
              - cmd.run:
@@ -66,5 +66,5 @@ engines:
              - cmd.run:
                  cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' state.apply kafka.disabled,kafka.reset
              - cmd.run:
-                 cmd: /usr/sbin/so-yaml.py remove /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.reset
+                 cmd: /usr/sbin/so-yaml.py remove /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.reset && /usr/sbin/so-config.py sync-yaml-mutation /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls remove kafka.reset --note "pillarWatch kafka.reset"
      interval: 10
@@ -14,6 +14,8 @@
include:
  - salt.minion
  - salt.master.ext_pillar_postgres
  - salt.master.pg_notify_pillar_engine
{% if 'vrt' in salt['pillar.get']('features', []) %}
  - salt.cloud
  - salt.cloud.reactor_config_hypervisor
@@ -0,0 +1,24 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Deprecated. SOC/onionconfig owns the settings database now; this state only
# removes the old so_pillar ext_pillar config if it was previously deployed.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
ext_pillar_postgres_config_absent:
  file.absent:
    - name: /etc/salt/master.d/ext_pillar_postgres.conf
    - watch_in:
      - service: salt_master_service

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}
@@ -0,0 +1,37 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Deprecated. SOC/onionconfig owns the settings database now; this state only
# removes the old so_pillar notify engine and reactor config if previously
# deployed.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
pg_notify_pillar_engine_module_absent:
  file.absent:
    - name: /etc/salt/engines/pg_notify_pillar.py
    - watch_in:
      - service: salt_master_service

pg_notify_pillar_engine_config_absent:
  file.absent:
    - name: /etc/salt/master.d/pg_notify_pillar_engine.conf
    - watch_in:
      - service: salt_master_service

pg_notify_pillar_reactor_config_absent:
  file.absent:
    - name: /etc/salt/master.d/so_pillar_reactor.conf
    - watch_in:
      - service: salt_master_service

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}
@@ -100,6 +100,29 @@ so-soc:
    - file: socusersroles
    - file: socclientsroles

onionconfig_initial_import:
  cmd.run:
    - name: |
        set -e
        SOCONFIG=/usr/sbin/so-config.py
        if [ ! -x "$SOCONFIG" ]; then
          SOCONFIG=/opt/so/saltstack/default/salt/manager/tools/sbin/so-config.py
        fi
        for i in $(seq 1 60); do
          if docker exec so-postgres pg_isready -h 127.0.0.1 -U postgres -q >/dev/null 2>&1 \
              && curl -fsS --connect-timeout 2 http://{{ DOCKERMERGED.containers['so-soc'].ip }}:9822/ >/dev/null 2>&1; then
            "$SOCONFIG" wait-schema --timeout 120
            "$SOCONFIG" import-all --state-file /opt/so/state/onionconfig_initial_import.done
            exit 0
          fi
          sleep 2
        done
        echo "so-soc or so-postgres did not become ready within 120s" >&2
        exit 1
    - unless: test -f /opt/so/state/onionconfig_initial_import.done
    - require:
      - docker_container: so-soc

delete_so-soc_so-status.disabled:
  file.uncomment:
    - name: /opt/so/conf/so-status/so-status.conf
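The cmd.run block above is a bounded readiness gate: poll both dependencies (postgres and so-soc), and only once both answer does the one-shot import run. The same pattern in Python — the function name, check callables, and timings here are illustrative placeholders, not part of the PR:

```python
import time

def wait_until_ready(checks, timeout=120.0, interval=2.0):
    # Poll every `interval` seconds until every check passes or the
    # deadline expires, mirroring the seq/sleep loop in the state above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if all(check() for check in checks):
            return True
        time.sleep(interval)
    return False
```

Using a monotonic deadline rather than a fixed iteration count keeps the bound honest even if individual checks are slow.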
@@ -1057,6 +1057,11 @@ generate_passwords(){
  POSTGRESPASS=$(get_random_value)
  SOCSRVKEY=$(get_random_value 64)
  IMPORTPASS=$(get_random_value)
  # postsalt: salt-master connects to so_pillar.* as so_pillar_master, and the
  # so-postgres container needs a symmetric key for pgcrypto-encrypted secrets.
  # Both are generated here so they survive reinstall like the other secrets.
  PILLARMASTERPASS=$(get_random_value)
  SO_PILLAR_KEY=$(get_random_value 64)
}
generate_interface_vars() {
@@ -1853,7 +1858,34 @@ secrets_pillar(){
"secrets:"\
" import_pass: $IMPORTPASS"\
" influx_pass: $INFLUXPASS"\
" pillar_master_pass: $PILLARMASTERPASS"\
" postgres_pass: $POSTGRESPASS" > $local_salt_dir/pillar/secrets.sls
elif ! grep -q '^[[:space:]]*pillar_master_pass:' $local_salt_dir/pillar/secrets.sls; then
# Existing install pre-postsalt — append the new key without disturbing
# the values already on disk. Keys we already wrote stay; only the new
# pillar_master_pass is added.
info "Appending pillar_master_pass to existing Secrets Pillar"
if [ -z "$PILLARMASTERPASS" ]; then
PILLARMASTERPASS=$(get_random_value)
fi
printf ' pillar_master_pass: %s\n' "$PILLARMASTERPASS" >> $local_salt_dir/pillar/secrets.sls
fi
# postsalt: write the so_pillar pgcrypto master key to a 0400 file owned by
# root. The key itself is never read by Salt — schema_pillar.sls loads it
# into the so-postgres container via ALTER ROLE so_pillar_secret_owner SET
# so_pillar.master_key = '<key>'; the file just lets the value survive
# container restarts.
if [ ! -f /opt/so/conf/postgres/so_pillar.key ]; then
info "Generating so_pillar pgcrypto master key"
mkdir -p /opt/so/conf/postgres
if [ -z "$SO_PILLAR_KEY" ]; then
SO_PILLAR_KEY=$(get_random_value 64)
fi
umask 077
printf '%s' "$SO_PILLAR_KEY" > /opt/so/conf/postgres/so_pillar.key
chmod 0400 /opt/so/conf/postgres/so_pillar.key
chown root:root /opt/so/conf/postgres/so_pillar.key
fi
}
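The key-file block above combines three protections: a create-only guard (`[ ! -f ... ]`), a restrictive umask so the secret is never briefly world-readable, and a final chmod/chown. A Python sketch of the same pattern — the function name is an assumption; only the guard-plus-restrictive-mode idea comes from the shell above:

```python
import os

def write_secret_file(path, secret):
    # Create the parent directory, then write the key with restrictive
    # permissions from the first instant. O_EXCL refuses to overwrite an
    # existing key, matching the `[ ! -f ... ]` guard in the shell.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o400)
    except FileExistsError:
        return False  # key already present; leave it alone
    with os.fdopen(fd, "w") as f:
        f.write(secret)
    return True
```

Passing the mode to `os.open` avoids the shell's window between file creation and `chmod`, which is why the script also sets `umask 077` before writing.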