securityonion/salt/postgres/files/schema/pillar/008_change_notify.sql
Mike Reeves 92a7bb3053 fix: get postsalt's PG-canonical pillar actually working end-to-end
Five blockers turned up the first time the so_pillar schema was applied
against a fresh standalone install. Fixing them in order:

1. 006_rls.sql ordering bug
   006 GRANTed privileges on so_pillar.change_queue and its sequence, but
   the table isn't created until 008_change_notify.sql. 006 errored mid-file
   with
   "relation so_pillar.change_queue does not exist", short-circuiting the
   rest of the pillar staging chain. Moved the three change_queue grants
   into 008 alongside the table creation so each file is self-contained.

2. so_pillar_* roles unable to log in
   006 created the roles as NOLOGIN with no password set. The salt-master's
   ext_pillar (postgres) and the pg_notify_pillar engine both connect as
   so_pillar_master over TCP, so both failed with "password authentication
   failed for user so_pillar_master". Added a templated cmd.run step in
   schema_pillar.sls (so_pillar_role_login_passwords) that ALTERs all three
   roles WITH LOGIN PASSWORD, pulling the password from
   secrets:pillar_master_pass, the same value ext_pillar_postgres.conf.jinja
   and the engines.conf pg_notify_pillar block render with (see the first
   SQL sketch after this list).

3. Missing GRANT CONNECT ON DATABASE securityonion
   USAGE on the schema is granted in 006 but CONNECT on the database isn't.
   With logins fixed, the engine and ext_pillar authenticated but then
   failed with "permission denied for database securityonion". Added the
   explicit GRANT CONNECT in 006 (see the second sketch after this list).

4. psycopg2 missing from salt's bundled python
   /opt/saltstack/salt/bin/python3 doesn't ship psycopg2 by default, so
   when salt-master tries to load the pg_notify_pillar engine its
   `import psycopg2` fails inside salt's loader and the engine silently
   doesn't start (no error in the salt log; you only notice when nothing
   ever drains so_pillar.change_queue). Added a pip.installed state in
   schema_pillar.sls bound to that interpreter via bin_env.

5. engines.conf vs pg_notify_pillar_engine.conf list-replace
   Salt's master.d/*.conf merge replaces top-level lists rather than
   concatenating them. The engine config used to live in its own
   master.d/pg_notify_pillar_engine.conf with `engines: [pg_notify_pillar]`
   alongside the legacy `engines.conf` carrying `engines: [checkmine,
   pillarWatch]`. Whichever loaded last won, so the engine never showed
   up in the loaded set even when the file existed. Folded the
   pg_notify_pillar declaration into engines.conf (now jinja-rendered,
   gated on postgres:so_pillar:enabled), dropped the standalone state from
   pg_notify_pillar_engine.sls, and deleted the now-orphaned conf jinja.
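
A minimal sketch of the SQL the so_pillar_role_login_passwords step renders
for item 2. The first two role names appear elsewhere in this change; the
third (so_pillar_reader) and the password literal are illustrative
stand-ins, the real value is templated in from secrets:pillar_master_pass:

   -- Sketch only: so_pillar_reader and the literal password are assumptions;
   -- the rendered password comes from secrets:pillar_master_pass.
   ALTER ROLE so_pillar_master WITH LOGIN PASSWORD 'rendered-secret';
   ALTER ROLE so_pillar_writer WITH LOGIN PASSWORD 'rendered-secret';
   ALTER ROLE so_pillar_reader WITH LOGIN PASSWORD 'rendered-secret';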
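And the database-level grant from item 3, sketched under the assumption
that 006 grants to the same three so_pillar_* roles (the exact role list
in 006 isn't shown here):

   GRANT CONNECT ON DATABASE securityonion
      TO so_pillar_master, so_pillar_writer, so_pillar_reader;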

End state validated against a live standalone-net install on the dev rig:
salt-master ext_pillar reads from so_pillar.* with no errors, the
pg_notify_pillar engine LISTENs on so_pillar_change and drains the
change_queue (134-row backlog → 0 within seconds), and a so-yaml replace
on a pillar key flows disk → PG → ext_pillar → salt pillar.get with the
new value visible after a saltutil.refresh_pillar.
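
The backlog drain can be re-checked with a query along these lines (a
sketch, not the exact check run on the rig):

   -- Unprocessed backlog; 134 before the engine's first drain, 0 after.
   SELECT count(*) FROM so_pillar.change_queue WHERE processed_at IS NULL;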
2026-05-04 19:47:38 -04:00


-- pg_notify-driven change fan-out for so_pillar.pillar_entry.
--
-- Two layers:
-- 1. so_pillar.change_queue — durable, drained by the salt-master engine.
--    Survives engine downtime, de-duplicated by id, processed once.
-- 2. pg_notify('so_pillar_change') — wakeup signal. Payload is the
--    change_queue row id and locator (no secret data — channels are
--    snoopable by anyone with LISTEN).
--
-- The salt-master engine LISTENs on the channel for low-latency wakeup,
-- then SELECTs unprocessed change_queue rows so a missed notification
-- (engine restart, network blip) self-heals on the next event.
CREATE TABLE IF NOT EXISTS so_pillar.change_queue (
    id           bigserial PRIMARY KEY,
    scope        text NOT NULL,
    role_name    text,
    minion_id    text,
    pillar_path  text NOT NULL,
    op           text NOT NULL CHECK (op IN ('INSERT','UPDATE','DELETE')),
    enqueued_at  timestamptz NOT NULL DEFAULT now(),
    processed_at timestamptz
);
-- Hot index for the engine's drain query.
CREATE INDEX IF NOT EXISTS ix_change_queue_unprocessed
    ON so_pillar.change_queue (id)
    WHERE processed_at IS NULL;
-- Retention index: pg_cron job in 007 sweeps processed rows older than 7d.
CREATE INDEX IF NOT EXISTS ix_change_queue_processed_at
    ON so_pillar.change_queue (processed_at)
    WHERE processed_at IS NOT NULL;
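-- Illustrative shapes these two indexes serve (sketches only; the real
-- drain lives in the salt-master engine and the sweep in 007's pg_cron job):
--
--   SELECT id, scope, role_name, minion_id, pillar_path, op
--     FROM so_pillar.change_queue
--    WHERE processed_at IS NULL
--    ORDER BY id;
--
--   UPDATE so_pillar.change_queue
--      SET processed_at = now()
--    WHERE id = ANY($1) AND processed_at IS NULL;
--
--   DELETE FROM so_pillar.change_queue
--    WHERE processed_at < now() - interval '7 days';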
CREATE OR REPLACE FUNCTION so_pillar.fn_pillar_entry_notify()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
    v_row record;
    v_id  bigint;
BEGIN
    IF TG_OP = 'DELETE' THEN
        v_row := OLD;
    ELSE
        v_row := NEW;
    END IF;

    INSERT INTO so_pillar.change_queue
        (scope, role_name, minion_id, pillar_path, op)
    VALUES
        (v_row.scope, v_row.role_name, v_row.minion_id, v_row.pillar_path, TG_OP)
    RETURNING id INTO v_id;

    -- Payload is the queue id + locator only. Engine joins back to
    -- pillar_entry if it needs the data — keeps secrets off the wire.
    PERFORM pg_notify('so_pillar_change', json_build_object(
        'queue_id',    v_id,
        'scope',       v_row.scope,
        'role_name',   v_row.role_name,
        'minion_id',   v_row.minion_id,
        'pillar_path', v_row.pillar_path,
        'op',          TG_OP
    )::text);

    RETURN NULL;
END;
$$;
DROP TRIGGER IF EXISTS tg_pillar_entry_notify ON so_pillar.pillar_entry;
CREATE TRIGGER tg_pillar_entry_notify
    AFTER INSERT OR UPDATE OR DELETE
    ON so_pillar.pillar_entry
    FOR EACH ROW
    EXECUTE FUNCTION so_pillar.fn_pillar_entry_notify();
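-- Illustrative smoke test from psql: LISTEN, touch a row, and both the
-- notification and a fresh change_queue row should appear. The WHERE
-- clause value below is a stand-in for a real locator.
--
--   LISTEN so_pillar_change;
--   UPDATE so_pillar.pillar_entry
--      SET pillar_path = pillar_path   -- no-op touch still fires the trigger
--    WHERE pillar_path = 'global/some/key';
--   SELECT * FROM so_pillar.change_queue ORDER BY id DESC LIMIT 1;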
-- Role grants on the change_queue table. Lived in 006_rls.sql historically but
-- moved here so the GRANT references resolve — 006 runs before this file does.
-- Engine reads + drains the change queue from the salt-master process. It
-- needs SELECT to find unprocessed rows and UPDATE to mark them processed.
-- The queue contains only locator metadata (no pillar data), so the master
-- role's existing privilege footprint is unchanged in practice.
GRANT SELECT, UPDATE ON so_pillar.change_queue TO so_pillar_master;
GRANT USAGE ON SEQUENCE so_pillar.change_queue_id_seq TO so_pillar_master;
-- Writer needs INSERT (the trigger runs as table owner, so this is just for
-- direct testing / manual replays from psql).
GRANT INSERT ON so_pillar.change_queue TO so_pillar_writer;