fix: get postsalt's PG-canonical pillar actually working end-to-end

Five blockers turned up the first time the so_pillar schema was applied
against a fresh standalone install. Fixed, in order:

1. 006_rls.sql ordering bug
   006 issued GRANTs on so_pillar.change_queue and its sequence, but the
   table isn't created until 008_change_notify.sql, so 006 errored mid-file
   with "relation so_pillar.change_queue does not exist", short-circuiting
   the rest of the pillar staging chain. Moved the three change_queue grants
   into 008 alongside the table creation so each file is self-contained.

2. so_pillar_* roles unable to log in
   006 created the roles as NOLOGIN and set no password. Salt-master's
   ext_pillar (postgres) and the pg_notify_pillar engine both connect as
   so_pillar_master via TCP, so both came up with "password authentication
   failed for user so_pillar_master". Added a templated cmd.run step in
   schema_pillar.sls (so_pillar_role_login_passwords) that ALTERs all three
   roles WITH LOGIN PASSWORD pulling from secrets:pillar_master_pass — the
   same password ext_pillar_postgres.conf.jinja and the engines.conf
   pg_notify_pillar block render with.
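   The rendered statements can be sanity-checked outside jinja; a minimal
   sketch (the `render_login_sql` helper and its quote-escaping are
   illustrative, not part of the commit):

   ```python
   # Illustrative helper (not in the commit): mirrors the ALTER ROLE heredoc
   # that schema_pillar.sls renders, so the statements can be inspected
   # without a live salt-master.
   ROLES = ("so_pillar_master", "so_pillar_writer", "so_pillar_secret_owner")

   def render_login_sql(pillar):
       # One shared password from secrets:pillar_master_pass, the same key
       # ext_pillar_postgres.conf.jinja and engines.conf render with.
       pw = pillar["secrets"]["pillar_master_pass"].replace("'", "''")  # SQL-escape quotes
       return [f"ALTER ROLE {role} WITH LOGIN PASSWORD '{pw}';" for role in ROLES]
   ```

   Note the real jinja heredoc does no SQL escaping, so a generated
   password containing a single quote would break it; the escape above is
   what a hardened version would add.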

3. Missing GRANT CONNECT ON DATABASE securityonion
   USAGE on the schema is granted in 006 but CONNECT on the database isn't.
   Engine + ext_pillar succeeded auth then died with "permission denied
   for database securityonion". Added the explicit GRANT CONNECT in 006.

4. psycopg2 missing from salt's bundled python
   /opt/saltstack/salt/bin/python3 doesn't ship psycopg by default, so
   when salt-master tries to load the pg_notify_pillar engine its
   `import psycopg2` fails inside salt's loader and the engine silently
   doesn't start (no error in the salt log — you only notice when nothing
   ever drains so_pillar.change_queue). Added a pip.installed state in
   schema_pillar.sls bound to that interpreter via bin_env.
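   Because the loader hides the ImportError, the quickest pre-flight check
   is to probe the bundled interpreter out-of-process; a sketch (the
   `engine_deps_ok` helper name is made up):

   ```python
   import subprocess

   def engine_deps_ok(python_bin, modules=("psycopg2",)):
       """Return True if `python_bin` can import every listed module.

       Probes the interpreter in a subprocess, since salt's loader swallows
       the ImportError and the engine just never appears in the loaded set.
       """
       probe = ";".join(f"import {m}" for m in modules)
       result = subprocess.run([python_bin, "-c", probe],
                               capture_output=True, text=True)
       return result.returncode == 0

   # e.g. engine_deps_ok("/opt/saltstack/salt/bin/python3") before restarting
   # the master tells you whether the pip.installed state actually took.
   ```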

5. engines.conf vs pg_notify_pillar_engine.conf list-replace
   Salt's master.d/*.conf merge replaces top-level lists rather than
   concatenating them. The engine config used to live in its own
   master.d/pg_notify_pillar_engine.conf with `engines: [pg_notify_pillar]`
   alongside the legacy `engines.conf` carrying `engines: [checkmine,
   pillarWatch]`. Whichever loaded last won, so the engine never showed
   up in the loaded set even when the file existed. Folded the
   pg_notify_pillar declaration into engines.conf (now jinja-rendered,
   gated on postgres:so_pillar:enabled), dropped the standalone state from
   pg_notify_pillar_engine.sls, and deleted the now-orphaned conf jinja.
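   The list-replace behavior can be modeled with a toy merge (a sketch of
   the semantics described above, not salt's actual merge code):

   ```python
   def merge_master_d(*confs):
       """Toy model of the master.d/*.conf merge described above: each later
       file's top-level keys overwrite earlier ones wholesale, so two files
       that both declare `engines:` never concatenate -- last load wins."""
       merged = {}
       for conf in confs:
           merged.update(conf)  # list values are replaced, not extended
       return merged

   legacy = {"engines": [{"checkmine": {"interval": 60}}, "pillarWatch"]}
   engine_conf = {"engines": [{"pg_notify_pillar": {"channel": "so_pillar_change"}}]}

   # Load order decides which list survives; the other vanishes silently.
   survivors = merge_master_d(legacy, engine_conf)["engines"]
   ```

   Hence the fix: keep every engine under one `engines:` key in a single
   file instead of relying on load order.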

End state validated against a live standalone-net install on the dev rig:
salt-master ext_pillar reads from so_pillar.* with no errors, the
pg_notify_pillar engine LISTENs on so_pillar_change and drains the
change_queue (134-row backlog → 0 within seconds), and a so-yaml replace
on a pillar key flows disk → PG → ext_pillar → salt pillar.get with the
new value visible after a saltutil.refresh_pillar.
Mike Reeves
2026-05-04 19:47:38 -04:00
parent 155b5c5d66
commit 92a7bb3053
7 changed files with 85 additions and 42 deletions
+10 -9
@@ -28,6 +28,14 @@ BEGIN
END
$$;
-- USAGE on the schema is the bare minimum needed to reference its tables.
-- CONNECT on the database is needed before the role can establish a session
-- at all (default privileges on a new DB grant CONNECT to PUBLIC, but if the
-- securityonion database is restricted that grant has to be explicit).
-- Password + LOGIN privileges are set later in schema_pillar.sls because
-- the password lives in pillar (secrets:pillar_master_pass) and plain SQL
-- can't substitute pillar values.
GRANT CONNECT ON DATABASE securityonion TO so_pillar_master, so_pillar_writer, so_pillar_secret_owner;
GRANT USAGE ON SCHEMA so_pillar TO so_pillar_master, so_pillar_writer, so_pillar_secret_owner;
-- Read access for ext_pillar through the views only.
@@ -37,15 +45,8 @@ GRANT SELECT ON so_pillar.v_pillar_global,
TO so_pillar_master;
GRANT EXECUTE ON FUNCTION so_pillar.fn_pillar_secrets(text) TO so_pillar_master;
-- Engine reads + drains the change queue from the salt-master process. It
-- needs SELECT to find unprocessed rows and UPDATE to mark them processed.
-- The queue contains only locator metadata (no pillar data), so the master
-- role's existing privilege footprint is unchanged in practice.
GRANT SELECT, UPDATE ON so_pillar.change_queue TO so_pillar_master;
GRANT USAGE ON SEQUENCE so_pillar.change_queue_id_seq TO so_pillar_master;
-- Writer needs INSERT (the trigger runs as table owner, so this is just for
-- direct testing / manual replays from psql).
GRANT INSERT ON so_pillar.change_queue TO so_pillar_writer;
-- (change_queue grants live in 008_change_notify.sql alongside the table itself,
-- since the table doesn't exist until 008 runs.)
-- Writer needs CRUD on pillar_entry/minion/role_member plus access to seed tables.
GRANT SELECT, INSERT, UPDATE, DELETE
@@ -75,3 +75,15 @@ CREATE TRIGGER tg_pillar_entry_notify
ON so_pillar.pillar_entry
FOR EACH ROW
EXECUTE FUNCTION so_pillar.fn_pillar_entry_notify();
-- Role grants on the change_queue table. Lived in 006_rls.sql historically but
-- moved here so the GRANT references resolve — 006 runs before this file does.
-- Engine reads + drains the change queue from the salt-master process. It
-- needs SELECT to find unprocessed rows and UPDATE to mark them processed.
-- The queue contains only locator metadata (no pillar data), so the master
-- role's existing privilege footprint is unchanged in practice.
GRANT SELECT, UPDATE ON so_pillar.change_queue TO so_pillar_master;
GRANT USAGE ON SEQUENCE so_pillar.change_queue_id_seq TO so_pillar_master;
-- Writer needs INSERT (the trigger runs as table owner, so this is just for
-- direct testing / manual replays from psql).
GRANT INSERT ON so_pillar.change_queue TO so_pillar_writer;
+31 -1
@@ -93,13 +93,43 @@ so_pillar_master_key_configure:
    - require:
      - cmd: so_pillar_apply_{{ sql_files[-1] | replace('.', '_') }}
# Set login passwords on the so_pillar_* roles. 006_rls.sql creates the roles
# as NOLOGIN with no password (plain SQL can't substitute pillar values), so
# the salt-master ext_pillar and the pg_notify_pillar engine — both of which
# connect as so_pillar_master via TCP — would fail with "password
# authentication failed" without this step. The password lives in pillar
# under secrets:pillar_master_pass (generated by setup/so-functions::secrets_pillar)
# and is the same one rendered into ext_pillar_postgres.conf.jinja and the
# engines.conf pg_notify_pillar block, so all three sides agree.
so_pillar_role_login_passwords:
  cmd.run:
    - name: |
        docker exec -i so-postgres psql -v ON_ERROR_STOP=1 -U postgres -d securityonion <<EOSQL
        ALTER ROLE so_pillar_master WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
        ALTER ROLE so_pillar_writer WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
        ALTER ROLE so_pillar_secret_owner WITH LOGIN PASSWORD '{{ pillar['secrets']['pillar_master_pass'] }}';
        EOSQL
    - require:
      - cmd: so_pillar_master_key_configure
# Install psycopg2 into salt-master's bundled python so the pg_notify_pillar
# engine module can `import psycopg2`. Without this the engine's import fails
# silently in salt's loader and the engine just never starts. salt's bundled
# python at /opt/saltstack/salt/bin/python3 doesn't ship psycopg by default.
so_pillar_psycopg2_in_salt_python:
  pip.installed:
    - name: psycopg2-binary
    - bin_env: /opt/saltstack/salt/bin/python3
    - require:
      - cmd: so_pillar_role_login_passwords
# Run the importer once after the schema is in place. Idempotent — re-runs
# with no SLS edits produce zero row changes.
so_pillar_initial_import:
  cmd.run:
    - name: /usr/sbin/so-pillar-import --yes --reason 'schema_pillar.sls initial import'
    - require:
      - cmd: so_pillar_master_key_configure
      - pip: so_pillar_psycopg2_in_salt_python
# Flip so-yaml from dual-write to PG-canonical for managed paths now that
# the schema and importer are both in place. Bootstrap files (secrets.sls,
+20
@@ -1,7 +1,27 @@
engines_dirs:
  - /etc/salt/engines
# All salt-master engines must be declared in this single file.
# Salt's master.d/*.conf merge replaces top-level lists rather than
# concatenating them, so a sibling .conf with its own `engines:` list
# would silently overwrite this one (only the last loaded file's list
# would survive). Anything new — including postsalt's pg_notify_pillar
# engine, gated on postgres:so_pillar:enabled below — gets appended
# here under the same `engines:` key.
engines:
{% if salt['pillar.get']('postgres:so_pillar:enabled', False) %}
  - pg_notify_pillar:
      host: {{ pillar.get('postgres', {}).get('host', '127.0.0.1') }}
      port: {{ pillar.get('postgres', {}).get('port', 5432) }}
      dbname: securityonion
      user: so_pillar_master
      password: {{ pillar['secrets']['pillar_master_pass'] }}
      channel: so_pillar_change
      debounce_ms: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_debounce_ms', 500) }}
      reconnect_backoff: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_reconnect_backoff', 5) }}
      backlog_interval: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_backlog_interval', 30) }}
      batch_limit: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_batch_limit', 500) }}
{% endif %}
  - checkmine:
      interval: 60
  - pillarWatch:
+1
@@ -63,6 +63,7 @@ engines_config:
  file.managed:
    - name: /etc/salt/master.d/engines.conf
    - source: salt://salt/files/engines.conf
    - template: jinja
# update the bootstrap script when used for salt-cloud
salt_bootstrap_cloud:
@@ -1,20 +0,0 @@
# /etc/salt/master.d/pg_notify_pillar_engine.conf
# Rendered by salt/salt/master/pg_notify_pillar_engine.sls.
#
# Subscribes the salt-master to so_pillar.change_queue via LISTEN
# so_pillar_change. The engine drains queued changes and re-publishes
# them on the event bus as 'so/pillar/changed'. Reactor wiring is in
# so_pillar_reactor.conf.
engines:
  - pg_notify_pillar:
      host: {{ pillar.get('postgres', {}).get('host', '127.0.0.1') }}
      port: {{ pillar.get('postgres', {}).get('port', 5432) }}
      dbname: securityonion
      user: so_pillar_master
      password: {{ pillar['secrets']['pillar_master_pass'] }}
      channel: so_pillar_change
      debounce_ms: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_debounce_ms', 500) }}
      reconnect_backoff: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_reconnect_backoff', 5) }}
      backlog_interval: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_backlog_interval', 30) }}
      batch_limit: {{ pillar.get('postgres', {}).get('so_pillar', {}).get('engine_batch_limit', 500) }}
+11 -12
@@ -3,11 +3,17 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Deploys the pg_notify_pillar engine module + its master.d config so the
# Deploys the pg_notify_pillar engine module + its reactor config so the
# salt-master subscribes to so_pillar.change_queue and republishes changes
# on the salt event bus as so/pillar/changed. Reactor (so_pillar_changed.sls)
# matches that tag and dispatches the appropriate orch.
#
# The actual `engines:` declaration lives in salt/salt/files/engines.conf
# (jinja-rendered, also gated on postgres:so_pillar:enabled). It has to live
# in a single file because salt's master.d/*.conf merge replaces top-level
# lists rather than concatenating them — splitting `engines:` across multiple
# .conf files leaves only one loaded.
#
# Gated on the same postgres:so_pillar:enabled flag as the schema and
# ext_pillar config so the three components flip together.
@@ -27,17 +33,6 @@ pg_notify_pillar_engine_module:
    - watch_in:
      - service: salt_master_service
pg_notify_pillar_engine_config:
  file.managed:
    - name: /etc/salt/master.d/pg_notify_pillar_engine.conf
    - source: salt://salt/master/files/pg_notify_pillar_engine.conf.jinja
    - template: jinja
    - mode: '0640'
    - user: root
    - group: salt
    - watch_in:
      - service: salt_master_service
pg_notify_pillar_reactor_config:
  file.managed:
    - name: /etc/salt/master.d/so_pillar_reactor.conf
@@ -59,6 +54,10 @@ pg_notify_pillar_engine_module_absent:
      - service: salt_master_service
pg_notify_pillar_engine_config_absent:
  # No-op on current installs: the engine config used to live in
  # master.d/pg_notify_pillar_engine.conf but was folded into engines.conf to
  # work around salt's master.d list-replace merge. Keep this state so any old
  # installs that still have the file get it cleaned up (file.absent does
  # nothing once the file is gone).
  file.absent:
    - name: /etc/salt/master.d/pg_notify_pillar_engine.conf
    - watch_in: