fix: correct allowed_states guard in ext_pillar_postgres + pg_notify_pillar_engine

Both SLS files used `sls.split('.')[0]` to derive what to look up in
allowed_states. For these files (sls='salt.master.ext_pillar_postgres'
and sls='salt.master.pg_notify_pillar_engine') that returns 'salt',
which is never in any role's allowed_states list — only specific keys
like 'salt.master', 'salt.minion', 'salt.cloud' are. The guard's else
branch fired on every highstate, emitting two cosmetic
  ID: <sls>_state_not_allowed
  Function: test.fail_without_changes
  Comment: Failure!
entries that polluted the so-setup error summary even on green installs.
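
The mismatch is easy to reproduce outside Salt. A minimal Python sketch, using an illustrative subset standing in for the real allowed_states map:

```python
# Why the old guard could never match: for a dotted SLS path,
# split('.')[0] yields only the top-level package name ('salt'),
# while allowed_states holds fully qualified keys like 'salt.master'.
allowed_states = ['salt.master', 'salt.minion', 'salt.cloud']  # illustrative subset

for sls in ('salt.master.ext_pillar_postgres',
            'salt.master.pg_notify_pillar_engine'):
    key = sls.split('.')[0]            # always 'salt' for both files
    print(sls, '->', repr(key), key in allowed_states)
    # -> 'salt' False, so the else branch (test.fail_without_changes) fires
```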

Both states drop config under /etc/salt/master.d/ and watch_in the
salt-master service, so the natural intent is "only run when this node
hosts the salt master". Switching the guard to a literal
  {% if 'salt.master' in allowed_states %}
expresses that directly without string-parsing the SLS path, and
matches the existing membership in manager_states (which is in turn
included in every manager-bearing role: so-eval, so-manager,
so-managerhype, so-managersearch, so-standalone, so-import).
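
In context, the corrected guard reads roughly as below (a sketch: the state bodies and surrounding lines are abbreviated, and the closing tags are assumed from normal Jinja structure rather than shown in the diff):

```jinja
{% from 'allowed_states.map.jinja' import allowed_states %}
{# Run only on nodes whose role grants 'salt.master',
   i.e. the node actually hosting the salt-master service. #}
{% if 'salt.master' in allowed_states %}
{% if salt['pillar.get']('postgres:so_pillar:enabled', False) %}
...
{% endif %}
{% endif %}
```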
Author: Mike Reeves
Date:   2026-05-04 19:17:30 -04:00
Parent: 2e411625c4
Commit: f1746b0f59

2 changed files with 2 additions and 2 deletions
@@ -10,7 +10,7 @@
 # and the importer has run at least once.
 {% from 'allowed_states.map.jinja' import allowed_states %}
-{% if sls.split('.')[0] in allowed_states %}
+{% if 'salt.master' in allowed_states %}
 {% if salt['pillar.get']('postgres:so_pillar:enabled', False) %}
@@ -12,7 +12,7 @@
 # ext_pillar config so the three components flip together.
 {% from 'allowed_states.map.jinja' import allowed_states %}
-{% if sls.split('.')[0] in allowed_states %}
+{% if 'salt.master' in allowed_states %}
 {% if salt['pillar.get']('postgres:so_pillar:enabled', False) %}