Right now, when you set `module_config` entries in your pillar data
like this:
```
salt:
  minion:
    module_config:
      smtp.from: 'Kali Salt <admins+salt@kali.org>'
      smtp.to: 'Kali Admins <admins+salt@kali.org>'
      smtp.host: localhost
      smtp.subject: 'Results of salt actions on'
      smtp.fields: id,fun
```
On each run, you will always get a different ordering of the various
fields in the minion configuration file, leading to spurious restarts
of the minion and admin annoyance:
```
ID: salt-minion
Function: file.recurse
Name: /etc/salt/minion.d
Result: True
Comment: Recursively updated /etc/salt/minion.d
Started: 13:39:25.689775
Duration: 874.318 ms
Changes:
  ----------
  /etc/salt/minion.d/f_defaults.conf:
    ----------
    diff:
      ---
      +++
      @@ -930,10 +930,10 @@
       # A dict for the test module:
       #test.baz: {spam: sausage, cheese: bread}
       #
      +smtp.fields: id,fun
      +smtp.from: Kali Salt <admins+salt@kali.org>
       smtp.to: Kali Admins <admins+salt@kali.org>
      -smtp.fields: id,fun
       smtp.host: localhost
      -smtp.from: Kali Salt <admins+salt@kali.org>
       smtp.subject: Results of salt actions on
```
With the change here, this bad behaviour is gone...
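The fix boils down to rendering the keys in a deterministic order. A minimal
sketch of the idea using Jinja's built-in `dictsort` filter (the variable name
below is illustrative, not the formula's exact template):
```
{#- illustrative: iterate module_config in sorted key order so the
    rendered file is byte-identical across runs -#}
{%- for key, value in module_config | dictsort %}
{{ key }}: {{ value }}
{%- endfor %}
```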
Make the upstream salt package repository selectable, thus allowing the
use of archived salt versions (hosted in
https://archive.repo.saltproject.io), as well as custom salt versions
hosted in alternate repositories.
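A hedged pillar sketch of what selecting the repository could look like (the
key names here are illustrative, not the formula's exact schema):
```
# illustrative pillar layout; check the formula's pillar.example
# for the real key names
salt:
  version: 3004.2
  # point at the archive repo instead of the default one,
  # or at any alternate repository hosting custom builds
  repo_base_url: https://archive.repo.saltproject.io
```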
* The fix for CVE-2021-25283 enables Jinja2 safe mode, which breaks the
`'dict' in x.__class__.__name__` workaround
* The workaround is no longer needed as CentOS 6 is EOL
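For reference, the workaround presumably existed because the very old Jinja2
shipped on CentOS 6 predates the built-in `mapping` test; on any current
Jinja2 the same check can be written directly, and it also works under safe
mode. A minimal sketch:
```
{#- old workaround, needed only on very old Jinja2: -#}
{%- if 'dict' in x.__class__.__name__ %}...{%- endif %}
{#- modern equivalent, safe-mode friendly: -#}
{%- if x is mapping %}...{%- endif %}
```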
Use an ugly `zypper lr --uri` hack to work around the failure that occurs
when the `base_url` already exists under a different name:
```
ID: salt-pkgrepo-install-saltstack-suse
Function: pkgrepo.managed
Name: systemsmanagement_saltstack_products
Result: False
Comment: Failed to configure repo 'systemsmanagement_saltstack_products':
Repository 'systemsmanagement_saltstack_products' already exists as 'systemsmanagement_saltstack'.
Started: 09:28:39.154054
Duration: 2760.727 ms
```
Upstream code:
* 45cc49daed/salt/modules/zypperpkg.py (L1262-L1265)
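The hack amounts to asking zypper which URIs are already registered and only
managing the repository when the `base_url` is not among them. A rough sketch
of the idea in SLS form (state and variable names are illustrative):
```
{#- illustrative: skip pkgrepo.managed when the URI is already
    registered under another repo name -#}
{%- set known_uris = salt['cmd.run']('zypper lr --uri') %}
{%- if base_url not in known_uris %}
salt-pkgrepo-install-saltstack-suse:
  pkgrepo.managed:
    - name: systemsmanagement_saltstack_products
    - baseurl: {{ base_url }}
{%- endif %}
```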
When running a highstate on the salt-master to deploy itself, the run fails
with 'Authentication error occurred' because the master restarts halfway
through.
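One common way around this (adapted from the pattern the Salt FAQ suggests for
restarting a minion mid-run; the state and requisite names here are
illustrative) is to restart the service in a background command so the
highstate can finish first:
```
# illustrative: restart the master out-of-band so the state run
# is not killed halfway through
restart-salt-master:
  cmd.run:
    - name: 'salt-call --local service.restart salt-master'
    - bg: True
    - onchanges:
      - file: salt-master-config
```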
Instead of the default service.running + enabled, you can control
the actual service state via pillar.
You can even say 'state = ignore', in which case no state will be
generated to control the service.
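A hedged pillar sketch of what this could look like (key names are
illustrative; the formula's pillar.example has the authoritative layout):
```
# illustrative pillar layout, not the formula's exact schema
salt:
  minion:
    service:
      state: ignore   # or running (default), dead, ...
```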