Compare commits


92 Commits

Author SHA1 Message Date
Jorge Reyes
4014741562 Merge pull request #15113 from Security-Onion-Solutions/reyesj2/es-8188
generate new elastic agents in post soup
2025-10-07 13:11:55 -05:00
reyesj2
564374a8fb generate new elastic agents in post soup 2025-10-07 12:21:26 -05:00
Jorge Reyes
76f500f701 temp patch for soup'n 2025-10-06 16:51:18 -05:00
Jorge Reyes
dcfe6a1674 Merge pull request #15110 from Security-Onion-Solutions/reyesj2/es-8188
Elastic 8.18.8 elastic agent build
2025-10-06 16:26:34 -05:00
reyesj2
39432198cc Elastic 8.18.8 elastic agent build 2025-10-06 16:25:52 -05:00
Jorge Reyes
325e7ff44e Merge pull request #15109 from Security-Onion-Solutions/reyesj2/es-8188
es upgrade 8.18.8 pipeline updates
2025-10-06 16:23:55 -05:00
reyesj2
7af95317db es upgrade 8.18.8 pipeline updates 2025-10-06 16:23:22 -05:00
Jorge Reyes
ece25176cd Merge pull request #15108 from Security-Onion-Solutions/reyesj2/es-8188
es 8.18.8
2025-10-06 12:57:21 -05:00
reyesj2
8675193d1f elasticsearch upgrade 8.18.8 2025-10-06 12:56:31 -05:00
Jorge Reyes
5186603dbd Merge pull request #15107 from Security-Onion-Solutions/2.4/dev
2.4/dev
2025-10-06 12:42:47 -05:00
Jorge Reyes
3db6542398 Merge pull request #15105 from Security-Onion-Solutions/reyesj2/logstashout
update logstash fleet output policy
2025-10-03 12:07:36 -05:00
reyesj2
9fd1b9aec1 make sure to pass in variables to json_string.. 2025-10-02 16:38:47 -05:00
reyesj2
e5563eb9b8 send full new ssl config 2025-10-02 15:29:55 -05:00
Josh Patterson
e8de9e3c26 Merge pull request #15103 from Security-Onion-Solutions/byoh
byoh
2025-10-02 15:50:34 -04:00
reyesj2
c8a3603577 update logstash fleet output policy 2025-10-02 14:47:38 -05:00
Josh Patterson
05321cf1ed add --force-cleanup to nvme raid script 2025-10-02 15:03:11 -04:00
Josh Patterson
7deef44ff6 check defaults or pillar file 2025-10-02 11:55:50 -04:00
Jorge Reyes
37bfd9eb30 Update VERSION 2025-10-01 15:36:54 -05:00
Josh Patterson
e3ac1dd1b4 Merge remote-tracking branch 'origin/2.4/dev' into byoh 2025-10-01 14:57:51 -04:00
Josh Patterson
86eca53d4b support for byodmodel 2025-10-01 14:57:25 -04:00
Jason Ertel
bfd3d822b1 Merge pull request #15092 from Security-Onion-Solutions/jertel/wip
updates for wiretap lib
2025-10-01 12:20:06 -04:00
Jason Ertel
030e4961d7 updates for wiretap lib 2025-10-01 12:13:56 -04:00
Matthew Wright
14bd92067b Merge pull request #15091 from Security-Onion-Solutions/mwright/soc_soc-fix
Made lowBalanceColorAlert global
2025-10-01 11:03:50 -04:00
Matthew Wright
066e227325 made lowBalanceColorAlert global 2025-10-01 11:01:10 -04:00
coreyogburn
f1cfb9cd91 Merge pull request #15087 from Security-Onion-Solutions/cogburn/health-timeout
New field for assistant health check
2025-09-30 15:49:52 -06:00
Corey Ogburn
5a2e704909 New field for assistant health check
The health check has a smaller, configurable timeout.
2025-09-30 15:33:20 -06:00
Jorge Reyes
f04e54d1d5 Merge pull request #15086 from Security-Onion-Solutions/reyesj2/fltpatch
less strict exits for fleet configuration
2025-09-30 15:26:50 -05:00
reyesj2
e9af46a8cb less strict exits for fleet configuration 2025-09-30 14:28:42 -05:00
Josh Patterson
b4b051908b Merge pull request #15082 from Security-Onion-Solutions/vlb2
fix hypervisor bridge setup
2025-09-29 17:19:22 -04:00
Jason Ertel
0148e5638c Merge pull request #15080 from Security-Onion-Solutions/jertel/wip
restart registry after upgrading images (in airgap mode)
2025-09-29 17:02:47 -04:00
Josh Patterson
c8814d0632 removed commented code 2025-09-29 16:58:45 -04:00
Jason Ertel
6c892fed78 restart registry after upgrading images (in airgap mode) 2025-09-29 16:47:05 -04:00
Josh Patterson
e775299480 so-user target minions with pillar elasticsearch:enabled:true 2025-09-26 15:43:49 -04:00
Josh Patterson
c4ca9c62aa Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-26 12:52:37 -04:00
Jorge Reyes
c37aeff364 Merge pull request #15075 from Security-Onion-Solutions/reyesj2/esfleetpatch
update so-elastic-fleet-setup
2025-09-26 11:36:35 -05:00
reyesj2
cdac49052f Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/esfleetpatch 2025-09-26 11:32:44 -05:00
reyesj2
8e5fa9576c create disabled so-manager_elasticsearch output policy first, update it then verify it is the only active output 2025-09-26 11:32:25 -05:00
Josh Patterson
cd04d1e5a7 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-25 16:06:36 -04:00
Josh Patterson
1fb558cc77 managerhype br0 setup 2025-09-25 16:06:25 -04:00
Jason Ertel
7f1b76912c Merge pull request #15072 from Security-Onion-Solutions/jertel/wip
retry kratos pulls since this is the first image to install during setup
2025-09-25 15:45:02 -04:00
Jason Ertel
3a2ceb0b6f retry kratos pulls since this is the first image to install during setup 2025-09-25 15:40:00 -04:00
Matthew Wright
1345756fce Merge pull request #15071 from Security-Onion-Solutions/mwright/temp
Updated default investigation prompt
2025-09-25 15:18:20 -04:00
Matthew Wright
d81d9a0722 small tweak to investigation prompt 2025-09-25 14:45:06 -04:00
Jorge Reyes
55074fda69 Merge pull request #15070 from Security-Onion-Solutions/reyesj2-patch-1
make sure fleet-default-output is not set as either default output p…
2025-09-25 09:55:54 -05:00
Jorge Reyes
23e12811a1 make sure fleet-default-output is not set as either default output policy 2025-09-25 09:51:32 -05:00
Josh Patterson
5d1edf6d86 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-24 17:32:08 -04:00
Josh Patterson
c836dd2acd set interface for network.ip_addrs for hypervisors 2025-09-24 16:50:29 -04:00
Josh Patterson
3a87af805f update service file, use salt.minion state to update mine_functions 2025-09-24 15:19:46 -04:00
Jorge Reyes
328ac329ec Merge pull request #15064 from Security-Onion-Solutions/reyesj2-patch-1
typo
2025-09-24 09:04:14 -05:00
Jorge Reyes
a3401aad11 typo 2025-09-24 08:56:40 -05:00
Jorge Reyes
431f71cc82 Merge pull request #15047 from Security-Onion-Solutions/reyesj2/es-fleet-patch
rework fleet scripts
2025-09-24 07:45:43 -05:00
Josh Patterson
4587301cca only update mine for managerhype during setup 2025-09-23 15:56:00 -04:00
Josh Patterson
14ddbd32ad salt-minion service file changes for hypervisor and managerhype 2025-09-22 16:38:40 -04:00
Josh Patterson
4599b95ae7 separate salt-minion service file 2025-09-22 16:37:16 -04:00
reyesj2
c92dc580a2 centralize MINION_ROLE lookup_role 2025-09-19 13:17:52 -05:00
reyesj2
4666aa9818 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 12:55:08 -05:00
reyesj2
f066baf6ba use only the characters up to the last seen '_' 2025-09-19 12:54:04 -05:00
Jorge Reyes
ba710c9944 import or eval should get updated 2025-09-19 12:26:08 -05:00
reyesj2
198695af03 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 11:56:53 -05:00
Jorge Reyes
fec78f5fb5 Merge pull request #15051 from Security-Onion-Solutions/reyesj2/patch-lgchk
add oom check to so-log-check
2025-09-19 11:41:55 -05:00
reyesj2
d03dd7ac2d check for oom kill only in the last 24 hours
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-09-19 11:32:13 -05:00
reyesj2
d2dd52b42a Merge branch 'reyesj2/patch-lgchk' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 11:12:09 -05:00
reyesj2
c9db52433f add oom check to so-log-check
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-09-19 11:08:42 -05:00
reyesj2
138849d258 more typos 2025-09-18 17:33:42 -05:00
reyesj2
a9ec12e402 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-18 16:41:34 -05:00
reyesj2
87281efc24 typo 2025-09-18 16:41:33 -05:00
reyesj2
29ac4f23c6 typo 2025-09-18 16:26:37 -05:00
reyesj2
878a3f8962 flip logic to check there aren't two default policies and fleet-default-output is disabled 2025-09-18 16:05:34 -05:00
reyesj2
21e27bce87 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-18 15:42:28 -05:00
reyesj2
336ca0dbbd typos 2025-09-18 15:42:25 -05:00
reyesj2
d9eba3cd0e typo 2025-09-18 15:17:22 -05:00
reyesj2
81b7e2b420 Merge remote-tracking branch 'origin' into reyesj2/es-fleet-patch 2025-09-18 14:34:41 -05:00
reyesj2
cd5483623b update import/eval fleet output config -- try to prevent corrupt dual 'default' output polices from having a successful installation 2025-09-18 14:33:34 -05:00
reyesj2
faa112eddf update last so-elastic-fleet-common functions 2025-09-18 12:18:16 -05:00
reyesj2
f663f22628 elastic_fleet_integration_id 2025-09-18 10:27:54 -05:00
reyesj2
8b07ff453d elastic_fleet_integration_policy_package_version 2025-09-18 10:21:07 -05:00
reyesj2
24a0fa3f6d add fleet_api wrapper for curl retries 2025-09-18 10:15:57 -05:00
reyesj2
a5011b398d add err check and retries to elastic_fleet_integration_policy_package_name and associated scripts 2025-09-18 09:39:56 -05:00
reyesj2
5b70398c0a add error check & retries to elastic_fleet_integration_policy_names and associated scripts 2025-09-17 15:35:20 -05:00
reyesj2
f3aaee1e41 update elastic_fleet_agent_policy_ids scripts already check rc 2025-09-17 14:59:41 -05:00
reyesj2
d0e875928d add error checking and retries for elastic_fleet_installed_packages & associated script 2025-09-17 14:59:13 -05:00
reyesj2
3e16bc8335 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-17 14:37:43 -05:00
Doug Burks
d1f4e26e29 Merge pull request #15043 from Security-Onion-Solutions/2.4/dev
2.4.180
2025-09-17 14:15:32 -04:00
reyesj2
9e24d21282 remove unused functions from so-elastic-fleet-common 2025-09-17 11:41:27 -05:00
reyesj2
5806999f63 add error check & retries to elastic_fleet_bulk_package_install 2025-09-17 11:39:06 -05:00
reyesj2
063a2b3348 update elastic_fleet_package_version_check & elastic_fleet_package_install to add error checking + retries. Update related scripts 2025-09-16 21:56:53 -05:00
reyesj2
bcd2e95fbe add error checking and retries to elastic_fleet_integration_policy_upgrade 2025-09-16 21:22:03 -05:00
reyesj2
94e8cd84e6 because of more aggressive exits use salt to rerun script as needed 2025-09-16 21:07:33 -05:00
reyesj2
948d72c282 add error check and retry to elastic_fleet_integration_update 2025-09-16 21:07:02 -05:00
reyesj2
bdeb92ab05 add err check and retries for elastic_fleet_integration_create 2025-09-16 20:30:45 -05:00
reyesj2
fdb5ad810a add err check and retries around func elastic_fleet_policy_create 2025-09-16 20:10:48 -05:00
reyesj2
f588a80ec7 fix jq error when indices don't exist (seen on fresh installs when fleet hasn't ever been installed) 2025-09-16 10:37:26 -05:00
36 changed files with 638 additions and 265 deletions

View File

@@ -1 +1 @@
2.4.190 2.4.0-foxtrot

View File

@@ -323,8 +323,8 @@ get_elastic_agent_vars() {
if [ -f "$defaultsfile" ]; then if [ -f "$defaultsfile" ]; then
ELASTIC_AGENT_TARBALL_VERSION=$(egrep " +version: " $defaultsfile | awk -F: '{print $2}' | tr -d '[:space:]') ELASTIC_AGENT_TARBALL_VERSION=$(egrep " +version: " $defaultsfile | awk -F: '{print $2}' | tr -d '[:space:]')
ELASTIC_AGENT_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz" ELASTIC_AGENT_URL="https://demo.jorgereyes.dev/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
ELASTIC_AGENT_MD5_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5" ELASTIC_AGENT_MD5_URL="https://demo.jorgereyes.dev/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
ELASTIC_AGENT_FILE="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz" ELASTIC_AGENT_FILE="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
ELASTIC_AGENT_MD5="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5" ELASTIC_AGENT_MD5="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
ELASTIC_AGENT_EXPANSION_DIR=/nsm/elastic-fleet/artifacts/beats/elastic-agent ELASTIC_AGENT_EXPANSION_DIR=/nsm/elastic-fleet/artifacts/beats/elastic-agent
@@ -441,8 +441,7 @@ lookup_grain() {
lookup_role() { lookup_role() {
id=$(lookup_grain id) id=$(lookup_grain id)
pieces=($(echo $id | tr '_' ' ')) echo "${id##*_}"
echo ${pieces[1]}
} }
is_feature_enabled() { is_feature_enabled() {
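
Note on the lookup_role change above: the new parameter expansion keeps only the text after the last underscore, so a minion id whose hostname part itself contains underscores still resolves to the correct role, which the old word-split into pieces[1] did not. A quick illustration with hypothetical minion ids of the form <hostname>_<role>:

# Hypothetical ids; the role is always the suffix after the last '_'
for id in "manager_standalone" "sensor_node_01_sensor"; do
  pieces=($(echo $id | tr '_' ' '))
  echo "old: ${pieces[1]}  new: ${id##*_}"
done
# old: standalone  new: standalone
# old: node        new: sensor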

View File

@@ -268,6 +268,13 @@ for log_file in $(cat /tmp/log_check_files); do
   tail -n $RECENT_LOG_LINES $log_file > /tmp/log_check
   check_for_errors
 done

+# Look for OOM specific errors in /var/log/messages which can lead to odd behavior / test failures
+if [[ -f /var/log/messages ]]; then
+  status "Checking log file /var/log/messages"
+  if journalctl --since "24 hours ago" | grep -iE 'out of memory|oom-kill'; then
+    RESULT=1
+  fi
+fi

 # Cleanup temp files
 rm -f /tmp/log_check_files
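
The added check leans on grep's exit status: grep exits 0 when at least one line matches, so RESULT=1 is set only when journald recorded an OOM event in the last 24 hours. To reproduce the check by hand (a sketch, assuming a systemd host with journald):

# Exit status 0 here means at least one OOM event was logged in the window
journalctl --since "24 hours ago" | grep -iE 'out of memory|oom-kill' \
  && echo "OOM events found" || echo "no OOM events"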

View File

@@ -173,7 +173,7 @@ for PCAP in $INPUT_FILES; do
status "- assigning unique identifier to import: $HASH" status "- assigning unique identifier to import: $HASH"
pcap_data=$(pcapinfo "${PCAP}") pcap_data=$(pcapinfo "${PCAP}")
if ! echo "$pcap_data" | grep -q "First packet time:" || echo "$pcap_data" |egrep -q "Last packet time: 1970-01-01|Last packet time: n/a"; then if ! echo "$pcap_data" | grep -q "Earliest packet time:" || echo "$pcap_data" |egrep -q "Latest packet time: 1970-01-01|Latest packet time: n/a"; then
status "- this PCAP file is invalid; skipping" status "- this PCAP file is invalid; skipping"
INVALID_PCAPS_COUNT=$((INVALID_PCAPS_COUNT + 1)) INVALID_PCAPS_COUNT=$((INVALID_PCAPS_COUNT + 1))
else else
@@ -205,8 +205,8 @@ for PCAP in $INPUT_FILES; do
HASHES="${HASHES} ${HASH}" HASHES="${HASHES} ${HASH}"
fi fi
START=$(pcapinfo "${PCAP}" -a |grep "First packet time:" | awk '{print $4}') START=$(pcapinfo "${PCAP}" -a |grep "Earliest packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e |grep "Last packet time:" | awk '{print $4}') END=$(pcapinfo "${PCAP}" -e |grep "Latest packet time:" | awk '{print $4}')
status "- found PCAP data spanning dates $START through $END" status "- found PCAP data spanning dates $START through $END"
# compare $START to $START_OLDEST # compare $START to $START_OLDEST
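
These label changes track capinfos output, which renamed "First/Last packet time" to "Earliest/Latest packet time" in newer Wireshark releases. A version-agnostic variant (just a sketch, not what the commit does) could match either label, since the date stays in the same awk field for both spellings:

# Matches both pre- and post-rename capinfos labels
START=$(pcapinfo "${PCAP}" -a | grep -E "(First|Earliest) packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e | grep -E "(Last|Latest) packet time:" | awk '{print $4}')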

View File

@@ -135,12 +135,18 @@ so-elastic-fleet-package-statefile:
 so-elastic-fleet-package-upgrade:
   cmd.run:
     - name: /usr/sbin/so-elastic-fleet-package-upgrade
+    - retry:
+        attempts: 3
+        interval: 10
     - onchanges:
       - file: /opt/so/state/elastic_fleet_packages.txt

 so-elastic-fleet-integrations:
   cmd.run:
     - name: /usr/sbin/so-elastic-fleet-integration-policy-load
+    - retry:
+        attempts: 3
+        interval: 10

 so-elastic-agent-grid-upgrade:
   cmd.run:
@@ -152,7 +158,11 @@ so-elastic-agent-grid-upgrade:
 so-elastic-fleet-integration-upgrade:
   cmd.run:
     - name: /usr/sbin/so-elastic-fleet-integration-upgrade
+    - retry:
+        attempts: 3
+        interval: 10

+{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
 so-elastic-fleet-addon-integrations:
   cmd.run:
     - name: /usr/sbin/so-elastic-fleet-optional-integrations-load
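
Salt's retry option re-runs a failing state before reporting it failed; with the values above each cmd.run gets up to 3 attempts, 10 seconds apart. Behaviorally that is roughly equivalent to this shell loop (a sketch only; salt performs the looping itself):

# Roughly what retry: attempts: 3 / interval: 10 gives the wrapped command
for attempt in 1 2 3; do
  /usr/sbin/so-elastic-fleet-integration-policy-load && break
  [ "$attempt" -lt 3 ] && sleep 10
done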

View File

@@ -20,7 +20,7 @@
     ],
     "data_stream.dataset": "import",
     "custom": "",
-    "processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.5.4\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.5.4\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.5.4\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
+    "processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.6.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.6.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.6.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
     "tags": [
         "import"
     ]

View File

@@ -23,6 +23,13 @@ fi
 # Define a banner to separate sections
 banner="========================================================================="

+fleet_api() {
+    local QUERYPATH=$1
+    shift
+    curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/${QUERYPATH}" "$@" --retry 3 --retry-delay 10 --fail 2>/dev/null
+}

 elastic_fleet_integration_check() {
     AGENT_POLICY=$1
@@ -39,7 +46,9 @@ elastic_fleet_integration_create() {
     JSON_STRING=$1

-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+    if ! fleet_api "package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -d "$JSON_STRING"; then
+        return 1
+    fi
 }
@@ -56,7 +65,10 @@ elastic_fleet_integration_remove() {
         '{"packagePolicyIds":[$INTEGRATIONID]}'
     )

-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/delete" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+    if ! fleet_api "package_policies/delete" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
+        echo "Error: Unable to delete '$NAME' from '$AGENT_POLICY'"
+        return 1
+    fi
 }

 elastic_fleet_integration_update() {
@@ -65,7 +77,9 @@ elastic_fleet_integration_update() {
     JSON_STRING=$2

-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+    if ! fleet_api "package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPUT -d "$JSON_STRING"; then
+        return 1
+    fi
 }

 elastic_fleet_integration_policy_upgrade() {
@@ -77,78 +91,83 @@ elastic_fleet_integration_policy_upgrade() {
         '{"packagePolicyIds":[$INTEGRATIONID]}'
     )

-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/upgrade" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+    if ! fleet_api "package_policies/upgrade" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
+        return 1
+    fi
 }

 elastic_fleet_package_version_check() {
     PACKAGE=$1
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.version'
+    if output=$(fleet_api "epm/packages/$PACKAGE"); then
+        echo "$output" | jq -r '.item.version'
+    else
+        echo "Error: Failed to get current package version for '$PACKAGE'"
+        return 1
+    fi
 }

 elastic_fleet_package_latest_version_check() {
     PACKAGE=$1
-    if output=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" --fail); then
+    if output=$(fleet_api "epm/packages/$PACKAGE"); then
         if version=$(jq -e -r '.item.latestVersion' <<< $output); then
             echo "$version"
         fi
     else
-        echo "Error: Failed to get latest version for $PACKAGE"
+        echo "Error: Failed to get latest version for '$PACKAGE'"
+        return 1
     fi
 }

 elastic_fleet_package_install() {
     PKG=$1
     VERSION=$2
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}' "localhost:5601/api/fleet/epm/packages/$PKG/$VERSION"
+    if ! fleet_api "epm/packages/$PKG/$VERSION" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}'; then
+        return 1
+    fi
 }

 elastic_fleet_bulk_package_install() {
     BULK_PKG_LIST=$1
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$1 "localhost:5601/api/fleet/epm/packages/_bulk"
-}
-
-elastic_fleet_package_is_installed() {
-    PACKAGE=$1
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.status'
-}
-
-elastic_fleet_installed_packages() {
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' -H 'Content-Type: application/json' "localhost:5601/api/fleet/epm/packages/installed?perPage=500"
-}
-
-elastic_fleet_agent_policy_ids() {
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].id
-    if [ $? -ne 0 ]; then
-        echo "Error: Failed to retrieve agent policies."
-        exit 1
+    if ! fleet_api "epm/packages/_bulk" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$BULK_PKG_LIST; then
+        return 1
     fi
 }

-elastic_fleet_agent_policy_names() {
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].name
-    if [ $? -ne 0 ]; then
+elastic_fleet_installed_packages() {
+    if ! fleet_api "epm/packages/installed?perPage=500"; then
+        return 1
+    fi
+}
+
+elastic_fleet_agent_policy_ids() {
+    if output=$(fleet_api "agent_policies"); then
+        echo "$output" | jq -r .items[].id
+    else
         echo "Error: Failed to retrieve agent policies."
-        exit 1
+        return 1
     fi
 }

 elastic_fleet_integration_policy_names() {
     AGENT_POLICY=$1
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r .item.package_policies[].name
-    if [ $? -ne 0 ]; then
+    if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
+        echo "$output" | jq -r .item.package_policies[].name
+    else
         echo "Error: Failed to retrieve integrations for '$AGENT_POLICY'."
-        exit 1
+        return 1
     fi
 }

 elastic_fleet_integration_policy_package_name() {
     AGENT_POLICY=$1
     INTEGRATION=$2
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
-    if [ $? -ne 0 ]; then
+    if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
+        echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
+    else
         echo "Error: Failed to retrieve package name for '$INTEGRATION' in '$AGENT_POLICY'."
-        exit 1
+        return 1
     fi
 }

@@ -156,32 +175,32 @@ elastic_fleet_integration_policy_package_version() {
     AGENT_POLICY=$1
     INTEGRATION=$2
-    if output=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" --fail); then
-        if version=$(jq -e -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version' <<< $output); then
+    if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
+        if version=$(jq -e -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version' <<< "$output"); then
             echo "$version"
         fi
     else
-        echo "Error: Failed to retrieve agent policy $AGENT_POLICY"
-        exit 1
+        echo "Error: Failed to retrieve integration version for '$INTEGRATION' in policy '$AGENT_POLICY'"
+        return 1
     fi
 }

 elastic_fleet_integration_id() {
     AGENT_POLICY=$1
     INTEGRATION=$2
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
-    if [ $? -ne 0 ]; then
+    if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
+        echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
+    else
         echo "Error: Failed to retrieve integration ID for '$INTEGRATION' in '$AGENT_POLICY'."
-        exit 1
+        return 1
     fi
 }

 elastic_fleet_integration_policy_dryrun_upgrade() {
     INTEGRATION_ID=$1
-    curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -L -X POST "localhost:5601/api/fleet/package_policies/upgrade/dryrun" -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"
-    if [ $? -ne 0 ]; then
+    if ! fleet_api "package_policies/upgrade/dryrun" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -XPOST -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"; then
         echo "Error: Failed to complete dry run for '$INTEGRATION_ID'."
-        exit 1
+        return 1
     fi
 }

@@ -190,25 +209,18 @@ elastic_fleet_policy_create() {
     NAME=$1
     DESC=$2
     FLEETSERVER=$3
     TIMEOUT=$4

     JSON_STRING=$( jq -n \
         --arg NAME "$NAME" \
         --arg DESC "$DESC" \
         --arg TIMEOUT $TIMEOUT \
         --arg FLEETSERVER "$FLEETSERVER" \
         '{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
     )

     # Create Fleet Policy
-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
+    if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
+        return 1
+    fi
 }
-
-elastic_fleet_policy_update() {
-    POLICYID=$1
-    JSON_STRING=$2
-    curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/agent_policies/$POLICYID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
-}

View File

@@ -8,6 +8,7 @@
 . /usr/sbin/so-elastic-fleet-common

+ERROR=false
 # Manage Elastic Defend Integration for Initial Endpoints Policy
 for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/elastic-defend/*.json
 do
@@ -15,9 +16,20 @@ do
     elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
     if [ -n "$INTEGRATION_ID" ]; then
         printf "\n\nIntegration $NAME exists - Upgrading integration policy\n"
-        elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
+        if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
+            echo -e "\nFailed to upgrade integration policy for ${INTEGRATION##*/}"
+            ERROR=true
+            continue
+        fi
     else
         printf "\n\nIntegration does not exist - Creating integration\n"
-        elastic_fleet_integration_create "@$INTEGRATION"
+        if ! elastic_fleet_integration_create "@$INTEGRATION"; then
+            echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
+            ERROR=true
+            continue
+        fi
     fi
 done
+
+if [[ "$ERROR" == "true" ]]; then
+    exit 1
+fi

View File

@@ -25,5 +25,9 @@ for POLICYNAME in $POLICY; do
         .name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)

     # Now update the integration policy using the modified JSON
-    elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"
+    if ! elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"; then
+        # exit 1 on failure to update fleet integration policies, let salt handle retries
+        echo "Failed to update $POLICYNAME.."
+        exit 1
+    fi
 done

View File

@@ -13,11 +13,10 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
     /usr/sbin/so-elastic-fleet-package-upgrade

     # Second, update Fleet Server policies
-    /sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
+    /usr/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server

     # Third, configure Elastic Defend Integration seperately
     /usr/sbin/so-elastic-fleet-integration-policy-elastic-defend
-
     # Initial Endpoints
     for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/endpoints-initial/*.json
     do
@@ -25,10 +24,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
         elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
         if [ -n "$INTEGRATION_ID" ]; then
             printf "\n\nIntegration $NAME exists - Updating integration\n"
-            elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
+            if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
+                echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         else
             printf "\n\nIntegration does not exist - Creating integration\n"
-            elastic_fleet_integration_create "@$INTEGRATION"
+            if ! elastic_fleet_integration_create "@$INTEGRATION"; then
+                echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         fi
     done
@@ -39,10 +46,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
         elastic_fleet_integration_check "so-grid-nodes_general" "$INTEGRATION"
         if [ -n "$INTEGRATION_ID" ]; then
             printf "\n\nIntegration $NAME exists - Updating integration\n"
-            elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
+            if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
+                echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         else
             printf "\n\nIntegration does not exist - Creating integration\n"
-            elastic_fleet_integration_create "@$INTEGRATION"
+            if ! elastic_fleet_integration_create "@$INTEGRATION"; then
+                echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         fi
     done

     if [[ "$RETURN_CODE" != "1" ]]; then
@@ -56,11 +71,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
         elastic_fleet_integration_check "so-grid-nodes_heavy" "$INTEGRATION"
         if [ -n "$INTEGRATION_ID" ]; then
             printf "\n\nIntegration $NAME exists - Updating integration\n"
-            elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
+            if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
+                echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         else
             printf "\n\nIntegration does not exist - Creating integration\n"
             if [ "$NAME" != "elasticsearch-logs" ]; then
-                elastic_fleet_integration_create "@$INTEGRATION"
+                if ! elastic_fleet_integration_create "@$INTEGRATION"; then
+                    echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
+                    RETURN_CODE=1
+                    continue
+                fi
             fi
         fi
     done
@@ -77,11 +100,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
         elastic_fleet_integration_check "$FLEET_POLICY" "$INTEGRATION"
         if [ -n "$INTEGRATION_ID" ]; then
             printf "\n\nIntegration $NAME exists - Updating integration\n"
-            elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
+            if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
+                echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
+                RETURN_CODE=1
+                continue
+            fi
         else
             printf "\n\nIntegration does not exist - Creating integration\n"
             if [ "$NAME" != "elasticsearch-logs" ]; then
-                elastic_fleet_integration_create "@$INTEGRATION"
+                if ! elastic_fleet_integration_create "@$INTEGRATION"; then
+                    echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
+                    RETURN_CODE=1
+                    continue
+                fi
             fi
         fi
     fi

View File

@@ -24,12 +24,18 @@ fi
 default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.last %} {% endif %}{% endfor %})

+ERROR=false
 for AGENT_POLICY in $agent_policies; do
-    integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY")
+    if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
+        # this script upgrades default integration packages, exit 1 and let salt handle retrying
+        exit 1
+    fi
     for INTEGRATION in $integrations; do
         if ! [[ "$INTEGRATION" == "elastic-defend-endpoints" ]] && ! [[ "$INTEGRATION" == "fleet_server-"* ]]; then
             # Get package name so we know what package to look for when checking the current and latest available version
-            PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION")
+            if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
+                exit 1
+            fi
 {%- if not AUTO_UPGRADE_INTEGRATIONS %}
             if [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
 {%- endif %}
@@ -48,7 +54,9 @@ for AGENT_POLICY in $agent_policies; do
             fi

             # Get integration ID
-            INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION")
+            if ! INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION"); then
+                exit 1
+            fi

             if [[ "$PACKAGE_VERSION" != "$AVAILABLE_VERSION" ]]; then
                 # Dry run of the upgrade
@@ -56,20 +64,23 @@ for AGENT_POLICY in $agent_policies; do
                 echo "Current $PACKAGE_NAME package version ($PACKAGE_VERSION) is not the same as the latest available package ($AVAILABLE_VERSION)..."
                 echo "Upgrading $INTEGRATION..."
                 echo "Starting dry run..."
-                DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID")
+                if ! DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID"); then
+                    exit 1
+                fi
                 DRYRUN_ERRORS=$(echo "$DRYRUN_OUTPUT" | jq .[].hasErrors)

                 # If no errors with dry run, proceed with actual upgrade
                 if [[ "$DRYRUN_ERRORS" == "false" ]]; then
                     echo "No errors detected. Proceeding with upgrade..."
-                    elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
-                    if [ $? -ne 0 ]; then
+                    if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
                         echo "Error: Upgrade failed for $PACKAGE_NAME with integration ID '$INTEGRATION_ID'."
-                        exit 1
+                        ERROR=true
+                        continue
                     fi
                 else
                     echo "Errors detected during dry run for $PACKAGE_NAME policy upgrade..."
-                    exit 1
+                    ERROR=true
+                    continue
                 fi
             fi
 {%- if not AUTO_UPGRADE_INTEGRATIONS %}
@@ -78,4 +89,7 @@ for AGENT_POLICY in $agent_policies; do
         fi
     done
 done
+
+if [[ "$ERROR" == "true" ]]; then
+    exit 1
+fi
 echo

View File

@@ -62,9 +62,17 @@ default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.l
 in_use_integrations=()
 for AGENT_POLICY in $agent_policies; do
-    integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY")
+    if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
+        # skip the agent policy if we can't get required info, let salt retry. Integrations loaded by this script are non-default integrations.
+        echo "Skipping $AGENT_POLICY.. "
+        continue
+    fi
     for INTEGRATION in $integrations; do
-        PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION")
+        if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
+            echo "Not adding $INTEGRATION, couldn't get package name"
+            continue
+        fi
         # non-default integrations that are in-use in any policy
         if ! [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
             in_use_integrations+=("$PACKAGE_NAME")
@@ -160,7 +168,11 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
         for file in "${pkg_filename}_"*.json; do
             [ -e "$file" ] || continue
-            elastic_fleet_bulk_package_install $file >> $BULK_INSTALL_OUTPUT
+            if ! elastic_fleet_bulk_package_install $file >> $BULK_INSTALL_OUTPUT; then
+                # integrations loaded my this script are non-essential and shouldn't cause exit, skip them for now next highstate run can retry
+                echo "Failed to complete a chunk of bulk package installs -- $file "
+                continue
+            fi
         done
         # cleanup any temp files for chunked package install
         rm -f ${pkg_filename}_*.json $BULK_INSTALL_PACKAGE_LIST
@@ -168,8 +180,9 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
     echo "Elastic integrations don't appear to need installation/updating..."
 fi

 # Write out file for generating index/component/ilm templates
-latest_installed_package_list=$(elastic_fleet_installed_packages)
-echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
+if latest_installed_package_list=$(elastic_fleet_installed_packages); then
+    echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
+fi

 if retry 3 1 "so-elasticsearch-query / --fail --output /dev/null"; then
     # Refresh installed component template list
     latest_component_templates_list=$(so-elasticsearch-query _component_template | jq '.component_templates[] | .name' | jq -s '.')

View File

@@ -15,8 +15,21 @@ if ! is_manager_node; then
 fi

 function update_logstash_outputs() {
-    # Generate updated JSON payload
-    JSON_STRING=$(jq -n --arg UPDATEDLIST $NEW_LIST_JSON '{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":""}')
+    if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
+        SSL_CONFIG=$(echo "$logstash_policy" | jq -r '.item.ssl')
+        if SECRETS=$(echo "$logstash_policy" | jq -er '.item.secrets' 2>/dev/null); then
+            JSON_STRING=$(jq -n \
+                --arg UPDATEDLIST "$NEW_LIST_JSON" \
+                --argjson SECRETS "$SECRETS" \
+                --argjson SSL_CONFIG "$SSL_CONFIG" \
+                '{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
+        else
+            JSON_STRING=$(jq -n \
+                --arg UPDATEDLIST "$NEW_LIST_JSON" \
+                --argjson SSL_CONFIG "$SSL_CONFIG" \
+                '{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG}')
+        fi
+    fi

     # Update Logstash Outputs
     curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
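
The --arg/--argjson split above matters: --arg passes a value into jq as a JSON string, while --argjson parses it as JSON. The ssl and secrets objects fetched from the existing output are already JSON, so re-attaching them with --arg would double-encode them into quoted blobs. A standalone sketch with hypothetical values:

# --arg treats input as a string; --argjson splices in parsed JSON
SSL='{"certificate":"...","certificate_authorities":["..."]}'
jq -n --arg s "$SSL" --argjson j "$SSL" '{as_string: $s, as_object: $j}'
# as_string comes back as one quoted blob; as_object is a nested JSON object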

View File

@@ -10,8 +10,16 @@
 {%- for PACKAGE in SUPPORTED_PACKAGES %}
 echo "Setting up {{ PACKAGE }} package..."
-VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}")
-elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
+if VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}"); then
+    if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
+        # packages loaded by this script should never fail to install and REQUIRED before an installation of SO can be considered successful
+        echo -e "\nERROR: Failed to install default integration package -- $PACKAGE $VERSION"
+        exit 1
+    fi
+else
+    echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
+    exit 1
+fi
 echo
 {%- endfor %}
 echo

View File

@@ -10,8 +10,15 @@
 {%- for PACKAGE in SUPPORTED_PACKAGES %}
 echo "Upgrading {{ PACKAGE }} package..."
-VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}")
-elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
+if VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}"); then
+    if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
+        # exit 1 on failure to upgrade a default package, allow salt to handle retries
+        echo -e "\nERROR: Failed to upgrade $PACKAGE to version: $VERSION"
+        exit 1
+    fi
+else
+    echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
+fi
 echo
 {%- endfor %}
 echo

View File

@@ -23,18 +23,17 @@ if [[ "$RETURN_CODE" != "0" ]]; then
exit 1 exit 1
fi fi
ALIASES=".fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest" ALIASES=(.fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest)
for ALIAS in ${ALIASES} for ALIAS in "${ALIASES[@]}"; do
do
# Get all concrete indices from alias # Get all concrete indices from alias
INDXS=$(curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" | jq -r '.aliases[].indices[]') if INDXS_RAW=$(curl -sK /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" --fail 2>/dev/null); then
INDXS=$(echo "$INDXS_RAW" | jq -r '.aliases[].indices[]')
# Delete all resolved indices # Delete all resolved indices
for INDX in ${INDXS} for INDX in ${INDXS}; do
do
status "Deleting $INDX" status "Deleting $INDX"
curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/${INDX}" -XDELETE curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/${INDX}" -XDELETE
done done
fi
done done
# Restarting Kibana... # Restarting Kibana...
@@ -51,22 +50,61 @@ if [[ "$RETURN_CODE" != "0" ]]; then
fi fi
printf "\n### Create ES Token ###\n" printf "\n### Create ES Token ###\n"
ESTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/service_tokens" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq -r .value) if ESTOKEN_RAW=$(fleet_api "service_tokens" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ESTOKEN=$(echo "$ESTOKEN_RAW" | jq -r .value)
else
echo -e "\nFailed to create ES token..."
exit 1
fi
### Create Outputs, Fleet Policy and Fleet URLs ### ### Create Outputs, Fleet Policy and Fleet URLs ###
# Create the Manager Elasticsearch Output first and set it as the default output # Create the Manager Elasticsearch Output first and set it as the default output
printf "\nAdd Manager Elasticsearch Output...\n" printf "\nAdd Manager Elasticsearch Output...\n"
ESCACRT=$(openssl x509 -in $INTCA) ESCACRT=$(openssl x509 -in "$INTCA" -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
JSON_STRING=$( jq -n \ JSON_STRING=$(jq -n \
--arg ESCACRT "$ESCACRT" \ --arg ESCACRT "$ESCACRT" \
'{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl":{"certificate_authorities": [$ESCACRT]}}' ) '{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ca_trusted_fingerprint": $ESCACRT}')
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create so-elasticsearch_manager policy..."
exit 1
fi
printf "\n\n" printf "\n\n"
# so-manager_elasticsearch should exist and be disabled. Now update it before checking its the only default policy
MANAGER_OUTPUT_ENABLED=$(echo "$JSON_STRING" | jq 'del(.id) | .is_default = true | .is_default_monitoring = true')
if ! curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$MANAGER_OUTPUT_ENABLED"; then
echo -e "\n failed to update so-manager_elasticsearch"
exit 1
fi
# At this point there should only be two policies. fleet-default-output & so-manager_elasticsearch
status "Verifying so-manager_elasticsearch policy is configured as the current default"
# Grab the fleet-default-output policy instead of so-manager_elasticsearch, because a weird state can exist where both fleet-default-output & so-elasticsearch_manager can be set as the active default output for logs / metrics. Resulting in logs not ingesting on import/eval nodes
if DEFAULTPOLICY=$(fleet_api "outputs/fleet-default-output"); then
fleet_default=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default')
fleet_default_monitoring=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default_monitoring')
# Check that fleet-default-output isn't configured as a default for anything ( both variables return false )
if [[ $fleet_default == "false" ]] && [[ $fleet_default_monitoring == "false" ]]; then
echo -e "\nso-manager_elasticsearch is configured as the current default policy..."
else
echo -e "\nVerification of so-manager_elasticsearch policy failed... The default 'fleet-default-output' output is still active..."
exit 1
fi
else
# fleet-output-policy is created automatically by fleet when started. Should always exist on any installation type
echo -e "\nDefault fleet-default-output policy doesn't exist...\n"
exit 1
fi
# Create the Manager Fleet Server Host Agent Policy # Create the Manager Fleet Server Host Agent Policy
# This has to be done while the Elasticsearch Output is set to the default Output # This has to be done while the Elasticsearch Output is set to the default Output
printf "Create Manager Fleet Server Policy...\n" printf "Create Manager Fleet Server Policy...\n"
elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120" if ! elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"; then
echo -e "\n Failed to create Manager fleet server policy..."
exit 1
fi
# Modify the default integration policy to update the policy_id with the correct naming # Modify the default integration policy to update the policy_id with the correct naming
UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname }}" --arg name "fleet_server-{{ GLOBALS.hostname }}" ' UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname }}" --arg name "fleet_server-{{ GLOBALS.hostname }}" '
@@ -74,7 +112,10 @@ UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json) .name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Add the Fleet Server Integration to the new Fleet Policy # Add the Fleet Server Integration to the new Fleet Policy
elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY" if ! elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"; then
echo -e "\nFailed to create Fleet server integration for Manager.."
exit 1
fi
# Now we can create the Logstash Output and set it to to be the default Output # Now we can create the Logstash Output and set it to to be the default Output
printf "\n\nCreate Logstash Output Config if node is not an Import or Eval install\n" printf "\n\nCreate Logstash Output Config if node is not an Import or Eval install\n"
@@ -86,9 +127,12 @@ JSON_STRING=$( jq -n \
--arg LOGSTASHCRT "$LOGSTASHCRT" \ --arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \ --arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCA "$LOGSTASHCA" \ --arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]},"proxy_id":null}' '{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets":{"ssl":{"key": $LOGSTASHKEY }},"proxy_id":null}'
) )
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create logstash fleet output"
exit 1
fi
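Worth noting in the JSON above: the private key now rides in `secrets.ssl.key` instead of `ssl.key`, so Fleet stores it as a secret rather than plain policy config. A quick read-back sketch to confirm the key landed as a secret (the response field shapes are assumptions based on the Fleet outputs API, and the same manager curl config is assumed):

```bash
# Sketch: read back the grid-logstash output and check where the key lives.
if output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L --fail \
    "localhost:5601/api/fleet/outputs/so-manager_logstash"); then
  echo "$output" | jq '{id: .item.id,
      plain_key_present: ((.item.ssl // {}) | has("key")),
      secret_key_present: (.item.secrets != null)}'
else
  echo "Could not read so-manager_logstash output" >&2
fi
```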
printf "\n\n" printf "\n\n"
{%- endif %} {%- endif %}
@@ -106,7 +150,10 @@ else
fi
## This array replaces whatever URLs are currently configured
if ! fleet_api "fleet_server_hosts" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to add manager fleet URL"
exit 1
fi
printf "\n\n" printf "\n\n"
### Create Policies & Associated Integration Configuration ### ### Create Policies & Associated Integration Configuration ###
@@ -117,13 +164,22 @@ printf "\n\n"
/usr/sbin/so-elasticsearch-templates-load
# Initial Endpoints Policy
if ! elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"; then
echo -e "\nFailed to create endpoints-initial policy..."
exit 1
fi
# Grid Nodes - General Policy
if ! elastic_fleet_policy_create "so-grid-nodes_general" "SO Grid Nodes - General Purpose" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_general policy..."
exit 1
fi
# Grid Nodes - Heavy Node Policy
if ! elastic_fleet_policy_create "so-grid-nodes_heavy" "SO Grid Nodes - Heavy Node" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_heavy policy..."
exit 1
fi
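If any of the three creates above fails during testing, listing the agent policies Fleet knows about is a quick sanity check; a minimal sketch, assuming the same curl config and the standard Fleet agent_policies endpoint:

```bash
# Sketch: list agent policy IDs; expect endpoints-initial,
# so-grid-nodes_general and so-grid-nodes_heavy among them.
curl -sK /opt/so/conf/elasticsearch/curl.config -L \
  "localhost:5601/api/fleet/agent_policies" | jq -r '.items[].id'
```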
# Load Integrations for default policies
so-elastic-fleet-integration-policy-load
@@ -135,14 +191,34 @@ JSON_STRING=$( jq -n \
'{"name":$NAME,"host":$URL,"is_default":true}' '{"name":$NAME,"host":$URL,"is_default":true}'
) )
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_download_sources" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" if ! fleet_api "agent_download_sources" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to update Elastic Agent artifact URL"
exit 1
fi
### Finalization ###
# Query for Enrollment Tokens for default policies
if ENDPOINTSENROLLMENTOKEN_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ENDPOINTSENROLLMENTOKEN=$(echo "$ENDPOINTSENROLLMENTOKEN_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
else
echo -e "\nFailed to query for Endpoints enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENGENERAL_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENGENERAL=$(echo "$GRIDNODESENROLLMENTOKENGENERAL_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - General enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENHEAVY_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENHEAVY=$(echo "$GRIDNODESENROLLMENTOKENHEAVY_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_heavy")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - Heavy enrollment token"
exit 1
fi
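All three queries hit the same enrollment_api_keys endpoint and differ only in the `policy_id` the jq filter selects on. A self-contained illustration of that selection logic (api_key values invented for the example):

```bash
# Sketch: the same jq selection run against a canned response.
RESPONSE='{"list":[{"policy_id":"endpoints-initial","api_key":"aaa=="},
                   {"policy_id":"so-grid-nodes_general","api_key":"bbb=="}]}'
echo "$RESPONSE" | jq -r -c '.list[] | select(.policy_id | contains("endpoints-initial")) | .api_key'
# prints: aaa==
```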
# Store needed data in minion pillar
pillar_file=/opt/so/saltstack/local/pillar/minions/{{ GLOBALS.minion_id }}.sls

View File

@@ -1,6 +1,6 @@
elasticsearch:
enabled: false
version: 8.18.8
index_clean: true
config:
action:

View File

@@ -13,6 +13,7 @@
{# Import defaults.yaml for model hardware capabilities #}
{% import_yaml 'hypervisor/defaults.yaml' as DEFAULTS %}
{% set HYPERVISORMERGED = salt['pillar.get']('hypervisor', default=DEFAULTS.hypervisor, merge=True) %}
{# Get hypervisor nodes from pillar #}
{% set NODES = salt['pillar.get']('hypervisor:nodes', {}) %}
@@ -30,9 +31,10 @@
{% set model = '' %}
{% if grains %}
{% set minion_id = grains.keys() | first %}
{% set model = grains[minion_id].get('sosmodel', grains[minion_id].get('byodmodel', '')) %}
{% endif %}
{% set model_config = HYPERVISORMERGED.model.get(model, {}) %}
{# Get VM list from VMs file #}
{% set vms = {} %}

View File

@@ -30,7 +30,9 @@
#
# WARNING: This script will DESTROY all data on the target drives!
#
# USAGE:
# sudo ./so-nvme-raid1.sh # Normal operation
# sudo ./so-nvme-raid1.sh --force-cleanup # Force cleanup of existing RAID
#
#################################################################
@@ -41,6 +43,19 @@ set -e
RAID_ARRAY_NAME="md0"
RAID_DEVICE="/dev/${RAID_ARRAY_NAME}"
MOUNT_POINT="/nsm"
FORCE_CLEANUP=false
# Parse command line arguments
for arg in "$@"; do
case $arg in
--force-cleanup)
FORCE_CLEANUP=true
shift
;;
*)
;;
esac
done
# Function to log messages
log() {
@@ -55,6 +70,91 @@ check_root() {
fi
}
# Function to force cleanup all RAID components
force_cleanup_raid() {
log "=== FORCE CLEANUP MODE ==="
log "This will destroy all RAID configurations and data on target drives!"
# Stop all MD arrays
log "Stopping all MD arrays"
mdadm --stop --scan 2>/dev/null || true
# Wait for arrays to stop
sleep 2
# Remove any running md devices
for md in /dev/md*; do
if [ -b "$md" ]; then
log "Stopping $md"
mdadm --stop "$md" 2>/dev/null || true
fi
done
# Force cleanup both NVMe drives
for device in "/dev/nvme0n1" "/dev/nvme1n1"; do
log "Force cleaning $device"
# Kill any processes using the device
fuser -k "${device}"* 2>/dev/null || true
# Unmount any mounted partitions
for part in "${device}"*; do
if [ -b "$part" ]; then
umount -f "$part" 2>/dev/null || true
fi
done
# Force zero RAID superblocks on partitions
for part in "${device}"p*; do
if [ -b "$part" ]; then
log "Zeroing RAID superblock on $part"
mdadm --zero-superblock --force "$part" 2>/dev/null || true
fi
done
# Zero superblock on the device itself
log "Zeroing RAID superblock on $device"
mdadm --zero-superblock --force "$device" 2>/dev/null || true
# Remove LVM physical volumes
pvremove -ff -y "$device" 2>/dev/null || true
# Wipe all filesystem and partition signatures
log "Wiping all signatures from $device"
wipefs -af "$device" 2>/dev/null || true
# Overwrite the beginning of the drive (partition table area)
log "Clearing partition table on $device"
dd if=/dev/zero of="$device" bs=1M count=10 2>/dev/null || true
# Clear the end of the drive (backup partition table area)
local device_size=$(blockdev --getsz "$device" 2>/dev/null || echo "0")
if [ "$device_size" -gt 0 ]; then
dd if=/dev/zero of="$device" bs=512 seek=$(( device_size - 2048 )) count=2048 2>/dev/null || true
fi
# Force kernel to re-read partition table
blockdev --rereadpt "$device" 2>/dev/null || true
partprobe -s "$device" 2>/dev/null || true
done
# Clear mdadm configuration
log "Clearing mdadm configuration"
echo "DEVICE partitions" > /etc/mdadm.conf
# Remove any fstab entries for the RAID device or mount point
log "Cleaning fstab entries"
sed -i "\|${RAID_DEVICE}|d" /etc/fstab
sed -i "\|${MOUNT_POINT}|d" /etc/fstab
# Wait for system to settle
udevadm settle
sleep 5
log "Force cleanup complete!"
log "Proceeding with RAID setup..."
}
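Since force_cleanup_raid deliberately swallows errors with `|| true`, it can be worth confirming the drives really came back clean before the script proceeds; a read-only verification sketch (drive names match the hardcoded targets above):

```bash
# Sketch: read-only checks after a force cleanup. These should show no
# running arrays, no md superblocks, and no filesystem signatures.
cat /proc/mdstat
for device in /dev/nvme0n1 /dev/nvme1n1; do
  mdadm --examine "$device" 2>&1 | head -n 2  # expect "No md superblock detected"
  wipefs -n "$device"                         # -n reports signatures without erasing
done
```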
# Function to find MD arrays using specific devices
find_md_arrays_using_devices() {
local target_devices=("$@")
@@ -205,10 +305,15 @@ check_existing_raid() {
fi
log "Error: $device appears to be part of an existing RAID array"
log "Old RAID metadata detected but array is not running."
log ""
log "To fix this, run the script with --force-cleanup:"
log " sudo $0 --force-cleanup"
log ""
log "Or manually clean up with:"
log "1. Stop any arrays: mdadm --stop --scan"
log "2. Zero superblocks: mdadm --zero-superblock --force ${device}p1"
log "3. Wipe signatures: wipefs -af $device"
exit 1
fi
done
@@ -238,7 +343,7 @@ ensure_devices_free() {
done
# Clear MD superblock
mdadm --zero-superblock --force "${device}"* 2>/dev/null || true
# Remove LVM PV if exists
pvremove -ff -y "$device" 2>/dev/null || true
@@ -263,6 +368,11 @@ main() {
# Check if running as root
check_root
# If force cleanup flag is set, do aggressive cleanup first
if [ "$FORCE_CLEANUP" = true ]; then
force_cleanup_raid
fi
# Check for existing RAID setup
check_existing_raid

View File

@@ -22,7 +22,7 @@ kibana:
- default
- file
migrations:
discardCorruptObjects: "8.18.8"
telemetry:
enabled: False
security:

View File

@@ -54,6 +54,9 @@ so-kratos:
- file: kratosconfig
- file: kratoslogdir
- file: kratosdir
- retry:
attempts: 10
interval: 10
delete_so-kratos_so-status.disabled:
file.uncomment:

View File

@@ -4,6 +4,9 @@
# Elastic License 2.0.
# We do not import GLOBALS in this state because it is called during setup
include:
- salt.minion.service_file
- salt.mine_functions
down_original_mgmt_interface:
cmd.run:
@@ -28,29 +31,14 @@ wait_for_br0_ip:
- timeout: 95
- onchanges:
- cmd: down_original_mgmt_interface
- onchanges_in:
- file: salt_minion_service_unit_file
- file: mine_functions
restart_salt_minion_service:
service.running:
- name: salt-minion
- enable: True
- listen:
- file: salt_minion_service_unit_file
- file: mine_functions

View File

@@ -206,7 +206,6 @@ git_config_set_safe_dirs:
- multivar:
- /nsm/rules/custom-local-repos/local-sigma
- /nsm/rules/custom-local-repos/local-yara
- /nsm/rules/custom-local-repos/local-playbooks
- /nsm/securityonion-resources
- /opt/so/conf/soc/ai_summary_repos/securityonion-resources
- /nsm/airgap-resources/playbooks

View File

@@ -387,7 +387,7 @@ function syncElastic() {
if [[ -z "$SKIP_STATE_APPLY" ]]; then if [[ -z "$SKIP_STATE_APPLY" ]]; then
echo "Elastic state will be re-applied to affected minions. This will run in the background and may take several minutes to complete." echo "Elastic state will be re-applied to affected minions. This will run in the background and may take several minutes to complete."
echo "Applying elastic state to elastic minions at $(date)" >> /opt/so/log/soc/sync.log 2>&1 echo "Applying elastic state to elastic minions at $(date)" >> /opt/so/log/soc/sync.log 2>&1
salt --async -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-searchnode or G@role:so-heavynode' state.apply elasticsearch queue=True >> /opt/so/log/soc/sync.log 2>&1 salt --async -C 'I@elasticsearch:enabled:true' state.apply elasticsearch queue=True >> /opt/so/log/soc/sync.log 2>&1
fi fi
else else
echo "Newly generated users/roles files are incomplete; aborting." echo "Newly generated users/roles files are incomplete; aborting."

View File

@@ -169,6 +169,8 @@ airgap_update_dockers() {
tar xf "$AGDOCKER/registry.tar" -C /nsm/docker-registry/docker
echo "Add Registry back"
docker load -i "$AGDOCKER/registry_image.tar"
echo "Restart registry container"
salt-call state.apply registry queue=True
fi
fi
}
@@ -420,6 +422,7 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.4.150 ]] && up_to_2.4.160 [[ "$INSTALLEDVERSION" == 2.4.150 ]] && up_to_2.4.160
[[ "$INSTALLEDVERSION" == 2.4.160 ]] && up_to_2.4.170 [[ "$INSTALLEDVERSION" == 2.4.160 ]] && up_to_2.4.170
[[ "$INSTALLEDVERSION" == 2.4.170 ]] && up_to_2.4.180 [[ "$INSTALLEDVERSION" == 2.4.170 ]] && up_to_2.4.180
[[ "$INSTALLEDVERSION" == 2.4.180 ]] && up_to_2.4.190
true
}
@@ -450,6 +453,7 @@ postupgrade_changes() {
[[ "$POSTVERSION" == 2.4.150 ]] && post_to_2.4.160 [[ "$POSTVERSION" == 2.4.150 ]] && post_to_2.4.160
[[ "$POSTVERSION" == 2.4.160 ]] && post_to_2.4.170 [[ "$POSTVERSION" == 2.4.160 ]] && post_to_2.4.170
[[ "$POSTVERSION" == 2.4.170 ]] && post_to_2.4.180 [[ "$POSTVERSION" == 2.4.170 ]] && post_to_2.4.180
[[ "$POSTVERSION" == 2.4.180 ]] && post_to_2.4.190
true
}
@@ -599,15 +603,34 @@ post_to_2.4.170() {
}
post_to_2.4.180() {
echo "Regenerating Elastic Agent Installers"
/sbin/so-elastic-agent-gen-installers
# Force update to Kafka output policy
/usr/sbin/so-kafka-fleet-output-policy --force
POSTVERSION=2.4.180
}
post_to_2.4.190() {
echo "Regenerating Elastic Agent Installers"
/sbin/so-elastic-agent-gen-installers
# Only need to update import / eval nodes
if [[ "$MINION_ROLE" == "import" ]] || [[ "$MINION_ROLE" == "eval" ]]; then
update_import_fleet_output
fi
# Check if expected default policy is logstash (global.pipeline is REDIS or "")
pipeline=$(lookup_pillar "pipeline" "global")
if [[ -z "$pipeline" ]] || [[ "$pipeline" == "REDIS" ]]; then
# Check if this grid is currently affected by corrupt fleet output policy
if elastic-agent status | grep "config: key file not configured" > /dev/null 2>&1; then
echo "Elastic Agent shows an ssl error connecting to logstash output. Updating output policy..."
update_default_logstash_output
fi
fi
POSTVERSION=2.4.190
}
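For admins checking whether a grid is in the affected state by hand, the same test post_to_2.4.190 runs can be reproduced directly; a sketch, assuming lookup_pillar maps to the global:pipeline pillar key (the helper's internals are not shown in this diff):

```bash
# Sketch: manual version of the post_to_2.4.190 check.
pipeline=$(salt-call pillar.get global:pipeline --out=newline_values_only 2>/dev/null)
if [[ -z "$pipeline" || "$pipeline" == "REDIS" ]]; then
  if elastic-agent status 2>/dev/null | grep -q "config: key file not configured"; then
    echo "Grid is affected by the corrupt fleet output policy"
  fi
fi
```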
repo_sync() {
echo "Sync the local repo."
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -864,10 +887,15 @@ up_to_2.4.170() {
}
up_to_2.4.180() {
echo "Nothing to do for 2.4.180"
INSTALLEDVERSION=2.4.180
}
up_to_2.4.190() {
# Elastic Update for this release, so download Elastic Agent files
determine_elastic_agent_upgrade
INSTALLEDVERSION=2.4.190
}
add_hydra_pillars() {
@@ -1143,6 +1171,44 @@ update_elasticsearch_index_settings() {
done
}
update_import_fleet_output() {
if output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" --retry 3 --fail 2>/dev/null); then
# Update the current config of so-manager_elasticsearch output policy in place (leaving any customizations like having changed the preset value from 'balanced' to 'performance')
CAFINGERPRINT=$(openssl x509 -in /etc/pki/tls/certs/intca.crt -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
updated_policy=$(jq --arg CAFINGERPRINT "$CAFINGERPRINT" '.item | (del(.id) | .ca_trusted_fingerprint = $CAFINGERPRINT)' <<< "$output")
if curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" -XPUT -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$updated_policy" --retry 3 --fail 2>/dev/null; then
echo "Successfully updated so-manager_elasticsearch fleet output policy"
else
fail "Failed to update so-manager_elasticsearch fleet output policy"
fi
fi
}
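The `ca_trusted_fingerprint` Fleet expects is the SHA-256 digest of the CA certificate in DER form, rendered as uppercase hex. openssl can produce the same value directly with its `-fingerprint` flag, which makes a handy cross-check; a sketch using the same intca.crt path:

```bash
# Sketch: two equivalent ways to derive the CA fingerprint used above.
openssl x509 -in /etc/pki/tls/certs/intca.crt -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]'
openssl x509 -in /etc/pki/tls/certs/intca.crt -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':'
```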
update_default_logstash_output() {
echo "Updating fleet logstash output policy grid-logstash"
if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
# Keep already configured hosts for this update, subsequent host updates come from so-elastic-fleet-outputs-update
HOSTS=$(echo "$logstash_policy" | jq -r '.item.hosts')
DEFAULT_ENABLED=$(echo "$logstash_policy" | jq -r '.item.is_default')
DEFAULT_MONITORING_ENABLED=$(echo "$logstash_policy" | jq -r '.item.is_default_monitoring')
LOGSTASHKEY=$(openssl rsa -in /etc/pki/elasticfleet-logstash.key)
LOGSTASHCRT=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt)
LOGSTASHCA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
JSON_STRING=$(jq -n \
--argjson HOSTS "$HOSTS" \
--arg DEFAULT_ENABLED "$DEFAULT_ENABLED" \
--arg DEFAULT_MONITORING_ENABLED "$DEFAULT_MONITORING_ENABLED" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","type":"logstash","hosts": $HOSTS,"is_default": $DEFAULT_ENABLED,"is_default_monitoring": $DEFAULT_MONITORING_ENABLED,"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets":{"ssl":{"key": $LOGSTASHKEY }}}')
if curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --retry 3 --retry-delay 10 --fail; then
echo "Successfully updated grid-logstash fleet output policy"
fi
fi
}
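One detail that is easy to miss above: `$HOSTS` is passed with `--argjson` so it stays a JSON array, while the PEM blobs go in with `--arg` as plain strings. A small self-contained illustration of the difference (host values invented for the example):

```bash
# Sketch: --argjson preserves JSON structure; --arg would stringify it.
HOSTS='["10.10.10.10:5055","manager:5055"]'
jq -n --argjson HOSTS "$HOSTS" '{hosts: $HOSTS}'   # hosts is an array
jq -n --arg HOSTS "$HOSTS" '{hosts: $HOSTS}'       # hosts is one literal string
```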
update_salt_mine() {
echo "Populating the mine with mine_functions for each host."
set +e
@@ -1359,6 +1425,7 @@ main() {
fi
set_minionid
MINION_ROLE=$(lookup_role)
echo "Found that Security Onion $INSTALLEDVERSION is currently installed." echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo "" echo ""
if [[ $is_airgap -eq 0 ]]; then if [[ $is_airgap -eq 0 ]]; then
@@ -1401,7 +1468,7 @@ main() {
if [ "$is_hotfix" == "true" ]; then if [ "$is_hotfix" == "true" ]; then
echo "Applying $HOTFIXVERSION hotfix" echo "Applying $HOTFIXVERSION hotfix"
# since we don't run the backup.config_backup state on import we wont snapshot previous version states and pillars # since we don't run the backup.config_backup state on import we wont snapshot previous version states and pillars
if [[ ! "$MINIONID" =~ "_import" ]]; then if [[ ! "$MINION_ROLE" == "import" ]]; then
backup_old_states_pillars backup_old_states_pillars
fi fi
copy_new_files copy_new_files
@@ -1464,7 +1531,7 @@ main() {
fi
# since we don't run the backup.config_backup state on import we won't snapshot previous version states and pillars
if [[ ! "$MINION_ROLE" == "import" ]]; then
echo ""
echo "Creating snapshots of default and local Salt states and pillars and saving to /nsm/backup/"
backup_old_states_pillars

View File

@@ -161,6 +161,7 @@ DEFAULT_BASE_PATH = '/opt/so/saltstack/local/salt/hypervisor/hosts'
VALID_ROLES = ['sensor', 'searchnode', 'idh', 'receiver', 'heavynode', 'fleet']
LICENSE_PATH = '/opt/so/saltstack/local/pillar/soc/license.sls'
DEFAULTS_PATH = '/opt/so/saltstack/default/salt/hypervisor/defaults.yaml'
HYPERVISOR_PILLAR_PATH = '/opt/so/saltstack/local/pillar/hypervisor/soc_hypervisor.sls'
# Define the retention period for destroyed VMs (in hours)
DESTROYED_VM_RETENTION_HOURS = 48
@@ -271,7 +272,7 @@ def parse_hardware_indices(hw_value: Any) -> List[int]:
return indices
def get_hypervisor_model(hypervisor: str) -> str:
"""Get sosmodel or byodmodel from hypervisor grains."""
try:
# Get cached grains using Salt runner
grains = runner.cmd(
@@ -283,9 +284,9 @@ def get_hypervisor_model(hypervisor: str) -> str:
# Get the first minion ID that matches our hypervisor
minion_id = next(iter(grains.keys()))
model = grains[minion_id].get('sosmodel', grains[minion_id].get('byodmodel', ''))
if not model:
raise ValueError(f"No sosmodel or byodmodel grain found for hypervisor {hypervisor}")
log.debug("Found model %s for hypervisor %s", model, hypervisor)
return model
@@ -295,16 +296,48 @@ def get_hypervisor_model(hypervisor: str) -> str:
raise
def load_hardware_defaults(model: str) -> dict:
"""Load hardware configuration from defaults.yaml and optionally override with pillar configuration."""
config = None
config_source = None
try:
# First, try to load from defaults.yaml
log.debug("Checking for model %s in %s", model, DEFAULTS_PATH)
defaults = read_yaml_file(DEFAULTS_PATH)
if not defaults or 'hypervisor' not in defaults:
raise ValueError("Invalid defaults.yaml structure")
if 'model' not in defaults['hypervisor']:
raise ValueError("No model configurations found in defaults.yaml")
# Check if model exists in defaults
if model in defaults['hypervisor']['model']:
config = defaults['hypervisor']['model'][model]
config_source = DEFAULTS_PATH
log.debug("Found model %s in %s", model, DEFAULTS_PATH)
# Then, try to load from pillar file (if it exists)
try:
log.debug("Checking for model %s in %s", model, HYPERVISOR_PILLAR_PATH)
pillar_config = read_yaml_file(HYPERVISOR_PILLAR_PATH)
if pillar_config and 'hypervisor' in pillar_config:
if 'model' in pillar_config['hypervisor']:
if model in pillar_config['hypervisor']['model']:
# Override with pillar configuration
config = pillar_config['hypervisor']['model'][model]
config_source = HYPERVISOR_PILLAR_PATH
log.debug("Found model %s in %s (overriding defaults)", model, HYPERVISOR_PILLAR_PATH)
except FileNotFoundError:
log.debug("Pillar file %s not found, using defaults only", HYPERVISOR_PILLAR_PATH)
except Exception as e:
log.warning("Failed to read pillar file %s: %s (using defaults)", HYPERVISOR_PILLAR_PATH, str(e))
# If model was not found in either file, raise an error
if config is None:
raise ValueError(f"Model {model} not found in {DEFAULTS_PATH} or {HYPERVISOR_PILLAR_PATH}")
log.debug("Using hardware configuration for model %s from %s", model, config_source)
return config
except Exception as e:
log.error("Failed to load hardware defaults: %s", str(e))
raise
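The pillar file read here mirrors the `hypervisor:model` layout of defaults.yaml, so a per-grid hardware override only has to restate the model key it wants to change. A hypothetical sketch of such an override (model name and hardware keys are invented for illustration; real keys come from defaults.yaml):

```bash
# Sketch: hypothetical per-grid override; load_hardware_defaults() above
# will prefer this entry over the same model in defaults.yaml.
cat > /opt/so/saltstack/local/pillar/hypervisor/soc_hypervisor.sls <<'EOF'
hypervisor:
  model:
    EXAMPLE-MODEL:       # hypothetical model name
      cpu: 32            # hypothetical values
      memory: 131072
EOF
```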

View File

@@ -4,7 +4,10 @@
Elastic License 2.0. #}
{% set role = salt['grains.get']('role', '') %}
{# We are using usebr0 mostly for setup of the so-managerhype node and controlling when we use br0 vs the physical interface #}
{% set usebr0 = salt['pillar.get']('usebr0', True) %}
{% if role in ['so-hypervisor','so-managerhype'] and usebr0 %}
{% set interface = 'br0' %}
{% else %}
{% set interface = pillar.host.mainint %}

View File

@@ -3,7 +3,7 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# this state was separated from salt.minion state since it is called during setup
# GLOBALS are imported in the salt.minion state and that is not available at that point in setup
# this state is included in the salt.minion state

View File

@@ -1,18 +1,22 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'salt/map.jinja' import UPGRADECOMMAND with context %}
{% from 'salt/map.jinja' import SALTVERSION %}
{% from 'salt/map.jinja' import INSTALLEDSALTVERSION %}
{% from 'salt/map.jinja' import SALTPACKAGES %}
{% from 'salt/map.jinja' import SYSTEMD_UNIT_FILE %}
{% import_yaml 'salt/minion.defaults.yaml' as SALTMINION %}
include:
- salt.python_modules
- salt.patch.x509_v2
- salt
- systemd.reload
- repo.client
- salt.mine_functions
- salt.minion.service_file
{% if GLOBALS.role in GLOBALS.manager_roles %}
- ca
{% endif %}
@@ -94,17 +98,6 @@ enable_startup_states:
- regex: '^startup_states: highstate$'
- unless: pgrep so-setup
# prior to 2.4.30 this managed file would restart the salt-minion service when updated
# since this file is currently only adding a delay service start
# it is not required to restart the service
salt_minion_service_unit_file:
file.managed:
- name: {{ SYSTEMD_UNIT_FILE }}
- source: salt://salt/service/salt-minion.service.jinja
- template: jinja
- onchanges_in:
- module: systemd_reload
{% endif %}
# this has to be outside the if statement above since there are <requisite>_in calls to this state

View File

@@ -0,0 +1,26 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'salt/map.jinja' import SALTVERSION %}
{% from 'salt/map.jinja' import INSTALLEDSALTVERSION %}
{% from 'salt/map.jinja' import SYSTEMD_UNIT_FILE %}
include:
- systemd.reload
{% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
# prior to 2.4.30 this managed file would restart the salt-minion service when updated
# since this file is currently only adding a delay service start
# it is not required to restart the service
salt_minion_service_unit_file:
file.managed:
- name: {{ SYSTEMD_UNIT_FILE }}
- source: salt://salt/service/salt-minion.service.jinja
- template: jinja
- onchanges_in:
- module: systemd_reload
{% endif %}

View File

@@ -249,22 +249,6 @@ add_readme_custom_local_sigma_repo_template:
- context:
repo_type: "sigma"
create_custom_local_playbooks_repo_template:
git.present:
- name: /nsm/rules/custom-local-repos/local-playbooks
- bare: False
- force: True
add_readme_custom_local_playbooks_repo_template:
file.managed:
- name: /nsm/rules/custom-local-repos/local-playbooks/README
- source: salt://soc/files/soc/detections_custom_repo_template_readme.jinja
- user: 939
- group: 939
- template: jinja
- context:
repo_type: "playbooks"
socore_own_custom_repos:
file.directory:
- name: /nsm/rules/custom-local-repos/

View File

@@ -1487,16 +1487,13 @@ soc:
- repo: https://github.com/Security-Onion-Solutions/securityonion-resources-playbooks
branch: main
folder: securityonion-normalized
- repo: file:///nsm/rules/custom-local-repos/local-playbooks
branch: main
airgap:
- repo: file:///nsm/airgap-resources/playbooks/securityonion-resources-playbooks
branch: main
folder: securityonion-normalized
- repo: file:///nsm/rules/custom-local-repos/local-playbooks
branch: main
assistant:
apiUrl: https://onionai.securityonion.net
healthTimeoutSeconds: 3
salt:
queueDir: /opt/sensoroni/queue
timeoutMs: 45000
@@ -2549,7 +2546,7 @@ soc:
level: 'high' # info | low | medium | high | critical
assistant:
enabled: false
investigationPrompt: Investigate Alert ID {socId}
contextLimitSmall: 200000
contextLimitLarge: 1000000
thresholdColorRatioLow: 0.5

View File

@@ -91,51 +91,4 @@ Finally, commit it:
The next time the Elastalert / Sigma engine syncs, the new rule should be imported
If there are errors, review the sync log to troubleshoot further.
{% elif repo_type == 'playbooks' %}
# Playbooks Local Custom Repository
This folder has already been initialized as a git repo
and your Security Onion grid is configured to import any Playbook files found here.
Just add your playbook file and commit it.
For example:
** Note: If this is your first time making changes to this repo, you may run into the following error:
fatal: detected dubious ownership in repository at '/nsm/rules/custom-local-repos/local-playbooks'
To add an exception for this directory, call:
git config --global --add safe.directory /nsm/rules/custom-local-repos/local-playbooks
This means that the user you are running commands as does not match the user that is used for this git repo (socore).
You will need to make sure your playbook files are accessible to the socore user, so either su to socore
or add the exception and then chown the playbook files later.
Also, you will be asked to set some configuration:
```
Author identity unknown
*** Please tell me who you are.
Run
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
to set your account's default identity.
Omit --global to set the identity only in this repository.
```
Run these commands, omitting the `--global`.
With that out of the way:
First, create the playbook file with a .yml or .yaml extension:
`vi my_custom_playbook.yml`
Next, use git to stage the new playbook to be committed:
`git add my_custom_playbook.yml`
Finally, commit it:
`git commit -m "Initial commit of my_custom_playbook.yml"`
The next time SOC restarts, the new playbook should be imported
If there are errors, review the SOC log to troubleshoot further.
{% endif %}

View File

@@ -585,6 +585,10 @@ soc:
description: The URL of the AI gateway.
advanced: True
global: True
healthTimeoutSeconds:
description: Timeout in seconds for the Onion AI health check.
global: True
advanced: True
client:
assistant:
enabled:
@@ -615,6 +619,7 @@ soc:
advanced: True
lowBalanceColorAlert:
description: Onion AI credit amount at which balance turns red.
global: True
advanced: True
apiTimeoutMs:
description: Duration (in milliseconds) to wait for a response from the SOC server API before giving up and showing an error on the SOC UI.

View File

@@ -541,8 +541,15 @@ configure_minion() {
"log_file: /opt/so/log/salt/minion"\ "log_file: /opt/so/log/salt/minion"\
"#startup_states: highstate" >> "$minion_config" "#startup_states: highstate" >> "$minion_config"
info "Running: salt-call state.apply salt.mine_functions --local --file-root=../salt/ -l info pillar='{"host": {"mainint": "$MNIC"}}'" # At the time the so-managerhype node does not yet have the bridge configured.
salt-call state.apply salt.mine_functions --local --file-root=../salt/ -l info pillar="{'host': {'mainint': $MNIC}}" # The so-hypervisor node doesn't either, but it doesn't cause issues here.
local usebr0=false
if [ "$minion_type" == 'hypervisor' ]; then
usebr0=true
fi
local pillar_json="{\"host\": {\"mainint\": \"$MNIC\"}, \"usebr0\": $usebr0}"
info "Running: salt-call state.apply salt.mine_functions --local --file-root=../salt/ -l info pillar='$pillar_json'"
salt-call state.apply salt.mine_functions --local --file-root=../salt/ -l info pillar="$pillar_json"
{
logCmd "systemctl enable salt-minion";
@@ -1194,10 +1201,7 @@ hypervisor_local_states() {
info "Running libvirt states for hypervisor" info "Running libvirt states for hypervisor"
logCmd "salt-call state.apply libvirt.64962 --local --file-root=../salt/ -l info queue=True" logCmd "salt-call state.apply libvirt.64962 --local --file-root=../salt/ -l info queue=True"
info "Setting up bridge for $MNIC" info "Setting up bridge for $MNIC"
salt-call state.apply libvirt.bridge --local --file-root=../salt/ -l info pillar='{"host": {"mainint": "'$MNIC'"}}' queue=True salt-call state.apply libvirt.bridge --local --file-root=../salt/ -l info pillar='{"host": {"mainint": "'$MNIC'"}}' queue=True
if [ $is_managerhype ]; then
logCmd "salt-call state.apply salt.minion queue=True"
fi
fi
}

View File

@@ -762,6 +762,7 @@ if ! [[ -f $install_opt_file ]]; then
fi
logCmd "salt-call state.apply common.packages"
logCmd "salt-call state.apply common"
hypervisor_local_states
# this will apply the salt.minion state first since salt.master includes salt.minion
logCmd "salt-call state.apply salt.master"
# wait here until we get a response from the salt-master since it may have just restarted
@@ -826,7 +827,6 @@ if ! [[ -f $install_opt_file ]]; then
checkin_at_boot
set_initial_firewall_access
logCmd "salt-call schedule.enable -linfo --local"
hypervisor_local_states
verify_setup
else
touch /root/accept_changes