Commit by Josh Patterson, 2025-10-07 12:26:58 -04:00
311 changed files with 184563 additions and 162629 deletions

View File

@@ -30,6 +30,8 @@ body:
- 2.4.150
- 2.4.160
- 2.4.170
- 2.4.180
- 2.4.190
- Other (please provide detail below)
validations:
required: true

View File

@@ -1,17 +1,17 @@
### 2.4.160-20250625 ISO image released on 2025/06/25
### 2.4.180-20250916 ISO image released on 2025/09/17
### Download and Verify
2.4.160-20250625 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.160-20250625.iso
2.4.180-20250916 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.180-20250916.iso
MD5: 78CF5602EFFAB84174C56AD2826E6E4E
SHA1: FC7EEC3EC95D97D3337501BAA7CA8CAE7C0E15EA
SHA256: 0ED965E8BEC80EE16AE90A0F0F96A3046CEF2D92720A587278DDDE3B656C01C2
MD5: DE93880E38DE4BE45D05A41E1745CB1F
SHA1: AEA6948911E50A4A38E8729E0E965C565402E3FC
SHA256: C9BD8CA071E43B048ABF9ED145B87935CB1D4BB839B2244A06FAD1BBA8EAC84A
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.160-20250625.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.180-20250916.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.160-20250625.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.180-20250916.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.160-20250625.iso
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.180-20250916.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.4.160-20250625.iso.sig securityonion-2.4.160-20250625.iso
gpg --verify securityonion-2.4.180-20250916.iso.sig securityonion-2.4.180-20250916.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Wed 25 Jun 2025 10:13:33 AM EDT using RSA key ID FE507013
gpg: Signature made Tue 16 Sep 2025 06:30:19 PM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.

View File

@@ -1 +1 @@
2.4.170
2.4.190

View File

@@ -262,6 +262,9 @@ base:
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- stig.soc_stig
- elasticfleet.soc_elasticfleet
- elasticfleet.adv_elasticfleet
'*_import':
- node_data.ips
@@ -319,10 +322,12 @@ base:
- elasticfleet.adv_elasticfleet
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
'*_hypervisor':
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
'*_desktop':
- minions.{{ grains.id }}

View File

@@ -38,7 +38,7 @@ Examples:
Notes:
- Verifies Security Onion license
- Downloads and validates Oracle Linux KVM image if needed
- Generates Ed25519 SSH keys if not present
- Generates ECDSA SSH keys if not present
- Creates/recreates VM based on environment changes
- Forces hypervisor configuration via highstate after successful setup (when minion_id provided)
@@ -46,7 +46,7 @@ Examples:
The setup process includes:
1. License validation
2. Oracle Linux KVM image download and checksum verification
3. SSH key generation for secure VM access
3. ECDSA SSH key generation for secure VM access
4. Cloud-init configuration for VM provisioning
5. VM creation with specified disk size
6. Hypervisor configuration via highstate (when minion_id provided and setup successful)
@@ -74,7 +74,7 @@ import sys
import time
import yaml
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.asymmetric import ec
# Configure logging
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
@@ -232,7 +232,7 @@ def _check_ssh_keys_exist():
bool: True if both private and public keys exist, False otherwise
"""
key_dir = '/etc/ssh/auth_keys/soqemussh'
key_path = f'{key_dir}/id_ed25519'
key_path = f'{key_dir}/id_ecdsa'
pub_key_path = f'{key_path}.pub'
dest_dir = '/opt/so/saltstack/local/salt/libvirt/ssh/keys'
dest_path = os.path.join(dest_dir, os.path.basename(pub_key_path))
@@ -250,7 +250,7 @@ def _setup_ssh_keys():
"""
try:
key_dir = '/etc/ssh/auth_keys/soqemussh'
key_path = f'{key_dir}/id_ed25519'
key_path = f'{key_dir}/id_ecdsa'
pub_key_path = f'{key_path}.pub'
# Check if keys already exist
@@ -266,9 +266,9 @@ def _setup_ssh_keys():
os.makedirs(key_dir, exist_ok=True)
os.chmod(key_dir, 0o700)
# Generate new ed25519 key pair
# Generate new ECDSA key pair using SECP256R1 curve
log.info("Generating new SSH keys")
private_key = ed25519.Ed25519PrivateKey.generate()
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()
# Serialize private key
@@ -540,7 +540,7 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
Notes:
- Verifies Security Onion license
- Downloads and validates Oracle Linux KVM image if needed
- Generates Ed25519 SSH keys if not present
- Generates ECDSA SSH keys if not present
- Creates/recreates VM based on environment changes
- Forces hypervisor configuration via highstate after successful setup
(when minion_id is provided)
@@ -765,7 +765,7 @@ def create_vm(vm_name: str, disk_size: str = '220G'):
_set_ownership_and_perms(vm_dir, mode=0o750)
# Read the SSH public key
pub_key_path = '/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ed25519.pub'
pub_key_path = '/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ecdsa.pub'
try:
with salt.utils.files.fopen(pub_key_path, 'r') as f:
ssh_pub_key = f.read().strip()
@@ -844,7 +844,7 @@ output:
all: ">> /var/log/cloud-init.log"
# configure interaction with ssh server
ssh_genkeytypes: ['ed25519', 'rsa']
ssh_genkeytypes: ['ecdsa', 'rsa']
# set timezone for VM
timezone: UTC
@@ -1038,7 +1038,7 @@ def regenerate_ssh_keys():
Notes:
- Validates Security Onion license
- Removes existing keys if present
- Generates new Ed25519 key pair
- Generates new ECDSA key pair
- Sets secure permissions (600 for private, 644 for public)
- Distributes public key to required locations
@@ -1048,7 +1048,7 @@ def regenerate_ssh_keys():
2. Checks for existing SSH keys
3. Removes old keys if present
4. Creates required directories with secure permissions
5. Generates new Ed25519 key pair
5. Generates new ECDSA key pair
6. Sets appropriate file permissions
7. Distributes public key to required locations
@@ -1067,7 +1067,7 @@ def regenerate_ssh_keys():
# Remove existing keys
key_dir = '/etc/ssh/auth_keys/soqemussh'
key_path = f'{key_dir}/id_ed25519'
key_path = f'{key_dir}/id_ecdsa'
pub_key_path = f'{key_path}.pub'
dest_dir = '/opt/so/saltstack/local/salt/libvirt/ssh/keys'
dest_path = os.path.join(dest_dir, os.path.basename(pub_key_path))
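The hunk above swaps every Ed25519 reference for ECDSA (NIST P-256) while keeping the same key paths and permissions. For reference, equivalent key material could be produced from a shell with ssh-keygen; this is only a rough sketch using the paths from the module, not the module's actual code path:
```
# sketch: shell equivalent of the ECDSA key generation done in _setup_ssh_keys()
KEY_DIR=/etc/ssh/auth_keys/soqemussh
install -d -m 700 "$KEY_DIR"
# -t ecdsa -b 256 selects the P-256 (SECP256R1) curve; -N '' leaves the key unencrypted
ssh-keygen -t ecdsa -b 256 -N '' -f "$KEY_DIR/id_ecdsa"
chmod 600 "$KEY_DIR/id_ecdsa"
chmod 644 "$KEY_DIR/id_ecdsa.pub"
# distribute the public key to the salt fileserver location read by create_vm()
cp "$KEY_DIR/id_ecdsa.pub" /opt/so/saltstack/local/salt/libvirt/ssh/keys/
```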

View File

@@ -143,6 +143,7 @@
),
'so-fleet': (
ssl_states +
stig_states +
['logstash', 'nginx', 'healthcheck', 'elasticfleet']
),
'so-receiver': (

View File

@@ -11,6 +11,10 @@ TODAY=$(date '+%Y_%m_%d')
BACKUPDIR={{ DESTINATION }}
BACKUPFILE="$BACKUPDIR/so-config-backup-$TODAY.tar"
MAXBACKUPS=7
EXCLUSIONS=(
"--exclude=/opt/so/saltstack/local/salt/elasticfleet/files/so_agent-installers"
)
# Create backup dir if it does not exist
mkdir -p /nsm/backup
@@ -23,7 +27,7 @@ if [ ! -f $BACKUPFILE ]; then
# Loop through all paths defined in global.sls, and append them to backup file
{%- for LOCATION in BACKUPLOCATIONS %}
tar -rf $BACKUPFILE {{ LOCATION }}
tar -rf $BACKUPFILE "${EXCLUSIONS[@]}" {{ LOCATION }}
{%- endfor %}
fi
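Because the EXCLUSIONS array is expanded before the target path, GNU tar applies the --exclude pattern to every member it appends on that -r call; a minimal sketch of what one loop iteration expands to (the location path here is only an example):
```
# sketch: expanded form of one tar append with the exclusion applied (example location)
EXCLUSIONS=( "--exclude=/opt/so/saltstack/local/salt/elasticfleet/files/so_agent-installers" )
tar -rf "$BACKUPFILE" "${EXCLUSIONS[@]}" /opt/so/saltstack/local
```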

View File

@@ -441,8 +441,7 @@ lookup_grain() {
lookup_role() {
id=$(lookup_grain id)
pieces=($(echo $id | tr '_' ' '))
echo ${pieces[1]}
echo "${id##*_}"
}
is_feature_enabled() {
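The new expansion keeps only the text after the last underscore, so the role lookup no longer breaks when the hostname portion of the minion id itself contains underscores; a quick illustration with a made-up id:
```
# sketch: old vs new behavior for a minion id whose hostname contains an underscore (example id)
id="db_server01_sensor"
pieces=($(echo $id | tr '_' ' '))
echo "${pieces[1]}"   # old: prints "server01" (wrong)
echo "${id##*_}"      # new: prints "sensor" (the role suffix)
```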

View File

@@ -268,6 +268,13 @@ for log_file in $(cat /tmp/log_check_files); do
tail -n $RECENT_LOG_LINES $log_file > /tmp/log_check
check_for_errors
done
# Look for OOM-specific errors in /var/log/messages, which can lead to odd behavior / test failures
if [[ -f /var/log/messages ]]; then
status "Checking log file /var/log/messages"
if journalctl --since "24 hours ago" | grep -iE 'out of memory|oom-kill'; then
RESULT=1
fi
fi
# Cleanup temp files
rm -f /tmp/log_check_files

View File

@@ -173,7 +173,7 @@ for PCAP in $INPUT_FILES; do
status "- assigning unique identifier to import: $HASH"
pcap_data=$(pcapinfo "${PCAP}")
if ! echo "$pcap_data" | grep -q "First packet time:" || echo "$pcap_data" |egrep -q "Last packet time: 1970-01-01|Last packet time: n/a"; then
if ! echo "$pcap_data" | grep -q "Earliest packet time:" || echo "$pcap_data" |egrep -q "Latest packet time: 1970-01-01|Latest packet time: n/a"; then
status "- this PCAP file is invalid; skipping"
INVALID_PCAPS_COUNT=$((INVALID_PCAPS_COUNT + 1))
else
@@ -205,8 +205,8 @@ for PCAP in $INPUT_FILES; do
HASHES="${HASHES} ${HASH}"
fi
START=$(pcapinfo "${PCAP}" -a |grep "First packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e |grep "Last packet time:" | awk '{print $4}')
START=$(pcapinfo "${PCAP}" -a |grep "Earliest packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e |grep "Latest packet time:" | awk '{print $4}')
status "- found PCAP data spanning dates $START through $END"
# compare $START to $START_OLDEST
@@ -248,7 +248,7 @@ fi
START_OLDEST_SLASH=$(echo $START_OLDEST | sed -e 's/-/%2F/g')
END_NEWEST_SLASH=$(echo $END_NEWEST | sed -e 's/-/%2F/g')
if [[ $VALID_PCAPS_COUNT -gt 0 ]] || [[ $SKIPPED_PCAPS_COUNT -gt 0 ]]; then
URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20event.module*%20%7C%20groupby%20-sankey%20event.module*%20event.dataset%20%7C%20groupby%20event.dataset%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port%20%7C%20groupby%20network.protocol%20%7C%20groupby%20rule.name%20rule.category%20event.severity_label%20%7C%20groupby%20dns.query.name%20%7C%20groupby%20file.mime_type%20%7C%20groupby%20http.virtual_host%20http.uri%20%7C%20groupby%20notice.note%20notice.message%20notice.sub_message%20%7C%20groupby%20ssl.server_name%20%7C%20groupby%20source_geo.organization_name%20source.geo.country_name%20%7C%20groupby%20destination_geo.organization_name%20destination.geo.country_name&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20event.module*%20%7C%20groupby%20-sankey%20event.module*%20event.dataset%20%7C%20groupby%20event.dataset%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port%20%7C%20groupby%20network.protocol%20%7C%20groupby%20rule.name%20rule.category%20event.severity_label%20%7C%20groupby%20dns.query.name%20%7C%20groupby%20file.mime_type%20%7C%20groupby%20http.virtual_host%20http.uri%20%7C%20groupby%20notice.note%20notice.message%20notice.sub_message%20%7C%20groupby%20ssl.server_name%20%7C%20groupby%20source.as.organization.name%20source.geo.country_name%20%7C%20groupby%20destination.as.organization.name%20destination.geo.country_name&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
status "Import complete!"
status
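The string changes above track newer Wireshark releases, where capinfos reports "Earliest packet time" and "Latest packet time" instead of "First/Last packet time". A minimal sketch of extracting the dates with the updated field names, calling capinfos directly rather than the script's pcapinfo wrapper (the output layout is an assumption about capinfos):
```
# sketch: pulling start/end dates from capinfos output using the newer field names
pcap_data=$(capinfos -a -e example.pcap)
START=$(echo "$pcap_data" | grep "Earliest packet time:" | awk '{print $4}')
END=$(echo "$pcap_data" | grep "Latest packet time:" | awk '{print $4}')
echo "PCAP data spans $START through $END"
```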

View File

@@ -9,3 +9,6 @@ fleetartifactdir:
- user: 947
- group: 939
- makedirs: True
- recurse:
- user
- group

View File

@@ -9,6 +9,9 @@
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% set node_data = salt['pillar.get']('node_data') %}
include:
- elasticfleet.artifact_registry
# Add EA Group
elasticfleetgroup:
group.present:

View File

@@ -38,6 +38,7 @@ elasticfleet:
- elasticsearch
- endpoint
- fleet_server
- filestream
- http_endpoint
- httpjson
- log

View File

@@ -67,6 +67,8 @@ so-elastic-fleet-auto-configure-artifact-urls:
elasticagent_syncartifacts:
file.recurse:
- name: /nsm/elastic-fleet/artifacts/beats
- user: 947
- group: 947
- source: salt://beats
{% endif %}
@@ -133,12 +135,18 @@ so-elastic-fleet-package-statefile:
so-elastic-fleet-package-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-package-upgrade
- retry:
attempts: 3
interval: 10
- onchanges:
- file: /opt/so/state/elastic_fleet_packages.txt
so-elastic-fleet-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-policy-load
- retry:
attempts: 3
interval: 10
so-elastic-agent-grid-upgrade:
cmd.run:
@@ -150,7 +158,11 @@ so-elastic-agent-grid-upgrade:
so-elastic-fleet-integration-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-upgrade
- retry:
attempts: 3
interval: 10
{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
so-elastic-fleet-addon-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-optional-integrations-load

View File

@@ -0,0 +1,48 @@
{
"package": {
"name": "filestream",
"version": ""
},
"name": "agent-monitor",
"namespace": "",
"description": "",
"policy_ids": [
"so-grid-nodes_general"
],
"output_id": null,
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/agents/agent-monitor.log"
],
"data_stream.dataset": "agentmonitor",
"pipeline": "elasticagent.monitor",
"parsers": "",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- add_fields:\n target: event\n fields:\n module: gridmetrics",
"tags": [],
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": true,
"fingerprint_offset": 0,
"fingerprint_length": 64,
"file_identity_native": false,
"exclude_lines": [],
"include_lines": []
}
}
}
}
}
}

View File

@@ -19,7 +19,7 @@
],
"data_stream.dataset": "idh",
"tags": [],
"processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- drop_fields:\n when:\n equals:\n logtype: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
"processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n- drop_fields:\n when:\n equals:\n event.code: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
"custom": "pipeline: common"
}
}

View File

@@ -20,7 +20,7 @@
],
"data_stream.dataset": "import",
"custom": "",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.3.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.0.0\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.3.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.3.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.0.0\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.5.4\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.5.4\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.5.4\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"tags": [
"import"
]

View File

@@ -23,6 +23,13 @@ fi
# Define a banner to separate sections
banner="========================================================================="
fleet_api() {
local QUERYPATH=$1
shift
curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/${QUERYPATH}" "$@" --retry 3 --retry-delay 10 --fail 2>/dev/null
}
elastic_fleet_integration_check() {
AGENT_POLICY=$1
@@ -39,7 +46,9 @@ elastic_fleet_integration_create() {
JSON_STRING=$1
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -d "$JSON_STRING"; then
return 1
fi
}
@@ -56,7 +65,10 @@ elastic_fleet_integration_remove() {
'{"packagePolicyIds":[$INTEGRATIONID]}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/delete" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/delete" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo "Error: Unable to delete '$NAME' from '$AGENT_POLICY'"
return 1
fi
}
elastic_fleet_integration_update() {
@@ -65,7 +77,9 @@ elastic_fleet_integration_update() {
JSON_STRING=$2
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPUT -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_integration_policy_upgrade() {
@@ -77,101 +91,116 @@ elastic_fleet_integration_policy_upgrade() {
'{"packagePolicyIds":[$INTEGRATIONID]}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/upgrade" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/upgrade" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_package_version_check() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.version'
if output=$(fleet_api "epm/packages/$PACKAGE"); then
echo "$output" | jq -r '.item.version'
else
echo "Error: Failed to get current package version for '$PACKAGE'"
return 1
fi
}
elastic_fleet_package_latest_version_check() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.latestVersion'
if output=$(fleet_api "epm/packages/$PACKAGE"); then
if version=$(jq -e -r '.item.latestVersion' <<< $output); then
echo "$version"
fi
else
echo "Error: Failed to get latest version for '$PACKAGE'"
return 1
fi
}
elastic_fleet_package_install() {
PKG=$1
VERSION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}' "localhost:5601/api/fleet/epm/packages/$PKG/$VERSION"
if ! fleet_api "epm/packages/$PKG/$VERSION" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}'; then
return 1
fi
}
elastic_fleet_bulk_package_install() {
BULK_PKG_LIST=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$1 "localhost:5601/api/fleet/epm/packages/_bulk"
}
elastic_fleet_package_is_installed() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.status'
}
elastic_fleet_installed_packages() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' -H 'Content-Type: application/json' "localhost:5601/api/fleet/epm/packages/installed?perPage=500"
}
elastic_fleet_agent_policy_ids() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].id
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve agent policies."
exit 1
if ! fleet_api "epm/packages/_bulk" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$BULK_PKG_LIST; then
return 1
fi
}
elastic_fleet_agent_policy_names() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].name
if [ $? -ne 0 ]; then
elastic_fleet_installed_packages() {
if ! fleet_api "epm/packages/installed?perPage=500"; then
return 1
fi
}
elastic_fleet_agent_policy_ids() {
if output=$(fleet_api "agent_policies"); then
echo "$output" | jq -r .items[].id
else
echo "Error: Failed to retrieve agent policies."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_names() {
AGENT_POLICY=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r .item.package_policies[].name
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r .item.package_policies[].name
else
echo "Error: Failed to retrieve integrations for '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_package_name() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
else
echo "Error: Failed to retrieve package name for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_package_version() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version'
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve package version for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
if version=$(jq -e -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version' <<< "$output"); then
echo "$version"
fi
else
echo "Error: Failed to retrieve integration version for '$INTEGRATION' in policy '$AGENT_POLICY'"
return 1
fi
}
elastic_fleet_integration_id() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
else
echo "Error: Failed to retrieve integration ID for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_dryrun_upgrade() {
INTEGRATION_ID=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -L -X POST "localhost:5601/api/fleet/package_policies/upgrade/dryrun" -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"
if [ $? -ne 0 ]; then
if ! fleet_api "package_policies/upgrade/dryrun" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -XPOST -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"; then
echo "Error: Failed to complete dry run for '$INTEGRATION_ID'."
exit 1
return 1
fi
}
@@ -180,25 +209,18 @@ elastic_fleet_policy_create() {
NAME=$1
DESC=$2
FLEETSERVER=$3
TIMEOUT=$4
TIMEOUT=$4
JSON_STRING=$( jq -n \
--arg NAME "$NAME" \
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
)
--arg NAME "$NAME" \
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
)
# Create Fleet Policy
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_policy_update() {
POLICYID=$1
JSON_STRING=$2
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/agent_policies/$POLICYID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
}
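Every helper above now funnels through the single fleet_api wrapper, which centralizes the curl config, base URL, retries, and --fail handling. Two hedged usage sketches (the package name and payload variable are examples, not calls made by this file):
```
# GET: fetch the installed version of a package, letting the caller decide how to handle failure
if output=$(fleet_api "epm/packages/elastic_agent"); then
    echo "$output" | jq -r '.item.version'
fi

# POST: create an agent policy from a prepared JSON payload
if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
    echo "Error: agent policy creation failed"
fi
```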

View File

@@ -8,6 +8,7 @@
. /usr/sbin/so-elastic-fleet-common
ERROR=false
# Manage Elastic Defend Integration for Initial Endpoints Policy
for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/elastic-defend/*.json
do
@@ -15,9 +16,20 @@ do
elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Upgrading integration policy\n"
elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
echo -e "\nFailed to upgrade integration policy for ${INTEGRATION##*/}"
ERROR=true
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
ERROR=true
continue
fi
fi
done
if [[ "$ERROR" == "true" ]]; then
exit 1
fi

View File

@@ -25,5 +25,9 @@ for POLICYNAME in $POLICY; do
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Now update the integration policy using the modified JSON
elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"; then
# exit 1 on failure to update fleet integration policies; let salt handle retries
echo "Failed to update $POLICYNAME.."
exit 1
fi
done

View File

@@ -13,11 +13,10 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
/usr/sbin/so-elastic-fleet-package-upgrade
# Second, update Fleet Server policies
/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
/usr/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
# Third, configure Elastic Defend Integration separately
/usr/sbin/so-elastic-fleet-integration-policy-elastic-defend
# Initial Endpoints
for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/endpoints-initial/*.json
do
@@ -25,10 +24,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
done
@@ -39,10 +46,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "so-grid-nodes_general" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
done
if [[ "$RETURN_CODE" != "1" ]]; then
@@ -56,11 +71,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "so-grid-nodes_heavy" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
if [ "$NAME" != "elasticsearch-logs" ]; then
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
fi
done
@@ -77,11 +100,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "$FLEET_POLICY" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
if [ "$NAME" != "elasticsearch-logs" ]; then
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
fi
fi

View File

@@ -24,23 +24,39 @@ fi
default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.last %} {% endif %}{% endfor %})
ERROR=false
for AGENT_POLICY in $agent_policies; do
integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY")
if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
# this script upgrades default integration packages; exit 1 and let salt handle retrying
exit 1
fi
for INTEGRATION in $integrations; do
if ! [[ "$INTEGRATION" == "elastic-defend-endpoints" ]] && ! [[ "$INTEGRATION" == "fleet_server-"* ]]; then
# Get package name so we know what package to look for when checking the current and latest available version
PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION")
if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
exit 1
fi
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
if [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
{%- endif %}
# Get currently installed version of package
PACKAGE_VERSION=$(elastic_fleet_integration_policy_package_version "$AGENT_POLICY" "$INTEGRATION")
# Get latest available version of package
AVAILABLE_VERSION=$(elastic_fleet_package_latest_version_check "$PACKAGE_NAME")
attempt=0
max_attempts=3
while [ $attempt -lt $max_attempts ]; do
if PACKAGE_VERSION=$(elastic_fleet_integration_policy_package_version "$AGENT_POLICY" "$INTEGRATION") && AVAILABLE_VERSION=$(elastic_fleet_package_latest_version_check "$PACKAGE_NAME"); then
break
fi
attempt=$((attempt + 1))
done
if [ $attempt -eq $max_attempts ]; then
echo "Error: Failed getting $PACKAGE_VERSION or $AVAILABLE_VERSION"
exit 1
fi
# Get integration ID
INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION")
if ! INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION"); then
exit 1
fi
if [[ "$PACKAGE_VERSION" != "$AVAILABLE_VERSION" ]]; then
# Dry run of the upgrade
@@ -48,20 +64,23 @@ for AGENT_POLICY in $agent_policies; do
echo "Current $PACKAGE_NAME package version ($PACKAGE_VERSION) is not the same as the latest available package ($AVAILABLE_VERSION)..."
echo "Upgrading $INTEGRATION..."
echo "Starting dry run..."
DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID")
if ! DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID"); then
exit 1
fi
DRYRUN_ERRORS=$(echo "$DRYRUN_OUTPUT" | jq .[].hasErrors)
# If no errors with dry run, proceed with actual upgrade
if [[ "$DRYRUN_ERRORS" == "false" ]]; then
echo "No errors detected. Proceeding with upgrade..."
elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
if [ $? -ne 0 ]; then
if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
echo "Error: Upgrade failed for $PACKAGE_NAME with integration ID '$INTEGRATION_ID'."
exit 1
ERROR=true
continue
fi
else
echo "Errors detected during dry run for $PACKAGE_NAME policy upgrade..."
exit 1
ERROR=true
continue
fi
fi
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
@@ -70,4 +89,7 @@ for AGENT_POLICY in $agent_policies; do
fi
done
done
if [[ "$ERROR" == "true" ]]; then
exit 1
fi
echo

View File

@@ -62,9 +62,17 @@ default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.l
in_use_integrations=()
for AGENT_POLICY in $agent_policies; do
integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY")
if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
# skip the agent policy if we can't get required info, let salt retry. Integrations loaded by this script are non-default integrations.
echo "Skipping $AGENT_POLICY.. "
continue
fi
for INTEGRATION in $integrations; do
PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION")
if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
echo "Not adding $INTEGRATION, couldn't get package name"
continue
fi
# non-default integrations that are in-use in any policy
if ! [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
in_use_integrations+=("$PACKAGE_NAME")
@@ -160,7 +168,11 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
for file in "${pkg_filename}_"*.json; do
[ -e "$file" ] || continue
elastic_fleet_bulk_package_install $file >> $BULK_INSTALL_OUTPUT
if ! elastic_fleet_bulk_package_install $file >> $BULK_INSTALL_OUTPUT; then
# integrations loaded by this script are non-essential and shouldn't cause an exit; skip them for now, the next highstate run can retry
echo "Failed to complete a chunk of bulk package installs -- $file "
continue
fi
done
# cleanup any temp files for chunked package install
rm -f ${pkg_filename}_*.json $BULK_INSTALL_PACKAGE_LIST
@@ -168,8 +180,9 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
echo "Elastic integrations don't appear to need installation/updating..."
fi
# Write out file for generating index/component/ilm templates
latest_installed_package_list=$(elastic_fleet_installed_packages)
echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
if latest_installed_package_list=$(elastic_fleet_installed_packages); then
echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
fi
if retry 3 1 "so-elasticsearch-query / --fail --output /dev/null"; then
# Refresh installed component template list
latest_component_templates_list=$(so-elasticsearch-query _component_template | jq '.component_templates[] | .name' | jq -s '.')

View File

@@ -15,22 +15,49 @@ if ! is_manager_node; then
fi
function update_logstash_outputs() {
# Generate updated JSON payload
JSON_STRING=$(jq -n --arg UPDATEDLIST $NEW_LIST_JSON '{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":""}')
if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$logstash_policy" | jq -r '.item.ssl')
if SECRETS=$(echo "$logstash_policy" | jq -er '.item.secrets' 2>/dev/null); then
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SECRETS "$SECRETS" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG}')
fi
fi
# Update Logstash Outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
}
function update_kafka_outputs() {
# Make sure SSL configuration is included in policy updates for Kafka output. SSL is configured in so-elastic-fleet-setup
SSL_CONFIG=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" | jq -r '.item.ssl')
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
if kafka_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$kafka_policy" | jq -r '.item.ssl')
if SECRETS=$(echo "$kafka_policy" | jq -er '.item.secrets' 2>/dev/null); then
# Update policy when fleet has secrets enabled
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
--argjson SECRETS "$SECRETS" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
# Update policy when fleet has secrets disabled or policy hasn't been force updated
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
fi
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
else
printf "Failed to get current Kafka output policy..."
exit 1
fi
}
{% if GLOBALS.pipeline == "KAFKA" %}
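The reason both functions first GET the existing output is so the ssl block (and secrets, when present) can be carried over verbatim into the replacement payload; jq's --argjson injects them as real JSON objects instead of escaped strings. A small standalone sketch of that merge, with placeholder values:
```
# sketch: carrying an existing JSON object into a new payload with --argjson (placeholder values)
existing='{"item":{"ssl":{"certificate":"CERT","certificate_authorities":["CA"]}}}'
SSL_CONFIG=$(echo "$existing" | jq -r '.item.ssl')
# --argjson parses SSL_CONFIG as JSON; --arg would embed it as a quoted string
jq -n --argjson SSL_CONFIG "$SSL_CONFIG" \
  '{"name":"grid-logstash","type":"logstash","config_yaml":"","ssl":$SSL_CONFIG}'
```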

View File

@@ -10,8 +10,16 @@
{%- for PACKAGE in SUPPORTED_PACKAGES %}
echo "Setting up {{ PACKAGE }} package..."
VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}")
elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
if VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}"); then
if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
# packages loaded by this script should never fail to install and are REQUIRED before an installation of SO can be considered successful
echo -e "\nERROR: Failed to install default integration package -- $PACKAGE $VERSION"
exit 1
fi
else
echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
exit 1
fi
echo
{%- endfor %}
echo

View File

@@ -10,8 +10,15 @@
{%- for PACKAGE in SUPPORTED_PACKAGES %}
echo "Upgrading {{ PACKAGE }} package..."
VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}")
elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
if VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}"); then
if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
# exit 1 on failure to upgrade a default package, allow salt to handle retries
echo -e "\nERROR: Failed to upgrade $PACKAGE to version: $VERSION"
exit 1
fi
else
echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
fi
echo
{%- endfor %}
echo

View File

@@ -23,18 +23,17 @@ if [[ "$RETURN_CODE" != "0" ]]; then
exit 1
fi
ALIASES=".fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest"
for ALIAS in ${ALIASES}
do
ALIASES=(.fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest)
for ALIAS in "${ALIASES[@]}"; do
# Get all concrete indices from alias
INDXS=$(curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" | jq -r '.aliases[].indices[]')
# Delete all resolved indices
for INDX in ${INDXS}
do
if INDXS_RAW=$(curl -sK /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" --fail 2>/dev/null); then
INDXS=$(echo "$INDXS_RAW" | jq -r '.aliases[].indices[]')
# Delete all resolved indices
for INDX in ${INDXS}; do
status "Deleting $INDX"
curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/${INDX}" -XDELETE
done
done
fi
done
# Restarting Kibana...
@@ -51,22 +50,61 @@ if [[ "$RETURN_CODE" != "0" ]]; then
fi
printf "\n### Create ES Token ###\n"
ESTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/service_tokens" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq -r .value)
if ESTOKEN_RAW=$(fleet_api "service_tokens" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ESTOKEN=$(echo "$ESTOKEN_RAW" | jq -r .value)
else
echo -e "\nFailed to create ES token..."
exit 1
fi
### Create Outputs, Fleet Policy and Fleet URLs ###
# Create the Manager Elasticsearch Output first and set it as the default output
printf "\nAdd Manager Elasticsearch Output...\n"
ESCACRT=$(openssl x509 -in $INTCA)
JSON_STRING=$( jq -n \
--arg ESCACRT "$ESCACRT" \
'{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl":{"certificate_authorities": [$ESCACRT]}}' )
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
ESCACRT=$(openssl x509 -in "$INTCA" -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
JSON_STRING=$(jq -n \
--arg ESCACRT "$ESCACRT" \
'{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ca_trusted_fingerprint": $ESCACRT}')
if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create so-elasticsearch_manager policy..."
exit 1
fi
printf "\n\n"
# so-manager_elasticsearch should exist and be disabled. Now update it before checking it's the only default policy
MANAGER_OUTPUT_ENABLED=$(echo "$JSON_STRING" | jq 'del(.id) | .is_default = true | .is_default_monitoring = true')
if ! curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$MANAGER_OUTPUT_ENABLED"; then
echo -e "\n failed to update so-manager_elasticsearch"
exit 1
fi
# At this point there should only be two policies. fleet-default-output & so-manager_elasticsearch
status "Verifying so-manager_elasticsearch policy is configured as the current default"
# Grab the fleet-default-output policy instead of so-manager_elasticsearch, because a weird state can exist where both fleet-default-output & so-manager_elasticsearch can be set as the active default output for logs / metrics, resulting in logs not ingesting on import/eval nodes
if DEFAULTPOLICY=$(fleet_api "outputs/fleet-default-output"); then
fleet_default=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default')
fleet_default_monitoring=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default_monitoring')
# Check that fleet-default-output isn't configured as a default for anything ( both variables return false )
if [[ $fleet_default == "false" ]] && [[ $fleet_default_monitoring == "false" ]]; then
echo -e "\nso-manager_elasticsearch is configured as the current default policy..."
else
echo -e "\nVerification of so-manager_elasticsearch policy failed... The default 'fleet-default-output' output is still active..."
exit 1
fi
else
# fleet-default-output is created automatically by fleet when started and should always exist on any installation type
echo -e "\nDefault fleet-default-output policy doesn't exist...\n"
exit 1
fi
# Create the Manager Fleet Server Host Agent Policy
# This has to be done while the Elasticsearch Output is set to the default Output
printf "Create Manager Fleet Server Policy...\n"
elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"
if ! elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"; then
echo -e "\n Failed to create Manager fleet server policy..."
exit 1
fi
# Modify the default integration policy to update the policy_id with the correct naming
UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname }}" --arg name "fleet_server-{{ GLOBALS.hostname }}" '
@@ -74,7 +112,10 @@ UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Add the Fleet Server Integration to the new Fleet Policy
elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"
if ! elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"; then
echo -e "\nFailed to create Fleet server integration for Manager.."
exit 1
fi
# Now we can create the Logstash Output and set it to be the default Output
printf "\n\nCreate Logstash Output Config if node is not an Import or Eval install\n"
@@ -86,9 +127,12 @@ JSON_STRING=$( jq -n \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]},"proxy_id":null}'
'{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets":{"ssl":{"key": $LOGSTASHKEY }},"proxy_id":null}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create logstash fleet output"
exit 1
fi
printf "\n\n"
{%- endif %}
@@ -106,7 +150,10 @@ else
fi
## This array replaces whatever URLs are currently configured
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/fleet_server_hosts" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "fleet_server_hosts" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to add manager fleet URL"
exit 1
fi
printf "\n\n"
### Create Policies & Associated Integration Configuration ###
@@ -117,13 +164,22 @@ printf "\n\n"
/usr/sbin/so-elasticsearch-templates-load
# Initial Endpoints Policy
elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"
if ! elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"; then
echo -e "\nFailed to create endpoints-initial policy..."
exit 1
fi
# Grid Nodes - General Policy
elastic_fleet_policy_create "so-grid-nodes_general" "SO Grid Nodes - General Purpose" "false" "1209600"
if ! elastic_fleet_policy_create "so-grid-nodes_general" "SO Grid Nodes - General Purpose" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_general policy..."
exit 1
fi
# Grid Nodes - Heavy Node Policy
elastic_fleet_policy_create "so-grid-nodes_heavy" "SO Grid Nodes - Heavy Node" "false" "1209600"
if ! elastic_fleet_policy_create "so-grid-nodes_heavy" "SO Grid Nodes - Heavy Node" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_heavy policy..."
exit 1
fi
# Load Integrations for default policies
so-elastic-fleet-integration-policy-load
@@ -135,14 +191,34 @@ JSON_STRING=$( jq -n \
'{"name":$NAME,"host":$URL,"is_default":true}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_download_sources" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "agent_download_sources" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to update Elastic Agent artifact URL"
exit 1
fi
### Finalization ###
# Query for Enrollment Tokens for default policies
ENDPOINTSENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
GRIDNODESENROLLMENTOKENGENERAL=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
GRIDNODESENROLLMENTOKENHEAVY=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_heavy")) | .api_key')
if ENDPOINTSENROLLMENTOKEN_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ENDPOINTSENROLLMENTOKEN=$(echo "$ENDPOINTSENROLLMENTOKEN_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
else
echo -e "\nFailed to query for Endpoints enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENGENERAL_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENGENERAL=$(echo "$GRIDNODESENROLLMENTOKENGENERAL_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - General enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENHEAVY_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENHEAVY=$(echo "$GRIDNODESENROLLMENTOKENHEAVY_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_heavy")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - Heavy enrollment token"
exit 1
fi
# Store needed data in minion pillar
pillar_file=/opt/so/saltstack/local/pillar/minions/{{ GLOBALS.minion_id }}.sls
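The Elasticsearch output is now pinned with ca_trusted_fingerprint, computed above as the uppercase SHA-256 of the CA certificate's DER encoding. The same value can be cross-checked with openssl's built-in fingerprint output; a hedged sketch, assuming the intermediate CA path used elsewhere in this commit:
```
# sketch: two equivalent ways to derive the value used for ca_trusted_fingerprint
INTCA=/etc/pki/tls/certs/intca.crt
openssl x509 -in "$INTCA" -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]'
openssl x509 -in "$INTCA" -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':'
```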

View File

@@ -5,46 +5,78 @@
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch'] %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch', 'so-managerhype'] %}
. /usr/sbin/so-common
force=false
while [[ $# -gt 0 ]]; do
case $1 in
-f|--force)
force=true
shift
;;
*)
echo "Unknown option $1"
echo "Usage: $0 [-f|--force]"
exit 1
;;
esac
done
# Check to make sure that Kibana API is up & ready
RETURN_CODE=0
wait_for_web_response "http://localhost:5601/api/fleet/settings" "fleet" 300 "curl -K /opt/so/conf/elasticsearch/curl.config"
RETURN_CODE=$?
if [[ "$RETURN_CODE" != "0" ]]; then
printf "Kibana API not accessible, can't setup Elastic Fleet output policy for Kafka..."
exit 1
echo -e "\nKibana API not accessible, can't setup Elastic Fleet output policy for Kafka...\n"
exit 1
fi
output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
if ! echo "$output" | grep -q "so-manager_kafka"; then
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
if ! kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null); then
# Create a new output policy for Kafka. Default is disabled 'is_default: false & is_default_monitoring: false'
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{ "name": "grid-kafka", "id": "so-manager_kafka", "type": "kafka", "hosts": [ $MANAGER_IP ], "is_default": false, "is_default_monitoring": false, "config_yaml": "", "ssl": { "certificate_authorities": [ $KAFKACA ], "certificate": $KAFKACRT, "key": $KAFKAKEY, "verification_mode": "full" }, "proxy_id": null, "client_id": "Elastic", "version": $KAFKA_OUTPUT_VERSION, "compression": "none", "auth_type": "ssl", "partition": "round_robin", "round_robin": { "group_events": 10 }, "topics":[{"topic":"default-securityonion"}], "headers": [ { "key": "", "value": "" } ], "timeout": 30, "broker_timeout": 30, "required_acks": 1 }'
)
curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" -o /dev/null
refresh_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
if ! echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
exit 1
else
echo -e "\nSuccessfully setup Elastic Fleet output policy for Kafka...\n"
exit 0
fi
elif kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null) && [[ "$force" == "true" ]]; then
# force an update to Kafka policy. Keep the current value of Kafka output policy (enabled/disabled).
ENABLED_DISABLED=$(echo "$kafka_output" | jq -e .item.is_default)
HOSTS=$(echo "$kafka_output" | jq -r '.item.hosts')
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg ENABLED_DISABLED "$ENABLED_DISABLED"\
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
--argjson HOSTS "$HOSTS" \
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to force update to Elastic Fleet output policy for Kafka...\n"
exit 1
elif echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nSuccessfully setup Elastic Fleet output policy for Kafka...\n"
else
echo -e "\nForced update to Elastic Fleet output policy for Kafka...\n"
fi
elif echo "$output" | grep -q "so-manager_kafka"; then
else
echo -e "\nElastic Fleet output policy for Kafka already exists...\n"
fi
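# Reference notes (sketch, not managed code): the resulting output policy can be inspected with
# the same endpoint used above:
#   curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" | \
#     jq '.item | {id, type, hosts, is_default}'
# Re-running with -f/--force rewrites the policy from the current certificates while preserving
# its enabled/disabled state, as handled in the elif branch above.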
{% else %}

View File

@@ -1,6 +1,6 @@
elasticsearch:
enabled: false
version: 8.18.3
version: 8.18.6
index_clean: true
config:
action:
@@ -284,6 +284,86 @@ elasticsearch:
hot:
actions: {}
min_age: 0ms
so-assistant-chat:
index_sorting: false
index_template:
composed_of:
- assistant-chat-mappings
- assistant-chat-settings
data_stream:
allow_custom_routing: false
hidden: false
ignore_missing_component_templates: []
index_patterns:
- so-assistant-chat*
priority: 501
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
lifecycle:
name: so-assistant-chat-logs
mapping:
total_fields:
limit: 1500
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 1s
sort:
field: '@timestamp'
order: desc
policy:
phases:
hot:
actions: {}
min_age: 0ms
so-assistant-session:
index_sorting: false
index_template:
composed_of:
- assistant-session-mappings
- assistant-session-settings
data_stream:
allow_custom_routing: false
hidden: false
ignore_missing_component_templates: []
index_patterns:
- so-assistant-session*
priority: 501
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
lifecycle:
name: so-assistant-session-logs
mapping:
total_fields:
limit: 1500
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 1s
sort:
field: '@timestamp'
order: desc
policy:
phases:
hot:
actions: {}
min_age: 0ms
so-endgame:
index_sorting: false
index_template:
@@ -567,6 +647,7 @@ elasticsearch:
- common-settings
- common-dynamic-mappings
- winlog-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -1242,6 +1323,68 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-elastic-agent-monitor:
index_sorting: false
index_template:
composed_of:
- event-mappings
- so-elastic-agent-monitor
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
allow_custom_routing: false
hidden: false
index_patterns:
- logs-agentmonitor-*
priority: 501
template:
mappings:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
settings:
index:
lifecycle:
name: so-elastic-agent-monitor-logs
mapping:
total_fields:
limit: 5000
number_of_replicas: 0
sort:
field: '@timestamp'
order: desc
policy:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 60d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-elastic_agent_x_apm_server:
index_sorting: false
index_template:
@@ -3874,6 +4017,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -3987,6 +4131,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -4028,7 +4173,7 @@ elasticsearch:
hot:
actions:
rollover:
max_age: 1d
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
@@ -4100,6 +4245,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -4329,6 +4475,7 @@ elasticsearch:
- zeek-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:

View File

@@ -26,7 +26,7 @@
{
"geoip": {
"field": "destination.ip",
"target_field": "destination_geo",
"target_field": "destination.as",
"database_file": "GeoLite2-ASN.mmdb",
"ignore_missing": true,
"ignore_failure": true,
@@ -36,13 +36,17 @@
{
"geoip": {
"field": "source.ip",
"target_field": "source_geo",
"target_field": "source.as",
"database_file": "GeoLite2-ASN.mmdb",
"ignore_missing": true,
"ignore_failure": true,
"properties": ["ip", "asn", "organization_name", "network"]
}
},
{ "rename": { "field": "destination.as.organization_name", "target_field": "destination.as.organization.name", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "source.as.organization_name", "target_field": "source.as.organization.name", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "destination.as.asn", "target_field": "destination.as.number", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "source.as.asn", "target_field": "source.as.number", "ignore_failure": true, "ignore_missing": true } },
{ "set": { "if": "ctx.event?.severity == 1", "field": "event.severity_label", "value": "low", "override": true } },
{ "set": { "if": "ctx.event?.severity == 2", "field": "event.severity_label", "value": "medium", "override": true } },
{ "set": { "if": "ctx.event?.severity == 3", "field": "event.severity_label", "value": "high", "override": true } },

View File

@@ -0,0 +1,22 @@
{
"processors": [
{
"convert": {
"field": "_ingest._value",
"type": "ip",
"target_field": "_ingest._temp_ip",
"ignore_failure": true
}
},
{
"append": {
"field": "temp._valid_ips",
"allow_duplicates": false,
"value": [
"{{{_ingest._temp_ip}}}"
],
"ignore_failure": true
}
}
]
}

View File

@@ -0,0 +1,36 @@
{
"processors": [
{
"set": {
"field": "event.dataset",
"value": "gridmetrics.agents",
"ignore_failure": true
}
},
{
"set": {
"field": "event.module",
"value": "gridmetrics",
"ignore_failure": true
}
},
{
"remove": {
"field": [
"host",
"elastic_agent",
"agent"
],
"ignore_missing": true,
"ignore_failure": true
}
},
{
"json": {
"field": "message",
"add_to_root": true,
"ignore_failure": true
}
}
]
}

View File

@@ -24,7 +24,7 @@
{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "set": { "if": "ctx?.metadata?.kafka != null" , "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
{"append": {"field":"related.ip","value":["{{source.ip}}","{{destination.ip}}"],"allow_duplicates":false,"if":"ctx?.event?.dataset == 'endpoint.events.network' && ctx?.source?.ip != null","ignore_failure":true}},
{"foreach": {"field":"host.ip","processor":{"append":{"field":"related.ip","value":"{{_ingest._value}}","allow_duplicates":false}},"if":"ctx?.event?.module == 'endpoint'","description":"Extract IPs from Elastic Agent events (host.ip) and adds them to related.ip"}},
{"foreach": {"field":"host.ip","processor":{"append":{"field":"related.ip","value":"{{_ingest._value}}","allow_duplicates":false}},"if":"ctx?.event?.module == 'endpoint' && ctx?.host?.ip != null","ignore_missing":true, "description":"Extract IPs from Elastic Agent events (host.ip) and adds them to related.ip"}},
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp", "datastream_dataset_temp" ], "ignore_missing": true, "ignore_failure": true } }
]
}

View File

@@ -107,61 +107,61 @@
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-firewall",
"name": "logs-pfsense.log-1.23.1-firewall",
"if": "ctx.event.provider == 'filterlog'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-openvpn",
"name": "logs-pfsense.log-1.23.1-openvpn",
"if": "ctx.event.provider == 'openvpn'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-ipsec",
"name": "logs-pfsense.log-1.23.1-ipsec",
"if": "ctx.event.provider == 'charon'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-dhcp",
"name": "logs-pfsense.log-1.23.1-dhcp",
"if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\"].contains(ctx.event.provider)"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-unbound",
"name": "logs-pfsense.log-1.23.1-unbound",
"if": "ctx.event.provider == 'unbound'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-haproxy",
"name": "logs-pfsense.log-1.23.1-haproxy",
"if": "ctx.event.provider == 'haproxy'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-php-fpm",
"name": "logs-pfsense.log-1.23.1-php-fpm",
"if": "ctx.event.provider == 'php-fpm'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-squid",
"name": "logs-pfsense.log-1.23.1-squid",
"if": "ctx.event.provider == 'squid'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-snort",
"name": "logs-pfsense.log-1.23.1-snort",
"if": "ctx.event.provider == 'snort'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.23.0-suricata",
"name": "logs-pfsense.log-1.23.1-suricata",
"if": "ctx.event.provider == 'suricata'"
}
},

View File

@@ -24,6 +24,10 @@
{ "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } },
{ "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } },
{ "rename": { "field": "message2.vlan", "target_field": "network.vlan.id", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4l", "target_field": "hash.ja4l", "ignore_missing" : true, "if": "ctx.message2?.ja4l != null && ctx.message2.ja4l.length() > 0" }},
{ "rename": { "field": "message2.ja4ls", "target_field": "hash.ja4ls", "ignore_missing" : true, "if": "ctx.message2?.ja4ls != null && ctx.message2.ja4ls.length() > 0" }},
{ "rename": { "field": "message2.ja4t", "target_field": "hash.ja4t", "ignore_missing" : true, "if": "ctx.message2?.ja4t != null && ctx.message2.ja4t.length() > 0" }},
{ "rename": { "field": "message2.ja4ts", "target_field": "hash.ja4ts", "ignore_missing" : true, "if": "ctx.message2?.ja4ts != null && ctx.message2.ja4ts.length() > 0" }},
{ "script": { "lang": "painless", "source": "ctx.network.bytes = (ctx.client.bytes + ctx.server.bytes)", "ignore_failure": true } },
{ "set": { "if": "ctx.connection?.state == 'S0'", "field": "connection.state_description", "value": "Connection attempt seen, no reply" } },
{ "set": { "if": "ctx.connection?.state == 'S1'", "field": "connection.state_description", "value": "Connection established, not terminated" } },

View File

@@ -21,7 +21,10 @@
{ "rename": { "field": "message2.RA", "target_field": "dns.recursion.available", "ignore_missing": true } },
{ "rename": { "field": "message2.Z", "target_field": "dns.reserved", "ignore_missing": true } },
{ "rename": { "field": "message2.answers", "target_field": "dns.answers.name", "ignore_missing": true } },
{ "script": { "lang": "painless", "if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null", "source": "def ips = []; for (item in ctx.dns.answers.name) { if (item =~ /^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$/ || item =~ /^([a-fA-F0-9:]+:+)+[a-fA-F0-9]+$/) { ips.add(item); } } ctx.dns.resolved_ip = ips;" } },
{ "foreach": {"field": "dns.answers.name","processor": {"pipeline": {"name": "common.ip_validation"}},"if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null","ignore_failure": true}},
{ "foreach": {"field": "temp._valid_ips","processor": {"append": {"field": "dns.resolved_ip","allow_duplicates": false,"value": "{{{_ingest._value}}}","ignore_failure": true}},"ignore_failure": true}},
{ "script": { "source": "if (ctx.dns.resolved_ip != null && ctx.dns.resolved_ip instanceof List) {\n ctx.dns.resolved_ip.removeIf(item -> item == null || item.toString().trim().isEmpty());\n }","ignore_failure": true }},
{ "remove": {"field": ["temp"], "ignore_missing": true ,"ignore_failure": true } },
{ "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
{ "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },

View File

@@ -27,6 +27,7 @@
{ "rename": { "field": "message2.resp_fuids", "target_field": "log.id.resp_fuids", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_filenames", "target_field": "file.resp_filenames", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_mime_types", "target_field": "file.resp_mime_types", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4h", "target_field": "hash.ja4h", "ignore_missing": true, "if": "ctx?.message2?.ja4h != null && ctx.message2.ja4h.length() > 0" } },
{ "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.useragent_length = ctx.useragent.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.virtual_host_length = ctx.virtual_host.length()", "ignore_failure": true } },

View File

@@ -27,6 +27,7 @@
{ "rename": { "field": "message2.resp_filenames", "target_field": "file.resp_filenames", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_mime_types", "target_field": "file.resp_mime_types", "ignore_missing": true } },
{ "rename": { "field": "message2.stream_id", "target_field": "http2.stream_id", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4h", "target_field": "hash.ja4h", "ignore_missing": true, "if": "ctx?.message2?.ja4h != null && ctx.message2.ja4h.length() > 0" } },
{ "remove": { "field": "message2.tags", "ignore_failure": true } },
{ "remove": { "field": ["host"], "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } },

View File

@@ -0,0 +1,10 @@
{
"description": "zeek.ja4ssh",
"processors": [
{"set": {"field": "event.dataset","value": "ja4ssh"}},
{"remove": {"field": "host","ignore_missing": true,"ignore_failure": true}},
{"json": {"field": "message","target_field": "message2","ignore_failure": true}},
{"rename": {"field": "message2.ja4ssh", "target_field": "hash.ja4ssh", "ignore_missing": true, "if": "ctx?.message2?.ja4ssh != null && ctx.message2.ja4ssh.length() > 0" }},
{"pipeline": {"name": "zeek.common"}}
]
}

View File

@@ -23,6 +23,8 @@
{ "rename": { "field": "message2.validation_status","target_field": "ssl.validation_status", "ignore_missing": true } },
{ "rename": { "field": "message2.ja3", "target_field": "hash.ja3", "ignore_missing": true } },
{ "rename": { "field": "message2.ja3s", "target_field": "hash.ja3s", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4", "target_field": "hash.ja4", "ignore_missing": true, "if": "ctx?.message2?.ja4 != null && ctx.message2.ja4.length() > 0" } },
{ "rename": { "field": "message2.ja4s", "target_field": "hash.ja4s", "ignore_missing": true, "if": "ctx?.message2?.ja4s != null && ctx.message2.ja4s.length() > 0" } },
{ "foreach":
{
"if": "ctx?.tls?.client?.hash?.sha256 !=null",

View File

@@ -42,6 +42,7 @@
{ "dot_expander": { "field": "basic_constraints.path_length", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.basic_constraints.path_length", "target_field": "x509.basic_constraints.path_length", "ignore_missing": true } },
{ "rename": { "field": "message2.fingerprint", "target_field": "hash.sha256", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4x", "target_field": "hash.ja4x", "ignore_missing": true, "if": "ctx?.message2?.ja4x != null && ctx.message2.ja4x.length() > 0" } },
{ "pipeline": { "name": "zeek.common_ssl" } }
]
}

View File

@@ -0,0 +1,69 @@
{
"template": {
"mappings": {
"properties": {
"hash": {
"type": "object",
"properties": {
"ja3": {
"type": "keyword",
"ignore_above": 1024
},
"ja3s": {
"type": "keyword",
"ignore_above": 1024
},
"hassh": {
"type": "keyword",
"ignore_above": 1024
},
"md5": {
"type": "keyword",
"ignore_above": 1024
},
"sha1": {
"type": "keyword",
"ignore_above": 1024
},
"sha256": {
"type": "keyword",
"ignore_above": 1024
},
"ja4": {
"type": "keyword",
"ignore_above": 1024
},
"ja4l": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ls": {
"type": "keyword",
"ignore_above": 1024
},
"ja4t": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ts": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ssh": {
"type": "keyword",
"ignore_above": 1024
},
"ja4h": {
"type": "keyword",
"ignore_above": 1024
},
"ja4x": {
"type": "keyword",
"ignore_above": 1024
}
}
}
}
}
}
}

View File

@@ -0,0 +1,43 @@
{
"template": {
"mappings": {
"properties": {
"agent": {
"type": "object",
"properties": {
"hostname": {
"ignore_above": 1024,
"type": "keyword"
},
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"last_checkin_status": {
"ignore_above": 1024,
"type": "keyword"
},
"last_checkin": {
"type": "date"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"offline_duration_hours": {
"type": "integer"
},
"policy_id": {
"ignore_above": 1024,
"type": "keyword"
},
"status": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
}
}

View File

@@ -0,0 +1,104 @@
{
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"so_kind": {
"ignore_above": 1024,
"type": "keyword"
},
"so_operation": {
"ignore_above": 1024,
"type": "keyword"
},
"so_chat": {
"properties": {
"role": {
"ignore_above": 1024,
"type": "keyword"
},
"content": {
"type": "object",
"enabled": false
},
"sessionId": {
"ignore_above": 1024,
"type": "keyword"
},
"createTime": {
"type": "date"
},
"deletedAt": {
"type": "date"
},
"tags": {
"ignore_above": 1024,
"type": "keyword"
},
"tool_use_id": {
"ignore_above": 1024,
"type": "keyword"
},
"userId": {
"ignore_above": 1024,
"type": "keyword"
},
"message": {
"properties": {
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"role": {
"ignore_above": 1024,
"type": "keyword"
},
"model": {
"ignore_above": 1024,
"type": "keyword"
},
"contentStr": {
"type": "text"
},
"contentBlocks": {
"type": "nested",
"enabled": false
},
"stopReason": {
"ignore_above": 1024,
"type": "keyword"
},
"stopSequence": {
"ignore_above": 1024,
"type": "keyword"
},
"usage": {
"properties": {
"input_tokens": {
"type": "long"
},
"output_tokens": {
"type": "long"
},
"credits": {
"type": "long"
}
}
}
}
}
}
}
}
}
},
"_meta": {
"ecs_version": "1.12.2"
}
}

View File

@@ -0,0 +1,7 @@
{
"template": {},
"version": 1,
"_meta": {
"description": "default settings for common Security Onion Assistant indices"
}
}

View File

@@ -0,0 +1,44 @@
{
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"so_kind": {
"ignore_above": 1024,
"type": "keyword"
},
"so_session": {
"properties": {
"title": {
"ignore_above": 1024,
"type": "keyword"
},
"sessionId": {
"ignore_above": 1024,
"type": "keyword"
},
"createTime": {
"type": "date"
},
"deleteTime": {
"type": "date"
},
"tags": {
"ignore_above": 1024,
"type": "keyword"
},
"userId": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
},
"_meta": {
"ecs_version": "1.12.2"
}
}

View File

@@ -0,0 +1,7 @@
{
"template": {},
"version": 1,
"_meta": {
"description": "default settings for common Security Onion Assistant indices"
}
}

View File

@@ -21,7 +21,7 @@ while [[ "$COUNT" -le 240 ]]; do
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
# Check cluster health once connected
so-elasticsearch-query _cluster/health?wait_for_status=yellow > /dev/null 2>&1
so-elasticsearch-query _cluster/health?wait_for_status=yellow\&timeout=120s > /dev/null 2>&1
break
else
((COUNT+=1))

View File

@@ -0,0 +1,195 @@
#!/bin/bash
. /usr/sbin/so-common
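# Summarizes Elasticsearch cluster health: runs the _health_report API, checks each data node's
# disk usage against the low/high/flood watermarks, lists indices larger than 1KB by creation
# date when a watermark has been reached, and reports any unassigned shards.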
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
BOLD='\033[1;37m'
NC='\033[0m'
log_title() {
if [ $1 == "LOG" ]; then
echo -e "\n${BOLD}================ $2 ================${NC}\n"
elif [ $1 == "OK" ]; then
echo -e "${GREEN} $2 ${NC}"
elif [ $1 == "WARN" ]; then
echo -e "${YELLOW} $2 ${NC}"
elif [ $1 == "ERROR" ]; then
echo -e "${RED} $2 ${NC}"
fi
}
health_report() {
if ! health_report_output=$(so-elasticsearch-query _health_report?format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve health report from Elasticsearch"
return 1
fi
non_green_count=$(echo "$health_report_output" | jq '[.indicators | to_entries[] | select(.value.status != "green")] | length')
if [ "$non_green_count" -gt 0 ]; then
echo "$health_report_output" | jq -r '.indicators | to_entries[] | select(.value.status != "green") | .key' | while read -r indicator_name; do
indicator=$(echo "$health_report_output" | jq -r ".indicators.\"$indicator_name\"")
status=$(echo "$indicator" | jq -r '.status')
symptom=$(echo "$indicator" | jq -r '.symptom // "No symptom available"')
# reformat indicator name for display
display_name=$(echo "$indicator_name" | tr '_' ' ' | sed 's/\b\(.\)/\u\1/g')
if [ "$status" = "yellow" ]; then
log_title "WARN" "$display_name: $symptom"
else
log_title "ERROR" "$display_name: $symptom"
fi
# diagnosis if available
echo "$indicator" | jq -c '.diagnosis[]? // empty' | while read -r diagnosis; do
cause=$(echo "$diagnosis" | jq -r '.cause // "Unknown"')
action=$(echo "$diagnosis" | jq -r '.action // "No action specified"')
echo -e " ${BOLD}Cause:${NC} $cause\n"
echo -e " ${BOLD}Action:${NC} $action\n"
# Check for affected indices
affected_indices=$(echo "$diagnosis" | jq -r '.affected_resources.indices[]? // empty')
if [ -n "$affected_indices" ]; then
echo -e " ${BOLD}Affected indices:${NC}"
total_indices=$(echo "$affected_indices" | wc -l)
echo "$affected_indices" | head -10 | while read -r index; do
echo " - $index"
done
if [ "$total_indices" -gt 10 ]; then
remaining=$((total_indices - 10))
echo " ... and $remaining more indices (truncated for readability)"
fi
fi
echo
done
done
else
log_title "OK" "All health indicators are green"
fi
}
elasticsearch_status() {
log_title "LOG" "Elasticsearch Status"
if so-elasticsearch-query / --fail --output /dev/null; then
health_report
else
log_title "ERROR" "Elasticsearch API is not accessible"
so-status
log_title "ERROR" "Make sure Elasticsearch is running. Addtionally, check for startup errors in /opt/so/log/elasticsearch/securityonion.log${NC}\n"
exit 1
fi
}
indices_by_age() {
log_title "LOG" "Indices by Creation Date - Size > 1KB"
log_title "WARN" "Since high/flood watermark has been reached consider updating ILM policies.\n"
if ! indices_output=$(so-elasticsearch-query '_cat/indices?v&s=creation.date:asc&h=creation.date.string,index,status,health,docs.count,pri.store.size&bytes=b&format=json' --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve indices list from Elasticsearch"
return 1
fi
# Filter for indices with size > 1KB (1024 bytes) and format output
echo -e "${BOLD}Creation Date Name Size${NC}"
echo -e "${BOLD}--------------------------------------------------------------------------------------------------------------${NC}"
# Create list of indices excluding .internal, so-detection*, so-case*
echo "$indices_output" | jq -r '.[] | select((."pri.store.size" | tonumber) > 1024) | select(.index | (startswith(".internal") or startswith("so-detection") or startswith("so-case")) | not ) | "\(."creation.date.string") | \(.index) | \(."pri.store.size")"' | while IFS='|' read -r creation_date index_name size_bytes; do
# Convert bytes to GB / MB
if [ "$size_bytes" -gt 1073741824 ]; then
size_human=$(echo "scale=2; $size_bytes / 1073741824" | bc)GB
else
size_human=$(echo "scale=2; $size_bytes / 1048576" | bc)MB
fi
creation_date=$(date -d "$creation_date" '+%Y-%m-%dT%H:%MZ' )
# Format output with spacing
printf "%-19s %-76s %10s\n" "$creation_date" "$index_name" "$size_human"
done
}
watermark_settings() {
watermark_path=".defaults.cluster.routing.allocation.disk.watermark"
if ! watermark_output=$(so-elasticsearch-query _cluster/settings?include_defaults=true\&filter_path=*.cluster.routing.allocation.disk.* --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve watermark settings from Elasticsearch"
return 1
fi
if ! disk_allocation_output=$(so-elasticsearch-query _cat/nodes?v\&h=name,ip,disk.used_percent,disk.avail,disk.total,node.role\&format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve disk allocation data from Elasticsearch"
return 1
fi
flood=$(echo $watermark_output | jq -r "$watermark_path.flood_stage" )
high=$(echo $watermark_output | jq -r "$watermark_path.high" )
low=$(echo $watermark_output | jq -r "$watermark_path.low" )
# Strip percentage signs for comparison
flood_num=${flood%\%}
high_num=${high%\%}
low_num=${low%\%}
# Check each node's disk usage
log_title "LOG" "Disk Usage Check"
echo -e "${BOLD}LOW:${GREEN}$low${NC}${BOLD} HIGH:${YELLOW}${high}${NC}${BOLD} FLOOD:${RED}${flood}${NC}\n"
# Only show data nodes (d=data, h=hot, w=warm, c=cold, f=frozen, s=content)
echo "$disk_allocation_output" | jq -r '.[] | select(.["node.role"] | test("[dhwcfs]")) | "\(.name)|\(.["disk.used_percent"])"' | while IFS='|' read -r node_name disk_used; do
disk_used_num=$(echo $disk_used | bc)
if (( $(echo "$disk_used_num >= $flood_num" | bc -l) )); then
log_title "ERROR" "$node_name is at or above the flood watermark ($flood)! Disk usage: ${disk_used}%"
touch /tmp/watermark_reached
elif (( $(echo "$disk_used_num >= $high_num" | bc -l) )); then
log_title "ERROR" "$node_name is at or above the high watermark ($high)! Disk usage: ${disk_used}%"
touch /tmp/watermark_reached
else
log_title "OK" "$node_name disk usage: ${disk_used}%"
fi
done
# Check if we need to show indices by age
if [ -f /tmp/watermark_reached ]; then
indices_by_age
rm -f /tmp/watermark_reached
fi
}
unassigned_shards() {
if ! unassigned_shards_output=$(so-elasticsearch-query _cat/shards?v\&h=index,shard,prirep,state,unassigned.reason,unassigned.details\&s=state\&format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve shard data from Elasticsearch"
return 1
fi
log_title "LOG" "Unassigned Shards Check"
# Check if there are any UNASSIGNED shards
unassigned_count=$(echo "$unassigned_shards_output" | jq '[.[] | select(.state == "UNASSIGNED")] | length')
if [ "$unassigned_count" -gt 0 ]; then
echo "$unassigned_shards_output" | jq -r '.[] | select(.state == "UNASSIGNED") | "\(.index)|\(.shard)|\(.prirep)|\(."unassigned.reason")"' | while IFS='|' read -r index shard prirep reason; do
if [ "$prirep" = "r" ]; then
log_title "WARN" "Replica shard for index $index is unassigned. Reason: $reason"
elif [ "$prirep" = "p" ]; then
log_title "ERROR" "Primary shard for index $index is unassigned. Reason: $reason"
fi
done
else
log_title "OK" "All shards are assigned"
fi
}
main() {
elasticsearch_status
watermark_settings
unassigned_shards
}
main

View File

@@ -909,6 +909,15 @@ firewall:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
@@ -961,6 +970,9 @@ firewall:
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
@@ -1113,6 +1125,15 @@ firewall:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
@@ -1168,6 +1189,9 @@ firewall:
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
@@ -1206,6 +1230,10 @@ firewall:
portgroups:
- elasticsearch_node
- elasticsearch_rest
managerhype:
portgroups:
- elasticsearch_node
- elasticsearch_rest
standalone:
portgroups:
- elasticsearch_node
@@ -1353,6 +1381,10 @@ firewall:
portgroups:
- elasticsearch_node
- elasticsearch_rest
managerhype:
portgroups:
- elasticsearch_node
- elasticsearch_rest
standalone:
portgroups:
- elasticsearch_node
@@ -1555,6 +1587,9 @@ firewall:
portgroups:
- redis
- elastic_agent_data
managerhype:
portgroups:
- elastic_agent_data
self:
portgroups:
- redis
@@ -1672,6 +1707,9 @@ firewall:
managersearch:
portgroups:
- openssh
managerhype:
portgroups:
- openssh
standalone:
portgroups:
- openssh
@@ -1734,6 +1772,8 @@ firewall:
portgroups: []
managersearch:
portgroups: []
managerhype:
portgroups: []
standalone:
portgroups: []
customhostgroup0:

View File

@@ -91,7 +91,7 @@ COMMIT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -p icmp -j ACCEPT
-A INPUT -j LOGGING
{% if GLOBALS.role in ['so-hypervisor', 'so-managerhyper'] -%}
{% if GLOBALS.role in ['so-hypervisor', 'so-managerhype'] -%}
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br0 -o br0 -j ACCEPT
{%- endif %}

View File

@@ -25,7 +25,7 @@
{% set KAFKA_EXTERNAL_ACCESS = salt['pillar.get']('kafka:config:external_access:enabled', default=False) %}
{% set kafka_node_type = salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname + ':role') %}
{% if role in ['manager', 'managersearch', 'standalone'] %}
{% if role.startswith('manager') or role == 'standalone' %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[role].portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
@@ -38,8 +38,8 @@
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if role in ['manager', 'managersearch', 'standalone', 'receiver'] %}
{% for r in ['manager', 'managersearch', 'standalone', 'receiver', 'fleet', 'idh', 'sensor', 'searchnode','heavynode', 'elastic_agent_endpoint', 'desktop'] %}
{% if role.startswith('manager') or role in ['standalone', 'receiver'] %}
{% for r in ['manager', 'managersearch', 'managerhype', 'standalone', 'receiver', 'fleet', 'idh', 'sensor', 'searchnode','heavynode', 'elastic_agent_endpoint', 'desktop'] %}
{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('kafka_data') %}
{% endif %}
@@ -48,11 +48,11 @@
{% if KAFKA_EXTERNAL_ACCESS %}
{# Kafka external access only applies for Kafka nodes with the broker role. #}
{% if role in ['manager', 'managersearch', 'standalone', 'receiver'] and 'broker' in kafka_node_type %}
{% if role.startswith('manager') or role in ['standalone', 'receiver'] and 'broker' in kafka_node_type %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.external_kafka.portgroups.append('kafka_external_access') %}
{% endif %}
{% endif %}
{% endif %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}

View File

@@ -33,6 +33,7 @@ hypervisor:
3: pci_0000_41_00_0
4: pci_0000_41_00_1
SOSSNNV-DE02:
hardware:
cpu: 128
memory: 384
disk:
@@ -55,6 +56,7 @@ hypervisor:
9: pci_0000_c5_00_2
10: pci_0000_c5_00_3
SOSSN7200:
hardware:
cpu: 128
memory: 256
copper:
@@ -70,6 +72,7 @@ hypervisor:
9: pci_0000_81_00_2
10: pci_0000_81_00_3
SOSSN7200-DE02:
hardware:
cpu: 128
memory: 384
copper:
@@ -87,6 +90,7 @@ hypervisor:
11: pci_0000_c6_00_2
12: pci_0000_c6_00_3
SOS4000:
hardware:
cpu: 128
memory: 256
copper:
@@ -102,6 +106,7 @@ hypervisor:
9: pci_0000_81_00_2
10: pci_0000_81_00_3
SOS5000-DE02:
hardware:
cpu: 128
memory: 384
copper:

View File

@@ -13,6 +13,7 @@
{# Import defaults.yaml for model hardware capabilities #}
{% import_yaml 'hypervisor/defaults.yaml' as DEFAULTS %}
{% set HYPERVISORMERGED = salt['pillar.get']('hypervisor', default=DEFAULTS.hypervisor, merge=True) %}
{# Get hypervisor nodes from pillar #}
{% set NODES = salt['pillar.get']('hypervisor:nodes', {}) %}
@@ -30,9 +31,10 @@
{% set model = '' %}
{% if grains %}
{% set minion_id = grains.keys() | first %}
{% set model = grains[minion_id].get('sosmodel', '') %}
{% set model = grains[minion_id].get('sosmodel', grains[minion_id].get('byodmodel', '')) %}
{% endif %}
{% set model_config = DEFAULTS.hypervisor.model.get(model, {}) %}
{% set model_config = HYPERVISORMERGED.model.get(model, {}) %}
{# Get VM list from VMs file #}
{% set vms = {} %}

View File

@@ -12,7 +12,7 @@
# - Detects and reports existing RAID configurations
# - Thoroughly cleans target drives of any existing data/configurations
# - Creates GPT partition tables with RAID-type partitions
# - Establishes RAID-1 array (/dev/md0) for data redundancy
# - Establishes RAID-1 array (${RAID_DEVICE}) for data redundancy
# - Formats the array with XFS filesystem for performance
# - Automatically mounts at /nsm and configures for boot persistence
# - Provides monitoring information for resync operations
@@ -30,13 +30,33 @@
#
# WARNING: This script will DESTROY all data on the target drives!
#
# USAGE: sudo ./raid_setup.sh
# USAGE:
# sudo ./so-nvme-raid1.sh # Normal operation
# sudo ./so-nvme-raid1.sh --force-cleanup # Force cleanup of existing RAID
#
#################################################################
# Exit on any error
set -e
# Configuration variables
RAID_ARRAY_NAME="md0"
RAID_DEVICE="/dev/${RAID_ARRAY_NAME}"
MOUNT_POINT="/nsm"
FORCE_CLEANUP=false
# Parse command line arguments
for arg in "$@"; do
case $arg in
--force-cleanup)
FORCE_CLEANUP=true
shift
;;
*)
;;
esac
done
# Function to log messages
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
@@ -50,51 +70,250 @@ check_root() {
fi
}
# Function to force cleanup all RAID components
force_cleanup_raid() {
log "=== FORCE CLEANUP MODE ==="
log "This will destroy all RAID configurations and data on target drives!"
# Stop all MD arrays
log "Stopping all MD arrays"
mdadm --stop --scan 2>/dev/null || true
# Wait for arrays to stop
sleep 2
# Remove any running md devices
for md in /dev/md*; do
if [ -b "$md" ]; then
log "Stopping $md"
mdadm --stop "$md" 2>/dev/null || true
fi
done
# Force cleanup both NVMe drives
for device in "/dev/nvme0n1" "/dev/nvme1n1"; do
log "Force cleaning $device"
# Kill any processes using the device
fuser -k "${device}"* 2>/dev/null || true
# Unmount any mounted partitions
for part in "${device}"*; do
if [ -b "$part" ]; then
umount -f "$part" 2>/dev/null || true
fi
done
# Force zero RAID superblocks on partitions
for part in "${device}"p*; do
if [ -b "$part" ]; then
log "Zeroing RAID superblock on $part"
mdadm --zero-superblock --force "$part" 2>/dev/null || true
fi
done
# Zero superblock on the device itself
log "Zeroing RAID superblock on $device"
mdadm --zero-superblock --force "$device" 2>/dev/null || true
# Remove LVM physical volumes
pvremove -ff -y "$device" 2>/dev/null || true
# Wipe all filesystem and partition signatures
log "Wiping all signatures from $device"
wipefs -af "$device" 2>/dev/null || true
# Overwrite the beginning of the drive (partition table area)
log "Clearing partition table on $device"
dd if=/dev/zero of="$device" bs=1M count=10 2>/dev/null || true
# Clear the end of the drive (backup partition table area)
local device_size=$(blockdev --getsz "$device" 2>/dev/null || echo "0")
if [ "$device_size" -gt 0 ]; then
dd if=/dev/zero of="$device" bs=512 seek=$(( device_size - 2048 )) count=2048 2>/dev/null || true
fi
# Force kernel to re-read partition table
blockdev --rereadpt "$device" 2>/dev/null || true
partprobe -s "$device" 2>/dev/null || true
done
# Clear mdadm configuration
log "Clearing mdadm configuration"
echo "DEVICE partitions" > /etc/mdadm.conf
# Remove any fstab entries for the RAID device or mount point
log "Cleaning fstab entries"
sed -i "\|${RAID_DEVICE}|d" /etc/fstab
sed -i "\|${MOUNT_POINT}|d" /etc/fstab
# Wait for system to settle
udevadm settle
sleep 5
log "Force cleanup complete!"
log "Proceeding with RAID setup..."
}
# Function to find MD arrays using specific devices
find_md_arrays_using_devices() {
local target_devices=("$@")
local found_arrays=()
# Parse /proc/mdstat to find arrays using our target devices
if [ -f "/proc/mdstat" ]; then
while IFS= read -r line; do
if [[ $line =~ ^(md[0-9]+) ]]; then
local array_name="${BASH_REMATCH[1]}"
local array_path="/dev/$array_name"
# Check if this array uses any of our target devices
for device in "${target_devices[@]}"; do
if echo "$line" | grep -q "${device##*/}"; then
found_arrays+=("$array_path")
break
fi
done
fi
done < /proc/mdstat
fi
printf '%s\n' "${found_arrays[@]}"
}
# Function to check if RAID is already set up
check_existing_raid() {
if [ -e "/dev/md0" ]; then
if mdadm --detail /dev/md0 &>/dev/null; then
local raid_state=$(mdadm --detail /dev/md0 | grep "State" | awk '{print $3}')
local mount_point="/nsm"
log "Found existing RAID array /dev/md0 (State: $raid_state)"
if mountpoint -q "$mount_point"; then
log "RAID is already mounted at $mount_point"
log "Current RAID details:"
mdadm --detail /dev/md0
local target_devices=("/dev/nvme0n1p1" "/dev/nvme1n1p1")
local found_arrays=($(find_md_arrays_using_devices "${target_devices[@]}"))
# Check if we found any arrays using our target devices
if [ ${#found_arrays[@]} -gt 0 ]; then
for array_path in "${found_arrays[@]}"; do
if mdadm --detail "$array_path" &>/dev/null; then
local raid_state=$(mdadm --detail "$array_path" | grep "State" | awk '{print $3}')
local mount_point="/nsm"
# Check if resyncing
if grep -q "resync" /proc/mdstat; then
log "RAID is currently resyncing:"
grep resync /proc/mdstat
log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
log "Found existing RAID array $array_path (State: $raid_state)"
# Check what's currently mounted at /nsm
local current_mount=$(findmnt -n -o SOURCE "$mount_point" 2>/dev/null || echo "")
if [ -n "$current_mount" ]; then
if [ "$current_mount" = "$array_path" ]; then
log "RAID array $array_path is already correctly mounted at $mount_point"
log "Current RAID details:"
mdadm --detail "$array_path"
# Check if resyncing
if grep -q "resync" /proc/mdstat; then
log "RAID is currently resyncing:"
grep resync /proc/mdstat
log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
else
log "RAID is fully synced and operational"
fi
# Show disk usage
log "Current disk usage:"
df -h "$mount_point"
exit 0
else
log "Found $mount_point mounted on $current_mount, but RAID array $array_path exists"
log "Will unmount current filesystem and remount on RAID array"
# Unmount current filesystem
log "Unmounting $mount_point"
umount "$mount_point"
# Remove old fstab entry
log "Removing old fstab entry for $current_mount"
sed -i "\|$current_mount|d" /etc/fstab
# Mount the RAID array
log "Mounting RAID array $array_path at $mount_point"
mount "$array_path" "$mount_point"
# Update fstab
log "Updating fstab for RAID array"
sed -i "\|${array_path}|d" /etc/fstab
echo "${array_path} ${mount_point} xfs defaults,nofail 0 0" >> /etc/fstab
log "RAID array is now mounted at $mount_point"
log "Current RAID details:"
mdadm --detail "$array_path"
# Check if resyncing
if grep -q "resync" /proc/mdstat; then
log "RAID is currently resyncing:"
grep resync /proc/mdstat
log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
else
log "RAID is fully synced and operational"
fi
# Show disk usage
log "Current disk usage:"
df -h "$mount_point"
exit 0
fi
else
log "RAID is fully synced and operational"
# /nsm not mounted, mount the RAID array
log "Mounting RAID array $array_path at $mount_point"
mount "$array_path" "$mount_point"
# Update fstab
log "Updating fstab for RAID array"
sed -i "\|${array_path}|d" /etc/fstab
echo "${array_path} ${mount_point} xfs defaults,nofail 0 0" >> /etc/fstab
log "RAID array is now mounted at $mount_point"
log "Current RAID details:"
mdadm --detail "$array_path"
# Show disk usage
log "Current disk usage:"
df -h "$mount_point"
exit 0
fi
# Show disk usage
log "Current disk usage:"
df -h "$mount_point"
exit 0
fi
fi
done
fi
# Check if any of the target devices are in use
for device in "/dev/nvme0n1" "/dev/nvme1n1"; do
if lsblk -o NAME,MOUNTPOINT "$device" | grep -q "nsm"; then
log "Error: $device is already mounted at /nsm"
exit 1
fi
for device in "/dev/nvme0n1" "/dev/nvme1n1"; do
if mdadm --examine "$device" &>/dev/null || mdadm --examine "${device}p1" &>/dev/null; then
# Find the actual array name for this device
local device_arrays=($(find_md_arrays_using_devices "${device}p1"))
local array_name=""
if [ ${#device_arrays[@]} -gt 0 ]; then
array_name="${device_arrays[0]}"
else
# Fallback: try to find array name from /proc/mdstat
local partition_name="${device##*/}p1"
array_name=$(grep -l "$partition_name" /proc/mdstat 2>/dev/null | head -1)
if [ -n "$array_name" ]; then
array_name=$(grep "^md[0-9]" /proc/mdstat | grep "$partition_name" | awk '{print "/dev/" $1}' | head -1)
fi
# Final fallback
if [ -z "$array_name" ]; then
array_name="$RAID_DEVICE"
fi
fi
log "Error: $device appears to be part of an existing RAID array"
log "To reuse this device, you must first:"
log "1. Unmount any filesystems"
log "2. Stop the RAID array: mdadm --stop /dev/md0"
log "3. Zero the superblock: mdadm --zero-superblock ${device}p1"
log "Old RAID metadata detected but array is not running."
log ""
log "To fix this, run the script with --force-cleanup:"
log " sudo $0 --force-cleanup"
log ""
log "Or manually clean up with:"
log "1. Stop any arrays: mdadm --stop --scan"
log "2. Zero superblocks: mdadm --zero-superblock --force ${device}p1"
log "3. Wipe signatures: wipefs -af $device"
exit 1
fi
done
@@ -124,7 +343,7 @@ ensure_devices_free() {
done
# Clear MD superblock
mdadm --zero-superblock "${device}"* 2>/dev/null || true
mdadm --zero-superblock --force "${device}"* 2>/dev/null || true
# Remove LVM PV if exists
pvremove -ff -y "$device" 2>/dev/null || true
@@ -149,6 +368,11 @@ main() {
# Check if running as root
check_root
# If force cleanup flag is set, do aggressive cleanup first
if [ "$FORCE_CLEANUP" = true ]; then
force_cleanup_raid
fi
# Check for existing RAID setup
check_existing_raid
@@ -183,20 +407,20 @@ main() {
fi
log "Creating RAID array"
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
mdadm --create "$RAID_DEVICE" --level=1 --raid-devices=2 \
--metadata=1.2 \
/dev/nvme0n1p1 /dev/nvme1n1p1 \
--force --run
log "Creating XFS filesystem"
mkfs.xfs -f /dev/md0
mkfs.xfs -f "$RAID_DEVICE"
log "Creating mount point"
mkdir -p /nsm
log "Updating fstab"
sed -i '/\/dev\/md0/d' /etc/fstab
echo "/dev/md0 /nsm xfs defaults,nofail 0 0" >> /etc/fstab
sed -i "\|${RAID_DEVICE}|d" /etc/fstab
echo "${RAID_DEVICE} ${MOUNT_POINT} xfs defaults,nofail 0 0" >> /etc/fstab
log "Reloading systemd daemon"
systemctl daemon-reload
@@ -209,7 +433,7 @@ main() {
log "RAID setup complete"
log "RAID array details:"
mdadm --detail /dev/md0
mdadm --detail "$RAID_DEVICE"
if grep -q "resync" /proc/mdstat; then
log "RAID is currently resyncing. You can monitor progress with:"

View File

@@ -86,7 +86,7 @@ idh_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://idh/tools/sbin
- user: 934
- user: 939
- group: 939
- file_mode: 755

View File

@@ -20,7 +20,7 @@ idstools_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://idstools/tools/sbin
- user: 934
- user: 939
- group: 939
- file_mode: 755
@@ -29,7 +29,7 @@ idstools_sbin:
# file.recurse:
# - name: /usr/sbin
# - source: salt://idstools/tools/sbin_jinja
# - user: 934
# - user: 939
# - group: 939
# - file_mode: 755
# - template: jinja
@@ -38,7 +38,7 @@ idstools_so-rule-update:
file.managed:
- name: /usr/sbin/so-rule-update
- source: salt://idstools/tools/sbin_jinja/so-rule-update
- user: 934
- user: 939
- group: 939
- mode: 755
- template: jinja

View File

@@ -22,7 +22,7 @@ kibana:
- default
- file
migrations:
discardCorruptObjects: "8.18.3"
discardCorruptObjects: "8.18.6"
telemetry:
enabled: False
security:

View File

@@ -1,3 +1,3 @@
{% import_yaml 'elasticsearch/defaults.yaml' as ELASTICSEARCHDEFAULTS -%}
{"attributes": {"buildNum": 39457,"defaultIndex": "logs-*","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "{{ ELASTICSEARCHDEFAULTS.elasticsearch.version }}","id": "{{ ELASTICSEARCHDEFAULTS.elasticsearch.version }}","references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}
{"attributes": {"buildNum": 39457,"defaultIndex": "logs-*","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"savedObjects:listingLimit":1500,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "{{ ELASTICSEARCHDEFAULTS.elasticsearch.version }}","id": "{{ ELASTICSEARCHDEFAULTS.elasticsearch.version }}","references": [],"type": "config","version": "WzI5NzUsMl0="}

View File

@@ -54,6 +54,9 @@ so-kratos:
- file: kratosconfig
- file: kratoslogdir
- file: kratosdir
- retry:
attempts: 10
interval: 10
delete_so-kratos_so-status.disabled:
file.uncomment:

View File

@@ -3,8 +3,10 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'libvirt/map.jinja' import LIBVIRTMERGED %}
{% from 'salt/map.jinja' import SYSTEMD_UNIT_FILE %}
# We do not import GLOBALS in this state because it is called during setup
include:
- salt.minion.service_file
- salt.mine_functions
down_original_mgmt_interface:
cmd.run:
@@ -29,21 +31,14 @@ wait_for_br0_ip:
- timeout: 95
- onchanges:
- cmd: down_original_mgmt_interface
update_mine_functions:
file.managed:
- name: /etc/salt/minion.d/mine_functions.conf
- contents: |
mine_interval: 25
mine_functions:
network.ip_addrs:
- interface: br0
- onchanges:
- cmd: wait_for_br0_ip
- onchanges_in:
- file: salt_minion_service_unit_file
- file: mine_functions
restart_salt_minion_service:
service.running:
- name: salt-minion
- enable: True
- listen:
- file: update_mine_functions
- file: salt_minion_service_unit_file
- file: mine_functions

View File

@@ -48,6 +48,7 @@ manage_userdata_sool9:
file.managed:
- name: /nsm/libvirt/images/sool9/user-data
- source: salt://libvirt/images/sool9/user-data
- show_changes: False
# Manage qcow2 image
manage_qcow2_sool9:

View File

@@ -1,2 +1,2 @@
Host *
IdentityFile /etc/ssh/auth_keys/soqemussh/id_ed25519
Match user soqemussh
IdentityFile /etc/ssh/auth_keys/soqemussh/id_ecdsa

View File

@@ -16,10 +16,17 @@
{% if GLOBALS.is_manager %}
qemu_ssh_client_config:
file.managed:
root_ssh_config:
file.touch:
- name: /root/.ssh/config
qemu_ssh_client_config:
file.blockreplace:
- name: /root/.ssh/config
- marker_start: "# START of block managed by Salt - soqemussh config"
- marker_end: "# END of block managed by Salt - soqemussh config"
- source: salt://libvirt/ssh/files/config
- prepend_if_not_found: True
{% endif %}
@@ -39,7 +46,7 @@ create_soqemussh_user:
soqemussh_pub_key:
ssh_auth.present:
- user: soqemussh
- source: salt://libvirt/ssh/keys/id_ed25519.pub
- source: salt://libvirt/ssh/keys/id_ecdsa.pub
{% endif %}
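This state assumes the ECDSA key material already exists on the manager; it is normally generated during hypervisor setup. Purely as a hedged sketch, creating it by hand with the paths these states reference would look like:

```bash
# Sketch only; setup normally creates these keys. Paths match those referenced by the states.
mkdir -p /etc/ssh/auth_keys/soqemussh /opt/so/saltstack/local/salt/libvirt/ssh/keys
ssh-keygen -t ecdsa -N '' -f /etc/ssh/auth_keys/soqemussh/id_ecdsa
cp /etc/ssh/auth_keys/soqemussh/id_ecdsa.pub /opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ecdsa.pub
```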

View File

@@ -268,3 +268,12 @@ logrotate:
- nocompress
- create
- sharedscripts
/opt/so/log/agents/agent-monitor*_x_log:
- daily
- rotate 14
- missingok
- compress
- create
- extension .log
- dateext
- dateyesterday

View File

@@ -175,3 +175,10 @@ logrotate:
multiline: True
global: True
forcedType: "[]string"
"/opt/so/log/agents/agent-monitor*_x_log":
description: List of logrotate options for this file.
title: /opt/so/log/agents/agent-monitor*.log
advanced: True
multiline: True
global: True
forcedType: "[]string"

View File

@@ -17,7 +17,7 @@
{% for node_type, node_details in redis_node_data.items() | sort %}
{% if GLOBALS.role in ['so-searchnode', 'so-standalone', 'so-managersearch', 'so-fleet'] %}
{% if node_type in ['manager', 'managersearch', 'standalone', 'receiver' ] %}
{% if node_type.startswith('manager') or node_type in ['standalone', 'receiver'] %}
{% for hostname in redis_node_data[node_type].keys() %}
{% do LOGSTASH_REDIS_NODES.append({hostname:node_details[hostname].ip}) %}
{% endfor %}
@@ -47,7 +47,7 @@
{% endif %}
{# Disable logstash on manager & receiver nodes unless it has an override configured #}
{% if not KAFKA_LOGSTASH %}
{% if GLOBALS.role in ['so-manager', 'so-receiver'] and GLOBALS.hostname not in KAFKA_LOGSTASH %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-receiver'] and GLOBALS.hostname not in KAFKA_LOGSTASH %}
{% do LOGSTASH_MERGED.update({'enabled': False}) %}
{% endif %}
{% endif %}

View File

@@ -5,3 +5,12 @@ manager:
minute: 0
additionalCA: ''
insecureSkipVerify: False
agent_monitoring:
enabled: False
config:
critical_agents: []
custom_kquery:
offline_threshold: 5
realert_threshold: 5
page_size: 250
run_interval: 5
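Agent monitoring ships disabled. These settings are presumably managed through the SOC configuration annotations later in this change, but as a hedged sketch, the equivalent pillar override (the minion ID and agent hostnames below are illustrative, not taken from this change) would look like:

```bash
# Illustrative only: minion ID and agent hostnames are placeholders.
cat <<'EOF' >> /opt/so/saltstack/local/pillar/minions/<manager_minion_id>.sls
manager:
  agent_monitoring:
    enabled: True
    config:
      critical_agents:
        - sensor01_sensor
        - fleet01_fleet
      offline_threshold: 5
      realert_threshold: 5
      run_interval: 5
EOF
```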

View File

@@ -16,9 +16,9 @@
# Check if hypervisor environment has been set up
{% set ssh_user_exists = salt['user.info']('soqemussh') %}
{% set ssh_keys_exist = salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ed25519') and
salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ed25519.pub') and
salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ed25519.pub') %}
{% set ssh_keys_exist = salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ecdsa') and
salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ecdsa.pub') and
salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ecdsa.pub') %}
{% set base_image_exists = salt['file.file_exists']('/nsm/libvirt/boot/OL9U5_x86_64-kvm-b253.qcow2') %}
{% set vm_files_exist = salt['file.directory_exists']('/opt/so/saltstack/local/salt/libvirt/images/sool9') and
salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/images/sool9/sool9.qcow2') and

View File

@@ -34,6 +34,26 @@ agents_log_dir:
- user
- group
agents_conf_dir:
file.directory:
- name: /opt/so/conf/agents
- user: root
- group: root
- recurse:
- user
- group
{% if MANAGERMERGED.agent_monitoring.config.critical_agents | length > 0 %}
critical_agents_patterns:
file.managed:
- name: /opt/so/conf/agents/critical-agents.txt
- contents: {{ MANAGERMERGED.agent_monitoring.config.critical_agents }}
{% else %}
remove_critical_agents_config:
file.absent:
- name: /opt/so/conf/agents/critical-agents.txt
{% endif %}
yara_log_dir:
file.directory:
- name: /opt/so/log/yarasync
@@ -127,6 +147,21 @@ so_fleetagent_status:
- month: '*'
- dayweek: '*'
so_fleetagent_monitor:
{% if MANAGERMERGED.agent_monitoring.enabled %}
cron.present:
{% else %}
cron.absent:
{% endif %}
- name: /bin/flock -n /opt/so/log/agents/agent-monitor.lock /usr/sbin/so-elastic-agent-monitor
- identifier: so_fleetagent_monitor
- user: root
- minute: '*/{{ MANAGERMERGED.agent_monitoring.config.run_interval }}'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
socore_own_saltstack_default:
file.directory:
- name: /opt/so/saltstack/default

View File

@@ -37,3 +37,44 @@ manager:
forcedType: bool
global: True
helpLink: proxy.html
agent_monitoring:
enabled:
description: Enable monitoring of Elastic Agents for health issues. Can be used to trigger an alert when a 'critical' agent hasn't checked in with Fleet for longer than the configured offline threshold.
global: True
helpLink: elastic-fleet.html
forcedType: bool
config:
critical_agents:
description: List of 'critical' agents to log when they haven't checked in for longer than the maximum allowed time. If no 'critical' agents are specified, all offline agents will be logged once they reach the offline threshold.
global: True
multiline: True
helpLink: elastic-fleet.html
forcedType: "[]string"
custom_kquery:
description: For more granular control over which agents to monitor for offline|degraded status, add a kquery here. It is recommended to create & test the query within Elastic Fleet first to ensure your agents are targeted correctly, e.g. 'status:offline AND tags:INFRA'
global: True
helpLink: elastic-fleet.html
forcedType: string
advanced: True
offline_threshold:
description: The maximum allowed time in hours that a 'critical' agent can be offline before being logged.
global: True
helpLink: elastic-fleet.html
forcedType: int
realert_threshold:
description: The time in hours that must pass before another alert is generated for an offline agent that has exceeded the offline_threshold.
global: True
helpLink: elastic-fleet.html
forcedType: int
page_size:
description: The number of agents that can be processed per API request to Fleet.
global: True
helpLink: elastic-fleet.html
forcedType: int
advanced: True
run_interval:
description: The time in minutes between checking fleet agent statuses.
global: True
advanced: True
helpLink: elastic-fleet.html
forcedType: int
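
Because custom_kquery is passed straight through to the Fleet agents API, a candidate query can be sanity-checked from the manager the same way the monitor script queries it; the query value below is only an example:

```
# Sketch: test a kquery against the Fleet API before setting custom_kquery
curl -s -K /opt/so/conf/elasticsearch/curl.config \
  -H 'kbn-xsrf: true' \
  "http://localhost:5601/api/fleet/agents?kuery=status%3Aoffline%20AND%20tags%3AINFRA&perPage=10&page=1" \
  | jq '.total, .list[].local_metadata.host.hostname'
```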

View File

@@ -454,6 +454,7 @@ function add_sensor_to_minion() {
echo "sensor:"
echo " interface: '$INTERFACE'"
echo " mtu: 9000"
echo " channels: 1"
echo "zeek:"
echo " enabled: True"
echo " config:"

View File

@@ -387,7 +387,7 @@ function syncElastic() {
if [[ -z "$SKIP_STATE_APPLY" ]]; then
echo "Elastic state will be re-applied to affected minions. This will run in the background and may take several minutes to complete."
echo "Applying elastic state to elastic minions at $(date)" >> /opt/so/log/soc/sync.log 2>&1
salt --async -C 'G@role:so-standalone or G@role:so-eval or G@role:so-import or G@role:so-manager or G@role:so-managersearch or G@role:so-searchnode or G@role:so-heavynode' state.apply elasticsearch queue=True >> /opt/so/log/soc/sync.log 2>&1
salt --async -C 'I@elasticsearch:enabled:true' state.apply elasticsearch queue=True >> /opt/so/log/soc/sync.log 2>&1
fi
else
echo "Newly generated users/roles files are incomplete; aborting."

View File

@@ -169,6 +169,8 @@ airgap_update_dockers() {
tar xf "$AGDOCKER/registry.tar" -C /nsm/docker-registry/docker
echo "Add Registry back"
docker load -i "$AGDOCKER/registry_image.tar"
echo "Restart registry container"
salt-call state.apply registry queue=True
fi
fi
}
@@ -419,6 +421,8 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.4.141 ]] && up_to_2.4.150
[[ "$INSTALLEDVERSION" == 2.4.150 ]] && up_to_2.4.160
[[ "$INSTALLEDVERSION" == 2.4.160 ]] && up_to_2.4.170
[[ "$INSTALLEDVERSION" == 2.4.170 ]] && up_to_2.4.180
[[ "$INSTALLEDVERSION" == 2.4.180 ]] && up_to_2.4.190
true
}
@@ -448,6 +452,8 @@ postupgrade_changes() {
[[ "$POSTVERSION" == 2.4.141 ]] && post_to_2.4.150
[[ "$POSTVERSION" == 2.4.150 ]] && post_to_2.4.160
[[ "$POSTVERSION" == 2.4.160 ]] && post_to_2.4.170
[[ "$POSTVERSION" == 2.4.170 ]] && post_to_2.4.180
[[ "$POSTVERSION" == 2.4.180 ]] && post_to_2.4.190
true
}
@@ -588,9 +594,6 @@ post_to_2.4.160() {
}
post_to_2.4.170() {
echo "Regenerating Elastic Agent Installers"
/sbin/so-elastic-agent-gen-installers
# Update kibana default space
salt-call state.apply kibana.config queue=True
echo "Updating Kibana default space"
@@ -599,6 +602,35 @@ post_to_2.4.170() {
POSTVERSION=2.4.170
}
post_to_2.4.180() {
echo "Regenerating Elastic Agent Installers"
/sbin/so-elastic-agent-gen-installers
# Force update to Kafka output policy
/usr/sbin/so-kafka-fleet-output-policy --force
POSTVERSION=2.4.180
}
post_to_2.4.190() {
# Only need to update import / eval nodes
if [[ "$MINION_ROLE" == "import" ]] || [[ "$MINION_ROLE" == "eval" ]]; then
update_import_fleet_output
fi
# Check if expected default policy is logstash (global.pipeline is REDIS or "")
pipeline=$(lookup_pillar "pipeline" "global")
if [[ -z "$pipeline" ]] || [[ "$pipeline" == "REDIS" ]]; then
# Check if this grid is currently affected by corrupt fleet output policy
if elastic-agent status | grep "config: key file not configured" > /dev/null 2>&1; then
echo "Elastic Agent shows an ssl error connecting to logstash output. Updating output policy..."
update_default_logstash_output
fi
fi
POSTVERSION=2.4.190
}
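# Note (illustrative): the corrupt-output condition handled above can be checked by hand
# on an affected node with:
#   elastic-agent status | grep "config: key file not configured"
# Any match means the grid-logstash output will be rewritten by update_default_logstash_output.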
repo_sync() {
echo "Sync the local repo."
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -850,10 +882,20 @@ up_to_2.4.170() {
touch /opt/so/saltstack/local/pillar/$state/adv_$state.sls /opt/so/saltstack/local/pillar/$state/soc_$state.sls
done
INSTALLEDVERSION=2.4.170
}
up_to_2.4.180() {
# Elastic Update for this release, so download Elastic Agent files
determine_elastic_agent_upgrade
INSTALLEDVERSION=2.4.170
INSTALLEDVERSION=2.4.180
}
up_to_2.4.190() {
echo "Nothing to do for 2.4.190"
INSTALLEDVERSION=2.4.190
}
add_hydra_pillars() {
@@ -1129,6 +1171,44 @@ update_elasticsearch_index_settings() {
done
}
update_import_fleet_output() {
if output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" --retry 3 --fail 2>/dev/null); then
# Update the current config of so-manager_elasticsearch output policy in place (leaving any customizations like having changed the preset value from 'balanced' to 'performance')
CAFINGERPRINT=$(openssl x509 -in /etc/pki/tls/certs/intca.crt -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
updated_policy=$(jq --arg CAFINGERPRINT "$CAFINGERPRINT" '.item | (del(.id) | .ca_trusted_fingerprint = $CAFINGERPRINT)' <<< "$output")
if curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" -XPUT -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$updated_policy" --retry 3 --fail 2>/dev/null; then
echo "Successfully updated so-manager_elasticsearch fleet output policy"
else
fail "Failed to update so-manager_elasticsearch fleet output policy"
fi
fi
}
update_default_logstash_output() {
echo "Updating fleet logstash output policy grid-logstash"
if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
# Keep already configured hosts for this update, subsequent host updates come from so-elastic-fleet-outputs-update
HOSTS=$(echo "$logstash_policy" | jq -r '.item.hosts')
DEFAULT_ENABLED=$(echo "$logstash_policy" | jq -r '.item.is_default')
DEFAULT_MONITORING_ENABLED=$(echo "$logstash_policy" | jq -r '.item.is_default_monitoring')
LOGSTASHKEY=$(openssl rsa -in /etc/pki/elasticfleet-logstash.key)
LOGSTASHCRT=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt)
LOGSTASHCA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
JSON_STRING=$(jq -n \
--argjson HOSTS "$HOSTS" \
--arg DEFAULT_ENABLED "$DEFAULT_ENABLED" \
--arg DEFAULT_MONITORING_ENABLED "$DEFAULT_MONITORING_ENABLED" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","type":"logstash","hosts": $HOSTS,"is_default": $DEFAULT_ENABLED,"is_default_monitoring": $DEFAULT_MONITORING_ENABLED,"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets":{"ssl":{"key": $LOGSTASHKEY }}}')
fi
if curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --retry 3 --retry-delay 10 --fail; then
echo "Successfully updated grid-logstash fleet output policy"
fi
}
update_salt_mine() {
echo "Populating the mine with mine_functions for each host."
set +e
@@ -1345,6 +1425,7 @@ main() {
fi
set_minionid
MINION_ROLE=$(lookup_role)
echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo ""
if [[ $is_airgap -eq 0 ]]; then
@@ -1387,7 +1468,7 @@ main() {
if [ "$is_hotfix" == "true" ]; then
echo "Applying $HOTFIXVERSION hotfix"
# since we don't run the backup.config_backup state on import we wont snapshot previous version states and pillars
if [[ ! "$MINIONID" =~ "_import" ]]; then
if [[ ! "$MINION_ROLE" == "import" ]]; then
backup_old_states_pillars
fi
copy_new_files
@@ -1450,7 +1531,7 @@ main() {
fi
# since we don't run the backup.config_backup state on import we wont snapshot previous version states and pillars
if [[ ! "$MINIONID" =~ "_import" ]]; then
if [[ ! "$MINION_ROLE" == "import" ]]; then
echo ""
echo "Creating snapshots of default and local Salt states and pillars and saving to /nsm/backup/"
backup_old_states_pillars

View File

@@ -0,0 +1,254 @@
{%- from 'manager/map.jinja' import MANAGERMERGED -%}
{%- set OFFLINE_THRESHOLD_HOURS = MANAGERMERGED.agent_monitoring.config.offline_threshold -%}
{%- set PAGE_SIZE = MANAGERMERGED.agent_monitoring.config.page_size -%}
{%- set CUSTOM_KQUERY = MANAGERMERGED.agent_monitoring.config.custom_kquery -%}
{%- set REALERT_THRESHOLD = MANAGERMERGED.agent_monitoring.config.realert_threshold -%}
#!/bin/bash
set -euo pipefail
LOG_DIR="/opt/so/log/agents"
LOG_FILE="$LOG_DIR/agent-monitor.log"
CURL_CONFIG="/opt/so/conf/elasticsearch/curl.config"
FLEET_API="http://localhost:5601/api/fleet/agents"
{#- When using a custom kquery, ignore critical agent patterns since we want all results of the custom query logged #}
{%- if CUSTOM_KQUERY != None and CUSTOM_KQUERY | length > 0 %}
CRITICAL_AGENTS_FILE="/dev/null"
{%- else %}
CRITICAL_AGENTS_FILE="/opt/so/conf/agents/critical-agents.txt"
{%- endif %}
OFFLINE_THRESHOLD_HOURS={{ OFFLINE_THRESHOLD_HOURS }}
REALERT_THRESHOLD={{ REALERT_THRESHOLD }}
PAGE_SIZE="{{ PAGE_SIZE }}"
log_message() {
local level="$1"
local message="$2"
echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ") [$level] $message" >&2
}
matches_critical_pattern() {
local hostname="$1"
local pattern_file="$2"
# If critical agents file doesn't exist or is empty, match all
if [ ! -f "$pattern_file" ] || [ ! -s "$pattern_file" ]; then
return 0
fi
local hostname_lower=$(echo "$hostname" | tr '[:upper:]' '[:lower:]')
while IFS= read -r pattern || [ -n "$pattern" ]; do
# Skip empty lines and comment lines
[[ -z "$pattern" || "$pattern" =~ ^[[:space:]]*# ]] && continue
# Trim surrounding whitespace
pattern=$(echo "$pattern" | xargs)
local pattern_lower=$(echo "$pattern" | tr '[:upper:]' '[:lower:]')
# Convert '*' glob wildcards into the regex '.*' used for matching below
local bash_pattern="${pattern_lower//\*/.*}"
# Check if hostname matches the pattern
if [[ "$hostname_lower" =~ ^${bash_pattern}$ ]]; then
return 0
fi
done < "$pattern_file"
return 1
}
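# Illustrative example: with a critical-agents file containing
#   dc01.example.local
#   web-*
# matching is case-insensitive and '*' acts as a wildcard, so DC01.EXAMPLE.LOCAL and
# web-proxy02 both count as critical while idh-honeypot does not.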
calculate_offline_hours() {
local last_checkin="$1"
local current_time=$(date +%s)
local checkin_time=$(date -d "$last_checkin" +%s 2>/dev/null || echo "0")
if [ "$checkin_time" -eq "0" ]; then
echo "0"
return
fi
local diff=$((current_time - checkin_time))
echo $((diff / 3600))
}
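# Worked example (illustrative): if last_checkin is 2025-09-16T00:00:00Z and the current
# time is 2025-09-16T07:30:00Z, the difference is 27000 seconds and the function returns 7,
# since the integer division by 3600 truncates partial hours.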
check_recent_log_entries() {
local agent_hostname="$1"
if [ ! -f "$LOG_FILE" ]; then
return 1
fi
local current_time=$(date +%s)
local threshold_seconds=$((REALERT_THRESHOLD * 3600))
local agent_hostname_lower=$(echo "$agent_hostname" | tr '[:upper:]' '[:lower:]')
local most_recent_timestamp=""
while IFS= read -r line; do
[ -z "$line" ] && continue
local logged_hostname=$(echo "$line" | jq -r '.["agent.hostname"] // empty' 2>/dev/null)
local logged_timestamp=$(echo "$line" | jq -r '.["@timestamp"] // empty' 2>/dev/null)
[ -z "$logged_hostname" ] || [ -z "$logged_timestamp" ] && continue
local logged_hostname_lower=$(echo "$logged_hostname" | tr '[:upper:]' '[:lower:]')
if [ "$logged_hostname_lower" = "$agent_hostname_lower" ]; then
most_recent_timestamp="$logged_timestamp"
fi
done < <(tail -n 1000 "$LOG_FILE" 2>/dev/null)
# If there is an agent entry (within the last 1000 lines), check the time difference
if [ -n "$most_recent_timestamp" ]; then
local logged_time=$(date -d "$most_recent_timestamp" +%s 2>/dev/null || echo "0")
if [ "$logged_time" -ne "0" ]; then
local time_diff=$((current_time - logged_time))
local hours_diff=$((time_diff / 3600))
# Skip if last agent timestamp was more recent than realert threshold
if ((hours_diff < REALERT_THRESHOLD)); then
return 0
fi
fi
fi
# Agent has not been logged within realert threshold
return 1
}
main() {
log_message "INFO" "Starting Fleet agent status check"
# Check if critical agents file is configured
if [ -f "$CRITICAL_AGENTS_FILE" ] && [ -s "$CRITICAL_AGENTS_FILE" ]; then
log_message "INFO" "Using critical agents filter from: $CRITICAL_AGENTS_FILE"
log_message "INFO" "Patterns: $(grep -v '^#' "$CRITICAL_AGENTS_FILE" 2>/dev/null | xargs | tr ' ' ',')"
else
log_message "INFO" "No critical agents filter found, monitoring all agents"
fi
log_message "INFO" "Querying Fleet API"
local page=1
local total_agents=0
local processed_agents=0
local current_timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
{%- if CUSTOM_KQUERY != None and CUSTOM_KQUERY | length > 0 %}
log_message "INFO" "Using custom kquery: {{ CUSTOM_KQUERY }}"
FLEET_QUERY="${FLEET_API}?kuery={{ CUSTOM_KQUERY | urlencode }}&perPage=${PAGE_SIZE}&page=${page}"
{%- else %}
log_message "INFO" "Using default query (all offline or degraded agents)"
FLEET_QUERY="${FLEET_API}?kuery=status%3Aoffline%20OR%20status%3Adegraded&perPage=${PAGE_SIZE}&page=${page}"
{%- endif %}
while true; do
log_message "INFO" "Fetching page $page (${PAGE_SIZE} agents per page)"
if ! response_body=$(curl -K "$CURL_CONFIG" \
-s --fail \
"$FLEET_QUERY" \
-H 'kbn-xsrf: true' 2>/dev/null); then
log_message "ERROR" "Failed to query Fleet API (page $page)"
exit 1
fi
# pagination info
current_total=$(echo "$response_body" | jq -r '.total // 0')
current_page=$(echo "$response_body" | jq -r '.page // 1')
agents_in_page=$(echo "$response_body" | jq -r '.list | length')
# Update total
if [ "$page" -eq 1 ]; then
total_agents="$current_total"
log_message "INFO" "Found $total_agents total agents across all pages"
fi
log_message "INFO" "Processing page $current_page with $agents_in_page agents"
# Process agents from current page
mapfile -t agents < <(echo "$response_body" | jq -c '.list[]')
for agent in "${agents[@]}"; do
# Grab agent details
agent_id=$(echo "$agent" | jq -r '.id // "unknown"')
agent_hostname=$(echo "$agent" | jq -r '.local_metadata.host.hostname // "unknown"')
agent_name=$(echo "$agent" | jq -r '.local_metadata.host.name // "unknown"')
agent_status=$(echo "$agent" | jq -r '.status // "unknown"')
last_checkin=$(echo "$agent" | jq -r '.last_checkin // ""')
last_checkin_status=$(echo "$agent" | jq -r '.last_checkin_status // "unknown"')
policy_id=$(echo "$agent" | jq -r '.policy_id // "unknown"')
# Only log agents that are offline or degraded (skip inactive agents)
# Fleetserver agents can show multiple versions as 'inactive'
if [ "$agent_status" = "offline" ] || [ "$agent_status" = "degraded" ]; then
# Check if agent matches critical agent patterns (if configured)
if ! matches_critical_pattern "$agent_hostname" "$CRITICAL_AGENTS_FILE"; then
log_message "WARN" "${agent_hostname^^} is ${agent_status^^}, but does not match configured critical agents patterns. Not logging ${agent_status^^} agent"
continue # Skip this agent if it doesn't match any critical agent pattern
fi
offline_hours=$(calculate_offline_hours "$last_checkin")
if [ "$offline_hours" -lt "$OFFLINE_THRESHOLD_HOURS" ]; then
log_message "INFO" "${agent_hostname^^} has been offline for ${offline_hours}h (threshold: ${OFFLINE_THRESHOLD_HOURS}h). Not logging ${agent_status^^} agent until it reaches threshold"
continue
fi
# Check if this agent was already logged within the realert_threshold
if check_recent_log_entries "$agent_hostname"; then
log_message "INFO" "Skipping $agent_hostname (status: $agent_status) - already logged within last ${REALERT_THRESHOLD}h"
continue
fi
log_entry=$(echo 'null' | jq -c \
--arg ts "$current_timestamp" \
--arg id "$agent_id" \
--arg hostname "$agent_hostname" \
--arg name "$agent_name" \
--arg status "$agent_status" \
--arg last_checkin "$last_checkin" \
--arg last_checkin_status "$last_checkin_status" \
--arg policy_id "$policy_id" \
--arg offline_hours "$offline_hours" \
'{
"@timestamp": $ts,
"agent.id": $id,
"agent.hostname": $hostname,
"agent.name": $name,
"agent.status": $status,
"agent.last_checkin": $last_checkin,
"agent.last_checkin_status": $last_checkin_status,
"agent.policy_id": $policy_id,
"agent.offline_duration_hours": ($offline_hours | tonumber)
}')
echo "$log_entry" >> "$LOG_FILE"
log_message "INFO" "Logged offline agent: $agent_hostname (status: $agent_status, offline: ${offline_hours}h)"
fi
done
processed_agents=$((processed_agents + agents_in_page))
if [ "$agents_in_page" -eq 0 ] || [ "$processed_agents" -ge "$total_agents" ]; then
log_message "INFO" "Completed processing all pages. Total processed: $processed_agents agents"
break
fi
page=$((page + 1))
# Limit pagination loops in case of any issues. If the agent count is high enough, increase page_size in SOC -> manager.agent_monitoring.config.page_size
if [ "$page" -gt 100 ]; then
log_message "ERROR" "Reached maximum page limit (100). Issue with script or extremely large fleet deployment. Consider increasing page_size in SOC -> manager.agent_monitoring.config.page_size"
break
fi
done
log_message "INFO" "Fleet agent status check completed. Processed $processed_agents out of $total_agents agents"
}
main "$@"
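
For reference, an entry written to /opt/so/log/agents/agent-monitor.log by the block above would look roughly like the following single-line JSON object (all field values are hypothetical):

```
{"@timestamp":"2025-09-17T12:00:00Z","agent.id":"example-agent-id","agent.hostname":"web-proxy02","agent.name":"web-proxy02","agent.status":"offline","agent.last_checkin":"2025-09-16T22:10:41Z","agent.last_checkin_status":"online","agent.policy_id":"example-policy","agent.offline_duration_hours":13}
```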

View File

@@ -15,6 +15,7 @@ require_manager
echo
echo "This script will remove the current Elastic Fleet install and all of its data and then rerun Elastic Fleet setup."
echo "Deployed Elastic Agents will no longer be enrolled and will need to be reinstalled."
echo "Only the Elastic Fleet instance on the Manager will be reinstalled - dedicated Fleet node config will be removed and will need to be reinstalled."
echo "This script should only be used as a last resort to reinstall Elastic Fleet."
echo
echo "If you would like to proceed, then type AGREE and press ENTER."

View File

@@ -77,10 +77,10 @@ Examples:
1. Static IP Configuration with Multiple PCI Devices:
Command:
so-salt-cloud -p sool9-hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 \
so-salt-cloud -p sool9_hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 \
--dns4 192.168.1.1,192.168.1.2 --search4 example.local -c 4 -m 8192 -P 0000:c7:00.0 -P 0000:c4:00.0
This command provisions a VM named vm1_sensor using the sool9-hyper1 profile with the following settings:
This command provisions a VM named vm1_sensor using the sool9_hyper1 profile with the following settings:
- Static IPv4 configuration:
- IP Address: 192.168.1.10/24
@@ -95,21 +95,21 @@ Examples:
2. DHCP Configuration with Default Hardware Settings:
Command:
so-salt-cloud -p sool9-hyper1 vm2_master --dhcp4
so-salt-cloud -p sool9_hyper1 vm2_master --dhcp4
This command provisions a VM named vm2_master using the sool9-hyper1 profile with DHCP for network configuration and default hardware settings.
This command provisions a VM named vm2_master using the sool9_hyper1 profile with DHCP for network configuration and default hardware settings.
3. Static IP Configuration without Hardware Specifications:
Command:
so-salt-cloud -p sool9-hyper1 vm3_search --static4 --ip4 192.168.1.20/24 --gw4 192.168.1.1
so-salt-cloud -p sool9_hyper1 vm3_search --static4 --ip4 192.168.1.20/24 --gw4 192.168.1.1
This command provisions a VM named vm3_search with a static IP configuration and default hardware settings.
4. DHCP Configuration with Custom Hardware Specifications and Multiple PCI Devices:
Command:
so-salt-cloud -p sool9-hyper1 vm4_node --dhcp4 -c 8 -m 16384 -P 0000:c7:00.0 -P 0000:c4:00.0 -P 0000:c4:00.1
so-salt-cloud -p sool9_hyper1 vm4_node --dhcp4 -c 8 -m 16384 -P 0000:c7:00.0 -P 0000:c4:00.0 -P 0000:c4:00.1
This command provisions a VM named vm4_node using DHCP for network configuration and custom hardware settings:
@@ -120,9 +120,9 @@ Examples:
5. Static IP Configuration with DNS and Search Domain:
Command:
so-salt-cloud -p sool9-hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 --dns4 192.168.1.1 --search4 example.local
so-salt-cloud -p sool9_hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 --dns4 192.168.1.1 --search4 example.local
This command provisions a VM named vm1_sensor using the sool9-hyper1 profile with static IPv4 configuration:
This command provisions a VM named vm1_sensor using the sool9_hyper1 profile with static IPv4 configuration:
- Static IPv4 configuration:
- IP Address: 192.168.1.10/24
@@ -133,14 +133,14 @@ Examples:
6. Delete a VM with Confirmation:
Command:
so-salt-cloud -p sool9-hyper1 vm1_sensor -d
so-salt-cloud -p sool9_hyper1 vm1_sensor -d
This command deletes the VM named vm1_sensor and will prompt for confirmation before proceeding.
7. Delete a VM without Confirmation:
Command:
so-salt-cloud -p sool9-hyper1 vm1_sensor -yd
so-salt-cloud -p sool9_hyper1 vm1_sensor -yd
This command deletes the VM named vm1_sensor without prompting for confirmation.
@@ -439,8 +439,8 @@ def call_salt_cloud(profile, vm_name, destroy=False, assume_yes=False):
delete_vm(profile, vm_name, assume_yes)
return
# Extract hypervisor hostname from profile (e.g., sool9-jpphype1 -> jpphype1)
hypervisor = profile.split('-', 1)[1] if '-' in profile else None
# Extract hypervisor hostname from profile (e.g., sool9_hype1 -> hype1)
hypervisor = profile.split('_', 1)[1] if '_' in profile else None
if hypervisor:
logger.info("Ensuring host key exists for hypervisor %s", hypervisor)
if not _add_hypervisor_host_key(hypervisor):
@@ -512,7 +512,7 @@ def format_qcow2_output(operation, result):
logger.info(f"{operation} result from {host}: {host_result}")
def run_qcow2_modify_hardware_config(profile, vm_name, cpu=None, memory=None, pci_list=None, start=False):
hv_name = profile.split('-')[1]
hv_name = profile.split('_')[1]
target = hv_name + "_*"
try:
@@ -592,7 +592,7 @@ def run_qcow2_create_volume_config(profile, vm_name, size_gb, cpu=None, memory=N
logger.error(f"An error occurred while creating volume and configuring hardware: {e}")
def run_qcow2_modify_network_config(profile, vm_name, mode, ip=None, gateway=None, dns=None, search_domain=None):
hv_name = profile.split('-')[1]
hv_name = profile.split('_')[1]
target = hv_name + "_*"
image = '/nsm/libvirt/images/sool9/sool9.qcow2'
interface = 'enp1s0'

View File

@@ -196,19 +196,23 @@ http {
}
location / {
auth_request /auth/sessions/whoami;
auth_request_set $userid $upstream_http_x_kratos_authenticated_identity_id;
proxy_set_header x-user-id $userid;
proxy_pass http://{{ GLOBALS.manager }}:9822/;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Proto $scheme;
auth_request /auth/sessions/whoami;
auth_request_set $userid $upstream_http_x_kratos_authenticated_identity_id;
proxy_set_header x-user-id $userid;
proxy_pass http://{{ GLOBALS.manager }}:9822/;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
proxy_request_buffering off;
}
location ~ ^/auth/.*?(login|oidc/callback) {

View File

@@ -1,7 +1,7 @@
# NTP server list
{%- for SERVER in NTPCONFIG.servers %}
server {{ SERVER }} iburst
server {{ SERVER }} iburst maxpoll 10
{%- endfor %}
# Config options
@@ -9,3 +9,5 @@ driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
port 0
cmdport 0
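# maxpoll 10 caps the client polling interval at 2^10 = 1024 seconds.
# port 0 stops chronyd from serving NTP to other clients, and cmdport 0 disables the
# chronyc UDP command port; local control over the Unix socket still works.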

View File

@@ -70,13 +70,13 @@
{% set vm_name = tag.split('/')[2] %}
{% do salt.log.debug('dyanno_hypervisor_orch: Got vm_name from tag: ' ~ vm_name) %}
{% if tag.endswith('/deploying') %}
{% set hypervisor = data.get('kwargs').get('cloud_grains').get('profile').split('-')[1] %}
{% set hypervisor = data.get('kwargs').get('cloud_grains').get('profile').split('_')[1] %}
{% endif %}
{# Set the hypervisor #}
{# First try to get it from the event #}
{% if data.get('profile', False) %}
{% do salt.log.debug('dyanno_hypervisor_orch: Did not get cache.grains.') %}
{% set hypervisor = data.profile.split('-')[1] %}
{% set hypervisor = data.profile.split('_')[1] %}
{% do salt.log.debug('dyanno_hypervisor_orch: Got hypervisor from data: ' ~ hypervisor) %}
{% else %}
{% set hypervisor = find_hypervisor_from_status(vm_name) %}

View File

@@ -18,11 +18,19 @@ include:
# This directory needs to exist regardless of whether STENO is enabled or not, in order for
# Sensoroni to be able to look at old steno PCAP data
# if stenographer has never run as the pcap engine, no 941 user is created, so we use socore (939) as a placeholder.
# /nsm/pcap is empty until stenographer is used as pcap engine
{% set pcap_id = 941 %}
{% set user_list = salt['user.list_users']() %}
{% if GLOBALS.pcap_engine == "SURICATA" and 'stenographer' not in user_list %}
{% set pcap_id = 939 %}
{% endif %}
pcapdir:
file.directory:
- name: /nsm/pcap
- user: 941
- group: 941
- user: {{ pcap_id }}
- group: {{ pcap_id }}
- makedirs: True
pcapoutdir:

View File

@@ -26,9 +26,9 @@
'rocky-devel.repo',
'rocky-extras.repo',
'rocky.repo',
'oracle-linux-ol9',
'uek-ol9',
'virt-oll9'
'oracle-linux-ol9.repo',
'uek-ol9.repo',
'virt-ol9.repo'
]
%}
{% else %}

View File

@@ -6,12 +6,12 @@
{%- for role, hosts in HYPERVISORS.items() %}
{%- for host in hosts.keys() %}
sool9-{{host}}:
sool9_{{host}}:
provider: kvm-ssh-{{host}}
base_domain: sool9
ip_source: qemu-agent
ssh_username: soqemussh
private_key: /etc/ssh/auth_keys/soqemussh/id_ed25519
private_key: /etc/ssh/auth_keys/soqemussh/id_ecdsa
sudo: True
deploy_command: sh /tmp/.saltcloud-*/deploy.sh
script_args: -r -F -x python3 stable 3006.9

View File

@@ -161,6 +161,7 @@ DEFAULT_BASE_PATH = '/opt/so/saltstack/local/salt/hypervisor/hosts'
VALID_ROLES = ['sensor', 'searchnode', 'idh', 'receiver', 'heavynode', 'fleet']
LICENSE_PATH = '/opt/so/saltstack/local/pillar/soc/license.sls'
DEFAULTS_PATH = '/opt/so/saltstack/default/salt/hypervisor/defaults.yaml'
HYPERVISOR_PILLAR_PATH = '/opt/so/saltstack/local/pillar/hypervisor/soc_hypervisor.sls'
# Define the retention period for destroyed VMs (in hours)
DESTROYED_VM_RETENTION_HOURS = 48
@@ -271,7 +272,7 @@ def parse_hardware_indices(hw_value: Any) -> List[int]:
return indices
def get_hypervisor_model(hypervisor: str) -> str:
"""Get sosmodel from hypervisor grains."""
"""Get sosmodel or byodmodel from hypervisor grains."""
try:
# Get cached grains using Salt runner
grains = runner.cmd(
@@ -283,9 +284,9 @@ def get_hypervisor_model(hypervisor: str) -> str:
# Get the first minion ID that matches our hypervisor
minion_id = next(iter(grains.keys()))
model = grains[minion_id].get('sosmodel')
model = grains[minion_id].get('sosmodel', grains[minion_id].get('byodmodel', ''))
if not model:
raise ValueError(f"No sosmodel grain found for hypervisor {hypervisor}")
raise ValueError(f"No sosmodel or byodmodel grain found for hypervisor {hypervisor}")
log.debug("Found model %s for hypervisor %s", model, hypervisor)
return model
@@ -295,16 +296,48 @@ def get_hypervisor_model(hypervisor: str) -> str:
raise
def load_hardware_defaults(model: str) -> dict:
"""Load hardware configuration from defaults.yaml."""
"""Load hardware configuration from defaults.yaml and optionally override with pillar configuration."""
config = None
config_source = None
try:
# First, try to load from defaults.yaml
log.debug("Checking for model %s in %s", model, DEFAULTS_PATH)
defaults = read_yaml_file(DEFAULTS_PATH)
if not defaults or 'hypervisor' not in defaults:
raise ValueError("Invalid defaults.yaml structure")
if 'model' not in defaults['hypervisor']:
raise ValueError("No model configurations found in defaults.yaml")
if model not in defaults['hypervisor']['model']:
raise ValueError(f"Model {model} not found in defaults.yaml")
return defaults['hypervisor']['model'][model]
# Check if model exists in defaults
if model in defaults['hypervisor']['model']:
config = defaults['hypervisor']['model'][model]
config_source = DEFAULTS_PATH
log.debug("Found model %s in %s", model, DEFAULTS_PATH)
# Then, try to load from pillar file (if it exists)
try:
log.debug("Checking for model %s in %s", model, HYPERVISOR_PILLAR_PATH)
pillar_config = read_yaml_file(HYPERVISOR_PILLAR_PATH)
if pillar_config and 'hypervisor' in pillar_config:
if 'model' in pillar_config['hypervisor']:
if model in pillar_config['hypervisor']['model']:
# Override with pillar configuration
config = pillar_config['hypervisor']['model'][model]
config_source = HYPERVISOR_PILLAR_PATH
log.debug("Found model %s in %s (overriding defaults)", model, HYPERVISOR_PILLAR_PATH)
except FileNotFoundError:
log.debug("Pillar file %s not found, using defaults only", HYPERVISOR_PILLAR_PATH)
except Exception as e:
log.warning("Failed to read pillar file %s: %s (using defaults)", HYPERVISOR_PILLAR_PATH, str(e))
# If model was not found in either file, raise an error
if config is None:
raise ValueError(f"Model {model} not found in {DEFAULTS_PATH} or {HYPERVISOR_PILLAR_PATH}")
log.debug("Using hardware configuration for model %s from %s", model, config_source)
return config
except Exception as e:
log.error("Failed to load hardware defaults: %s", str(e))
raise
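# Illustrative pillar override sketch (model name and settings below are placeholders):
#
#   /opt/so/saltstack/local/pillar/hypervisor/soc_hypervisor.sls
#   hypervisor:
#     model:
#       EXAMPLEMODEL:        # must match the hypervisor's sosmodel or byodmodel grain
#         ...                # hardware settings, same shape as the defaults.yaml entry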
@@ -679,7 +712,7 @@ def process_vm_creation(hypervisor_path: str, vm_config: dict) -> None:
create_vm_tracking_file(hypervisor_path, vm_name, vm_config)
# Build and execute so-salt-cloud command
cmd = ['so-salt-cloud', '-p', f'sool9-{hypervisor}', vm_name]
cmd = ['so-salt-cloud', '-p', f'sool9_{hypervisor}', vm_name]
# Add network configuration
if vm_config['network_mode'] == 'static4':
@@ -822,7 +855,7 @@ def process_vm_deletion(hypervisor_path: str, vm_name: str) -> None:
log.warning("Failed to read VM config from tracking file %s: %s", vm_file, str(e))
# Attempt VM deletion with so-salt-cloud
cmd = ['so-salt-cloud', '-p', f'sool9-{hypervisor}', vm_name, '-yd']
cmd = ['so-salt-cloud', '-p', f'sool9_{hypervisor}', vm_name, '-yd']
log.info("Executing: %s", ' '.join(cmd))
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

View File

@@ -4,7 +4,10 @@
Elastic License 2.0. #}
{% set role = salt['grains.get']('role', '') %}
{% if role in ['so-hypervisor','so-managerhype'] and salt['network.ip_addrs']('br0')|length > 0 %}
{# We are using usebr0 mostly for setup of the so-managerhype node and controlling when we use br0 vs the physical interface #}
{% set usebr0 = salt['pillar.get']('usebr0', True) %}
{% if role in ['so-hypervisor','so-managerhype'] and usebr0 %}
{% set interface = 'br0' %}
{% else %}
{% set interface = pillar.host.mainint %}
@@ -26,9 +29,9 @@
{% if INSTALLEDSALTVERSION != SALTVERSION %}
{% if grains.os_family|lower == 'redhat' %}
{% set UPGRADECOMMAND = 'yum clean all ; /usr/sbin/bootstrap-salt.sh -s 120 -r -F stable ' ~ SALTVERSION %}
{% set UPGRADECOMMAND = 'yum clean all ; /usr/sbin/bootstrap-salt.sh -X -r -F stable ' ~ SALTVERSION %}
{% elif grains.os_family|lower == 'debian' %}
{% set UPGRADECOMMAND = '/usr/sbin/bootstrap-salt.sh -s 120 -F stable ' ~ SALTVERSION %}
{% set UPGRADECOMMAND = '/usr/sbin/bootstrap-salt.sh -X -F stable ' ~ SALTVERSION %}
{% endif %}
{% else %}
{% set UPGRADECOMMAND = 'echo Already running Salt Minion version ' ~ SALTVERSION %}

View File

@@ -23,11 +23,6 @@ sync_runners:
- name: saltutil.sync_runners
{% endif %}
hold_salt_master_package:
module.run:
- pkg.hold:
- name: salt-master
# prior to 2.4.30 this engine ran on the manager with salt-minion
# this has changed to running with the salt-master in 2.4.30
remove_engines_config:

View File

@@ -3,7 +3,7 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# this state was seperated from salt.minion state since it is called during setup
# this state was separated from salt.minion state since it is called during setup
# GLOBALS are imported in the salt.minion state and that is not available at that point in setup
# this state is included in the salt.minion state

View File

@@ -1,18 +1,22 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'salt/map.jinja' import UPGRADECOMMAND with context %}
{% from 'salt/map.jinja' import SALTVERSION %}
{% from 'salt/map.jinja' import INSTALLEDSALTVERSION %}
{% from 'salt/map.jinja' import SALTPACKAGES %}
{% from 'salt/map.jinja' import SYSTEMD_UNIT_FILE %}
{% import_yaml 'salt/minion.defaults.yaml' as SALTMINION %}
include:
- salt.python_modules
- salt.patch.x509_v2
- salt
- systemd.reload
- repo.client
- salt.mine_functions
- salt.minion.service_file
{% if GLOBALS.role in GLOBALS.manager_roles %}
- ca
{% endif %}
@@ -38,25 +42,36 @@ unhold_salt_packages:
{% endfor %}
install_salt_minion:
cmd.run:
- name: /bin/sh -c '{{ UPGRADECOMMAND }}'
# minion service is in failed state after upgrade. this command will start it after the state run for the upgrade completes
start_minion_post_upgrade:
cmd.run:
- name: |
exec 0>&- # close stdin
exec 1>&- # close stdout
exec 2>&- # close stderr
nohup /bin/sh -c '{{ UPGRADECOMMAND }}' &
nohup /bin/sh -c 'sleep 30; systemctl start salt-minion' &
- require:
- cmd: install_salt_minion
- watch:
- cmd: install_salt_minion
- order: last
{% endif %}
{% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
{% for package in SALTPACKAGES %}
# only hold the package if it is already installed
hold_salt_packages:
{% if salt['pkg.version'](package) %}
hold_{{ package }}_package:
pkg.held:
- pkgs:
{% for package in SALTPACKAGES %}
{% if salt['pkg.version'](package) %}
- {{ package }}: {{SALTVERSION}}-0.*
{% endif %}
{% endfor %}
- name: {{ package }}
- version: {{SALTVERSION}}-0.*
{% endif %}
{% endfor %}
remove_error_log_level_logfile:
file.line:
@@ -83,17 +98,6 @@ enable_startup_states:
- regex: '^startup_states: highstate$'
- unless: pgrep so-setup
# prior to 2.4.30 this managed file would restart the salt-minion service when updated
# since this file is currently only adding a sleep timer on service start
# it is not required to restart the service
salt_minion_service_unit_file:
file.managed:
- name: {{ SYSTEMD_UNIT_FILE }}
- source: salt://salt/service/salt-minion.service.jinja
- template: jinja
- onchanges_in:
- module: systemd_reload
{% endif %}
# this has to be outside the if statement above since there are <requisite>_in calls to this state

View File

@@ -0,0 +1,26 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'salt/map.jinja' import SALTVERSION %}
{% from 'salt/map.jinja' import INSTALLEDSALTVERSION %}
{% from 'salt/map.jinja' import SYSTEMD_UNIT_FILE %}
include:
- systemd.reload
{% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
# prior to 2.4.30 this managed file would restart the salt-minion service when updated
# since this file is currently only adding a delay to service start
# it is not required to restart the service
salt_minion_service_unit_file:
file.managed:
- name: {{ SYSTEMD_UNIT_FILE }}
- source: salt://salt/service/salt-minion.service.jinja
- template: jinja
- onchanges_in:
- module: systemd_reload
{% endif %}

View File

@@ -1,5 +1,10 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
highstate_schedule:
schedule.present:
- function: state.highstate
- minutes: 15
- maxrunning: 1
{% if not GLOBALS.is_manager %}
- splay: 120
{% endif %}

View File

@@ -0,0 +1,4 @@
sensor:
interface: bond0
mtu: 9000
channels: 1

View File

@@ -9,6 +9,8 @@
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% from 'sensor/map.jinja' import SENSORMERGED %}
{% if 'vrt' in salt['pillar.get']('features') and salt['grains.get']('salt-cloud', {}) %}
include:
@@ -28,3 +30,18 @@ execute_checksum:
- name: /etc/NetworkManager/dispatcher.d/pre-up.d/99-so-checksum-offload-disable
- onchanges:
- file: offload_script
combine_bond_script:
file.managed:
- name: /usr/sbin/so-combine-bond
- source: salt://sensor/tools/sbin_jinja/so-combine-bond
- mode: 755
- template: jinja
- defaults:
CHANNELS: {{ SENSORMERGED.channels }}
execute_combine_bond:
cmd.run:
- name: /usr/sbin/so-combine-bond
- onlyif:
- ip link show bond0

salt/sensor/map.jinja Normal file
View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'sensor/defaults.yaml' as SENSORDEFAULTS %}
{% set SENSORMERGED = salt['pillar.get']('sensor', SENSORDEFAULTS.sensor, merge=True) %}

View File

@@ -7,3 +7,9 @@ sensor:
description: Maximum Transmission Unit (MTU) of the sensor monitoring interface.
helpLink: network.html
readonly: True
channels:
description: Set the number of NIC channels. This is rarely changed from 1.
helpLink: network.html
forcedType: int
node: True
advanced: True
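
Channel settings are applied to the physical members of bond0 by the so-combine-bond script shown below; current and maximum values for a member NIC can be inspected with ethtool (the interface name here is only an example):

```
# Sketch: inspect channel parameters on a bond0 member interface
ethtool -l ens1f0
```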

View File

@@ -0,0 +1,70 @@
#!/bin/bash
# Script to find all interfaces of bond0 and set channel parameters
# Compatible with Oracle Linux 9, Ubuntu, and Debian
. /usr/sbin/so-common
# Number of channels to set
CHANNELS={{ CHANNELS }}
# Exit on any error
set -e
# Check if running as root
if [[ $EUID -ne 0 ]]; then
exit 1
fi
# Check if bond0 exists
if ! ip link show bond0 &>/dev/null; then
exit 0
fi
# Function to get slave interfaces - works across distributions
get_bond_slaves() {
local bond_name="$1"
local slaves=""
# Method 1: Try /sys/class/net first (most reliable)
if [ -f "/sys/class/net/$bond_name/bonding/slaves" ]; then
slaves=$(cat "/sys/class/net/$bond_name/bonding/slaves" 2>/dev/null)
fi
# Method 2: Try /proc/net/bonding (older systems)
if [ -z "$slaves" ] && [ -f "/proc/net/bonding/$bond_name" ]; then
slaves=$(grep "Slave Interface:" "/proc/net/bonding/$bond_name" 2>/dev/null | awk '{print $3}' | tr '\n' ' ')
fi
# Method 3: Parse ip link output (universal fallback)
if [ -z "$slaves" ]; then
slaves=$(ip -o link show | grep "master $bond_name" | awk -F': ' '{print $2}' | cut -d'@' -f1 | tr '\n' ' ')
fi
echo "$slaves"
}
# Get slave interfaces
SLAVES=$(get_bond_slaves bond0)
if [ -z "$SLAVES" ]; then
exit 0
fi
# Process each slave interface
for interface in $SLAVES; do
# Skip if interface doesn't exist
if ! ip link show "$interface" &>/dev/null; then
continue
fi
# Try combined mode first
if ethtool -L "$interface" combined $CHANNELS &>/dev/null; then
continue
fi
# Fall back to separate rx/tx
ethtool -L "$interface" rx $CHANNELS tx $CHANNELS &>/dev/null || true
done
exit 0

View File

@@ -18,6 +18,7 @@ sensoroniagentconf:
- group: 939
- mode: 600
- template: jinja
- show_changes: False
analyzersdir:
file.directory:
@@ -43,6 +44,22 @@ analyzerscripts:
- source: salt://sensoroni/files/analyzers
- show_changes: False
templatesdir:
file.directory:
- name: /opt/so/conf/sensoroni/templates
- user: 939
- group: 939
- makedirs: True
sensoronitemplates:
file.recurse:
- name: /opt/so/conf/sensoroni/templates
- source: salt://sensoroni/files/templates
- user: 939
- group: 939
- file_mode: 664
- show_changes: False
sensoroni_sbin:
file.recurse:
- name: /usr/sbin

View File

@@ -34,6 +34,8 @@ sensoroni:
api_version: community
localfile:
file_path: []
malwarebazaar:
api_key:
otx:
base_url: https://otx.alienvault.com/api/v1/
api_key:
@@ -49,12 +51,16 @@ sensoroni:
live_flow: False
mailbox_email_address:
message_source_id:
threatfox:
api_key:
urlscan:
base_url: https://urlscan.io/api/v1/
api_key:
enabled: False
visibility: public
timeout: 180
urlhaus:
api_key:
virustotal:
base_url: https://www.virustotal.com/api/v3/search?query=
api_key:

View File

@@ -22,6 +22,7 @@ so-sensoroni:
- /nsm/pcapout:/nsm/pcapout:rw
- /opt/so/conf/sensoroni/sensoroni.json:/opt/sensoroni/sensoroni.json:ro
- /opt/so/conf/sensoroni/analyzers:/opt/sensoroni/analyzers:rw
- /opt/so/conf/sensoroni/templates:/opt/sensoroni/templates:ro
- /opt/so/log/sensoroni:/opt/sensoroni/logs:rw
- /nsm/suripcap/:/nsm/suripcap:rw
{% if DOCKER.containers['so-sensoroni'].custom_bind_mounts %}

View File

@@ -35,15 +35,15 @@ Many analyzers require authentication, via an API key or similar. The table belo
[EchoTrail](https://www.echotrail.io/docs/quickstart) |&check;|
[EmailRep](https://emailrep.io/key) |&check;|
[Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/setting-up-authentication.html) |&check;|
[GreyNoise](https://www.greynoise.io/plans/community) |&check;|
[GreyNoise (community)](https://www.greynoise.io/plans/community) |&cross;|
[LocalFile](https://github.com/Security-Onion-Solutions/securityonion/tree/fix/sublime_analyzer_documentation/salt/sensoroni/files/analyzers/localfile) |&cross;|
[Malware Hash Registry](https://hash.cymru.com/docs_whois) |&cross;|
[MalwareBazaar](https://bazaar.abuse.ch/) |&cross;|
[MalwareBazaar](https://bazaar.abuse.ch/) |&check;|
[Pulsedive](https://pulsedive.com/api/) |&check;|
[Spamhaus](https://www.spamhaus.org/dbl/) |&cross;|
[Sublime Platform](https://sublime.security) |&check;|
[ThreatFox](https://threatfox.abuse.ch/) |&cross;|
[Urlhaus](https://urlhaus.abuse.ch/) |&cross;|
[ThreatFox](https://threatfox.abuse.ch/) |&check;|
[Urlhaus](https://urlhaus.abuse.ch/) |&check;|
[Urlscan](https://urlscan.io/docs/api/) |&check;|
[VirusTotal](https://developers.virustotal.com/reference/overview) |&check;|
[WhoisLookup](https://github.com/meeb/whoisit) |&cross;|

Some files were not shown because too many files have changed in this diff.