Compare commits

..

7 Commits

Author  SHA1  Date  Message

Jorge Reyes  4014741562  2025-10-07 13:11:55 -05:00
  Merge pull request #15113 from Security-Onion-Solutions/reyesj2/es-8188
  generate new elastic agents in post soup
Jorge Reyes  76f500f701  2025-10-06 16:51:18 -05:00
  temp patch for soup'n
Jorge Reyes  dcfe6a1674  2025-10-06 16:26:34 -05:00
  Merge pull request #15110 from Security-Onion-Solutions/reyesj2/es-8188
  Elastic 8.18.8 elastic agent build
Jorge Reyes  325e7ff44e  2025-10-06 16:23:55 -05:00
  Merge pull request #15109 from Security-Onion-Solutions/reyesj2/es-8188
  es upgrade 8.18.8 pipeline updates
Jorge Reyes  ece25176cd  2025-10-06 12:57:21 -05:00
  Merge pull request #15108 from Security-Onion-Solutions/reyesj2/es-8188
  es 8.18.8
Jorge Reyes  5186603dbd  2025-10-06 12:42:47 -05:00
  Merge pull request #15107 from Security-Onion-Solutions/2.4/dev
  2.4/dev
Jorge Reyes  37bfd9eb30  2025-10-01 15:36:54 -05:00
  Update VERSION
56 changed files with 85 additions and 3204 deletions

View File

@@ -32,7 +32,6 @@ body:
         - 2.4.170
         - 2.4.180
         - 2.4.190
-        - 2.4.200
         - Other (please provide detail below)
     validations:
       required: true

View File

@@ -1,17 +1,17 @@
-### 2.4.190-20251024 ISO image released on 2025/10/24
+### 2.4.180-20250916 ISO image released on 2025/09/17
 ### Download and Verify
-2.4.190-20251024 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
-MD5: 25358481FB876226499C011FC0710358
-SHA1: 0B26173C0CE136F2CA40A15046D1DFB78BCA1165
-SHA256: 4FD9F62EDA672408828B3C0C446FE5EA9FF3C4EE8488A7AB1101544A3C487872
+2.4.180-20250916 ISO image:
+https://download.securityonion.net/file/securityonion/securityonion-2.4.180-20250916.iso
+MD5: DE93880E38DE4BE45D05A41E1745CB1F
+SHA1: AEA6948911E50A4A38E8729E0E965C565402E3FC
+SHA256: C9BD8CA071E43B048ABF9ED145B87935CB1D4BB839B2244A06FAD1BBA8EAC84A
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.180-20250916.iso.sig
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.180-20250916.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.4.180-20250916.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.4.190-20251024.iso.sig securityonion-2.4.190-20251024.iso
+gpg --verify securityonion-2.4.180-20250916.iso.sig securityonion-2.4.180-20250916.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Thu 23 Oct 2025 07:21:46 AM EDT using RSA key ID FE507013
+gpg: Signature made Tue 16 Sep 2025 06:30:19 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.

View File

@@ -1 +1 @@
-2.4.200
+2.4.0-foxtrot

View File

@@ -1,91 +0,0 @@
#!/opt/saltstack/salt/bin/python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
"""
Salt execution module for hypervisor operations.
This module provides functions for managing hypervisor configurations,
including VM file management.
"""
import json
import logging
import os
log = logging.getLogger(__name__)
__virtualname__ = 'hypervisor'
def __virtual__():
    """
    Only load this module if we're on a system that can manage hypervisors.
    """
    return __virtualname__

def remove_vm_from_vms_file(vms_file_path, vm_hostname, vm_role):
    """
    Remove a VM entry from the hypervisorVMs file.

    Args:
        vms_file_path (str): Path to the hypervisorVMs file
        vm_hostname (str): Hostname of the VM to remove (without role suffix)
        vm_role (str): Role of the VM

    Returns:
        dict: Result dictionary with success status and message

    CLI Example:
        salt '*' hypervisor.remove_vm_from_vms_file /opt/so/saltstack/local/salt/hypervisor/hosts/hypervisor1VMs node1 nsm
    """
    try:
        # Check if file exists
        if not os.path.exists(vms_file_path):
            msg = f"VMs file not found: {vms_file_path}"
            log.error(msg)
            return {'result': False, 'comment': msg}
        # Read current VMs
        with open(vms_file_path, 'r') as f:
            content = f.read().strip()
        vms = json.loads(content) if content else []
        # Find and remove the VM entry
        original_count = len(vms)
        vms = [vm for vm in vms if not (vm.get('hostname') == vm_hostname and vm.get('role') == vm_role)]
        if len(vms) < original_count:
            # VM was found and removed, write back to file
            with open(vms_file_path, 'w') as f:
                json.dump(vms, f, indent=2)
            # Set socore:socore ownership (939:939)
            os.chown(vms_file_path, 939, 939)
            msg = f"Removed VM {vm_hostname}_{vm_role} from {vms_file_path}"
            log.info(msg)
            return {'result': True, 'comment': msg}
        else:
            msg = f"VM {vm_hostname}_{vm_role} not found in {vms_file_path}"
            log.warning(msg)
            return {'result': False, 'comment': msg}
    except json.JSONDecodeError as e:
        msg = f"Failed to parse JSON in {vms_file_path}: {str(e)}"
        log.error(msg)
        return {'result': False, 'comment': msg}
    except Exception as e:
        msg = f"Failed to remove VM {vm_hostname}_{vm_role} from {vms_file_path}: {str(e)}"
        log.error(msg)
        return {'result': False, 'comment': msg}
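The core of the module above is a list-comprehension filter over the JSON VMs file. A standalone sketch of that logic (hypothetical VM entries, a temp file in place of the hypervisorVMs path; the chown to socore is omitted since it needs root):

```python
import json
import os
import tempfile

# Hypothetical VM entries in the same shape the module reads.
vms = [
    {"hostname": "node1", "role": "nsm"},
    {"hostname": "node2", "role": "sensor"},
]

with tempfile.NamedTemporaryFile("w", suffix="VMs", delete=False) as f:
    json.dump(vms, f)
    path = f.name

# Same filter the module applies: drop only the entry matching hostname AND role.
with open(path) as f:
    entries = json.load(f)
entries = [vm for vm in entries
           if not (vm.get("hostname") == "node1" and vm.get("role") == "nsm")]

with open(path, "w") as f:
    json.dump(entries, f, indent=2)

print(entries)
os.unlink(path)
```

Matching on both keys matters: two VMs may share a hostname across roles, so filtering on hostname alone would remove too much.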

View File

@@ -7,14 +7,12 @@
""" """
Salt module for managing QCOW2 image configurations and VM hardware settings. This module provides functions Salt module for managing QCOW2 image configurations and VM hardware settings. This module provides functions
for modifying network configurations within QCOW2 images, adjusting virtual machine hardware settings, and for modifying network configurations within QCOW2 images and adjusting virtual machine hardware settings.
creating virtual storage volumes. It serves as a Salt interface to the so-qcow2-modify-network, It serves as a Salt interface to the so-qcow2-modify-network and so-kvm-modify-hardware scripts.
so-kvm-modify-hardware, and so-kvm-create-volume scripts.
The module offers three main capabilities: The module offers two main capabilities:
1. Network Configuration: Modify network settings (DHCP/static IP) within QCOW2 images 1. Network Configuration: Modify network settings (DHCP/static IP) within QCOW2 images
2. Hardware Configuration: Adjust VM hardware settings (CPU, memory, PCI passthrough) 2. Hardware Configuration: Adjust VM hardware settings (CPU, memory, PCI passthrough)
3. Volume Management: Create and attach virtual storage volumes for NSM data
This module is intended to work with Security Onion's virtualization infrastructure and is typically This module is intended to work with Security Onion's virtualization infrastructure and is typically
used in conjunction with salt-cloud for VM provisioning and management. used in conjunction with salt-cloud for VM provisioning and management.
@@ -246,90 +244,3 @@ def modify_hardware_config(vm_name, cpu=None, memory=None, pci=None, start=False
except Exception as e: except Exception as e:
log.error('qcow2 module: An error occurred while executing the script: {}'.format(e)) log.error('qcow2 module: An error occurred while executing the script: {}'.format(e))
raise raise
def create_volume_config(vm_name, size_gb, start=False):
    '''
    Usage:
        salt '*' qcow2.create_volume_config vm_name=<name> size_gb=<size> [start=<bool>]

    Options:
        vm_name
            Name of the virtual machine to attach the volume to
        size_gb
            Volume size in GB (positive integer)
            This determines the capacity of the virtual storage volume
        start
            Boolean flag to start the VM after volume creation
            Optional - defaults to False

    Examples:
        1. **Create 500GB Volume:**
           ```bash
           salt '*' qcow2.create_volume_config vm_name='sensor1_sensor' size_gb=500
           ```
           This creates a 500GB virtual volume for NSM storage
        2. **Create 1TB Volume and Start VM:**
           ```bash
           salt '*' qcow2.create_volume_config vm_name='sensor1_sensor' size_gb=1000 start=True
           ```
           This creates a 1TB volume and starts the VM after attachment

    Notes:
        - VM must be stopped before volume creation
        - Volume is created as a qcow2 image and attached to the VM
        - This is an alternative to disk passthrough via modify_hardware_config
        - Volume is automatically attached to the VM's libvirt configuration
        - Requires so-kvm-create-volume script to be installed
        - Volume files are stored in the hypervisor's VM storage directory

    Description:
        This function creates and attaches a virtual storage volume to a KVM virtual machine
        using the so-kvm-create-volume script. It creates a qcow2 disk image of the specified
        size and attaches it to the VM for NSM (Network Security Monitoring) storage purposes.
        This provides an alternative to physical disk passthrough, allowing flexible storage
        allocation without requiring dedicated hardware. The VM can optionally be started
        after the volume is successfully created and attached.

    Exit Codes:
        0: Success
        1: Invalid parameters
        2: VM state error (running when should be stopped)
        3: Volume creation error
        4: System command error
        255: Unexpected error

    Logging:
        - All operations are logged to the salt minion log
        - Log entries are prefixed with 'qcow2 module:'
        - Volume creation and attachment operations are logged
        - Errors include detailed messages and stack traces
        - Final status of volume creation is logged
    '''
    # Validate size_gb parameter
    if not isinstance(size_gb, int) or size_gb <= 0:
        raise ValueError('size_gb must be a positive integer.')
    cmd = ['/usr/sbin/so-kvm-create-volume', '-v', vm_name, '-s', str(size_gb)]
    if start:
        cmd.append('-S')
    log.info('qcow2 module: Executing command: {}'.format(' '.join(shlex.quote(arg) for arg in cmd)))
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=False)
        ret = {
            'retcode': result.returncode,
            'stdout': result.stdout,
            'stderr': result.stderr
        }
        if result.returncode != 0:
            log.error('qcow2 module: Script execution failed with return code {}: {}'.format(result.returncode, result.stderr))
        else:
            log.info('qcow2 module: Script executed successfully.')
        return ret
    except Exception as e:
        log.error('qcow2 module: An error occurred while executing the script: {}'.format(e))
        raise

View File

@@ -1,21 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% set nsm_exists = salt['file.directory_exists']('/nsm') %}
{% if nsm_exists %}
{% set nsm_total = salt['cmd.shell']('df -BG /nsm | tail -1 | awk \'{print $2}\'') %}
nsm_total:
  grains.present:
    - name: nsm_total
    - value: {{ nsm_total }}
{% else %}
nsm_missing:
  test.succeed_without_changes:
    - name: /nsm does not exist, skipping grain assignment
{% endif %}
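The `cmd.shell` pipeline above grabs column 2 ("Size") of the last `df -BG /nsm` line. What that `tail -1 | awk '{print $2}'` parsing does, sketched in Python against made-up df output:

```python
# Hypothetical `df -BG /nsm` output; the real sizes will differ.
sample = """Filesystem     1G-blocks  Used Available Use% Mounted on
/dev/sdb1           916G  512G      404G  56% /nsm"""

# tail -1: keep only the last line
last_line = sample.strip().splitlines()[-1]
# awk '{print $2}': whitespace-split, take the second field
nsm_total = last_line.split()[1]
print(nsm_total)
```

The grain therefore stores a string like "916G", not a number; anything consuming it needs to strip the unit suffix before doing arithmetic.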

View File

@@ -4,7 +4,6 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 include:
-  - common.grains
   - common.packages
 {% if GLOBALS.role in GLOBALS.manager_roles %}
   - manager.elasticsearch # needed for elastic_curl_config state

View File

@@ -220,22 +220,12 @@ compare_es_versions() {
 }
 copy_new_files() {
-  # Define files to exclude from deletion (relative to their respective base directories)
-  local EXCLUDE_FILES=(
-    "salt/hypervisor/soc_hypervisor.yaml"
-  )
-  # Build rsync exclude arguments
-  local EXCLUDE_ARGS=()
-  for file in "${EXCLUDE_FILES[@]}"; do
-    EXCLUDE_ARGS+=(--exclude="$file")
-  done
   # Copy new files over to the salt dir
   cd $UPDATE_DIR
-  rsync -a salt $DEFAULT_SALT_DIR/ --delete "${EXCLUDE_ARGS[@]}"
-  rsync -a pillar $DEFAULT_SALT_DIR/ --delete "${EXCLUDE_ARGS[@]}"
+  rsync -a salt $DEFAULT_SALT_DIR/ --delete
+  rsync -a pillar $DEFAULT_SALT_DIR/ --delete
   chown -R socore:socore $DEFAULT_SALT_DIR/
-  chmod 755 $DEFAULT_SALT_DIR/pillar/firewall/addfirewall.sh
   cd /tmp
 }
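The removed exclude mechanism expands a list of protected paths into repeated `--exclude` flags for rsync, so those files survive `--delete`. The expansion can be sketched in Python (the destination directory value is illustrative):

```python
# Protected path taken from the diff above; each entry becomes one --exclude flag.
exclude_files = [
    "salt/hypervisor/soc_hypervisor.yaml",
]
exclude_args = [f"--exclude={path}" for path in exclude_files]

default_salt_dir = "/opt/so/saltstack/default"  # assumed value for illustration
cmd = ["rsync", "-a", "salt", f"{default_salt_dir}/", "--delete", *exclude_args]
print(" ".join(cmd))
```

With `--delete`, rsync removes destination files absent from the source; an exclude pattern exempts matching paths from both copying and deletion, which is what preserves a locally generated file across soup runs.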
@@ -333,8 +323,8 @@ get_elastic_agent_vars() {
   if [ -f "$defaultsfile" ]; then
     ELASTIC_AGENT_TARBALL_VERSION=$(egrep " +version: " $defaultsfile | awk -F: '{print $2}' | tr -d '[:space:]')
-    ELASTIC_AGENT_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
-    ELASTIC_AGENT_MD5_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
+    ELASTIC_AGENT_URL="https://demo.jorgereyes.dev/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
+    ELASTIC_AGENT_MD5_URL="https://demo.jorgereyes.dev/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
     ELASTIC_AGENT_FILE="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
     ELASTIC_AGENT_MD5="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
     ELASTIC_AGENT_EXPANSION_DIR=/nsm/elastic-fleet/artifacts/beats/elastic-agent

View File

@@ -222,7 +222,6 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Initialized license manager" # SOC log: before fields.status was changed to fields.licenseStatus
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|from NIC checksum offloading" # zeek reporter.log
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|marked for removal" # docker container getting recycled
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tcp 127.0.0.1:6791: bind: address already in use" # so-elastic-fleet agent restarting. Seen starting w/ 8.18.8 https://github.com/elastic/kibana/issues/201459
 fi
 RESULT=0
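EXCLUDED_ERRORS is built up as a pipe-separated alternation and handed to an extended-regex grep with `-v`; the same filtering can be sketched in Python (the sample log lines are made up):

```python
import re

# Patterns from the diff above, joined into one alternation like the script does.
excluded_errors = "|".join([
    "Initialized license manager",
    "from NIC checksum offloading",
    "marked for removal",
])

lines = [
    "ERROR: container marked for removal",   # matches a known pattern, filtered out
    "ERROR: something actually unexpected",  # kept for investigation
]
# grep -Ev keeps only lines matching none of the alternatives.
kept = [line for line in lines if not re.search(excluded_errors, line)]
print(kept)
```

One caveat of this approach: each appended pattern is regex, so literal characters like `.` or `:` in a pattern can match more broadly than intended unless escaped.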

View File

@@ -40,7 +40,7 @@
       "enabled": true,
       "vars": {
         "paths": [
-          "/opt/so/log/elasticsearch/*.json"
+          "/opt/so/log/elasticsearch/*.log"
         ]
       }
     },

View File

@@ -1991,70 +1991,6 @@ elasticsearch:
               set_priority:
                 priority: 50
             min_age: 30d
-    so-logs-elasticsearch_x_server:
-      index_sorting: false
-      index_template:
-        composed_of:
-          - logs-elasticsearch.server@package
-          - logs-elasticsearch.server@custom
-          - so-fleet_integrations.ip_mappings-1
-          - so-fleet_globals-1
-          - so-fleet_agent_id_verification-1
-        data_stream:
-          allow_custom_routing: false
-          hidden: false
-        ignore_missing_component_templates:
-          - logs-elasticsearch.server@custom
-        index_patterns:
-          - logs-elasticsearch.server-*
-        priority: 501
-        template:
-          mappings:
-            _meta:
-              managed: true
-              managed_by: security_onion
-              package:
-                name: elastic_agent
-          settings:
-            index:
-              lifecycle:
-                name: so-logs-elasticsearch.server-logs
-              mapping:
-                total_fields:
-                  limit: 5000
-              number_of_replicas: 0
-              sort:
-                field: '@timestamp'
-                order: desc
-      policy:
-        _meta:
-          managed: true
-          managed_by: security_onion
-          package:
-            name: elastic_agent
-        phases:
-          cold:
-            actions:
-              set_priority:
-                priority: 0
-            min_age: 60d
-          delete:
-            actions:
-              delete: {}
-            min_age: 365d
-          hot:
-            actions:
-              rollover:
-                max_age: 30d
-                max_primary_shard_size: 50gb
-              set_priority:
-                priority: 100
-            min_age: 0ms
-          warm:
-            actions:
-              set_priority:
-                priority: 50
-            min_age: 30d
     so-logs-endpoint_x_actions:
       index_sorting: false
       index_template:

View File

@@ -23,7 +23,6 @@
 { "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
 { "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
 { "set": { "if": "ctx?.metadata?.kafka != null" , "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
-{ "set": { "if": "ctx.event?.dataset != null && ctx.event?.dataset == 'elasticsearch.server'", "field": "event.module", "value":"elasticsearch" }},
 {"append": {"field":"related.ip","value":["{{source.ip}}","{{destination.ip}}"],"allow_duplicates":false,"if":"ctx?.event?.dataset == 'endpoint.events.network' && ctx?.source?.ip != null","ignore_failure":true}},
 {"foreach": {"field":"host.ip","processor":{"append":{"field":"related.ip","value":"{{_ingest._value}}","allow_duplicates":false}},"if":"ctx?.event?.module == 'endpoint' && ctx?.host?.ip != null","ignore_missing":true, "description":"Extract IPs from Elastic Agent events (host.ip) and adds them to related.ip"}},
 { "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp", "datastream_dataset_temp" ], "ignore_missing": true, "ignore_failure": true } }
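The processor dropped here forced `event.module` to "elasticsearch" for `elasticsearch.server` datasets. Its Painless condition translates roughly to this Python sketch:

```python
# Rough Python equivalent of the removed ingest "set" processor.
def apply_set_module(doc):
    dataset = doc.get("event", {}).get("dataset")
    # Mirrors: ctx.event?.dataset != null && ctx.event?.dataset == 'elasticsearch.server'
    if dataset is not None and dataset == "elasticsearch.server":
        doc["event"]["module"] = "elasticsearch"
    return doc

doc = {"event": {"dataset": "elasticsearch.server"}}
print(apply_set_module(doc)["event"]["module"])
```

The `?.` null-safe operators in the Painless condition correspond to the `.get(..., {})` chain: a document with no `event` object simply fails the condition instead of raising.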

View File

@@ -20,28 +20,8 @@ appender.rolling.strategy.type = DefaultRolloverStrategy
 appender.rolling.strategy.action.type = Delete
 appender.rolling.strategy.action.basepath = /var/log/elasticsearch
 appender.rolling.strategy.action.condition.type = IfFileName
-appender.rolling.strategy.action.condition.glob = *.log.gz
+appender.rolling.strategy.action.condition.glob = *.gz
 appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
 appender.rolling.strategy.action.condition.nested_condition.age = 7D
-appender.rolling_json.type = RollingFile
-appender.rolling_json.name = rolling_json
-appender.rolling_json.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.json
-appender.rolling_json.layout.type = ECSJsonLayout
-appender.rolling_json.layout.dataset = elasticsearch.server
-appender.rolling_json.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.json.gz
-appender.rolling_json.policies.type = Policies
-appender.rolling_json.policies.time.type = TimeBasedTriggeringPolicy
-appender.rolling_json.policies.time.interval = 1
-appender.rolling_json.policies.time.modulate = true
-appender.rolling_json.strategy.type = DefaultRolloverStrategy
-appender.rolling_json.strategy.action.type = Delete
-appender.rolling_json.strategy.action.basepath = /var/log/elasticsearch
-appender.rolling_json.strategy.action.condition.type = IfFileName
-appender.rolling_json.strategy.action.condition.glob = *.json.gz
-appender.rolling_json.strategy.action.condition.nested_condition.type = IfLastModified
-appender.rolling_json.strategy.action.condition.nested_condition.age = 1D
 rootLogger.level = info
 rootLogger.appenderRef.rolling.ref = rolling
-rootLogger.appenderRef.rolling_json.ref = rolling_json

View File

@@ -392,7 +392,6 @@ elasticsearch:
     so-logs-elastic_agent_x_metricbeat: *indexSettings
     so-logs-elastic_agent_x_osquerybeat: *indexSettings
    so-logs-elastic_agent_x_packetbeat: *indexSettings
-    so-logs-elasticsearch_x_server: *indexSettings
     so-metrics-endpoint_x_metadata: *indexSettings
     so-metrics-endpoint_x_metrics: *indexSettings
     so-metrics-endpoint_x_policy: *indexSettings

View File

@@ -58,26 +58,10 @@
 {% set role = vm.get('role', '') %}
 {% do salt.log.debug('salt/hypervisor/map.jinja: Processing VM - hostname: ' ~ hostname ~ ', role: ' ~ role) %}
-{# Try to load VM configuration from config file first, then .error file if config doesn't exist #}
+{# Load VM configuration from config file #}
 {% set vm_file = 'hypervisor/hosts/' ~ hypervisor ~ '/' ~ hostname ~ '_' ~ role %}
-{% set vm_error_file = vm_file ~ '.error' %}
 {% do salt.log.debug('salt/hypervisor/map.jinja: VM config file: ' ~ vm_file) %}
-{# Check if base config file exists #}
-{% set config_exists = salt['file.file_exists']('/opt/so/saltstack/local/salt/' ~ vm_file) %}
-{% set error_exists = salt['file.file_exists']('/opt/so/saltstack/local/salt/' ~ vm_error_file) %}
-{% set vm_state = none %}
-{% if config_exists %}
-{% import_json vm_file as vm_state %}
-{% do salt.log.debug('salt/hypervisor/map.jinja: Loaded VM config from base file') %}
-{% elif error_exists %}
-{% import_json vm_error_file as vm_state %}
-{% do salt.log.debug('salt/hypervisor/map.jinja: Loaded VM config from .error file') %}
-{% else %}
-{% do salt.log.warning('salt/hypervisor/map.jinja: No config or error file found for VM ' ~ hostname ~ '_' ~ role) %}
-{% endif %}
+{% import_json vm_file as vm_state %}
 {% if vm_state %}
 {% do salt.log.debug('salt/hypervisor/map.jinja: VM config content: ' ~ vm_state | tojson) %}
 {% set vm_data = {'config': vm_state.config} %}
@@ -101,7 +85,7 @@
 {% endif %}
 {% do vms.update({hostname ~ '_' ~ role: vm_data}) %}
 {% else %}
-{% do salt.log.debug('salt/hypervisor/map.jinja: Skipping VM ' ~ hostname ~ '_' ~ role ~ ' - no config available') %}
+{% do salt.log.debug('salt/hypervisor/map.jinja: Config file empty: ' ~ vm_file) %}
 {% endif %}
 {% endfor %}

View File

@@ -1,586 +0,0 @@
#!/usr/bin/python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% if 'vrt' in salt['pillar.get']('features', []) %}
"""
Script for creating and attaching virtual volumes to KVM virtual machines for NSM storage.
This script provides functionality to create pre-allocated raw disk images and attach them
to VMs as virtio-blk devices for high-performance network security monitoring data storage.
The script handles the complete volume lifecycle:
1. Volume Creation: Creates pre-allocated raw disk images using qemu-img
2. Volume Attachment: Attaches volumes to VMs as virtio-blk devices
3. VM Management: Stops/starts VMs as needed during the process
This script is designed to work with Security Onion's virtualization infrastructure and is typically
used during VM provisioning to add dedicated NSM storage volumes.
**Usage:**
so-kvm-create-volume -v <vm_name> -s <size_gb> [-S]
**Options:**
-v, --vm Name of the virtual machine to attach the volume to (required).
-s, --size Size of the volume in GB (required, must be a positive integer).
-S, --start Start the VM after volume creation and attachment (optional).
**Examples:**
1. **Create and Attach 500GB Volume:**
```bash
so-kvm-create-volume -v vm1_sensor -s 500
```
This command creates and attaches a volume with the following settings:
- VM Name: `vm1_sensor`
- Volume Size: `500` GB
- Volume Path: `/nsm/libvirt/volumes/vm1_sensor-nsm.img`
- Device: `/dev/vdb` (virtio-blk)
- VM remains stopped after attachment
2. **Create Volume and Start VM:**
```bash
so-kvm-create-volume -v vm2_sensor -s 1000 -S
```
This command creates a volume and starts the VM:
- VM Name: `vm2_sensor`
- Volume Size: `1000` GB (1 TB)
- VM is started after volume attachment due to the `-S` flag
3. **Create Large Volume for Heavy Traffic:**
```bash
so-kvm-create-volume -v vm3_sensor -s 2000 -S
```
This command creates a large volume for high-traffic environments:
- VM Name: `vm3_sensor`
- Volume Size: `2000` GB (2 TB)
- VM is started after attachment
**Notes:**
- The script automatically stops the VM if it's running before creating and attaching the volume.
- Volumes are created with full pre-allocation for optimal performance.
- Volume files are stored in `/nsm/libvirt/volumes/` with naming pattern `<vm_name>-nsm.img`.
- Volumes are attached as `/dev/vdb` using virtio-blk for high performance.
- The script checks available disk space before creating the volume.
- Ownership is set to `qemu:qemu` with permissions `640`.
- Without the `-S` flag, the VM remains stopped after volume attachment.
**Description:**
The `so-kvm-create-volume` script creates and attaches NSM storage volumes using the following process:
1. **Pre-flight Checks:**
- Validates input parameters (VM name, size)
- Checks available disk space in `/nsm/libvirt/volumes/`
- Ensures sufficient space for the requested volume size
2. **VM State Management:**
- Connects to the local libvirt daemon
- Stops the VM if it's currently running
- Retrieves current VM configuration
3. **Volume Creation:**
- Creates volume directory if it doesn't exist
- Uses `qemu-img create` with full pre-allocation
- Sets proper ownership (qemu:qemu) and permissions (640)
- Validates volume creation success
4. **Volume Attachment:**
- Modifies VM's libvirt XML configuration
- Adds disk element with virtio-blk driver
- Configures cache='none' and io='native' for performance
- Attaches volume as `/dev/vdb`
5. **VM Redefinition:**
- Applies the new configuration by redefining the VM
- Optionally starts the VM if requested
- Emits deployment status events for monitoring
6. **Error Handling:**
- Validates all input parameters
- Checks disk space before creation
- Handles volume creation failures
- Handles volume attachment failures
- Provides detailed error messages for troubleshooting
**Exit Codes:**
- `0`: Success
- `1`: An error occurred during execution
**Logging:**
- Logs are written to `/opt/so/log/hypervisor/so-kvm-create-volume.log`
- Both file and console logging are enabled for real-time monitoring
- Log entries include timestamps and severity levels
- Log prefixes: VOLUME:, VM:, HARDWARE:, SPACE:
- Detailed error messages are logged for troubleshooting
"""
import argparse
import sys
import os
import libvirt
import logging
import socket
import subprocess
import pwd
import grp
import xml.etree.ElementTree as ET
from io import StringIO
from so_vm_utils import start_vm, stop_vm
from so_logging_utils import setup_logging
# Get hypervisor name from local hostname
HYPERVISOR = socket.gethostname()
# Volume storage directory
VOLUME_DIR = '/nsm/libvirt/volumes'
# Custom exception classes
class InsufficientSpaceError(Exception):
"""Raised when there is insufficient disk space for volume creation."""
pass
class VolumeCreationError(Exception):
"""Raised when volume creation fails."""
pass
class VolumeAttachmentError(Exception):
"""Raised when volume attachment fails."""
pass
# Custom log handler to capture output
class StringIOHandler(logging.Handler):
def __init__(self):
super().__init__()
self.strio = StringIO()
def emit(self, record):
msg = self.format(record)
self.strio.write(msg + '\n')
def get_value(self):
return self.strio.getvalue()
def parse_arguments():
"""Parse command-line arguments."""
parser = argparse.ArgumentParser(description='Create and attach a virtual volume to a KVM virtual machine for NSM storage.')
parser.add_argument('-v', '--vm', required=True, help='Name of the virtual machine to attach the volume to.')
parser.add_argument('-s', '--size', type=int, required=True, help='Size of the volume in GB (must be a positive integer).')
parser.add_argument('-S', '--start', action='store_true', help='Start the VM after volume creation and attachment.')
args = parser.parse_args()
# Validate size is positive
if args.size <= 0:
parser.error("Volume size must be a positive integer.")
return args
def check_disk_space(size_gb, logger):
"""
Check if there is sufficient disk space available for volume creation.
Args:
size_gb: Size of the volume in GB
logger: Logger instance
Raises:
InsufficientSpaceError: If there is not enough disk space
"""
try:
stat = os.statvfs(VOLUME_DIR)
# Available space in bytes
available_bytes = stat.f_bavail * stat.f_frsize
# Required space in bytes (add 10% buffer)
required_bytes = size_gb * 1024 * 1024 * 1024 * 1.1
available_gb = available_bytes / (1024 * 1024 * 1024)
required_gb = required_bytes / (1024 * 1024 * 1024)
logger.info(f"SPACE: Available: {available_gb:.2f} GB, Required: {required_gb:.2f} GB")
if available_bytes < required_bytes:
raise InsufficientSpaceError(
f"Insufficient disk space. Available: {available_gb:.2f} GB, Required: {required_gb:.2f} GB"
)
logger.info(f"SPACE: Sufficient disk space available for {size_gb} GB volume")
except OSError as e:
logger.error(f"SPACE: Failed to check disk space: {e}")
raise
def create_volume_file(vm_name, size_gb, logger):
"""
Create a pre-allocated raw disk image for the VM.
Args:
vm_name: Name of the VM
size_gb: Size of the volume in GB
logger: Logger instance
Returns:
Path to the created volume file
Raises:
VolumeCreationError: If volume creation fails
"""
# Define volume path (directory already created in main())
volume_path = os.path.join(VOLUME_DIR, f"{vm_name}-nsm.img")
# Check if volume already exists
if os.path.exists(volume_path):
logger.error(f"VOLUME: Volume already exists: {volume_path}")
raise VolumeCreationError(f"Volume already exists: {volume_path}")
logger.info(f"VOLUME: Creating {size_gb} GB volume at {volume_path}")
# Create volume using qemu-img with full pre-allocation
try:
cmd = [
'qemu-img', 'create',
'-f', 'raw',
'-o', 'preallocation=full',
volume_path,
f"{size_gb}G"
]
result = subprocess.run(
cmd,
capture_output=True,
text=True,
check=True
)
logger.info("VOLUME: Volume created successfully")
if result.stdout:
logger.debug(f"VOLUME: qemu-img output: {result.stdout.strip()}")
except subprocess.CalledProcessError as e:
logger.error(f"VOLUME: Failed to create volume: {e}")
if e.stderr:
logger.error(f"VOLUME: qemu-img error: {e.stderr.strip()}")
raise VolumeCreationError(f"Failed to create volume: {e}")
# Set ownership to qemu:qemu
try:
qemu_uid = pwd.getpwnam('qemu').pw_uid
qemu_gid = grp.getgrnam('qemu').gr_gid
os.chown(volume_path, qemu_uid, qemu_gid)
logger.info("VOLUME: Set ownership to qemu:qemu")
except (KeyError, OSError) as e:
logger.error(f"VOLUME: Failed to set ownership: {e}")
raise VolumeCreationError(f"Failed to set ownership: {e}")
# Set permissions to 640
try:
os.chmod(volume_path, 0o640)
logger.info("VOLUME: Set permissions to 640")
except OSError as e:
logger.error(f"VOLUME: Failed to set permissions: {e}")
raise VolumeCreationError(f"Failed to set permissions: {e}")
# Verify volume was created
if not os.path.exists(volume_path):
logger.error(f"VOLUME: Volume file not found after creation: {volume_path}")
raise VolumeCreationError(f"Volume file not found after creation: {volume_path}")
volume_size = os.path.getsize(volume_path)
logger.info(f"VOLUME: Volume created: {volume_path} ({volume_size} bytes)")
return volume_path
def attach_volume_to_vm(conn, vm_name, volume_path, logger):
"""
Attach the volume to the VM's libvirt XML configuration.
Args:
conn: Libvirt connection
vm_name: Name of the VM
volume_path: Path to the volume file
logger: Logger instance
Raises:
VolumeAttachmentError: If volume attachment fails
"""
try:
# Get the VM domain
dom = conn.lookupByName(vm_name)
# Get the XML description of the VM
xml_desc = dom.XMLDesc()
root = ET.fromstring(xml_desc)
# Find the devices element
devices_elem = root.find('./devices')
if devices_elem is None:
logger.error("VM: Could not find <devices> element in XML")
raise VolumeAttachmentError("Could not find <devices> element in VM XML")
# Log ALL devices with PCI addresses to find conflicts
logger.info("DISK_DEBUG: Examining ALL devices with PCI addresses")
for device in devices_elem:
address = device.find('./address')
if address is not None and address.get('type') == 'pci':
bus = address.get('bus', 'unknown')
slot = address.get('slot', 'unknown')
function = address.get('function', 'unknown')
logger.info(f"DISK_DEBUG: Device {device.tag}: bus={bus}, slot={slot}, function={function}")
# Log existing disk configuration for debugging
logger.info("DISK_DEBUG: Examining existing disk configuration")
existing_disks = devices_elem.findall('./disk')
for idx, disk in enumerate(existing_disks):
target = disk.find('./target')
source = disk.find('./source')
address = disk.find('./address')
dev_name = target.get('dev') if target is not None else 'unknown'
source_file = source.get('file') if source is not None else 'unknown'
if address is not None:
slot = address.get('slot', 'unknown')
bus = address.get('bus', 'unknown')
logger.info(f"DISK_DEBUG: Disk {idx}: dev={dev_name}, source={source_file}, slot={slot}, bus={bus}")
else:
logger.info(f"DISK_DEBUG: Disk {idx}: dev={dev_name}, source={source_file}, no address element")
# Check if vdb already exists
for disk in devices_elem.findall('./disk'):
target = disk.find('./target')
if target is not None and target.get('dev') == 'vdb':
logger.error("VM: Device vdb already exists in VM configuration")
raise VolumeAttachmentError("Device vdb already exists in VM configuration")
logger.info(f"VM: Attaching volume to {vm_name} as /dev/vdb")
# Create disk element
disk_elem = ET.SubElement(devices_elem, 'disk', attrib={
'type': 'file',
'device': 'disk'
})
# Add driver element
ET.SubElement(disk_elem, 'driver', attrib={
'name': 'qemu',
'type': 'raw',
'cache': 'none',
'io': 'native'
})
# Add source element
ET.SubElement(disk_elem, 'source', attrib={
'file': volume_path
})
# Add target element
ET.SubElement(disk_elem, 'target', attrib={
'dev': 'vdb',
'bus': 'virtio'
})
# Add address element
# Use bus 0x07 with slot 0x00 to ensure the NSM volume appears after the OS disk (which is on bus 0x04)
# Bus 0x05 is used by the memballoon device, bus 0x06 is used by the rng device
# Each PCIe root port exposes a single device slot, so devices on non-zero buses must use slot 0x00
# This ensures vda = OS disk, vdb = NSM volume
ET.SubElement(disk_elem, 'address', attrib={
'type': 'pci',
'domain': '0x0000',
'bus': '0x07',
'slot': '0x00',
'function': '0x0'
})
logger.info("HARDWARE: Added disk configuration for vdb")
# Log disk ordering after adding new disk
logger.info("DISK_DEBUG: Disk configuration after adding NSM volume")
all_disks = devices_elem.findall('./disk')
for idx, disk in enumerate(all_disks):
target = disk.find('./target')
source = disk.find('./source')
address = disk.find('./address')
dev_name = target.get('dev') if target is not None else 'unknown'
source_file = source.get('file') if source is not None else 'unknown'
if address is not None:
slot = address.get('slot', 'unknown')
bus = address.get('bus', 'unknown')
logger.info(f"DISK_DEBUG: Disk {idx}: dev={dev_name}, source={source_file}, slot={slot}, bus={bus}")
else:
logger.info(f"DISK_DEBUG: Disk {idx}: dev={dev_name}, source={source_file}, no address element")
# Convert XML back to string
new_xml_desc = ET.tostring(root, encoding='unicode')
# Redefine the VM with the new XML
conn.defineXML(new_xml_desc)
logger.info("VM: VM redefined with volume attached")
except libvirt.libvirtError as e:
logger.error(f"VM: Failed to attach volume: {e}")
raise VolumeAttachmentError(f"Failed to attach volume: {e}")
except Exception as e:
logger.error(f"VM: Failed to attach volume: {e}")
raise VolumeAttachmentError(f"Failed to attach volume: {e}")
def emit_status_event(vm_name, status):
"""
Emit a deployment status event.
Args:
vm_name: Name of the VM
status: Status message
"""
try:
subprocess.run([
'so-salt-emit-vm-deployment-status-event',
'-v', vm_name,
'-H', HYPERVISOR,
'-s', status
], check=True)
except subprocess.CalledProcessError:
# Don't fail the entire operation if status event fails
pass
def main():
"""Main function to orchestrate volume creation and attachment."""
# Set up logging using the so_logging_utils library
string_handler = StringIOHandler()
string_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger = setup_logging(
logger_name='so-kvm-create-volume',
log_file_path='/opt/so/log/hypervisor/so-kvm-create-volume.log',
log_level=logging.INFO,
format_str='%(asctime)s - %(levelname)s - %(message)s'
)
logger.addHandler(string_handler)
vm_name = None
try:
# Parse arguments
args = parse_arguments()
vm_name = args.vm
size_gb = args.size
start_vm_flag = args.start
logger.info(f"VOLUME: Starting volume creation for VM '{vm_name}' with size {size_gb} GB")
# Emit start status event
emit_status_event(vm_name, 'Volume Creation')
# Ensure volume directory exists before checking disk space
try:
os.makedirs(VOLUME_DIR, mode=0o754, exist_ok=True)
qemu_uid = pwd.getpwnam('qemu').pw_uid
qemu_gid = grp.getgrnam('qemu').gr_gid
os.chown(VOLUME_DIR, qemu_uid, qemu_gid)
logger.debug(f"VOLUME: Ensured volume directory exists: {VOLUME_DIR}")
except Exception as e:
logger.error(f"VOLUME: Failed to create volume directory: {e}")
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
# Check disk space
check_disk_space(size_gb, logger)
# Connect to libvirt
try:
conn = libvirt.open(None)
logger.info("VM: Connected to libvirt")
except libvirt.libvirtError as e:
logger.error(f"VM: Failed to open connection to libvirt: {e}")
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
# Stop VM if running
dom = stop_vm(conn, vm_name, logger)
# Create volume file
volume_path = create_volume_file(vm_name, size_gb, logger)
# Attach volume to VM
attach_volume_to_vm(conn, vm_name, volume_path, logger)
# Start VM if -S or --start argument is provided
if start_vm_flag:
dom = conn.lookupByName(vm_name)
start_vm(dom, logger)
logger.info(f"VM: VM '{vm_name}' started successfully")
else:
logger.info("VM: Start flag not provided; VM will remain stopped")
# Close connection
conn.close()
# Emit success status event
emit_status_event(vm_name, 'Volume Configuration')
logger.info(f"VOLUME: Volume creation and attachment completed successfully for VM '{vm_name}'")
except KeyboardInterrupt:
error_msg = "Operation cancelled by user"
logger.error(error_msg)
if vm_name:
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
except InsufficientSpaceError as e:
error_msg = f"SPACE: {str(e)}"
logger.error(error_msg)
if vm_name:
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
except VolumeCreationError as e:
error_msg = f"VOLUME: {str(e)}"
logger.error(error_msg)
if vm_name:
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
except VolumeAttachmentError as e:
error_msg = f"VM: {str(e)}"
logger.error(error_msg)
if vm_name:
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
except Exception as e:
error_msg = f"An error occurred: {str(e)}"
logger.error(error_msg)
if vm_name:
emit_status_event(vm_name, 'Volume Configuration Failed')
sys.exit(1)
if __name__ == '__main__':
main()
{%- else -%}
echo "Hypervisor nodes are a feature supported only for customers with a valid license. \
Contact Security Onion Solutions, LLC via our website at https://securityonionsolutions.com \
for more information about purchasing a license to enable this feature."
{% endif -%}
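The `attach_volume_to_vm` routine above builds the new `<disk>` definition with ElementTree before redefining the domain. The element construction can be exercised in isolation (a standalone sketch mirroring the attributes used in the script; the helper name is illustrative, not part of the codebase):

```python
import xml.etree.ElementTree as ET

def build_nsm_disk_element(volume_path):
    """Build a <disk> element matching the script's raw virtio disk on bus 0x07."""
    disk = ET.Element('disk', attrib={'type': 'file', 'device': 'disk'})
    ET.SubElement(disk, 'driver', attrib={'name': 'qemu', 'type': 'raw',
                                          'cache': 'none', 'io': 'native'})
    ET.SubElement(disk, 'source', attrib={'file': volume_path})
    ET.SubElement(disk, 'target', attrib={'dev': 'vdb', 'bus': 'virtio'})
    # Fixed PCI address so the NSM volume always enumerates after the OS disk
    ET.SubElement(disk, 'address', attrib={'type': 'pci', 'domain': '0x0000',
                                           'bus': '0x07', 'slot': '0x00',
                                           'function': '0x0'})
    return disk

xml_text = ET.tostring(build_nsm_disk_element('/nsm/libvirt/volumes/demo-nsm.img'),
                       encoding='unicode')
```

Building the element tree separately like this makes the generated XML easy to unit-test before handing it to `conn.defineXML()`.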
File diff suppressed because one or more lines are too long
View File
@@ -31,19 +31,6 @@ libvirt_conf_dir:
- group: 939
- makedirs: True
libvirt_volumes:
file.directory:
- name: /nsm/libvirt/volumes
- user: qemu
- group: qemu
- dir_mode: 755
- file_mode: 640
- recurse:
- user
- group
- mode
- makedirs: True
libvirt_config:
file.managed:
- name: /opt/so/conf/libvirt/libvirtd.conf
View File
@@ -1,3 +0,0 @@
#!/bin/bash
curl -s -L http://localhost:9600/_node/stats/flow | jq
View File
@@ -1,3 +0,0 @@
#!/bin/bash
curl -s -L http://localhost:9600/_health_report | jq
View File
@@ -1,3 +0,0 @@
#!/bin/bash
curl -s -L http://localhost:9600/_node/stats/jvm | jq
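The three removed wrapper scripts each query Logstash's monitoring API on port 9600. The same queries can be issued from Python (a standalone sketch: the endpoint paths come from the scripts above, while the helper names are illustrative; `heap_used_percent` is a field of the documented `/_node/stats/jvm` payload):

```python
import json
from urllib.request import urlopen

LOGSTASH_API = "http://localhost:9600"

def fetch_stats(path):
    """Query a Logstash monitoring endpoint (same URLs the shell wrappers hit)."""
    with urlopen(LOGSTASH_API + path) as resp:
        return json.load(resp)

def heap_used_percent(jvm_stats):
    """Pull the JVM heap-usage percentage out of a /_node/stats/jvm payload."""
    return jvm_stats["jvm"]["mem"]["heap_used_percent"]

# The parsing works the same on a canned payload shaped like the API response:
sample = json.loads('{"jvm": {"mem": {"heap_used_percent": 23}}}')
pct = heap_used_percent(sample)
```

Unlike the `curl ... | jq` one-liners, this form returns structured data that a health check can assert on directly.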
View File
@@ -5,12 +5,10 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
default_salt_dir=/opt/so/saltstack/default
VERBOSE=0
VERY_VERBOSE=0
TEST_MODE=0
default_salt_dir=/opt/so/saltstack/default
clone_to_tmp() {
# TODO Need to add a air gap option
# Make a temp location for the files
mkdir /tmp/sogh
@@ -18,110 +16,19 @@ clone_to_tmp() {
#git clone -b dev https://github.com/Security-Onion-Solutions/securityonion.git
git clone https://github.com/Security-Onion-Solutions/securityonion.git
cd /tmp
}
show_file_changes() {
local source_dir="$1"
local dest_dir="$2"
local dir_type="$3" # "salt" or "pillar"
if [ $VERBOSE -eq 0 ]; then
return
fi
echo "=== Changes for $dir_type directory ==="
# Find all files in source directory
if [ -d "$source_dir" ]; then
find "$source_dir" -type f | while read -r source_file; do
# Get relative path
rel_path="${source_file#$source_dir/}"
dest_file="$dest_dir/$rel_path"
if [ ! -f "$dest_file" ]; then
echo "ADDED: $dest_file"
if [ $VERY_VERBOSE -eq 1 ]; then
echo " (New file - showing first 20 lines)"
head -n 20 "$source_file" | sed 's/^/ + /'
echo ""
fi
elif ! cmp -s "$source_file" "$dest_file"; then
echo "MODIFIED: $dest_file"
if [ $VERY_VERBOSE -eq 1 ]; then
echo " (Changes:)"
diff -u "$dest_file" "$source_file" | sed 's/^/ /'
echo ""
fi
fi
done
fi
# Find deleted files (exist in dest but not in source)
if [ -d "$dest_dir" ]; then
find "$dest_dir" -type f | while read -r dest_file; do
# Get relative path
rel_path="${dest_file#$dest_dir/}"
source_file="$source_dir/$rel_path"
if [ ! -f "$source_file" ]; then
echo "DELETED: $dest_file"
if [ $VERY_VERBOSE -eq 1 ]; then
echo " (File was deleted)"
echo ""
fi
fi
done
fi
echo ""
}
copy_new_files() {
# Copy new files over to the salt dir
cd /tmp/sogh/securityonion
git checkout $BRANCH
VERSION=$(cat VERSION)
if [ $TEST_MODE -eq 1 ]; then
echo "=== TEST MODE: Showing what would change without making changes ==="
echo "Branch: $BRANCH"
echo "Version: $VERSION"
echo ""
fi
# Show changes before copying if verbose mode is enabled OR if in test mode
if [ $VERBOSE -eq 1 ] || [ $TEST_MODE -eq 1 ]; then
if [ $TEST_MODE -eq 1 ]; then
# In test mode, force at least basic verbose output
local old_verbose=$VERBOSE
if [ $VERBOSE -eq 0 ]; then
VERBOSE=1
fi
fi
echo "Analyzing file changes..."
show_file_changes "$(pwd)/salt" "$default_salt_dir/salt" "salt"
show_file_changes "$(pwd)/pillar" "$default_salt_dir/pillar" "pillar"
if [ $TEST_MODE -eq 1 ] && [ $old_verbose -eq 0 ]; then
# Restore original verbose setting
VERBOSE=$old_verbose
fi
fi
# If in test mode, don't copy files
if [ $TEST_MODE -eq 1 ]; then
echo "=== TEST MODE: No files were modified ==="
echo "To apply these changes, run without --test option"
rm -rf /tmp/sogh
return
fi
# We need to overwrite if there is a repo file
if [ -d /opt/so/repo ]; then
tar -czf /opt/so/repo/"$VERSION".tar.gz -C "$(pwd)/.." .
fi
rsync -a salt $default_salt_dir/
rsync -a pillar $default_salt_dir/
chown -R socore:socore $default_salt_dir/salt
@@ -138,64 +45,11 @@ got_root(){
fi
}
show_usage() {
echo "Usage: $0 [-v] [-vv] [--test] [branch]"
echo " -v Show verbose output (files changed/added/deleted)"
echo " -vv Show very verbose output (includes file diffs)"
echo " --test Test mode - show what would change without making changes"
echo " branch Git branch to checkout (default: 2.4/main)"
echo ""
echo "Examples:"
echo " $0 # Normal operation"
echo " $0 -v # Show which files change"
echo " $0 -vv # Show files and their diffs"
echo " $0 --test # See what would change (dry run)"
echo " $0 --test -vv # Test mode with detailed diffs"
echo " $0 -v dev-branch # Use specific branch with verbose output"
exit 1
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-v)
VERBOSE=1
shift
;;
-vv)
VERBOSE=1
VERY_VERBOSE=1
shift
;;
--test)
TEST_MODE=1
shift
;;
-h|--help)
show_usage
;;
-*)
echo "Unknown option $1"
show_usage
;;
*)
# This should be the branch name
if [ -z "$BRANCH" ]; then
BRANCH="$1"
else
echo "Too many arguments"
show_usage
fi
shift
;;
esac
done
# Set default branch if not provided
if [ -z "$BRANCH" ]; then
BRANCH=2.4/main
fi
got_root
if [ $# -ne 1 ] ; then
BRANCH=2.4/main
else
BRANCH=$1
fi
clone_to_tmp
copy_new_files
View File
@@ -21,9 +21,6 @@ whiptail_title='Security Onion UPdater'
NOTIFYCUSTOMELASTICCONFIG=false
TOPFILE=/opt/so/saltstack/default/salt/top.sls
BACKUPTOPFILE=/opt/so/saltstack/default/salt/top.sls.backup
SALTUPGRADED=false
SALT_CLOUD_INSTALLED=false
SALT_CLOUD_CONFIGURED=false
# used to display messages to the user at the end of soup
declare -a FINAL_MESSAGE_QUEUE=()
@@ -630,8 +627,6 @@ post_to_2.4.190() {
update_default_logstash_output
fi
fi
# Apply new elasticsearch.server index template
rollover_index "logs-elasticsearch.server-default"
POSTVERSION=2.4.190
}
@@ -1263,43 +1258,24 @@ upgrade_check_salt() {
}
upgrade_salt() {
SALTUPGRADED=True
echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
echo ""
# If rhel family
if [[ $is_rpm ]]; then
# Check if salt-cloud is installed
if rpm -q salt-cloud &>/dev/null; then
SALT_CLOUD_INSTALLED=true
fi
# Check if salt-cloud is configured
if [[ -f /etc/salt/cloud.profiles.d/socloud.conf ]]; then
SALT_CLOUD_CONFIGURED=true
fi
echo "Removing yum versionlock for Salt."
echo ""
yum versionlock delete "salt"
yum versionlock delete "salt-minion"
yum versionlock delete "salt-master"
# Remove salt-cloud versionlock if installed
if [[ $SALT_CLOUD_INSTALLED == true ]]; then
yum versionlock delete "salt-cloud"
fi
echo "Updating Salt packages."
echo ""
set +e
# if oracle run with -r to ignore repos set by bootstrap
if [[ $OS == 'oracle' ]]; then
# Add -L flag only if salt-cloud is already installed
if [[ $SALT_CLOUD_INSTALLED == true ]]; then
run_check_net_err \
"sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -X -r -L -F -M stable \"$NEWSALTVERSION\"" \
"Could not update salt, please check $SOUP_LOG for details."
else
run_check_net_err \
"sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -X -r -F -M stable \"$NEWSALTVERSION\"" \
"Could not update salt, please check $SOUP_LOG for details."
fi
run_check_net_err \
"sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -X -r -F -M stable \"$NEWSALTVERSION\"" \
"Could not update salt, please check $SOUP_LOG for details."
# if another rhel family variant we want to run without -r to allow the bootstrap script to manage repos # if another rhel family variant we want to run without -r to allow the bootstrap script to manage repos
else
run_check_net_err \
@@ -1312,10 +1288,6 @@ upgrade_salt() {
yum versionlock add "salt-0:$NEWSALTVERSION-0.*"
yum versionlock add "salt-minion-0:$NEWSALTVERSION-0.*"
yum versionlock add "salt-master-0:$NEWSALTVERSION-0.*"
# Add salt-cloud versionlock if installed
if [[ $SALT_CLOUD_INSTALLED == true ]]; then
yum versionlock add "salt-cloud-0:$NEWSALTVERSION-0.*"
fi
# Else do Ubuntu things
elif [[ $is_deb ]]; then
echo "Removing apt hold for Salt."
@@ -1348,7 +1320,6 @@ upgrade_salt() {
echo ""
exit 1
else
SALTUPGRADED=true
echo "Salt upgrade success."
echo ""
fi
@@ -1592,11 +1563,6 @@ main() {
# ensure the mine is updated and populated before highstates run, following the salt-master restart
update_salt_mine
if [[ $SALT_CLOUD_CONFIGURED == true && $SALTUPGRADED == true ]]; then
echo "Updating salt-cloud config to use the new Salt version"
salt-call state.apply salt.cloud.config concurrent=True
fi
enable_highstate
echo ""
View File
@@ -211,7 +211,7 @@ Exit Codes:
Logging:
- Logs are written to /opt/so/log/salt/so-salt-cloud.
- Logs are written to /opt/so/log/salt/so-salt-cloud.log.
- Both file and console logging are enabled for real-time monitoring.
"""
@@ -233,7 +233,7 @@ local = salt.client.LocalClient()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
file_handler = logging.FileHandler('/opt/so/log/salt/so-salt-cloud')
file_handler = logging.FileHandler('/opt/so/log/salt/so-salt-cloud.log')
console_handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(message)s')
@@ -516,85 +516,23 @@ def run_qcow2_modify_hardware_config(profile, vm_name, cpu=None, memory=None, pc
target = hv_name + "_*"
try:
args_list = ['vm_name=' + vm_name]
# Only add parameters that are actually specified
if cpu is not None:
args_list.append('cpu=' + str(cpu))
if memory is not None:
args_list.append('memory=' + str(memory))
args_list = [
'vm_name=' + vm_name,
'cpu=' + str(cpu) if cpu else '',
'memory=' + str(memory) if memory else '',
'start=' + str(start)
]
# Add PCI devices if provided
if pci_list:
# Pass all PCI devices as a comma-separated list
args_list.append('pci=' + ','.join(pci_list))
# Always add start parameter
args_list.append('start=' + str(start))
result = local.cmd(target, 'qcow2.modify_hardware_config', args_list)
format_qcow2_output('Hardware configuration', result)
except Exception as e:
logger.error(f"An error occurred while running qcow2.modify_hardware_config: {e}")
def run_qcow2_create_volume_config(profile, vm_name, size_gb, cpu=None, memory=None, start=False):
"""Create a volume for the VM and optionally configure CPU/memory.
Args:
profile (str): The cloud profile name
vm_name (str): The name of the VM
size_gb (int): Size of the volume in GB
cpu (int, optional): Number of CPUs to assign
memory (int, optional): Amount of memory in MiB
start (bool): Whether to start the VM after configuration
"""
hv_name = profile.split('_')[1]
target = hv_name + "_*"
try:
# Step 1: Create the volume
logger.info(f"Creating {size_gb}GB volume for VM {vm_name}")
volume_result = local.cmd(
target,
'qcow2.create_volume_config',
kwarg={
'vm_name': vm_name,
'size_gb': size_gb,
'start': False # Don't start yet if we need to configure CPU/memory
}
)
format_qcow2_output('Volume creation', volume_result)
# Step 2: Configure CPU and memory if specified
if cpu or memory:
logger.info(f"Configuring hardware for VM {vm_name}: CPU={cpu}, Memory={memory}MiB")
hw_result = local.cmd(
target,
'qcow2.modify_hardware_config',
kwarg={
'vm_name': vm_name,
'cpu': cpu,
'memory': memory,
'start': start
}
)
format_qcow2_output('Hardware configuration', hw_result)
elif start:
# If no CPU/memory config needed but we need to start the VM
logger.info(f"Starting VM {vm_name}")
start_result = local.cmd(
target,
'qcow2.modify_hardware_config',
kwarg={
'vm_name': vm_name,
'start': True
}
)
format_qcow2_output('VM startup', start_result)
except Exception as e:
logger.error(f"An error occurred while creating volume and configuring hardware: {e}")
def run_qcow2_modify_network_config(profile, vm_name, mode, ip=None, gateway=None, dns=None, search_domain=None):
hv_name = profile.split('_')[1]
target = hv_name + "_*"
@@ -648,7 +586,6 @@ def parse_arguments():
network_group.add_argument('-c', '--cpu', type=int, help='Number of virtual CPUs to assign.')
network_group.add_argument('-m', '--memory', type=int, help='Amount of memory to assign in MiB.')
network_group.add_argument('-P', '--pci', action='append', help='PCI hardware ID(s) to passthrough to the VM (e.g., 0000:c7:00.0). Can be specified multiple times.')
network_group.add_argument('--nsm-size', type=int, help='Size in GB for NSM volume creation. Can be used with copper/sfp NICs (--pci). Only disk passthrough (without --nsm-size) prevents volume creation.')
args = parser.parse_args()
@@ -684,8 +621,6 @@ def main():
hw_config.append(f"{args.memory}MB RAM")
if args.pci:
hw_config.append(f"PCI devices: {', '.join(args.pci)}")
if args.nsm_size:
hw_config.append(f"NSM volume: {args.nsm_size}GB")
hw_string = f" and hardware config: {', '.join(hw_config)}" if hw_config else ""
logger.info(f"Received request to create VM '{args.vm_name}' using profile '{args.profile}' {network_config}{hw_string}")
@@ -708,58 +643,8 @@ def main():
# Step 2: Provision the VM (without starting it)
call_salt_cloud(args.profile, args.vm_name)
# Step 3: Determine storage configuration approach
# Priority: disk passthrough > volume creation (but volume can coexist with copper/sfp NICs)
# Step 3: Modify hardware configuration
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=args.cpu, memory=args.memory, pci_list=args.pci, start=True)
# Note: virtual_node_manager.py already filters out --nsm-size when disk is present,
# so if both --pci and --nsm-size are present here, the PCI devices are copper/sfp NICs
use_passthrough = False
use_volume_creation = False
has_nic_passthrough = False
if args.nsm_size:
# Validate nsm_size
if args.nsm_size <= 0:
logger.error(f"Invalid nsm_size value: {args.nsm_size}. Must be a positive integer.")
sys.exit(1)
use_volume_creation = True
logger.info(f"Using volume creation with size {args.nsm_size}GB (--nsm-size parameter specified)")
if args.pci:
# If both nsm_size and PCI are present, PCI devices are copper/sfp NICs
# (virtual_node_manager.py filters out nsm_size when disk is present)
has_nic_passthrough = True
logger.info(f"PCI devices (copper/sfp NICs) will be passed through along with volume: {', '.join(args.pci)}")
elif args.pci:
# Only PCI devices, no nsm_size - could be disk or NICs
# this script is called by virtual_node_manager and that strips any possibility that nsm_size and the disk pci slot is sent to this script
# we might have not specified a disk passthrough or nsm_size, but pass another pci slot and we end up here
use_passthrough = True
logger.info(f"Configuring PCI device passthrough.(--pci parameter specified without --nsm-size)")
# Step 4: Configure hardware based on storage approach
if use_volume_creation:
# Create volume first
run_qcow2_create_volume_config(args.profile, args.vm_name, size_gb=args.nsm_size, cpu=args.cpu, memory=args.memory, start=False)
# Then configure NICs if present
if has_nic_passthrough:
logger.info(f"Configuring NIC passthrough for VM {args.vm_name}")
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=None, memory=None, pci_list=args.pci, start=True)
else:
# No NICs, just start the VM
logger.info(f"Starting VM {args.vm_name}")
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=None, memory=None, pci_list=None, start=True)
elif use_passthrough:
# Use existing passthrough logic via modify_hardware_config
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=args.cpu, memory=args.memory, pci_list=args.pci, start=True)
else:
# No storage configuration, just configure CPU/memory if specified
if args.cpu or args.memory:
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=args.cpu, memory=args.memory, pci_list=None, start=True)
else:
# No hardware configuration needed, just start the VM
logger.info(f"No hardware configuration specified, starting VM {args.vm_name}")
run_qcow2_modify_hardware_config(args.profile, args.vm_name, cpu=None, memory=None, pci_list=None, start=True)
except KeyboardInterrupt:
logger.error("so-salt-cloud: Operation cancelled by user.")
View File
@@ -14,7 +14,7 @@ sool9_{{host}}:
private_key: /etc/ssh/auth_keys/soqemussh/id_ecdsa
sudo: True
deploy_command: sh /tmp/.saltcloud-*/deploy.sh
script_args: -r -F -x python3 stable {{ SALTVERSION }}
script_args: -r -F -x python3 stable 3006.9
minion:
master: {{ grains.host }}
master_port: 4506
View File
@@ -13,7 +13,6 @@
{% if '.'.join(sls.split('.')[:2]) in allowed_states %}
{% if 'vrt' in salt['pillar.get']('features', []) %}
{% set HYPERVISORS = salt['pillar.get']('hypervisor:nodes', {} ) %}
{% from 'salt/map.jinja' import SALTVERSION %}
{% if HYPERVISORS %}
cloud_providers:
@@ -21,7 +20,7 @@ cloud_providers:
- name: /etc/salt/cloud.providers.d/libvirt.conf
- source: salt://salt/cloud/cloud.providers.d/libvirt.conf.jinja
- defaults:
HYPERVISORS: {{ HYPERVISORS }}
HYPERVISORS: {{HYPERVISORS}}
- template: jinja
- makedirs: True
@@ -30,17 +29,11 @@ cloud_profiles:
- name: /etc/salt/cloud.profiles.d/socloud.conf
- source: salt://salt/cloud/cloud.profiles.d/socloud.conf.jinja
- defaults:
HYPERVISORS: {{ HYPERVISORS }}
HYPERVISORS: {{HYPERVISORS}}
MANAGERHOSTNAME: {{ grains.host }}
MANAGERIP: {{ pillar.host.mainip }}
SALTVERSION: {{ SALTVERSION }}
- template: jinja - template: jinja
- makedirs: True - makedirs: True
{% else %}
no_hypervisors_configured:
test.succeed_without_changes:
- name: no_hypervisors_configured
- comment: No hypervisors are configured
{% endif %} {% endif %}
{% else %} {% else %}


@@ -117,7 +117,7 @@ Exit Codes:
    4: VM provisioning failure (so-salt-cloud execution failed)
Logging:
-    Log files are written to /opt/so/log/salt/engines/virtual_node_manager
+    Log files are written to /opt/so/log/salt/engines/virtual_node_manager.log
    Comprehensive logging includes:
    - Hardware validation details
    - PCI ID conversion process
@@ -138,49 +138,23 @@ import pwd
import grp
import salt.config
import salt.runner
-import salt.client
from typing import Dict, List, Optional, Tuple, Any
from datetime import datetime, timedelta
from threading import Lock
-# Initialize Salt runner and local client once
+# Get socore uid/gid
+SOCORE_UID = pwd.getpwnam('socore').pw_uid
+SOCORE_GID = grp.getgrnam('socore').gr_gid
+# Initialize Salt runner once
opts = salt.config.master_config('/etc/salt/master')
opts['output'] = 'json'
runner = salt.runner.RunnerClient(opts)
-local = salt.client.LocalClient()
-# Get socore uid/gid for file ownership
-SOCORE_UID = pwd.getpwnam('socore').pw_uid
-SOCORE_GID = grp.getgrnam('socore').gr_gid
# Configure logging
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
-# Prevent propagation to parent loggers to avoid duplicate log entries
-log.propagate = False
-# Add file handler for dedicated log file
-log_dir = '/opt/so/log/salt'
-log_file = os.path.join(log_dir, 'virtual_node_manager')
-# Create log directory if it doesn't exist
-os.makedirs(log_dir, exist_ok=True)
-# Create file handler
-file_handler = logging.FileHandler(log_file)
-file_handler.setLevel(logging.DEBUG)
-# Create formatter
-formatter = logging.Formatter(
-    '%(asctime)s [%(name)s:%(lineno)d][%(levelname)-8s][%(process)d] %(message)s',
-    datefmt='%Y-%m-%d %H:%M:%S'
-)
-file_handler.setFormatter(formatter)
-# Add handler to logger
-log.addHandler(file_handler)
# Constants
DEFAULT_INTERVAL = 30
DEFAULT_BASE_PATH = '/opt/so/saltstack/local/salt/hypervisor/hosts'
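The block removed above wired a dedicated, non-propagating file handler onto the engine's logger. A minimal standalone sketch of the same pattern (the `make_file_logger` helper name and the temp-dir path are mine, not from the engine):

```python
import logging
import os
import tempfile

def make_file_logger(name: str, log_file: str) -> logging.Logger:
    """Build a dedicated logger with its own file handler; no propagation to root."""
    log = logging.getLogger(name)
    log.setLevel(logging.DEBUG)
    log.propagate = False  # avoid duplicate records via parent loggers
    os.makedirs(os.path.dirname(log_file), exist_ok=True)
    handler = logging.FileHandler(log_file)
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s [%(name)s:%(lineno)d][%(levelname)-8s][%(process)d] %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'))
    log.addHandler(handler)
    return log

log_path = os.path.join(tempfile.mkdtemp(), 'virtual_node_manager')
engine_log = make_file_logger('vnm_demo', log_path)
engine_log.info('engine started')
engine_log.handlers[0].flush()
```

With propagation disabled, records land only in the dedicated file; the newer code drops this in favor of the engine's default `.log` destination.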
@@ -229,39 +203,6 @@ def write_json_file(file_path: str, data: Any) -> None:
    except Exception as e:
        log.error("Failed to write JSON file %s: %s", file_path, str(e))
        raise
-def remove_vm_from_vms_file(vms_file_path: str, vm_hostname: str, vm_role: str) -> bool:
-    """
-    Remove a VM entry from the hypervisorVMs file.
-    Args:
-        vms_file_path: Path to the hypervisorVMs file
-        vm_hostname: Hostname of the VM to remove (without role suffix)
-        vm_role: Role of the VM
-    Returns:
-        bool: True if VM was removed, False otherwise
-    """
-    try:
-        # Read current VMs
-        vms = read_json_file(vms_file_path)
-        # Find and remove the VM entry
-        original_count = len(vms)
-        vms = [vm for vm in vms if not (vm.get('hostname') == vm_hostname and vm.get('role') == vm_role)]
-        if len(vms) < original_count:
-            # VM was found and removed, write back to file
-            write_json_file(vms_file_path, vms)
-            log.info("Removed VM %s_%s from %s", vm_hostname, vm_role, vms_file_path)
-            return True
-        else:
-            log.warning("VM %s_%s not found in %s", vm_hostname, vm_role, vms_file_path)
-            return False
-    except Exception as e:
-        log.error("Failed to remove VM %s_%s from %s: %s", vm_hostname, vm_role, vms_file_path, str(e))
-        return False
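The removed helper is essentially a read-filter-write over a JSON list keyed on hostname and role. The core of it, sketched standalone (the `remove_entry` name and sample data are mine):

```python
def remove_entry(vms, hostname, role):
    """Drop the entry matching hostname+role; return (new_list, removed?)."""
    kept = [vm for vm in vms
            if not (vm.get('hostname') == hostname and vm.get('role') == role)]
    return kept, len(kept) < len(vms)

vms = [
    {'hostname': 'sensor1', 'role': 'sensor'},
    {'hostname': 'search1', 'role': 'searchnode'},
]
vms, removed = remove_entry(vms, 'sensor1', 'sensor')
```

Comparing lengths before and after the filter is what lets the helper distinguish "removed" from "not found" without a second scan.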
def read_yaml_file(file_path: str) -> dict:
    """Read and parse a YAML file."""
@@ -617,13 +558,6 @@ def mark_vm_failed(vm_file: str, error_code: int, message: str) -> None:
    # Remove the original file since we'll create an error file
    os.remove(vm_file)
-    # Clear hardware resource claims so failed VMs don't consume resources
-    # Keep nsm_size for reference but clear cpu, memory, sfp, copper
-    config.pop('cpu', None)
-    config.pop('memory', None)
-    config.pop('sfp', None)
-    config.pop('copper', None)
    # Create error file
    error_file = f"{vm_file}.error"
    data = {
@@ -652,16 +586,8 @@ def mark_invalid_hardware(hypervisor_path: str, vm_name: str, config: dict, erro
    # Join all messages with proper sentence structure
    full_message = "Hardware validation failure: " + " ".join(error_messages)
-    # Clear hardware resource claims so failed VMs don't consume resources
-    # Keep nsm_size for reference but clear cpu, memory, sfp, copper
-    config_copy = config.copy()
-    config_copy.pop('cpu', None)
-    config_copy.pop('memory', None)
-    config_copy.pop('sfp', None)
-    config_copy.pop('copper', None)
    data = {
-        'config': config_copy,
+        'config': config,
        'status': 'error',
        'timestamp': datetime.now().isoformat(),
        'error_details': {
@@ -708,61 +634,6 @@ def validate_vrt_license() -> bool:
        log.error("Error reading license file: %s", str(e))
        return False
-def check_hypervisor_disk_space(hypervisor: str, size_gb: int) -> Tuple[bool, Optional[str]]:
-    """
-    Check if hypervisor has sufficient disk space for volume creation.
-    Args:
-        hypervisor: Hypervisor hostname
-        size_gb: Required size in GB
-    Returns:
-        Tuple of (has_space, error_message)
-    """
-    try:
-        # Get hypervisor minion ID
-        hypervisor_minion = f"{hypervisor}_hypervisor"
-        # Check disk space on /nsm/libvirt/volumes using LocalClient
-        result = local.cmd(
-            hypervisor_minion,
-            'cmd.run',
-            ["df -BG /nsm/libvirt/volumes | tail -1 | awk '{print $4}' | sed 's/G//'"]
-        )
-        if not result or hypervisor_minion not in result:
-            log.error("Failed to check disk space on hypervisor %s", hypervisor)
-            return False, "Failed to check disk space on hypervisor"
-        available_gb_str = result[hypervisor_minion].strip()
-        if not available_gb_str:
-            log.error("Empty disk space response from hypervisor %s", hypervisor)
-            return False, "Failed to get disk space information"
-        try:
-            available_gb = float(available_gb_str)
-        except ValueError:
-            log.error("Invalid disk space value from hypervisor %s: %s", hypervisor, available_gb_str)
-            return False, f"Invalid disk space value: {available_gb_str}"
-        # Add 10% buffer for filesystem overhead
-        required_gb = size_gb * 1.1
-        log.debug("Hypervisor %s disk space check: Available=%.2fGB, Required=%.2fGB",
-                  hypervisor, available_gb, required_gb)
-        if available_gb < required_gb:
-            error_msg = f"Insufficient disk space on hypervisor {hypervisor}. Available: {available_gb:.2f}GB, Required: {required_gb:.2f}GB (including 10% overhead)"
-            log.error(error_msg)
-            return False, error_msg
-        log.info("Hypervisor %s has sufficient disk space for %dGB volume", hypervisor, size_gb)
-        return True, None
-    except Exception as e:
-        log.error("Error checking disk space on hypervisor %s: %s", hypervisor, str(e))
-        return False, f"Error checking disk space: {str(e)}"
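Stripped of the Salt transport, the removed check reduces to parsing the available column of `df -BG` output and comparing it against the requested size plus a 10% overhead buffer. A minimal sketch under those assumptions (both function names and the sample `df` output are mine):

```python
def parse_df_available_gb(df_output: str) -> float:
    """Extract the Available column (4th field, last line) of `df -BG <path>`."""
    fields = df_output.strip().splitlines()[-1].split()
    return float(fields[3].rstrip('G'))  # e.g. '812G' -> 812.0

def has_space(available_gb: float, size_gb: int, overhead: float = 0.10) -> bool:
    """Require the requested size plus a fractional overhead buffer."""
    return available_gb >= size_gb * (1 + overhead)

df_output = """Filesystem     1G-blocks  Used Available Use% Mounted on
/dev/sda1           1000G  188G      812G  19% /nsm/libvirt/volumes"""
avail = parse_df_available_gb(df_output)
```

The engine did the column extraction remotely with `awk '{print $4}'` and `sed 's/G//'`; the 10% buffer accounts for filesystem overhead before the volume is created.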
def process_vm_creation(hypervisor_path: str, vm_config: dict) -> None:
    """
    Process a single VM creation request.
@@ -795,62 +666,6 @@ def process_vm_creation(hypervisor_path: str, vm_config: dict) -> None:
        except subprocess.CalledProcessError as e:
            logger.error(f"Failed to emit success status event: {e}")
-    # Validate nsm_size if present
-    if 'nsm_size' in vm_config:
-        try:
-            size = int(vm_config['nsm_size'])
-            if size <= 0:
-                log.error("VM: %s - nsm_size must be a positive integer, got: %d", vm_name, size)
-                mark_invalid_hardware(hypervisor_path, vm_name, vm_config,
-                                      {'nsm_size': 'Invalid nsm_size: must be positive integer'})
-                return
-            if size > 10000:  # 10TB reasonable maximum
-                log.error("VM: %s - nsm_size %dGB exceeds reasonable maximum (10000GB)", vm_name, size)
-                mark_invalid_hardware(hypervisor_path, vm_name, vm_config,
-                                      {'nsm_size': f'Invalid nsm_size: {size}GB exceeds maximum (10000GB)'})
-                return
-            log.debug("VM: %s - nsm_size validated: %dGB", vm_name, size)
-        except (ValueError, TypeError) as e:
-            log.error("VM: %s - nsm_size must be a valid integer, got: %s", vm_name, vm_config.get('nsm_size'))
-            mark_invalid_hardware(hypervisor_path, vm_name, vm_config,
-                                  {'nsm_size': 'Invalid nsm_size: must be valid integer'})
-            return
-    # Check for conflicting storage configurations
-    has_disk = 'disk' in vm_config and vm_config['disk']
-    has_nsm_size = 'nsm_size' in vm_config and vm_config['nsm_size']
-    if has_disk and has_nsm_size:
-        log.warning("VM: %s - Both disk and nsm_size specified. disk takes precedence, nsm_size will be ignored.",
-                    vm_name)
-    # Check disk space BEFORE creating VM if nsm_size is specified
-    if has_nsm_size and not has_disk:
-        size_gb = int(vm_config['nsm_size'])
-        has_space, space_error = check_hypervisor_disk_space(hypervisor, size_gb)
-        if not has_space:
-            log.error("VM: %s - %s", vm_name, space_error)
-            # Send Hypervisor NSM Disk Full status event
-            try:
-                subprocess.run([
-                    'so-salt-emit-vm-deployment-status-event',
-                    '-v', vm_name,
-                    '-H', hypervisor,
-                    '-s', 'Hypervisor NSM Disk Full'
-                ], check=True)
-            except subprocess.CalledProcessError as e:
-                log.error("Failed to emit volume create failed event for %s: %s", vm_name, str(e))
-            mark_invalid_hardware(
-                hypervisor_path,
-                vm_name,
-                vm_config,
-                {'disk_space': f"Insufficient disk space for {size_gb}GB volume: {space_error}"}
-            )
-            return
-        log.debug("VM: %s - Hypervisor has sufficient space for %dGB volume", vm_name, size_gb)
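The removed validation boils down to: coerce the value to an integer, reject non-positive values and anything above the 10000 GB ceiling, and let an explicit pass-through disk take precedence over `nsm_size`. A standalone sketch of just the size checks (the function name is mine):

```python
def validate_nsm_size(value, max_gb=10000):
    """Return (ok, error) for an nsm_size request, mirroring the removed checks."""
    try:
        size = int(value)
    except (ValueError, TypeError):
        return False, 'must be valid integer'
    if size <= 0:
        return False, 'must be positive integer'
    if size > max_gb:  # 10TB reasonable maximum
        return False, f'{size}GB exceeds maximum ({max_gb}GB)'
    return True, None
```

Catching `TypeError` as well as `ValueError` matters because `int(None)` raises the former while `int('abc')` raises the latter.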
    # Initial hardware validation against model
    is_valid, errors = validate_hardware_request(model_config, vm_config)
    if not is_valid:
@@ -887,11 +702,6 @@ def process_vm_creation(hypervisor_path: str, vm_config: dict) -> None:
        memory_mib = int(vm_config['memory']) * 1024
        cmd.extend(['-m', str(memory_mib)])
-    # Add nsm_size if specified and disk is not specified
-    if 'nsm_size' in vm_config and vm_config['nsm_size'] and not ('disk' in vm_config and vm_config['disk']):
-        cmd.extend(['--nsm-size', str(vm_config['nsm_size'])])
-        log.debug("VM: %s - Adding nsm_size parameter: %s", vm_name, vm_config['nsm_size'])
    # Add PCI devices
    for hw_type in ['disk', 'copper', 'sfp']:
        if hw_type in vm_config and vm_config[hw_type]:
@@ -1123,21 +933,12 @@ def process_hypervisor(hypervisor_path: str) -> None:
    if not nodes_config:
        log.debug("Empty VMs configuration in %s", vms_file)
-    # Get existing VMs and track failed VMs separately
+    # Get existing VMs
    existing_vms = set()
-    failed_vms = set()  # VMs with .error files
    for file_path in glob.glob(os.path.join(hypervisor_path, '*_*')):
        basename = os.path.basename(file_path)
-        # Skip status files
-        if basename.endswith('.status'):
-            continue
-        # Track VMs with .error files separately
-        if basename.endswith('.error'):
-            vm_name = basename[:-6]  # Remove '.error' suffix
-            failed_vms.add(vm_name)
-            existing_vms.add(vm_name)  # Also add to existing to prevent recreation
-            log.debug(f"Found failed VM with .error file: {vm_name}")
-        else:
+        # Skip error and status files
+        if not basename.endswith('.error') and not basename.endswith('.status'):
            existing_vms.add(basename)
    # Process new VMs
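The replacement scan is a plain suffix filter over the per-hypervisor directory entries, sketched standalone (the helper name and sample entries are mine):

```python
def scan_existing_vms(basenames):
    """Mirror the new loop: keep entries that are not .error/.status files."""
    existing = set()
    for basename in basenames:
        if not basename.endswith('.error') and not basename.endswith('.status'):
            existing.add(basename)
    return existing

names = ['sensor1_sensor', 'sensor1_sensor.status', 'search1_searchnode.error']
```

Because `.error` entries are filtered out rather than tracked, a failed VM no longer counts as "existing", which is why the separate `failed_vms` bookkeeping could be dropped.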
@@ -1154,37 +955,12 @@ def process_hypervisor(hypervisor_path: str) -> None:
    # process_vm_creation handles its own locking
    process_vm_creation(hypervisor_path, vm_config)
-    # Process VM deletions (but skip failed VMs that only have .error files)
+    # Process VM deletions
    vms_to_delete = existing_vms - configured_vms
    log.debug(f"Existing VMs: {existing_vms}")
    log.debug(f"Configured VMs: {configured_vms}")
-    log.debug(f"Failed VMs: {failed_vms}")
    log.debug(f"VMs to delete: {vms_to_delete}")
    for vm_name in vms_to_delete:
-        # Skip deletion if VM only has .error file (no actual VM to delete)
-        if vm_name in failed_vms:
-            error_file = os.path.join(hypervisor_path, f"{vm_name}.error")
-            base_file = os.path.join(hypervisor_path, vm_name)
-            # Only skip if there's no base file (VM never successfully created)
-            if not os.path.exists(base_file):
-                log.info(f"Skipping deletion of failed VM {vm_name} (VM never successfully created)")
-                # Clean up the .error and .status files since VM is no longer configured
-                if os.path.exists(error_file):
-                    os.remove(error_file)
-                    log.info(f"Removed .error file for unconfigured VM: {vm_name}")
-                status_file = os.path.join(hypervisor_path, f"{vm_name}.status")
-                if os.path.exists(status_file):
-                    os.remove(status_file)
-                    log.info(f"Removed .status file for unconfigured VM: {vm_name}")
-                # Trigger hypervisor annotation update to reflect the removal
-                try:
-                    log.info(f"Triggering hypervisor annotation update after removing failed VM: {vm_name}")
-                    runner.cmd('state.orch', ['orch.dyanno_hypervisor'])
-                except Exception as e:
-                    log.error(f"Failed to trigger hypervisor annotation update for {vm_name}: {str(e)}")
-                continue
        log.info(f"Initiating deletion process for VM: {vm_name}")
        process_vm_deletion(hypervisor_path, vm_name)


@@ -1,4 +1,4 @@
# version cannot be used elsewhere in this pillar as soup is grepping for it to determine if Salt needs to be patched
salt:
  master:
-    version: '3006.16'
+    version: '3006.9'


@@ -1,5 +1,5 @@
# version cannot be used elsewhere in this pillar as soup is grepping for it to determine if Salt needs to be patched
salt:
  minion:
-    version: '3006.16'
+    version: '3006.9'
    check_threshold: 3600 # in seconds, threshold used for so-salt-minion-check. any value less than 600 seconds may cause a lot of salt-minion restarts since the job to touch the file occurs every 5-8 minutes by default


@@ -5,12 +5,6 @@ sensoroni:
    enabled: False
    timeout_ms: 900000
    parallel_limit: 5
-    export:
-      timeout_ms: 1200000
-      cache_refresh_interval_ms: 10000
-      export_metric_limit: 10000
-      export_event_limit: 10000
-      csv_separator: ','
    node_checkin_interval_ms: 10000
  sensoronikey:
  soc_host:


@@ -21,13 +21,7 @@
  },
{%- endif %}
  "importer": {},
-  "export": {
-    "timeoutMs": {{ SENSORONIMERGED.config.export.timeout_ms }},
-    "cacheRefreshIntervalMs": {{ SENSORONIMERGED.config.export.cache_refresh_interval_ms }},
-    "exportMetricLimit": {{ SENSORONIMERGED.config.export.export_metric_limit }},
-    "exportEventLimit": {{ SENSORONIMERGED.config.export.export_event_limit }},
-    "csvSeparator": "{{ SENSORONIMERGED.config.export.csv_separator }}"
-  },
+  "export": {},
  "statickeyauth": {
    "apiKey": "{{ GLOBALS.sensoroni_key }}"
{% if GLOBALS.is_sensor %}


@@ -17,27 +17,6 @@ sensoroni:
      description: Parallel limit for the analyzer.
      advanced: True
      helpLink: cases.html
-    export:
-      timeout_ms:
-        description: Timeout period for the exporter to finish export-related tasks.
-        advanced: True
-        helpLink: reports.html
-      cache_refresh_interval_ms:
-        description: Refresh interval for cache updates. Longer intervals result in less compute usage but risks stale data included in reports.
-        advanced: True
-        helpLink: reports.html
-      export_metric_limit:
-        description: Maximum number of metric values to include in each metric aggregation group.
-        advanced: True
-        helpLink: reports.html
-      export_event_limit:
-        description: Maximum number of events to include per event list.
-        advanced: True
-        helpLink: reports.html
-      csv_separator:
-        description: Separator character to use for CSV exports.
-        advanced: False
-        helpLink: reports.html
    node_checkin_interval_ms:
      description: Interval in ms to checkin to the soc_host.
      advanced: True


@@ -1494,8 +1494,6 @@ soc:
    assistant:
      apiUrl: https://onionai.securityonion.net
      healthTimeoutSeconds: 3
-      systemPromptAddendum: ""
-      systemPromptAddendumMaxLength: 50000
    salt:
      queueDir: /opt/sensoroni/queue
      timeoutMs: 45000
@@ -1638,9 +1636,6 @@ soc:
      - name: socExcludeToggle
        filter: 'NOT event.module:"soc"'
        enabled: true
-      - name: onionaiExcludeToggle
-        filter: 'NOT _index:"*:so-assistant-*"'
-        enabled: true
    queries:
      - name: Default Query
        description: Show all events grouped by the observer host


@@ -63,22 +63,18 @@ hypervisor:
        required: true
        readonly: true
        forcedType: int
-      - field: nsm_size
-        label: "Size of virtual disk to create and use for /nsm, in GB. Only applicable if no pass-through disk."
-        forcedType: int
-        readonly: true
      - field: disk
-        label: "Disk(s) to pass through for /nsm. Free: FREE | Total: TOTAL"
+        label: "Disk(s) for passthrough. Free: FREE | Total: TOTAL"
        readonly: true
        options: []
        forcedType: '[]int'
      - field: copper
-        label: "Copper port(s) to pass through. Free: FREE | Total: TOTAL"
+        label: "Copper port(s) for passthrough. Free: FREE | Total: TOTAL"
        readonly: true
        options: []
        forcedType: '[]int'
      - field: sfp
-        label: "SFP port(s) to pass through. Free: FREE | Total: TOTAL"
+        label: "SFP port(s) for passthrough. Free: FREE | Total: TOTAL"
        readonly: true
        options: []
        forcedType: '[]int'


@@ -3,14 +3,11 @@
{# Define the list of process steps in order (case-sensitive) #}
{% set PROCESS_STEPS = [
    'Processing',
-    'Hypervisor NSM Disk Full',
    'IP Configuration',
    'Starting Create',
    'Executing Deploy Script',
    'Initialize Minion Pillars',
    'Created Instance',
-    'Volume Creation',
-    'Volume Configuration',
    'Hardware Configuration',
    'Highstate Initiated',
    'Destroyed Instance'


@@ -1,51 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% if 'vrt' in salt['pillar.get']('features', []) %}
{% do salt.log.info('soc/dyanno/hypervisor/remove_failed_vm: Running') %}
{% set vm_name = pillar.get('vm_name') %}
{% set hypervisor = pillar.get('hypervisor') %}
{% if vm_name and hypervisor %}
{% set vm_parts = vm_name.split('_') %}
{% if vm_parts | length >= 2 %}
{% set vm_role = vm_parts[-1] %}
{% set vm_hostname = '_'.join(vm_parts[:-1]) %}
{% set vms_file = '/opt/so/saltstack/local/salt/hypervisor/hosts/' ~ hypervisor ~ 'VMs' %}
{% do salt.log.info('soc/dyanno/hypervisor/remove_failed_vm: Removing VM ' ~ vm_name ~ ' from ' ~ vms_file) %}
remove_vm_{{ vm_name }}_from_vms_file:
module.run:
- name: hypervisor.remove_vm_from_vms_file
- vms_file_path: {{ vms_file }}
- vm_hostname: {{ vm_hostname }}
- vm_role: {{ vm_role }}
{% else %}
{% do salt.log.error('soc/dyanno/hypervisor/remove_failed_vm: Invalid vm_name format: ' ~ vm_name) %}
{% endif %}
{% else %}
{% do salt.log.error('soc/dyanno/hypervisor/remove_failed_vm: Missing required pillar data (vm_name or hypervisor)') %}
{% endif %}
{% do salt.log.info('soc/dyanno/hypervisor/remove_failed_vm: Completed') %}
{% else %}
{% do salt.log.error(
'Hypervisor nodes are a feature supported only for customers with a valid license. '
'Contact Security Onion Solutions, LLC via our website at https://securityonionsolutions.com '
'for more information about purchasing a license to enable this feature.'
) %}
{% endif %}


@@ -13,6 +13,7 @@
{%- import_yaml 'soc/dyanno/hypervisor/hypervisor.yaml' as ANNOTATION -%}
{%- from 'hypervisor/map.jinja' import HYPERVISORS -%}
+{%- from 'soc/dyanno/hypervisor/map.jinja' import PROCESS_STEPS -%}
{%- set TEMPLATE = ANNOTATION.hypervisor.hosts.pop('defaultHost') -%}
@@ -26,6 +27,7 @@
{%- if baseDomainStatus == 'Initialized' %}
{%- if vm_list %}
#### Virtual Machines
+Status values: {% for step in PROCESS_STEPS %}{{ step }}{% if not loop.last %}, {% endif %}{% endfor %}. "Last Updated" shows when status changed. After "Highstate Initiated", only "Destroyed Instance" updates the timestamp.
| Name | Status | CPU Cores | Memory (GB)| Disk | Copper | SFP | Last Updated |
|--------------------|--------------------|-----------|------------|------|--------|------|---------------------|
@@ -40,6 +42,7 @@
{%- endfor %}
{%- else %}
#### Virtual Machines
+Status values: {% for step in PROCESS_STEPS %}{{ step }}{% if not loop.last %}, {% endif %}{% endfor %}. "Last Updated" shows when status changed. After "Highstate Initiated", only "Destroyed Instance" updates the timestamp.
No Virtual Machines Found
{%- endif %}
@@ -93,21 +96,9 @@ Base domain has not been initialized.
{%- endif -%}
{%- endfor -%}
-{# Determine host OS overhead based on role #}
-{%- if role == 'hypervisor' -%}
-  {%- set host_os_cpu = 8 -%}
-  {%- set host_os_memory = 16 -%}
-{%- elif role == 'managerhype' -%}
-  {%- set host_os_cpu = 16 -%}
-  {%- set host_os_memory = 32 -%}
-{%- else -%}
-  {%- set host_os_cpu = 0 -%}
-  {%- set host_os_memory = 0 -%}
-{%- endif -%}
-{# Calculate available resources (subtract both VM usage and host OS overhead) #}
-{%- set cpu_free = hw_config.cpu - ns.used_cpu - host_os_cpu -%}
-{%- set mem_free = hw_config.memory - ns.used_memory - host_os_memory -%}
+{# Calculate available resources #}
+{%- set cpu_free = hw_config.cpu - ns.used_cpu -%}
+{%- set mem_free = hw_config.memory - ns.used_memory -%}
{# Get used PCI indices #}
{%- set used_disk = [] -%}


@@ -237,22 +237,10 @@ function manage_salt() {
  case "$op" in
    state)
-      log "Performing '$op' for '$state' on minion '$minion'"
      state=$(echo "$request" | jq -r .state)
-      async=$(echo "$request" | jq -r .async)
-      if [[ $async == "true" ]]; then
-        log "Performing async '$op' on minion $minion with state '$state'"
-        response=$(salt --async "$minion" state.apply "$state" queue=2)
-      else
-        log "Performing '$op' on minion $minion with state '$state'"
-        response=$(salt "$minion" state.apply "$state")
-      fi
+      response=$(salt --async "$minion" state.apply "$state" queue=2)
      exit_code=$?
-      if [[ $exit_code -ne 0 && "$response" =~ "is running as PID" ]]; then
-        log "Salt already running: $response ($exit_code)"
-        respond "$id" "ERROR_SALT_ALREADY_RUNNING"
-        return
-      fi
      ;;
    highstate)
      log "Performing '$op' on minion $minion"
@@ -271,7 +259,7 @@
      ;;
  esac
-  if [[ $exit_code -eq 0 ]]; then
+  if [[ exit_code -eq 0 ]]; then
    log "Successful command execution: $response"
    respond "$id" "true"
  else


@@ -589,15 +589,6 @@ soc:
          description: Timeout in seconds for the Onion AI health check.
          global: True
          advanced: True
-        systemPromptAddendum:
-          description: Additional context to provide to the AI assistant about this SOC deployment. This can include information about your environment, policies, or any other relevant details that can help the AI provide more accurate and tailored assistance. Long prompts may be shortened.
-          global: True
-          advanced: False
-          multiline: True
-        systemPromptAddendumMaxLength:
-          description: Maximum length of the system prompt addendum. Longer prompts will be truncated.
-          global: True
-          advanced: True
      client:
        assistant:
          enabled:


@@ -4,17 +4,10 @@
# Elastic License 2.0.
-{% set nvme_devices = salt['cmd.shell']("ls /dev/nvme*n1 2>/dev/null || echo ''") %}
-{% set virtio_devices = salt['cmd.shell']("test -b /dev/vdb && echo '/dev/vdb' || echo ''") %}
+{% set nvme_devices = salt['cmd.shell']("find /dev -name 'nvme*n1' 2>/dev/null") %}
{% if nvme_devices %}
include:
-  - storage.nsm_mount_nvme
-{% elif virtio_devices %}
-include:
-  - storage.nsm_mount_virtio
+  - storage.nsm_mount
{% endif %}


@@ -22,8 +22,8 @@ storage_nsm_mount_logdir:
# Install the NSM mount script
storage_nsm_mount_script:
  file.managed:
-    - name: /usr/sbin/so-nsm-mount-nvme
-    - source: salt://storage/tools/sbin/so-nsm-mount-nvme
+    - name: /usr/sbin/so-nsm-mount
+    - source: salt://storage/tools/sbin/so-nsm-mount
    - mode: 755
    - user: root
    - group: root
@@ -34,7 +34,7 @@ storage_nsm_mount_script:
# Execute the mount script if not already mounted
storage_nsm_mount_execute:
  cmd.run:
-    - name: /usr/sbin/so-nsm-mount-nvme
+    - name: /usr/sbin/so-nsm-mount
    - unless: mountpoint -q /nsm
    - require:
      - file: storage_nsm_mount_script


@@ -1,39 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Install required packages
storage_nsm_mount_virtio_packages:
pkg.installed:
- pkgs:
- xfsprogs
# Ensure log directory exists
storage_nsm_mount_virtio_logdir:
file.directory:
- name: /opt/so/log
- makedirs: True
- user: root
- group: root
- mode: 755
# Install the NSM mount script
storage_nsm_mount_virtio_script:
file.managed:
- name: /usr/sbin/so-nsm-mount-virtio
- source: salt://storage/tools/sbin/so-nsm-mount-virtio
- mode: 755
- user: root
- group: root
- require:
- pkg: storage_nsm_mount_virtio_packages
- file: storage_nsm_mount_virtio_logdir
# Execute the mount script if not already mounted
storage_nsm_mount_virtio_execute:
cmd.run:
- name: /usr/sbin/so-nsm-mount-virtio
- unless: mountpoint -q /nsm
- require:
- file: storage_nsm_mount_virtio_script


@@ -81,7 +81,7 @@
set -e
-LOG_FILE="/opt/so/log/so-nsm-mount-nvme"
+LOG_FILE="/opt/so/log/so-nsm-mount.log"
VG_NAME=""
LV_NAME="nsm"
MOUNT_POINT="/nsm"


@@ -1,171 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# Usage:
# so-nsm-mount-virtio
#
# Options:
# None - script automatically configures /dev/vdb
#
# Examples:
# 1. Configure and mount virtio-blk device:
# ```bash
# sudo so-nsm-mount-virtio
# ```
#
# Notes:
# - Requires root privileges
# - Mounts /dev/vdb as /nsm
# - Creates XFS filesystem if needed
# - Configures persistent mount via /etc/fstab
# - Safe to run multiple times
#
# Description:
# This script automates the configuration and mounting of virtio-blk devices
# as /nsm in Security Onion virtual machines. It performs these steps:
#
# Dependencies:
# - xfsprogs: Required for XFS filesystem operations
#
# 1. Safety Checks:
# - Verifies root privileges
# - Checks if /nsm is already mounted
# - Verifies /dev/vdb exists
#
# 2. Filesystem Creation:
# - Creates XFS filesystem on /dev/vdb if not already formatted
#
# 3. Mount Configuration:
# - Creates /nsm directory if needed
# - Adds entry to /etc/fstab for persistence
# - Mounts the filesystem as /nsm
#
# Exit Codes:
# 0: Success conditions:
# - Device configured and mounted
# - Already properly mounted
# 1: Error conditions:
# - Must be run as root
# - Device /dev/vdb not found
# - Filesystem creation failed
# - Mount operation failed
#
# Logging:
# - All operations logged to /opt/so/log/so-nsm-mount-virtio
set -e
LOG_FILE="/opt/so/log/so-nsm-mount-virtio"
DEVICE="/dev/vdb"
MOUNT_POINT="/nsm"
# Function to log messages
log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') $1" | tee -a "$LOG_FILE"
}
# Function to log errors
log_error() {
echo "$(date '+%Y-%m-%d %H:%M:%S') ERROR: $1" | tee -a "$LOG_FILE" >&2
}
# Function to check if running as root
check_root() {
if [ "$EUID" -ne 0 ]; then
log_error "Must be run as root"
exit 1
fi
}
# Main execution
main() {
log "=========================================="
log "Starting virtio-blk NSM mount process"
log "=========================================="
# Check root privileges
check_root
# Check if already mounted
if mountpoint -q "$MOUNT_POINT"; then
log "$MOUNT_POINT is already mounted"
log "=========================================="
exit 0
fi
# Check if device exists
if [ ! -b "$DEVICE" ]; then
log_error "Device $DEVICE not found"
log "=========================================="
exit 1
fi
log "Found device: $DEVICE"
# Get device size
local size=$(lsblk -dbn -o SIZE "$DEVICE" 2>/dev/null | numfmt --to=iec)
log "Device size: $size"
# Check if device has filesystem
if ! blkid "$DEVICE" | grep -q 'TYPE="xfs"'; then
log "Creating XFS filesystem on $DEVICE"
if ! mkfs.xfs -f "$DEVICE" 2>&1 | tee -a "$LOG_FILE"; then
log_error "Failed to create filesystem"
log "=========================================="
exit 1
fi
log "Filesystem created successfully"
else
log "Device already has XFS filesystem"
fi
# Create mount point
if [ ! -d "$MOUNT_POINT" ]; then
log "Creating mount point $MOUNT_POINT"
mkdir -p "$MOUNT_POINT"
fi
# Add to fstab if not present
if ! grep -q "$DEVICE.*$MOUNT_POINT" /etc/fstab; then
log "Adding entry to /etc/fstab"
echo "$DEVICE $MOUNT_POINT xfs defaults 0 0" >> /etc/fstab
log "Entry added to /etc/fstab"
else
log "Entry already exists in /etc/fstab"
fi
# Mount the filesystem
log "Mounting $DEVICE to $MOUNT_POINT"
if mount "$MOUNT_POINT" 2>&1 | tee -a "$LOG_FILE"; then
log "Successfully mounted $DEVICE to $MOUNT_POINT"
# Verify mount
if mountpoint -q "$MOUNT_POINT"; then
log "Mount verified successfully"
# Display mount information
log "Mount details:"
df -h "$MOUNT_POINT" | tail -n 1 | tee -a "$LOG_FILE"
else
log_error "Mount verification failed"
log "=========================================="
exit 1
fi
else
log_error "Failed to mount $DEVICE"
log "=========================================="
exit 1
fi
log "=========================================="
log "Virtio-blk NSM mount process completed successfully"
log "=========================================="
exit 0
}
# Run main function
main
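The fstab handling above is idempotent: it greps for an existing device/mount-point pair before appending, so repeated runs never duplicate the entry. A minimal sketch of that pattern, pointed at a temp file instead of `/etc/fstab` so it is safe to run anywhere (illustration only, not part of the script):

```shell
# Idempotent fstab-append pattern used by so-nsm-mount-virtio, exercised
# against a throwaway file (hypothetical stand-in for /etc/fstab).
FSTAB=$(mktemp)
DEVICE="/dev/vdb"
MOUNT_POINT="/nsm"

add_fstab_entry() {
    # Append only if no line already pairs this device with this mount point.
    if ! grep -q "$DEVICE.*$MOUNT_POINT" "$FSTAB"; then
        echo "$DEVICE $MOUNT_POINT xfs defaults 0 0" >> "$FSTAB"
    fi
}

add_fstab_entry
add_fstab_entry   # second call is a no-op thanks to the grep guard
ENTRY_COUNT=$(grep -c "$DEVICE" "$FSTAB")
rm -f "$FSTAB"
```

Because the entry is in fstab before the final `mount "$MOUNT_POINT"`, mount can resolve the device and options from fstab alone, which is why the script mounts by mount point rather than by device.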

View File

@@ -337,5 +337,4 @@
]
data_format = "influx"
interval = "1h"
timeout = "120s"
{%- endif %}

View File

@@ -1,30 +0,0 @@
hook DNS::log_policy(rec: DNS::Info, id: Log::ID, filter: Log::Filter)
{
# Only put a single name per line, otherwise there will be memory issues!
# If the query comes back blank don't log
if (!rec?$query)
break;
# If the query comes back with one of these don't log
if (rec?$query && /google.com$/ in rec$query)
break;
# If the query comes back with one of these don't log
if (rec?$query && /.apple.com$/ in rec$query)
break;
# Don't log reverse lookups
if (rec?$query && /.in-addr.arpa/ in to_lower(rec$query))
break;
# Don't log netbios lookups. This generates a crazy amount of logs
if (rec?$qtype_name && /NB/ in rec$qtype_name)
break;
}
event zeek_init()
{
Log::remove_default_filter(DNS::LOG);
local filter: Log::Filter = [$name="dns-filter"];
Log::add_filter(DNS::LOG, filter);
}
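The DNS patterns above can be spot-checked outside Zeek with `grep -E`, which is close enough to Zeek's pattern syntax for these simple expressions (a rough stand-in, not byte-for-byte identical semantics; the helper below is illustrative only):

```shell
# Spot-check the DNS filter regexes with grep -E. "drop" means the
# log_policy hook would break (suppress the log line); "keep" means it logs.
matches() { printf '%s\n' "$2" | grep -Eq "$1" && echo drop || echo keep; }

R1=$(matches 'google.com$'   'mail.google.com')        # suppressed
R2=$(matches '.apple.com$'   'updates.apple.com')      # suppressed
R3=$(matches '.in-addr.arpa' '4.3.2.1.in-addr.arpa')   # suppressed
R4=$(matches 'google.com$'   'example.org')            # logged
```

Note that the unescaped dots match any character, so `google.com$` also suppresses a query like `googleXcom`; escape them (`google\.com$`) if exact-dot matching matters.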

View File

@@ -1,13 +0,0 @@
hook Files::log_policy(rec: Files::Info, id: Log::ID, filter: Log::Filter)
{
# Turn off a specific mimetype
if (rec?$mime_type && ( /soap\+xml/ | /json/ | /xml/ | /x509/ ) in rec$mime_type)
break;
}
event zeek_init()
{
Log::remove_default_filter(Files::LOG);
local filter: Log::Filter = [$name="files-filter"];
Log::add_filter(Files::LOG, filter);
}

View File

@@ -1,20 +0,0 @@
### HTTP filter: remove entries by host name (exact string match) ###
module Filterhttp;
export {
global remove_host_entries: set[string] = {"www.genevalab.com", "www.google.com"};
}
hook HTTP::log_policy(rec: HTTP::Info, id: Log::ID, filter: Log::Filter)
{
# Remove HTTP host entries
if ( ! rec?$host || rec$host in remove_host_entries )
break;
}
event zeek_init()
{
Log::remove_default_filter(HTTP::LOG);
local filter: Log::Filter = [$name="http-filter"];
Log::add_filter(HTTP::LOG, filter);
}

View File

@@ -1,14 +0,0 @@
### HTTP filter by uri using pattern ####
hook HTTP::log_policy(rec: HTTP::Info, id: Log::ID, filter: Log::Filter)
{
# Remove HTTP uri entries by regex
if ( rec?$uri && /^\/kratos\// in rec$uri )
break;
}
event zeek_init()
{
Log::remove_default_filter(HTTP::LOG);
local filter: Log::Filter = [$name="http-filter"];
Log::add_filter(HTTP::LOG, filter);
}

View File

@@ -1,29 +0,0 @@
### Log filter by JA3S md5 hash:
hook SSL::log_policy(rec: SSL::Info, id: Log::ID, filter: Log::Filter)
{
# SSL log filter Ja3s by md5
if (rec?$ja3s_cipher && ( /623de93db17d313345d7ea481e7443cf/ ) in rec$ja3s_cipher)
break;
}
event zeek_init()
{
Log::remove_default_filter(SSL::LOG);
local filter: Log::Filter = [$name="ssl-filter"];
Log::add_filter(SSL::LOG, filter);
}
### Log filter by server name:
hook SSL::log_policy(rec: SSL::Info, id: Log::ID, filter: Log::Filter)
{
# SSL log filter by server name
if (rec?$server_name && ( /api.github.com$/ ) in rec$server_name)
break;
}
event zeek_init()
{
Log::remove_default_filter(SSL::LOG);
local filter: Log::Filter = [$name="ssl-filter"];
Log::add_filter(SSL::LOG, filter);
}

View File

@@ -1,17 +0,0 @@
global tunnel_subnet: set[subnet]={
10.19.0.0/24
};
hook Tunnel::log_policy(rec: Tunnel::Info, id: Log::ID, filter: Log::Filter)
{
if (rec$id$orig_h in tunnel_subnet || rec$id$resp_h in tunnel_subnet)
break;
}
event zeek_init()
{
Log::remove_default_filter(Tunnel::LOG);
local filter: Log::Filter = [$name="tunnel-filter"];
Log::add_filter(Tunnel::LOG, filter);
}

View File

@@ -61,48 +61,6 @@ zeek:
global: True
advanced: True
duplicates: True
dns:
description: DNS Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
files:
description: Files Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
httphost:
description: HTTP Hosts Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
httpuri:
description: HTTP URI Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
ssl:
description: SSL Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
tunnel:
description: Tunnel Filter for Zeek. This is an advanced setting and will take further action to enable.
helpLink: zeek.html
file: True
global: True
advanced: True
duplicates: True
file_extraction:
description: Contains a list of file or MIME types Zeek will extract from the network streams. Values must adhere to the following format - {"MIME_TYPE":"FILE_EXTENSION"}
forcedType: "[]{}"

View File

@@ -2305,7 +2305,7 @@ set_redirect() {
set_timezone() {
timedatectl set-timezone Etc/UTC
logCmd "timedatectl set-timezone Etc/UTC"
}

View File

@@ -68,7 +68,6 @@ log_has_errors() {
grep -vE "Command failed with exit code" | \
grep -vE "Running scope as unit" | \
grep -vE "securityonion-resources/sigma/stable" | \
grep -vE "remove_failed_vm.sls" | \
grep -vE "log-.*-pipeline_failed_attempts" &> "$error_log"
if [[ $? -eq 0 ]]; then