Merge pull request #14930 from Security-Onion-Solutions/vlb2

Vlb2
This commit is contained in:
Josh Patterson
2025-08-14 16:37:15 -04:00
committed by GitHub
20 changed files with 197 additions and 111 deletions

View File

@@ -30,6 +30,7 @@ body:
 - 2.4.150
 - 2.4.160
 - 2.4.170
+- 2.4.180
 - Other (please provide detail below)
 validations:
 required: true

View File

@@ -1,17 +1,17 @@
-### 2.4.160-20250625 ISO image released on 2025/06/25
+### 2.4.170-20250812 ISO image released on 2025/08/12
 ### Download and Verify
-2.4.160-20250625 ISO image:
+2.4.170-20250812 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.4.160-20250625.iso
+https://download.securityonion.net/file/securityonion/securityonion-2.4.170-20250812.iso
-MD5: 78CF5602EFFAB84174C56AD2826E6E4E
+MD5: 50ECAAD05736298452DECEAE074FA773
-SHA1: FC7EEC3EC95D97D3337501BAA7CA8CAE7C0E15EA
+SHA1: 1B1EB520DE61ECC4BF34E512DAFE307317D7666A
-SHA256: 0ED965E8BEC80EE16AE90A0F0F96A3046CEF2D92720A587278DDDE3B656C01C2
+SHA256: 87D176A48A58BAD1C2D57196F999BED23DE9B526226E3754F0C166C866CCDC1A
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.160-20250625.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.170-20250812.iso.sig
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.160-20250625.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.170-20250812.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.4.160-20250625.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.4.170-20250812.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.4.160-20250625.iso.sig securityonion-2.4.160-20250625.iso
+gpg --verify securityonion-2.4.170-20250812.iso.sig securityonion-2.4.170-20250812.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Wed 25 Jun 2025 10:13:33 AM EDT using RSA key ID FE507013
+gpg: Signature made Fri 08 Aug 2025 06:24:56 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.
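The hash comparison above can also be scripted for automated downloads; a minimal sketch (the helper name is illustrative, and the expected digest is the SHA256 published above for this release). Note this checks integrity only; the gpg signature check remains the authenticity step:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so a multi-GB ISO never sits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest().upper()

# SHA256 published above for securityonion-2.4.170-20250812.iso
EXPECTED_SHA256 = "87D176A48A58BAD1C2D57196F999BED23DE9B526226E3754F0C166C866CCDC1A"
# Usage: sha256_of("securityonion-2.4.170-20250812.iso") == EXPECTED_SHA256
```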

View File

@@ -1 +1 @@
-2.4.170
+2.4.180

View File

@@ -38,7 +38,7 @@ Examples:
 Notes:
 - Verifies Security Onion license
 - Downloads and validates Oracle Linux KVM image if needed
-- Generates Ed25519 SSH keys if not present
+- Generates ECDSA SSH keys if not present
 - Creates/recreates VM based on environment changes
 - Forces hypervisor configuration via highstate after successful setup (when minion_id provided)
@@ -46,7 +46,7 @@ Examples:
 The setup process includes:
 1. License validation
 2. Oracle Linux KVM image download and checksum verification
-3. SSH key generation for secure VM access
+3. ECDSA SSH key generation for secure VM access
 4. Cloud-init configuration for VM provisioning
 5. VM creation with specified disk size
 6. Hypervisor configuration via highstate (when minion_id provided and setup successful)
@@ -74,7 +74,7 @@ import sys
 import time
 import yaml
 from cryptography.hazmat.primitives import serialization
-from cryptography.hazmat.primitives.asymmetric import ed25519
+from cryptography.hazmat.primitives.asymmetric import ec
 # Configure logging
 log = logging.getLogger(__name__)
 log.setLevel(logging.DEBUG)
@@ -232,7 +232,7 @@ def _check_ssh_keys_exist():
 bool: True if both private and public keys exist, False otherwise
 """
 key_dir = '/etc/ssh/auth_keys/soqemussh'
-key_path = f'{key_dir}/id_ed25519'
+key_path = f'{key_dir}/id_ecdsa'
 pub_key_path = f'{key_path}.pub'
 dest_dir = '/opt/so/saltstack/local/salt/libvirt/ssh/keys'
 dest_path = os.path.join(dest_dir, os.path.basename(pub_key_path))
@@ -250,7 +250,7 @@ def _setup_ssh_keys():
""" """
try: try:
key_dir = '/etc/ssh/auth_keys/soqemussh' key_dir = '/etc/ssh/auth_keys/soqemussh'
key_path = f'{key_dir}/id_ed25519' key_path = f'{key_dir}/id_ecdsa'
pub_key_path = f'{key_path}.pub' pub_key_path = f'{key_path}.pub'
# Check if keys already exist # Check if keys already exist
@@ -266,9 +266,9 @@ def _setup_ssh_keys():
 os.makedirs(key_dir, exist_ok=True)
 os.chmod(key_dir, 0o700)
-# Generate new ed25519 key pair
+# Generate new ECDSA key pair using SECP256R1 curve
 log.info("Generating new SSH keys")
-private_key = ed25519.Ed25519PrivateKey.generate()
+private_key = ec.generate_private_key(ec.SECP256R1())
 public_key = private_key.public_key()
 # Serialize private key
@@ -540,7 +540,7 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
 Notes:
 - Verifies Security Onion license
 - Downloads and validates Oracle Linux KVM image if needed
-- Generates Ed25519 SSH keys if not present
+- Generates ECDSA SSH keys if not present
 - Creates/recreates VM based on environment changes
 - Forces hypervisor configuration via highstate after successful setup
 (when minion_id is provided)
@@ -765,7 +765,7 @@ def create_vm(vm_name: str, disk_size: str = '220G'):
 _set_ownership_and_perms(vm_dir, mode=0o750)
 # Read the SSH public key
-pub_key_path = '/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ed25519.pub'
+pub_key_path = '/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ecdsa.pub'
 try:
 with salt.utils.files.fopen(pub_key_path, 'r') as f:
 ssh_pub_key = f.read().strip()
@@ -844,7 +844,7 @@ output:
 all: ">> /var/log/cloud-init.log"
 # configure interaction with ssh server
-ssh_genkeytypes: ['ed25519', 'rsa']
+ssh_genkeytypes: ['ecdsa', 'rsa']
 # set timezone for VM
 timezone: UTC
@@ -1038,7 +1038,7 @@ def regenerate_ssh_keys():
 Notes:
 - Validates Security Onion license
 - Removes existing keys if present
-- Generates new Ed25519 key pair
+- Generates new ECDSA key pair
 - Sets secure permissions (600 for private, 644 for public)
 - Distributes public key to required locations
@@ -1048,7 +1048,7 @@ def regenerate_ssh_keys():
 2. Checks for existing SSH keys
 3. Removes old keys if present
 4. Creates required directories with secure permissions
-5. Generates new Ed25519 key pair
+5. Generates new ECDSA key pair
 6. Sets appropriate file permissions
 7. Distributes public key to required locations
@@ -1067,7 +1067,7 @@ def regenerate_ssh_keys():
 # Remove existing keys
 key_dir = '/etc/ssh/auth_keys/soqemussh'
-key_path = f'{key_dir}/id_ed25519'
+key_path = f'{key_dir}/id_ecdsa'
 pub_key_path = f'{key_path}.pub'
 dest_dir = '/opt/so/saltstack/local/salt/libvirt/ssh/keys'
 dest_path = os.path.join(dest_dir, os.path.basename(pub_key_path))

View File

@@ -909,6 +909,15 @@ firewall:
 - elastic_agent_control
 - elastic_agent_data
 - elastic_agent_update
+hypervisor:
+portgroups:
+- yum
+- docker_registry
+- influxdb
+- elastic_agent_control
+- elastic_agent_data
+- elastic_agent_update
+- sensoroni
 customhostgroup0:
 portgroups: []
 customhostgroup1:
@@ -961,6 +970,9 @@ firewall:
 desktop:
 portgroups:
 - salt_manager
+hypervisor:
+portgroups:
+- salt_manager
 self:
 portgroups:
 - syslog
@@ -1113,6 +1125,15 @@ firewall:
 - elastic_agent_control
 - elastic_agent_data
 - elastic_agent_update
+hypervisor:
+portgroups:
+- yum
+- docker_registry
+- influxdb
+- elastic_agent_control
+- elastic_agent_data
+- elastic_agent_update
+- sensoroni
 customhostgroup0:
 portgroups: []
 customhostgroup1:
@@ -1168,6 +1189,9 @@ firewall:
 desktop:
 portgroups:
 - salt_manager
+hypervisor:
+portgroups:
+- salt_manager
 self:
 portgroups:
 - syslog

View File

@@ -30,7 +30,7 @@
 #
 # WARNING: This script will DESTROY all data on the target drives!
 #
-# USAGE: sudo ./raid_setup.sh
+# USAGE: sudo ./so-nvme-raid1.sh
 #
 #################################################################
@@ -95,20 +95,83 @@ check_existing_raid() {
 log "Found existing RAID array $array_path (State: $raid_state)"
-if mountpoint -q "$mount_point"; then
-log "RAID is already mounted at $mount_point"
+# Check what's currently mounted at /nsm
+local current_mount=$(findmnt -n -o SOURCE "$mount_point" 2>/dev/null || echo "")
+if [ -n "$current_mount" ]; then
+if [ "$current_mount" = "$array_path" ]; then
+log "RAID array $array_path is already correctly mounted at $mount_point"
+log "Current RAID details:"
+mdadm --detail "$array_path"
+# Check if resyncing
+if grep -q "resync" /proc/mdstat; then
+log "RAID is currently resyncing:"
+grep resync /proc/mdstat
+log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
+else
+log "RAID is fully synced and operational"
+fi
+# Show disk usage
+log "Current disk usage:"
+df -h "$mount_point"
+exit 0
+else
+log "Found $mount_point mounted on $current_mount, but RAID array $array_path exists"
+log "Will unmount current filesystem and remount on RAID array"
+# Unmount current filesystem
+log "Unmounting $mount_point"
+umount "$mount_point"
+# Remove old fstab entry
+log "Removing old fstab entry for $current_mount"
+sed -i "\|$current_mount|d" /etc/fstab
+# Mount the RAID array
+log "Mounting RAID array $array_path at $mount_point"
+mount "$array_path" "$mount_point"
+# Update fstab
+log "Updating fstab for RAID array"
+sed -i "\|${array_path}|d" /etc/fstab
+echo "${array_path} ${mount_point} xfs defaults,nofail 0 0" >> /etc/fstab
+log "RAID array is now mounted at $mount_point"
+log "Current RAID details:"
+mdadm --detail "$array_path"
+# Check if resyncing
+if grep -q "resync" /proc/mdstat; then
+log "RAID is currently resyncing:"
+grep resync /proc/mdstat
+log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
+else
+log "RAID is fully synced and operational"
+fi
+# Show disk usage
+log "Current disk usage:"
+df -h "$mount_point"
+exit 0
+fi
+else
+# /nsm not mounted, mount the RAID array
+log "Mounting RAID array $array_path at $mount_point"
+mount "$array_path" "$mount_point"
+# Update fstab
+log "Updating fstab for RAID array"
+sed -i "\|${array_path}|d" /etc/fstab
+echo "${array_path} ${mount_point} xfs defaults,nofail 0 0" >> /etc/fstab
+log "RAID array is now mounted at $mount_point"
 log "Current RAID details:"
 mdadm --detail "$array_path"
+# Check if resyncing
+if grep -q "resync" /proc/mdstat; then
+log "RAID is currently resyncing:"
+grep resync /proc/mdstat
+log "You can monitor progress with: watch -n 60 cat /proc/mdstat"
+else
+log "RAID is fully synced and operational"
+fi
 # Show disk usage
 log "Current disk usage:"
 df -h "$mount_point"
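The new branching above reconciles /nsm against the RAID array in three cases: already mounted on the array, mounted on some other device, or not mounted at all. That decision logic can be sketched as a small pure function (names and action strings are illustrative, not part of the script):

```python
def nsm_mount_action(current_mount: str, array_path: str) -> str:
    """Decide how to reconcile the /nsm mount point with the RAID array.

    current_mount: source device currently mounted at /nsm, as reported
    by `findmnt -n -o SOURCE /nsm` ('' when nothing is mounted).
    """
    if not current_mount:
        return "mount-array"       # /nsm empty: mount the RAID array, update fstab
    if current_mount == array_path:
        return "already-correct"   # nothing to do; just report sync status
    return "remount-on-array"      # unmount the old filesystem, mount the array
```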
@@ -120,12 +183,7 @@ check_existing_raid() {
 fi
 # Check if any of the target devices are in use
 for device in "/dev/nvme0n1" "/dev/nvme1n1"; do
-if lsblk -o NAME,MOUNTPOINT "$device" | grep -q "nsm"; then
-log "Error: $device is already mounted at /nsm"
-exit 1
-fi
 if mdadm --examine "$device" &>/dev/null || mdadm --examine "${device}p1" &>/dev/null; then
 # Find the actual array name for this device
 local device_arrays=($(find_md_arrays_using_devices "${device}p1"))

View File

@@ -1,2 +1,2 @@
 Match user soqemussh
-IdentityFile /etc/ssh/auth_keys/soqemussh/id_ed25519
+IdentityFile /etc/ssh/auth_keys/soqemussh/id_ecdsa

View File

@@ -46,7 +46,7 @@ create_soqemussh_user:
 soqemussh_pub_key:
 ssh_auth.present:
 - user: soqemussh
-- source: salt://libvirt/ssh/keys/id_ed25519.pub
+- source: salt://libvirt/ssh/keys/id_ecdsa.pub
 {% endif %}

View File

@@ -16,9 +16,9 @@
 # Check if hypervisor environment has been set up
 {% set ssh_user_exists = salt['user.info']('soqemussh') %}
-{% set ssh_keys_exist = salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ed25519') and
+{% set ssh_keys_exist = salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ecdsa') and
-salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ed25519.pub') and
+salt['file.file_exists']('/etc/ssh/auth_keys/soqemussh/id_ecdsa.pub') and
-salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ed25519.pub') %}
+salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/ssh/keys/id_ecdsa.pub') %}
 {% set base_image_exists = salt['file.file_exists']('/nsm/libvirt/boot/OL9U5_x86_64-kvm-b253.qcow2') %}
 {% set vm_files_exist = salt['file.directory_exists']('/opt/so/saltstack/local/salt/libvirt/images/sool9') and
 salt['file.file_exists']('/opt/so/saltstack/local/salt/libvirt/images/sool9/sool9.qcow2') and

View File

@@ -77,10 +77,10 @@ Examples:
 1. Static IP Configuration with Multiple PCI Devices:
 Command:
-so-salt-cloud -p sool9-hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 \
+so-salt-cloud -p sool9_hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 \
 --dns4 192.168.1.1,192.168.1.2 --search4 example.local -c 4 -m 8192 -P 0000:c7:00.0 -P 0000:c4:00.0
-This command provisions a VM named vm1_sensor using the sool9-hyper1 profile with the following settings:
+This command provisions a VM named vm1_sensor using the sool9_hyper1 profile with the following settings:
 - Static IPv4 configuration:
 - IP Address: 192.168.1.10/24
@@ -95,21 +95,21 @@ Examples:
 2. DHCP Configuration with Default Hardware Settings:
 Command:
-so-salt-cloud -p sool9-hyper1 vm2_master --dhcp4
+so-salt-cloud -p sool9_hyper1 vm2_master --dhcp4
-This command provisions a VM named vm2_master using the sool9-hyper1 profile with DHCP for network configuration and default hardware settings.
+This command provisions a VM named vm2_master using the sool9_hyper1 profile with DHCP for network configuration and default hardware settings.
 3. Static IP Configuration without Hardware Specifications:
 Command:
-so-salt-cloud -p sool9-hyper1 vm3_search --static4 --ip4 192.168.1.20/24 --gw4 192.168.1.1
+so-salt-cloud -p sool9_hyper1 vm3_search --static4 --ip4 192.168.1.20/24 --gw4 192.168.1.1
 This command provisions a VM named vm3_search with a static IP configuration and default hardware settings.
 4. DHCP Configuration with Custom Hardware Specifications and Multiple PCI Devices:
 Command:
-so-salt-cloud -p sool9-hyper1 vm4_node --dhcp4 -c 8 -m 16384 -P 0000:c7:00.0 -P 0000:c4:00.0 -P 0000:c4:00.1
+so-salt-cloud -p sool9_hyper1 vm4_node --dhcp4 -c 8 -m 16384 -P 0000:c7:00.0 -P 0000:c4:00.0 -P 0000:c4:00.1
 This command provisions a VM named vm4_node using DHCP for network configuration and custom hardware settings:
@@ -120,9 +120,9 @@ Examples:
 5. Static IP Configuration with DNS and Search Domain:
 Command:
-so-salt-cloud -p sool9-hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 --dns4 192.168.1.1 --search4 example.local
+so-salt-cloud -p sool9_hyper1 vm1_sensor --static4 --ip4 192.168.1.10/24 --gw4 192.168.1.1 --dns4 192.168.1.1 --search4 example.local
-This command provisions a VM named vm1_sensor using the sool9-hyper1 profile with static IPv4 configuration:
+This command provisions a VM named vm1_sensor using the sool9_hyper1 profile with static IPv4 configuration:
 - Static IPv4 configuration:
 - IP Address: 192.168.1.10/24
@@ -133,14 +133,14 @@ Examples:
 6. Delete a VM with Confirmation:
 Command:
-so-salt-cloud -p sool9-hyper1 vm1_sensor -d
+so-salt-cloud -p sool9_hyper1 vm1_sensor -d
 This command deletes the VM named vm1_sensor and will prompt for confirmation before proceeding.
 7. Delete a VM without Confirmation:
 Command:
-so-salt-cloud -p sool9-hyper1 vm1_sensor -yd
+so-salt-cloud -p sool9_hyper1 vm1_sensor -yd
 This command deletes the VM named vm1_sensor without prompting for confirmation.
@@ -439,8 +439,8 @@ def call_salt_cloud(profile, vm_name, destroy=False, assume_yes=False):
 delete_vm(profile, vm_name, assume_yes)
 return
-# Extract hypervisor hostname from profile (e.g., sool9-jpphype1 -> jpphype1)
+# Extract hypervisor hostname from profile (e.g., sool9_hype1 -> hype1)
-hypervisor = profile.split('-', 1)[1] if '-' in profile else None
+hypervisor = profile.split('_', 1)[1] if '_' in profile else None
 if hypervisor:
 logger.info("Ensuring host key exists for hypervisor %s", hypervisor)
 if not _add_hypervisor_host_key(hypervisor):
@@ -512,7 +512,7 @@ def format_qcow2_output(operation, result):
 logger.info(f"{operation} result from {host}: {host_result}")
 def run_qcow2_modify_hardware_config(profile, vm_name, cpu=None, memory=None, pci_list=None, start=False):
-hv_name = profile.split('-')[1]
+hv_name = profile.split('_')[1]
 target = hv_name + "_*"
 try:
@@ -534,7 +534,7 @@ def run_qcow2_modify_hardware_config(profile, vm_name, cpu=None, memory=None, pc
 logger.error(f"An error occurred while running qcow2.modify_hardware_config: {e}")
 def run_qcow2_modify_network_config(profile, vm_name, mode, ip=None, gateway=None, dns=None, search_domain=None):
-hv_name = profile.split('-')[1]
+hv_name = profile.split('_')[1]
 target = hv_name + "_*"
 image = '/nsm/libvirt/images/sool9/sool9.qcow2'
 interface = 'enp1s0'
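The rename from `sool9-<host>` to `sool9_<host>` changes how every caller parses the hypervisor hostname out of a profile string, which is why the separator flips from `-` to `_` across these files. A minimal sketch of the new convention (function name is illustrative):

```python
def hypervisor_from_profile(profile: str) -> "str | None":
    """Extract the hypervisor hostname from a profile like 'sool9_hype1'.

    Uses maxsplit=1 so only the leading base-domain prefix is stripped;
    returns None when the profile has no underscore at all.
    """
    return profile.split("_", 1)[1] if "_" in profile else None
```

Note that with the old `-` separator a hostname containing a hyphen would have been truncated; the underscore convention sidesteps that for the hostnames used here.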

View File

@@ -70,13 +70,13 @@
 {% set vm_name = tag.split('/')[2] %}
 {% do salt.log.debug('dyanno_hypervisor_orch: Got vm_name from tag: ' ~ vm_name) %}
 {% if tag.endswith('/deploying') %}
-{% set hypervisor = data.get('kwargs').get('cloud_grains').get('profile').split('-')[1] %}
+{% set hypervisor = data.get('kwargs').get('cloud_grains').get('profile').split('_')[1] %}
 {% endif %}
 {# Set the hypervisor #}
 {# First try to get it from the event #}
 {% if data.get('profile', False) %}
 {% do salt.log.debug('dyanno_hypervisor_orch: Did not get cache.grains.') %}
-{% set hypervisor = data.profile.split('-')[1] %}
+{% set hypervisor = data.profile.split('_')[1] %}
 {% do salt.log.debug('dyanno_hypervisor_orch: Got hypervisor from data: ' ~ hypervisor) %}
 {% else %}
 {% set hypervisor = find_hypervisor_from_status(vm_name) %}

View File

@@ -6,12 +6,12 @@
 {%- for role, hosts in HYPERVISORS.items() %}
 {%- for host in hosts.keys() %}
-sool9-{{host}}:
+sool9_{{host}}:
 provider: kvm-ssh-{{host}}
 base_domain: sool9
 ip_source: qemu-agent
 ssh_username: soqemussh
-private_key: /etc/ssh/auth_keys/soqemussh/id_ed25519
+private_key: /etc/ssh/auth_keys/soqemussh/id_ecdsa
 sudo: True
 deploy_command: sh /tmp/.saltcloud-*/deploy.sh
 script_args: -r -F -x python3 stable 3006.9

View File

@@ -650,7 +650,7 @@ def process_vm_creation(hypervisor_path: str, vm_config: dict) -> None:
 create_vm_tracking_file(hypervisor_path, vm_name, vm_config)
 # Build and execute so-salt-cloud command
-cmd = ['so-salt-cloud', '-p', f'sool9-{hypervisor}', vm_name]
+cmd = ['so-salt-cloud', '-p', f'sool9_{hypervisor}', vm_name]
 # Add network configuration
 if vm_config['network_mode'] == 'static4':
@@ -788,7 +788,7 @@ def process_vm_deletion(hypervisor_path: str, vm_name: str) -> None:
 log.warning("Failed to read VM config from tracking file %s: %s", vm_file, str(e))
 # Attempt VM deletion with so-salt-cloud
-cmd = ['so-salt-cloud', '-p', f'sool9-{hypervisor}', vm_name, '-yd']
+cmd = ['so-salt-cloud', '-p', f'sool9_{hypervisor}', vm_name, '-yd']
 log.info("Executing: %s", ' '.join(cmd))
 result = subprocess.run(cmd, capture_output=True, text=True, check=True)

View File

@@ -4,7 +4,7 @@
 Elastic License 2.0. #}
 {% set nodetype = grains.id.split("_") | last %}
-{% set hypervisor = salt['grains.get']('salt-cloud:profile').split('-')[1] %}
+{% set hypervisor = salt['grains.get']('salt-cloud:profile').split('_')[1] %}
 {# Import hardware details from VM hardware tracking file #}
 {% import_json 'hypervisor/hosts/' ~ hypervisor ~ '/' ~ grains.id as vm_hardware %}

View File

@@ -70,41 +70,7 @@ Base domain has not been initialized.
 {%- endmacro -%}
 {%- macro update_resource_field(field, free_value, total_value, unit_label) -%}
-{%- set resource_regex = '' -%}
+{%- set resource_regex = '^[0-9]{1,3}$' -%}
-{%- if free_value < 10 -%}
-{%- set resource_regex = '^[1-' ~ free_value ~ ']$' -%}
-{%- elif free_value < 100 -%}
-{%- set tens_digit = free_value // 10 -%}
-{%- set ones_digit = free_value % 10 -%}
-{%- if ones_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-' ~ (tens_digit-1) ~ '][0-9]|' ~ tens_digit ~ '0)$' -%}
-{%- else -%}
-{%- set resource_regex = '^([1-9]|[1-' ~ (tens_digit-1) ~ '][0-9]|' ~ tens_digit ~ '[0-' ~ ones_digit ~ '])$' -%}
-{%- endif -%}
-{%- elif free_value < 1000 -%}
-{%- set hundreds_digit = free_value // 100 -%}
-{%- set tens_digit = (free_value % 100) // 10 -%}
-{%- set ones_digit = free_value % 10 -%}
-{%- if hundreds_digit == 1 -%}
-{%- if tens_digit == 0 and ones_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|100)$' -%}
-{%- elif tens_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|10[0-' ~ ones_digit ~ '])$' -%}
-{%- elif ones_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|10[0-9]|1[1-' ~ tens_digit ~ ']0)$' -%}
-{%- else -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|10[0-9]|1[1-' ~ (tens_digit-1) ~ '][0-9]|1' ~ tens_digit ~ '[0-' ~ ones_digit ~ '])$' -%}
-{%- endif -%}
-{%- else -%}
-{%- if tens_digit == 0 and ones_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-' ~ (hundreds_digit-1) ~ '][0-9][0-9]|' ~ hundreds_digit ~ '00)$' -%}
-{%- elif ones_digit == 0 -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-' ~ (hundreds_digit-1) ~ '][0-9][0-9]|' ~ hundreds_digit ~ '[0-' ~ tens_digit ~ ']0)$' -%}
-{%- else -%}
-{%- set resource_regex = '^([1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-' ~ (hundreds_digit-1) ~ '][0-9][0-9]|' ~ hundreds_digit ~ '[0-' ~ (tens_digit-1) ~ '][0-9]|' ~ hundreds_digit ~ tens_digit ~ '[0-' ~ ones_digit ~ '])$' -%}
-{%- endif -%}
-{%- endif -%}
-{%- endif -%}
 {%- do field.update({
 'label': field.label | replace('FREE', free_value | string) | replace('TOTAL', total_value | string),
 'regex': resource_regex,
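The removed Jinja branches built a regex that encoded the upper bound digit by digit; the replacement accepts any 1-3 digit number and leaves range enforcement to other logic. The trade-off can be sketched in Python (names are illustrative, only the `^[0-9]{1,3}$` pattern comes from the diff):

```python
import re

# The simplified pattern from the diff: format check only, no upper bound.
FORM_REGEX = re.compile(r"^[0-9]{1,3}$")

def valid_resource_request(value: str, free_value: int) -> bool:
    """Validate a resource request in two steps: a cheap format check via
    the simple regex, then an explicit numeric range check, replacing the
    old digit-by-digit regex construction that baked free_value into the
    pattern itself."""
    if not FORM_REGEX.match(value):
        return False
    return 1 <= int(value) <= free_value
```

Splitting format from range keeps the regex trivial and makes the bound a plain comparison instead of a generated character class.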

View File

@@ -19,7 +19,7 @@ vm_highstate_trigger:
 - data:
 status: Highstate Initiated
 vm_name: {{ grains.id }}
-hypervisor: {{ salt['grains.get']('salt-cloud:profile', '').split('-')[1] }}
+hypervisor: {{ salt['grains.get']('salt-cloud:profile', '').split('_')[1] }}
 - unless: test -f /opt/so/state/highstate_trigger.txt
 - order: 1 # Ensure this runs early in the highstate process

View File

@@ -29,8 +29,46 @@ title() {
 }
 fail_setup() {
-error "Setup encounted an unrecoverable failure, exiting"
+local failure_reason="${1:-Unknown failure}"
+touch /root/failure
+# Capture call stack information
+local calling_function="${FUNCNAME[1]:-main}"
+local calling_line="${BASH_LINENO[0]:-unknown}"
+local calling_file="${BASH_SOURCE[1]:-unknown}"
+# Build call stack trace
+local call_stack=""
+local i=1
+while [[ $i -lt ${#FUNCNAME[@]} ]]; do
+local func="${FUNCNAME[$i]}"
+local file="${BASH_SOURCE[$i]##*/}" # Get basename only
+local line="${BASH_LINENO[$((i-1))]}"
+if [[ -n "$call_stack" ]]; then
+call_stack="$call_stack -> "
+fi
+call_stack="$call_stack$func($file:$line)"
+((i++))
+done
+# Enhanced error logging with call stack
+error "FAILURE: Called from $calling_function() at line $calling_line"
+error "REASON: $failure_reason"
+error "STACK: $call_stack"
+error "Setup encountered an unrecoverable failure: $failure_reason"
+# Create detailed failure file with enhanced information
+{
+echo "SETUP_FAILURE_TIMESTAMP=$(date -u '+%Y-%m-%d %H:%M:%S UTC')"
+echo "SETUP_FAILURE_REASON=$failure_reason"
+echo "SETUP_CALLING_FUNCTION=$calling_function"
+echo "SETUP_CALLING_LINE=$calling_line"
+echo "SETUP_CALLING_FILE=${calling_file##*/}"
+echo "SETUP_CALL_STACK=$call_stack"
+echo "SETUP_LOG_LOCATION=$setup_log"
+echo "SETUP_FAILURE_DETAILS=Check $setup_log for complete error details"
+} > /root/failure
 exit 1
 }
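The FUNCNAME/BASH_LINENO walk added above has a direct analog in Python's standard `traceback` module; a sketch of the same "func(file:line) -> ..." trace format (the helper name is illustrative, not part of the PR):

```python
import traceback

def call_stack_summary() -> str:
    """Render the current call stack as 'func(file:line) -> ...',
    mirroring the trace fail_setup() builds from bash's FUNCNAME array."""
    # extract_stack() returns frames oldest-first; drop this helper's own frame.
    frames = traceback.extract_stack()[:-1]
    return " -> ".join(
        f"{f.name}({f.filename.rsplit('/', 1)[-1]}:{f.lineno})" for f in frames
    )
```

As in the bash version, each entry keeps only the file basename so the trace stays readable in a single log line.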
@@ -1194,7 +1232,7 @@ hypervisor_local_states() {
 info "Running libvirt states for hypervisor"
 logCmd "salt-call state.apply libvirt.64962 --local --file-root=../salt/ -l info"
 info "Setting up bridge for $MNIC"
 salt-call state.apply libvirt.bridge --local --file-root=../salt/ -l info pillar="{\"host\": {\"mainint\": \"$MNIC\"}}"
 fi
 }

View File

@@ -755,7 +755,7 @@ if ! [[ -f $install_opt_file ]]; then
 logCmd "salt-key -ya $MINION_ID"
 logCmd "salt-call saltutil.sync_all"
 # we need to sync the runner and generate the soqemussh user keys so that first highstate after license created
-# doesnt have a state failure for soqemussh_pub_key source for id_ed25519.pub missing
+# doesnt have a state failure for soqemussh_pub_key source for id_ecdsa.pub missing
 if [[ $is_manager || $is_managerhype ]]; then
 logCmd "salt-run saltutil.sync_all"
 logCmd "salt-run setup_hypervisor.regenerate_ssh_keys"

View File

@@ -654,10 +654,9 @@ whiptail_install_type_dist_new() {
 Note: MANAGER is the recommended option for most users. MANAGERSEARCH should only be used in very specific situations.
 EOM
-install_type=$(whiptail --title "$whiptail_title" --menu "$mngr_msg" 20 75 3 \
+install_type=$(whiptail --title "$whiptail_title" --menu "$mngr_msg" 20 75 2 \
 "MANAGER" "New grid, requires separate search node(s) " \
 "MANAGERSEARCH" "New grid, separate search node(s) are optional " \
-"MANAGERHYPE" "Manager with hypervisor - Security Onion Pro required " \
 3>&1 1>&2 2>&3
 )

Binary file not shown.